Kinect MultiSourceFrame, how to use (C#)

  • Question

  • Hi,

    I have some doubts about the MultiSourceFrame class.

    I want to mix two Kinect samples, "FaceBasics" and "Body (Skeleton)", using the MultiSourceFrameReader.

    private void MultipleFramesArrivedHandler(object sender, MultiSourceFrameArrivedEventArgs e)
    {
        // Retrieve multisource frame reference
        MultiSourceFrameReference multiRef = e.FrameReference;
        MultiSourceFrame multiFrame = multiRef.AcquireFrame();
        if (multiFrame == null) return;

        // Retrieve data stream frame references
        ColorFrameReference colorRef = multiFrame.ColorFrameReference;
        BodyFrameReference bodyRef = multiFrame.BodyFrameReference;

        using (ColorFrame colorFrame = colorRef.AcquireFrame())
        using (BodyFrame bodyFrame = bodyRef.AcquireFrame())
        {
            if (colorFrame == null || bodyFrame == null) return;
            // Process the frames
        }
    }
                    // Process the frames

    Is this correct? Inside // Process the frames, can I use the body frame and the coordinate mapper interchangeably?

    For example, can I extract two features, one from FaceBasics (e.g. the head rotation) and one from the body frame (e.g. the coordinates of the head joint)?

    How can I choose between the face frame and the body frame, or is it automatic?

    Another question: with MultiSourceFrame, do I acquire one frame for each source (color, IR, depth, etc.) at once, or do I receive one frame (e.g. color), then the next frame (e.g. IR), then the next (e.g. depth)?

    Thank you in advance for your reply.

    Friday, November 21, 2014 12:02 PM


  • Hi Skipper,

    The MultiSourceFrame synchronizes all the frames for you. From a single MultiSourceFrame you can acquire every source frame you asked for when you initialized the reader, and they are all synchronized with each other. If one of them comes back null, the frame was either invalid or you took too long to process it.

    When you open the MultiSourceFrameReader you specify color, IR, body, etc.; then, each time a frame arrives, you can get color, IR, and body frames that are synchronized with each other.
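    A minimal sketch of that setup, assuming the Kinect for Windows SDK 2.0 (`Microsoft.Kinect`); the class and handler names are mine, not from the samples:

    ```csharp
    using Microsoft.Kinect;

    public class MultiSourceSample
    {
        private KinectSensor sensor;
        private MultiSourceFrameReader reader;
        private Body[] bodies;

        public void Start()
        {
            sensor = KinectSensor.GetDefault();
            sensor.Open();

            // Ask for exactly the sources you need; the reader delivers them synchronized.
            reader = sensor.OpenMultiSourceFrameReader(
                FrameSourceTypes.Color | FrameSourceTypes.Body);
            reader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;
        }

        private void OnMultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
        {
            MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();
            if (multiFrame == null) return; // frame expired or was invalid

            using (ColorFrame colorFrame = multiFrame.ColorFrameReference.AcquireFrame())
            using (BodyFrame bodyFrame = multiFrame.BodyFrameReference.AcquireFrame())
            {
                if (colorFrame == null || bodyFrame == null) return;

                if (bodies == null) bodies = new Body[bodyFrame.BodyCount];
                bodyFrame.GetAndRefreshBodyData(bodies);
                // Process colorFrame and bodies here.
            }
        }
    }
    ```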

    As for the face frame: it comes from a separate library, which uses a combination of source frames (IR, depth, etc.) to provide you a calculated region containing face features. Face tracking is based on a tracked body. When the body frame has a tracked body, you set that body's tracking id on the face frame source, which allows the face frame reader to produce face frames for that body.

    You can then acquire face frames that are synchronized to the tracked body and read head rotation and so on. From a face frame you can also get back to the referenced body. In practice, you set up a face frame reader based on the tracking id, then either listen for its FaceFrameArrived event or poll for face frames, checking each time that you have a valid face frame.
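    A sketch of that wiring, assuming `Microsoft.Kinect.Face` from SDK 2.0 and that `sensor` and `bodies` already exist as in the question's handler:

    ```csharp
    using Microsoft.Kinect;
    using Microsoft.Kinect.Face;

    // Create the face source/reader once, with the features you want.
    FaceFrameSource faceSource = new FaceFrameSource(sensor, 0,
        FaceFrameFeatures.RotationOrientation |
        FaceFrameFeatures.BoundingBoxInColorSpace);
    FaceFrameReader faceReader = faceSource.OpenReader();

    faceReader.FaceFrameArrived += (s, e) =>
    {
        using (FaceFrame faceFrame = e.FrameReference.AcquireFrame())
        {
            if (faceFrame == null || faceFrame.FaceFrameResult == null) return;
            // Head rotation comes from the face library...
            Vector4 rotation = faceFrame.FaceFrameResult.FaceRotationQuaternion;
            // ...while the head position still comes from the tracked body:
            // CameraSpacePoint head = body.Joints[JointType.Head].Position;
        }
    };

    // In the body-frame handler, once a body is tracked, link it to the face source:
    foreach (Body body in bodies)
    {
        if (body != null && body.IsTracked && !faceSource.IsTrackingIdValid)
        {
            faceSource.TrackingId = body.TrackingId;
        }
    }
    ```

    This also answers the two-features question: the head rotation comes from the FaceFrameResult and the head coordinates from the body's Joints collection, in the same pass.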

    HDFaceFrame and its reader follow the same pattern.

    The CoordinateMapper is tied to the sensor and lives for the lifetime of the sensor. It lets you map points between coordinate spaces, such as color, depth, and camera view space, and it can be used independently of any frames.
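    For example, a short sketch mapping a tracked head joint into the color image (assuming a tracked `body` from the body frame, as above):

    ```csharp
    // The mapper hangs off the sensor, not off any frame.
    CoordinateMapper mapper = sensor.CoordinateMapper;

    // Joint positions are CameraSpacePoints (meters, camera view space).
    CameraSpacePoint head = body.Joints[JointType.Head].Position;

    // Project into color space to draw over the color image.
    ColorSpacePoint headInColor = mapper.MapCameraPointToColorSpace(head);
    // headInColor.X / headInColor.Y are pixel coordinates in the color frame.
    ```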


    Friday, November 21, 2014 1:38 PM