How to align FusionPointCloudImageFrame with DepthImageFrame?

  • Question

  • I have built a simple Kinect application (WPF, C#) based on the "Depth Basics-WPF" example from the Kinect SDK.

    I am trying to extract the users' surface mesh, in other words a mesh representing the surface of everything the Kinect recognizes as a user. In addition to the XYZ positions of the vertices, I would also like to extract the normals.

    To get the normals, I am using the FusionPointCloudImageFrame data.

    Something must be wrong, because the depth data do not appear to be aligned with the point cloud data (there is a slight shift).

    Here is how I am proceeding:

    1. Init the Kinect as usual

    2. When a depth frame is received, copy the depth data into 'DepthPixels'

    3. Extract the point cloud and store the result in 'PointCloudData':

    // 1. Convert the raw depth (DepthImagePixel[]) to a float frame of depths in meters
    FusionDepthProcessor.DepthToDepthFloatFrame(
            DepthPixels,
            DepthImageWidth,
            DepthImageHeight,
            this.DepthFloatBuffer,
            FusionDepthProcessor.DefaultMinimumDepth,
            FusionDepthProcessor.DefaultMaximumDepth,
            false); // do not mirror the depth image
    
    // 2. Convert the float frame to an oriented point cloud (position + normal per pixel)
    FusionDepthProcessor.DepthFloatFrameToPointCloud(this.DepthFloatBuffer, this.PointCloudBuffer);
    
    // 3. Copy the point cloud data into a plain float array
    PointCloudBuffer.CopyPixelDataTo(PointCloudData);
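
    For completeness, here is roughly how the buffers used above are allocated (a minimal sketch assuming the 640x480 depth stream; the key point is that a FusionPointCloudImageFrame stores six floats per pixel, the XYZ position followed by the XYZ normal):

    int DepthImageWidth = 640;
    int DepthImageHeight = 480;
    
    // Raw depth, filled via DepthImageFrame.CopyDepthImagePixelDataTo(...)
    DepthPixels = new DepthImagePixel[DepthImageWidth * DepthImageHeight];
    
    // One float per pixel: depth in meters
    this.DepthFloatBuffer = new FusionFloatImageFrame(DepthImageWidth, DepthImageHeight);
    
    // Six floats per pixel: X, Y, Z position then X, Y, Z normal
    this.PointCloudBuffer = new FusionPointCloudImageFrame(DepthImageWidth, DepthImageHeight);
    PointCloudData = new float[DepthImageWidth * DepthImageHeight * 6];

    So the position and normal of the pixel at (x, y) are read like this:

    int i = (y * DepthImageWidth + x) * 6;
    // Position
    float px = PointCloudData[i + 0], py = PointCloudData[i + 1], pz = PointCloudData[i + 2];
    // Normal
    float nx = PointCloudData[i + 3], ny = PointCloudData[i + 4], nz = PointCloudData[i + 5];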

    As you can see, since I only need the point cloud, I use the static methods of FusionDepthProcessor; I do not use the Reconstruction class.

    How can I align the point cloud frame with the depth frame?

    Bonus question: is there a better (automatic) way to extract the mesh of all users, and only the users (i.e. pixels whose PlayerIndex in DepthPixels is not 0), with a single fixed Kinect?
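
    For illustration, this is the kind of filter I have in mind (sketch; the player index is only non-zero when the SkeletonStream is enabled):

    for (int i = 0; i < DepthPixels.Length; ++i)
    {
        if (DepthPixels[i].PlayerIndex != 0)
        {
            // This pixel belongs to a tracked user: keep the point and
            // normal stored at offset i * 6 in PointCloudData
        }
    }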


    Thursday, February 6, 2014 9:19 PM

All replies

  • I would like to achieve something like Reconstruction.AlignDepthFloatToReconstruction (http://msdn.microsoft.com/en-us/library/system.action(v=vs.110).aspx).
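
    In other words, something along the lines of the snippet below, but without having to create a full reconstruction volume (a sketch adapted from the Fusion samples; the volume parameters are placeholders):

    // Create a reconstruction volume (this is the part I would like to avoid)
    var volumeParams = new ReconstructionParameters(256, 512, 384, 512);
    Reconstruction volume = Reconstruction.FusionCreateReconstruction(
            volumeParams, ReconstructionProcessor.Amp, -1, Matrix4.Identity);
    
    // Align the current depth float frame to the reconstruction
    float alignmentEnergy;
    bool aligned = volume.AlignDepthFloatToReconstruction(
            this.DepthFloatBuffer,
            FusionDepthProcessor.DefaultAlignIterationCount,
            null,                  // delta-from-reference frame (not needed here)
            out alignmentEnergy,
            Matrix4.Identity);     // current world-to-camera transform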

    Thanks in advance.

    Friday, February 7, 2014 4:56 PM
  • If you are already calculating the point cloud from depth, why not just generate your normals from that frame as well (cross product of vectors from the point)? Fusion is based on its own reconstruction of the data/volume and the orientation of the sensor's view of that volume. These may not exactly match what you constructed.

    http://www.flipcode.com/archives/Vertex_Normals.shtml
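
    A rough sketch of the idea (no handling of invalid depth or image borders; it assumes your PointCloudData layout of six floats per pixel and only uses the positions):

    // Estimate the normal at (x, y) from the vectors to the right and
    // bottom neighbours, via their cross product, then normalize.
    static void NormalFromNeighbors(float[] points, int width, int x, int y,
                                    out float nx, out float ny, out float nz)
    {
        int p = (y * width + x) * 6;           // current point
        int r = (y * width + (x + 1)) * 6;     // right neighbour
        int b = ((y + 1) * width + x) * 6;     // bottom neighbour
    
        // u = right - current, v = bottom - current
        float ux = points[r] - points[p], uy = points[r + 1] - points[p + 1], uz = points[r + 2] - points[p + 2];
        float vx = points[b] - points[p], vy = points[b + 1] - points[p + 1], vz = points[b + 2] - points[p + 2];
    
        // n = u x v
        nx = uy * vz - uz * vy;
        ny = uz * vx - ux * vz;
        nz = ux * vy - uy * vx;
    
        // Normalize (leave a zero normal if the points are degenerate)
        float len = (float)Math.Sqrt(nx * nx + ny * ny + nz * nz);
        if (len > 1e-6f) { nx /= len; ny /= len; nz /= len; }
    }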


    Carmine Sirignano - MSFT


    Friday, February 7, 2014 7:45 PM
  • Thanks for your reply Carmine.

    In fact the FusionPointCloudImageFrame already contains the normals, but the "pixels" are not aligned with the Depth/ColorImageFrame. I figured that out with a small debug function; here is a diagram explaining it:

    Image 1 = ColorImageFrame pixel data

    Image 2 = FusionColorImageFrame <= FusionDepthProcessor.ShadePointCloud(...) <= FusionPointCloudImageFrame

    Image 2 is slightly shifted with respect to Image 1. So I can read the normal data, but I cannot match it to the correct DepthPixel.

    FusionDepthProcessor.ShadePointCloud on msdn : http://msdn.microsoft.com/en-us/library/dn189009.aspx
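
    For reference, the debug comparison behind the diagram is roughly this (sketch; I then copy the shaded pixels into a WriteableBitmap and display it next to the color image):

    // Shade the point cloud into color frames so it can be compared
    // visually with the ColorImageFrame. Identity is passed as the
    // world-to-camera transform since there is no camera tracking here.
    var shadedSurface = new FusionColorImageFrame(DepthImageWidth, DepthImageHeight);
    var shadedNormals = new FusionColorImageFrame(DepthImageWidth, DepthImageHeight);
    
    FusionDepthProcessor.ShadePointCloud(
            this.PointCloudBuffer,
            Matrix4.Identity,
            shadedSurface,
            shadedNormals);
    
    // 32-bit color pixels, one int per pixel
    int[] shadedPixels = new int[DepthImageWidth * DepthImageHeight];
    shadedSurface.CopyPixelDataTo(shadedPixels);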

    Thanks in advance 

    Saturday, February 8, 2014 7:58 PM
  • I think the FusionDepthProcessor is using a non-identity transform matrix, thus introducing a shift.

    MSDN says: "Construct an oriented point cloud in the local camera frame of reference from a depth float image frame."

    Is there a way to specify or clear (to identity) the transform matrix for this static method?

    Is there a way to clear or set the local camera frame of reference?

    Thanks
    N3m$

    Monday, February 10, 2014 9:41 PM
  • Do you need a screenshot showing the problem?

    Should I explain in more depth?

    Tuesday, February 11, 2014 2:23 PM