Kinect Fusion and joints

  • Question

  • Hi!

    Does anyone know how to correctly project joint positions into the Kinect Fusion volume?

    In my code, I create meshes for each joint and add them to the viewport based on their coordinates, but they don't align well with the Fusion reconstruction.

    When I use the BodyFrame's joint positions with the CoordinateMapper, they align very accurately.
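
    (For context, the mapping that works for me is the standard SDK call, roughly like this; sensor here is the open KinectSensor and joints comes from the tracked Body:)

    using Microsoft.Kinect;

    // Maps a joint's camera-space position onto the depth image;
    // this is the mapping that lines up accurately for me.
    DepthSpacePoint depthPoint =
        sensor.CoordinateMapper.MapCameraPointToDepthSpace(
            joints[JointType.Head].Position);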

    I am guessing there is some way to convert a joint position to a "fusion position" (similar to the coordinate mapper), but how?

    Searching for this topic, I could not find any examples or explanations (that I could understand).

    Any help would be greatly appreciated.


    • Edited by JFalck Thursday, February 12, 2015 8:57 PM
    Thursday, February 12, 2015 8:56 PM

All replies

  • The Fusion volume does not share an origin with the depth-space coordinate system. If you look at the Kinect Fusion Explorer sample, tick 'Show volume' and untick the 'Kinect View' button, you can see the two coordinate systems and how they change as you move the Kinect through the volume. Fortunately, Fusion gives you the transformation between the two as the cameraToWorld Matrix4. So you can take your joint point in depth space and multiply it by the cameraToWorld Matrix4 to get the point in world space. I think.
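
    Something like this, though I am going from memory on the SDK types (Microsoft.Kinect.Fusion.Matrix4 keeps its translation in M41-M43 and, as far as I remember, uses a row-vector convention, p' = p * M; the TransformPoint helper name is my own):

    using Microsoft.Kinect;
    using Microsoft.Kinect.Fusion;

    static class FusionSpace
    {
        // Transforms a camera-space point into Fusion world space using the
        // cameraToWorld Matrix4 (the inverse of the volume's worldToCameraTransform).
        // Assumes row vectors: p' = p * M, with the translation in M41..M43.
        public static CameraSpacePoint TransformPoint(CameraSpacePoint p, Matrix4 m)
        {
            return new CameraSpacePoint
            {
                X = p.X * m.M11 + p.Y * m.M21 + p.Z * m.M31 + m.M41,
                Y = p.X * m.M12 + p.Y * m.M22 + p.Z * m.M32 + m.M42,
                Z = p.X * m.M13 + p.Y * m.M23 + p.Z * m.M33 + m.M43
            };
        }
    }

    The worldToCamera side comes from the reconstruction itself (volume.GetCurrentWorldToCameraTransform(), if I remember the name right); invert that to get cameraToWorld.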

    If you are still running the Fusion algorithm in the background, the cameraToWorld Matrix4 will keep changing as the human moves around in the volume. You will want to freeze it on a static volume before any non-rigid motion enters the scene.
    Friday, February 13, 2015 11:09 AM
  • Hi Paul!

    Thank you so much for taking the time to reply to my question.

    Unfortunately, being merely a mediocre web programmer, I did not understand how I would actually solve this.

    I took a couple of screenshots so you can see what the issue is (although, I believe you already know).

    In my code I just take the average positions of the joints (such as "joints[JointType.Head].Position.X", Y, Z) over a set number of frames, making sure all joints are tracked.

    Fusion receives these same frames and is then paused after the final one.

    Lastly, I add meshes to the viewport based on the average joint positions.

    I am obviously missing the conversion step for the positions, but how do I do it?

    Given the joint position X, Y and Z, how do I get the correctly converted X, Y and Z values?

    joints[JointType.Head].Position * cameraToWorld.. something?
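
    For reference, this is roughly what my current code does (simplified; AddJointMesh is my own helper that puts a sphere in the viewport, and the TODO marks the conversion I am asking about):

    using System.Collections.Generic;
    using System.Linq;
    using System.Windows.Media.Media3D;
    using Microsoft.Kinect;

    class JointAverager
    {
        private readonly Dictionary<JointType, Vector3D> sums =
            new Dictionary<JointType, Vector3D>();
        private int frameCount;

        // Accumulate joint positions, skipping frames where any joint is untracked.
        public void AccumulateFrame(Body body)
        {
            if (body.Joints.Values.Any(j => j.TrackingState != TrackingState.Tracked))
                return;

            foreach (var joint in body.Joints.Values)
            {
                CameraSpacePoint p = joint.Position; // camera space, in meters
                Vector3D sum;
                sums.TryGetValue(joint.JointType, out sum);
                sums[joint.JointType] = sum + new Vector3D(p.X, p.Y, p.Z);
            }
            frameCount++;
        }

        // After the final frame: average each joint, then place a mesh there.
        public void PlaceMeshes()
        {
            foreach (var kv in sums)
            {
                Vector3D avg = kv.Value / frameCount;

                // TODO: convert avg from camera space into the Fusion volume's
                // world space here -- this is the step I am missing.
                AddJointMesh(kv.Key, avg.X, avg.Y, avg.Z);
            }
        }

        private void AddJointMesh(JointType type, double x, double y, double z)
        {
            /* creates a small sphere at (x, y, z) in the 3D viewport */
        }
    }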

    IMAGE 1 ----> postimg.org/image/ze5dfau81/

    IMAGE 2 ----> postimg.org/image/gqvm7t4fl/

    Saturday, February 14, 2015 11:01 PM