Skeleton joint position in Kinect Fusion

All replies

  • You can't do it directly. You can find the joint position in the depth stream, and this depth stream is the input for Kinect Fusion --> the depth pixel should be a point in the point cloud.
    Tuesday, May 28, 2013 11:38 AM
  • Thanks :)

    I took DepthImagePixel data from the depth stream and passed it to Kinect Fusion.

    I can get point cloud values for a bulk of depth image pixels of a human, such as the face or stomach.
    But if I try to get a single point cloud value for each skeleton joint in Kinect Fusion, nothing comes out.

    Wednesday, May 29, 2013 4:15 PM
  • That is true, because you are not able to see a single point with Kinect Fusion. You should take more than one point in the skeleton joint's area; see the sketch below. With more than one point, Kinect Fusion is able to make meshes from these point clouds.
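    A minimal sketch of that idea, assuming the Kinect for Windows SDK 1.x types; jointDepthPoint (the result of CoordinateMapper.MapSkeletonPointToDepthPoint), depthPixels, depthWidth and depthHeight are illustrative names:

    // Sketch: collect all valid depth pixels in a small window around a
    // joint's depth-space position, instead of a single pixel.
    List<DepthImagePixel> jointRegion = new List<DepthImagePixel>();
    int windowRadius = 5; // 11x11 pixel window around the joint (illustrative)

    for (int dy = -windowRadius; dy <= windowRadius; dy++)
    {
        for (int dx = -windowRadius; dx <= windowRadius; dx++)
        {
            int x = jointDepthPoint.X + dx;
            int y = jointDepthPoint.Y + dy;

            if (x < 0 || x >= depthWidth || y < 0 || y >= depthHeight)
                continue; // stay inside the depth frame

            DepthImagePixel pixel = depthPixels[y * depthWidth + x];
            if (pixel.IsKnownDepth)
            {
                jointRegion.Add(pixel); // enough samples for Fusion to mesh
            }
        }
    }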
    Wednesday, May 29, 2013 10:08 PM
  • Thanks :)

    Yes, I accept your statement. But my doubt is whether the 3D skeleton point which I got from the Kinect depth sensor and the point cloud of skeleton joints which I got from Kinect Fusion are the same.

    E.g.:

    3D skeleton point which I got from the Kinect depth sensor:

    RightShoulder = frame.MapFromSkeletonPoint(gestureskeleton.Joints[JointType.ShoulderRight].Position);

    RightShoulder.X, RightShoulder.Y and RightShoulder.Z

    Point cloud of skeleton joints which I got from Kinect Fusion:

    Skeleton pixel data which I pass from the depth frame to Kinect Fusion to get the point cloud of the skeleton joints:

    pixelDatas[i].Depth = pixelData[i].Depth;
    pixelDatas[i].PlayerIndex = (short)playerIndex;

    // Final point cloud collection after passing the above skeleton joint pixels
    point3Dcollection = GetPointCloud();

    Thursday, May 30, 2013 3:42 PM
  • To map skeletal positions to depth data you have to use the CoordinateMapper functions. This will map the skeletal position to a particular point in the depth data.

    CoordinateMapper.MapSkeletonPointToDepthPoint Method
    http://msdn.microsoft.com/en-us/library/jj883696.aspx
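    A minimal sketch of that call, assuming an initialized KinectSensor named sensor and the gestureskeleton variable from the earlier reply:

    // Sketch: map a skeletal joint into depth-image coordinates.
    SkeletonPoint shoulderRight = gestureskeleton.Joints[JointType.ShoulderRight].Position;

    DepthImagePoint depthPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
        shoulderRight, DepthImageFormat.Resolution640x480Fps30);

    // depthPoint.X and depthPoint.Y index into the 640x480 depth frame;
    // depthPoint.Depth is the depth in millimeters at that pixel.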

    Friday, May 31, 2013 12:28 AM
  • Thanks :)

    It is just an example, which I mentioned in my previous reply.

    Actually, I have used the CoordinateMapper.MapSkeletonPointToDepthPoint method in my code.

    But that is not my actual question.

    In general I have a doubt: are the 3D skeleton point which I got from the depth frame and the point cloud data of skeleton joints which I got from Kinect Fusion the same or different?

    Friday, May 31, 2013 12:33 PM
  • As actorx said above, you cannot make a 1:1 relationship between a pixel and a voxel. The voxel count and the number of voxels per meter factor into these calculations. A skeletal position will give you a starting point, but you will need to calculate the voxel region for it and project that into the volume coordinate space; a sketch of that projection follows the links below.

    Kinect Fusion has its own volume coordinate space.

    A right-handed volume coordinate system is used, with the origin of the volume (i.e. voxel 0,0,0) at the top left of the front plane of the cube.

    INuiFusionReconstruction::GetCurrentWorldToVolumeTransform Method
    http://msdn.microsoft.com/en-us/library/microsoft.kinect.nuikinectfusionvolume.inuifusionreconstruction.getcurrentworldtovolumetransform.aspx

    INuiFusionReconstruction::GetCurrentWorldToCameraTransform Method
    http://msdn.microsoft.com/en-us/library/microsoft.kinect.nuikinectfusionvolume.inuifusionreconstruction.getcurrentworldtocameratransform.aspx
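    A minimal sketch of that projection, assuming the managed Reconstruction class from the Kinect Fusion toolkit (the counterpart of the native methods linked above) and the row-vector matrix convention with translation in M41/M42/M43; verify the layout against the SDK documentation:

    // Sketch: project a skeleton joint (camera/world space, meters) into
    // Kinect Fusion's volume (voxel) space.
    Matrix4 worldToVolume = reconstruction.GetCurrentWorldToVolumeTransform();

    SkeletonPoint joint = gestureskeleton.Joints[JointType.ShoulderRight].Position;

    float vx = worldToVolume.M11 * joint.X + worldToVolume.M21 * joint.Y + worldToVolume.M31 * joint.Z + worldToVolume.M41;
    float vy = worldToVolume.M12 * joint.X + worldToVolume.M22 * joint.Y + worldToVolume.M32 * joint.Z + worldToVolume.M42;
    float vz = worldToVolume.M13 * joint.X + worldToVolume.M23 * joint.Y + worldToVolume.M33 * joint.Z + worldToVolume.M43;

    // (vx, vy, vz) is now in voxel units, with voxel (0,0,0) at the top left
    // of the front plane of the volume cube.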



    Friday, May 31, 2013 5:25 PM
  • Thanks :)

    Now I understand clearly that Kinect Fusion's 3D coordinates are different from the real-world coordinates of the skeleton.

    Please suggest whether I can use the formula from the following site (http://graphics.stanford.edu/~mdfisher/Kinect.html) for converting 2D depth image pixels to 3D points, instead of the Kinect Fusion cloud points.
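    For reference, the formula on that page is the standard pinhole back-projection; a minimal sketch, with illustrative intrinsics (take the exact calibration values from the linked page or your own camera):

    // Sketch: back-project a depth pixel (u, v) with depth in meters to a
    // 3D point using the depth camera intrinsics.
    const float fx = 594.21f; // focal lengths in pixels (illustrative)
    const float fy = 591.04f;
    const float cx = 339.5f;  // principal point (illustrative)
    const float cy = 242.7f;

    static SkeletonPoint DepthPixelTo3D(int u, int v, float depthMeters)
    {
        SkeletonPoint p = new SkeletonPoint();
        p.X = (u - cx) * depthMeters / fx;
        p.Y = (v - cy) * depthMeters / fy;
        p.Z = depthMeters;
        return p;
    }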

    Saturday, June 1, 2013 8:12 AM
  • Please correct me if anything is wrong in my question.
    Tuesday, June 4, 2013 3:01 PM
  • To convert depth image pixels to 3D points, you can do this:

    // Map every pixel in the depth frame to a 3D point in skeleton space.
    SkeletonPoint[] skeletonPoints = new SkeletonPoint[depthPixels.Length];
    sensor.CoordinateMapper.MapDepthFrameToSkeletonFrame(depthFrame.Format, depthPixels, skeletonPoints);
    
    After this, skeletonPoints will contain the 3D points corresponding to every pixel in the depth frame. These points are expressed in the same coordinate space as the skeleton data.
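    For example, to read back the 3D point under a particular joint (a sketch; jointDepthPoint is the assumed result of MapSkeletonPointToDepthPoint, and depthWidth is the depth frame width):

    // Index the mapped array at the joint's depth pixel to get its 3D point.
    SkeletonPoint joint3D = skeletonPoints[jointDepthPoint.Y * depthWidth + jointDepthPoint.X];
    // joint3D should closely match the skeleton joint's own 3D position.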

    John | Kinect for Windows development team

    Tuesday, June 4, 2013 10:32 PM
  • Thanks John :)

    I will implement it and tell you the result.

    Wednesday, June 5, 2013 2:09 PM
  • Hi keaneleo,

    Did you have any success implementing this? We're kind of battling with this as well.

    Cheers,

    Norby

    Thursday, September 19, 2013 8:55 PM