Identifying parts of the body on a depth frame

  • General discussion

  • Hi everybody

    The Kinect SDK provides several separate streams of information: depth, video, infrared and skeleton (body). The bone positions are calculated inside the driver. Microsoft Research documents about Kinect skeleton recognition say that the bone positions are calculated from the depth frame. As a first step, the SDK divides the depth image of the person into zones. As I understand it, there is information somewhere about which part of the body each pixel in the depth frame belongs to. This information is hidden.

    Does anybody know how to retrieve this information? My aim is to get the pixels of the head and hands to use them on an avatar, and it's a really unusual task for me.

    Thanks.

     

    Thursday, June 5, 2014 10:17 PM

All replies

  • As stated, this is not exposed as a function of the API. It is a common request; can you provide more details on your scenario? What are you trying to accomplish by getting the pixel values rather than the joint information? You could just use the joint information and infer the depth values around that point to map out the area. Based on joint distance you can do this much faster, since you can restrict the search area using that information.
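
    The "restrict the search area around a joint" idea can be sketched as follows. This is an illustrative Python/NumPy fragment, not SDK code, and the window size and depth tolerance are assumptions:

```python
import numpy as np

def pixels_near_joint(depth_mm, jx, jy, window=40, tol_mm=150):
    """Collect depth pixels in a window around a joint's pixel whose depth
    is close to the joint's depth: a crude blob extraction near a joint."""
    h, w = depth_mm.shape
    x0, x1 = max(jx - window, 0), min(jx + window + 1, w)
    y0, y1 = max(jy - window, 0), min(jy + window + 1, h)
    patch = depth_mm[y0:y1, x0:x1].astype(np.int32)
    joint_depth = int(depth_mm[jy, jx])
    ys, xs = np.nonzero(np.abs(patch - joint_depth) < tol_mm)
    return list(zip((xs + x0).tolist(), (ys + y0).tolist()))
```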

    Carmine Sirignano - MSFT

    Thursday, June 5, 2014 11:39 PM
  • The 3D joint position already contains some of the depth information: X, Y and Z coordinates.

    You can also use CoordinateMapper.MapSkeletonPointToDepthPoint (http://msdn.microsoft.com/en-us/library/jj883696.aspx) to get the exact pixel coordinates in the depth stream.

    Note: this is for the Kinect v1 SDK.
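
    For illustration, the skeleton-to-depth mapping can be approximated with a simple pinhole projection. This is only a sketch, not the SDK's implementation: the nominal focal length (285.63 px at 320x240) and the centered principal point are assumptions, and the SDK mapper should be preferred for exact results:

```python
def skeleton_to_depth_pixel(x_m, y_m, z_m, width=320, height=240,
                            focal_px=285.63):
    """Project a skeleton-space point (metres) to depth-pixel coordinates
    with a pinhole model; image y grows downward."""
    px = width / 2.0 + x_m * focal_px / z_m
    py = height / 2.0 - y_m * focal_px / z_m
    return int(round(px)), int(round(py))
```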


    Vincent Guigui Innovative Technologies Expert at OCTO Technology Kinect For Windows MVP award

    Friday, June 6, 2014 1:08 PM
  • I have already made an application that measures waist, shoulders and height, which are used to select the correct clothing size. Of course I used joint information mapped onto the depth frame, but Kinect v1 has poor accuracy. Now I have another task: separating head and hand images in the depth frame. It's close to what I did earlier. The head position is always at the top, so I just scan the depth pixels from the 'Neck' joint up to the topmost point of the body. I get a satisfactory result for the head.

    Hands can be at any position and angle, and I don't understand how to split the useful pixels from the unnecessary ones.
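
    The head scan described above can be sketched like this (illustrative Python/NumPy; the `player_mask` array marking the user's pixels and the depth tolerance are assumptions):

```python
import numpy as np

def head_pixels(depth_mm, player_mask, neck_x, neck_y, tol_mm=300):
    """Scan upward from the neck joint and collect the user's pixels whose
    depth is close to the neck depth, as the post describes."""
    neck_depth = int(depth_mm[neck_y, neck_x])
    pixels = []
    for y in range(neck_y, -1, -1):  # from the neck up to the top row
        for x in np.nonzero(player_mask[y])[0].tolist():
            if abs(int(depth_mm[y, x]) - neck_depth) < tol_mm:
                pixels.append((x, y))
    return pixels
```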

    Friday, June 6, 2014 2:26 PM
  • Yes, I know this method, but the problem is the positions of the hands. They can be rotated to any angle, and it's difficult to split the pixels.
    Friday, June 6, 2014 2:44 PM
  • With the Kinect v1 SDK, you can retrieve the player index data from the depth stream; this tells you exactly which pixel is owned by which user. http://msdn.microsoft.com/en-us/library/jj131025.aspx#PlayerID_in_depth_map

    Sadly, it does not give you body-part segmentation out of the box, but you can guess it based on the size or shape of the depth blob you get around the joint.
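
    In the legacy packed v1 depth format, each 16-bit value carries the player index in its low 3 bits and the depth in millimetres in the upper 13 bits. A small unpacking sketch (assumes the packed format rather than the later DepthImagePixel struct):

```python
import numpy as np

def unpack_depth_frame(packed):
    """Split a packed Kinect v1 depth frame into depth (mm) and player
    index (0 = no player, 1-6 = tracked users)."""
    packed = np.asarray(packed, dtype=np.uint16)
    return packed >> 3, packed & 0x0007

def player_mask(packed, player):
    """Boolean mask of the pixels owned by the given player index."""
    _, idx = unpack_depth_frame(packed)
    return idx == player
```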


    Vincent Guigui Innovative Technologies Expert at OCTO Technology Kinect For Windows MVP award

    Friday, June 6, 2014 3:24 PM
  • The Kinect SDK v2 has a special set of classes to get the player ID from the view: the BodyIndexXXXX classes in the SDK. They are well documented in the help file.
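
    As a sketch of using that body index buffer (in v2, each byte is the owning body's index, 0-5, or 255 for no body; the function names here are illustrative, not the SDK API):

```python
import numpy as np

NO_BODY = 255  # value the BodyIndexFrame uses for pixels with no tracked body

def depth_for_body(depth_mm, body_index, body):
    """Return the depth values belonging to one tracked body, using the
    per-pixel body index buffer."""
    mask = np.asarray(body_index, dtype=np.uint8) == body
    return np.asarray(depth_mm, dtype=np.uint16)[mask]
```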
    Sunday, June 8, 2014 2:36 PM