User body segmentation info

  • Question

  • Hi, I am currently running an experiment on the Kinect depth map, and I am interested in how the Kinect SDK segments body parts.

    Based on Microsoft's CVPR'11 paper, they segment the body into several parts and then use those parts to estimate joint positions. Can we somehow get this segmentation information through the Kinect SDK?




    Monday, October 1, 2012 3:53 AM

All replies

  • No, the only outputs from the SDK are the overall player segmentation, and the joint positions. The segmentation pictured above is a simplified representation of internal state that is not accessible through the API.
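
    For reference, in Kinect SDK v1 that overall player segmentation rides along in the depth stream itself: each 16-bit depth pixel carries the player index in its low 3 bits and the depth in millimetres in the upper 13 (the SDK exposes the mask width as DepthImageFrame.PlayerIndexBitmaskWidth). A minimal sketch of the unpacking, in plain Python standing in for the C#/C++ SDK types:

```python
# Toy sketch of the Kinect SDK v1 depth-pixel layout: player index in the
# low 3 bits, depth in millimetres in the remaining 13 bits. The real SDK
# exposes this through DepthImageFrame in C#; this is just the bit arithmetic.

PLAYER_INDEX_BITMASK_WIDTH = 3  # matches DepthImageFrame.PlayerIndexBitmaskWidth
PLAYER_INDEX_BITMASK = (1 << PLAYER_INDEX_BITMASK_WIDTH) - 1  # 0b111

def unpack_depth_pixel(raw: int) -> tuple:
    """Split a raw 16-bit depth value into (player_index, depth_mm)."""
    player_index = raw & PLAYER_INDEX_BITMASK
    depth_mm = raw >> PLAYER_INDEX_BITMASK_WIDTH
    return player_index, depth_mm

# Example: a pixel 1500 mm away belonging to tracked player 2.
raw = (1500 << PLAYER_INDEX_BITMASK_WIDTH) | 2
print(unpack_depth_pixel(raw))  # (2, 1500)
```

    A player index of 0 means the pixel belongs to no tracked player; indices 1-6 identify players, which is how the SDK hands you the "overall player segmentation" without exposing the per-part labels.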


    John | Kinect for Windows development team

    Tuesday, October 9, 2012 12:36 AM
  • Hi Reza,

    John is right, this information is not available externally: we summarize this and other information in the joints that we return as part of Skeletal Tracking.

    Can I ask what is the scenario/problem you are trying to solve?

    Thanks-

    Mauro Giusti.


    - Mauro Giusti. ----------------------------------------------------- There is only one way to do the job: the right way. (Gus)

    Tuesday, October 9, 2012 2:50 AM
  • You are contradicting each other, and you deleted my post, which was a paraphrase of yours, Gus. I reworded my post below so it is better understood:

    The joints that are currently available can be mapped into the depth view if done properly (and if not, he can draw the points himself in DirectX and still get what he wants). With Skeletal Tracking running, you can get each joint's position via the JointType enumeration, then draw it into the depth map using the appropriate coordinate-mapping method. Alternatively, if you know DirectX and WPF, you can draw the skeleton from Skeletal Tracking on top of a custom depth image built from the skeleton points, for better accuracy:

    http://msdn.microsoft.com/en-us/library/jj131025.aspx

    http://msdn.microsoft.com/en-us/library/microsoft.kinect.jointtype.aspx

    With a little time I can pull together code close to the OP's subject. OP, as stated above, Skeletal Tracking returns joint information, but you could assign those points to the depth image of the person and draw a different color for each joint returned by Skeletal Tracking.
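
    To make that idea concrete, here is a toy sketch in plain Python (the SDK itself is C#/C++; the SDK's coordinate-mapping methods would give you the joint positions in depth-pixel coordinates, which are assumed as inputs here). It labels each player pixel with its nearest mapped joint, giving a rough per-part colouring. Note this is only a crude approximation of the figure in the CVPR'11 paper, which uses a trained per-pixel classifier, not nearest-joint distance:

```python
import math

def label_pixels_by_nearest_joint(player_pixels, joints):
    """
    player_pixels: list of (x, y) depth-image coordinates belonging to the
                   player (e.g. pixels whose player index is non-zero).
    joints:        dict mapping a joint name to its (x, y) position in the
                   same depth-image coordinates (as obtained from the SDK's
                   skeleton-to-depth coordinate mapping).
    Returns a dict mapping each pixel to the name of its nearest joint.
    """
    labels = {}
    for px, py in player_pixels:
        nearest = min(
            joints.items(),
            key=lambda item: math.hypot(item[1][0] - px, item[1][1] - py),
        )
        labels[(px, py)] = nearest[0]
    return labels

# Toy example: two joints, three player pixels.
joints = {"Head": (100, 40), "HandRight": (160, 120)}
pixels = [(102, 45), (158, 118), (135, 85)]
print(label_pixels_by_nearest_joint(pixels, joints))
```

    Once each pixel has a joint label, drawing is just a colour lookup per label over the depth image.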

    It's interesting that you ask this, because I will be doing a similar procedure for my Kinect sign-language project soon. I will want to see the joints divided up in my depth map and colored, similar to how your depth map is split into a different color for each joint.

    OP, please share more information so I can modify my post above as necessary.

    Next fall, I will be taking sign language. Then I will build a gesture recorder and recognition program, so I can record a sign and have the program speak the word using TTS technology.


    "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Sherlock Holmes. "Speak softly and carry a big stick." - Theodore Roosevelt. "Fear leads to anger, anger leads to hate, hate leads to suffering." - Yoda. Blog - http://www.computerprofessions.co.nr







    • Edited by The Thinker Tuesday, October 9, 2012 1:59 PM
    Tuesday, October 9, 2012 1:47 PM