Performance dependent on depth of subject

  • Question

  • Hi,

    I noticed that the performance (in terms of how long the Kinect SDK takes to process a frame to extract the skeleton) seems to depend on the subject's depth from the camera. For example, if I am standing close, the algorithm is much slower than if I am further away. I presume this is because the classification algorithm has to process more pixels, and the algorithm used to cluster the pixels also takes longer as there are more pixels to consider. Has anyone else noticed this behaviour, and is there any way around it? I find that when someone is standing close to the camera, frames from the depth image start to be more likely to get dropped.
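
    To make the frame drops concrete, this is roughly how one could log the interval between skeleton frames (a minimal sketch only, assuming the native C++ API from the Kinect for Windows SDK v1, i.e. NuiApi.h / Kinect10.lib; the loop count and timeout are arbitrary). Intervals well above ~33 ms would indicate late or dropped frames:

        // Sketch only: assumes the native Kinect for Windows SDK v1 API (NuiApi.h, Kinect10.lib).
        #include <windows.h>
        #include <NuiApi.h>
        #include <chrono>
        #include <cstdio>

        int main()
        {
            // Initialise the runtime for skeleton tracking only.
            if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
                return 1;
            NuiSkeletonTrackingEnable(NULL, 0);

            auto last = std::chrono::steady_clock::now();
            for (int i = 0; i < 300; ++i)                             // roughly 10 seconds at 30 fps
            {
                NUI_SKELETON_FRAME frame = {0};
                if (SUCCEEDED(NuiSkeletonGetNextFrame(100, &frame)))  // wait up to 100 ms per frame
                {
                    auto now = std::chrono::steady_clock::now();
                    double ms = std::chrono::duration<double, std::milli>(now - last).count();
                    last = now;
                    // Intervals well above ~33 ms suggest late or dropped skeleton frames.
                    printf("frame interval: %.1f ms\n", ms);
                }
            }

            NuiShutdown();
            return 0;
        }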

    Thanks

    Ben

    Wednesday, September 21, 2011 11:01 AM

All replies

  • How close are you standing to the camera? The algorithm is designed for full-body tracking, so if your legs, head or arms are not fully visible in every frame, the algorithm has to do more work to recover. It tries to find where the feet, ankles and knees are for every skeleton, and if it doesn't find a good match in a frame it has to infer a good position for these joints.
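
    One way to check whether that is what you're hitting (again, just an illustrative sketch against the native v1 C++ API, reusing the skeleton-frame loop above) is to count how many joints come back as inferred rather than tracked while you stand at different distances:

        // Sketch only: counts inferred joints for each tracked skeleton in one frame.
        // Assumes the native Kinect for Windows SDK v1 API and that skeleton tracking
        // has already been initialised.
        #include <NuiApi.h>
        #include <cstdio>

        void ReportInferredJoints(const NUI_SKELETON_FRAME &frame)
        {
            for (int s = 0; s < NUI_SKELETON_COUNT; ++s)
            {
                const NUI_SKELETON_DATA &skel = frame.SkeletonData[s];
                if (skel.eTrackingState != NUI_SKELETON_TRACKED)
                    continue;

                int inferred = 0;
                for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j)
                {
                    if (skel.eSkeletonPositionTrackingState[j] == NUI_SKELETON_POSITION_INFERRED)
                        ++inferred;
                }
                // Many inferred joints (feet, ankles, knees) means the tracker is
                // guessing positions every frame, which is where the extra work goes.
                printf("skeleton %d: %d inferred joints\n", s, inferred);
            }
        }

    If the inferred count is consistently high when you stand close, that points to joints being cut off by the field of view rather than the depth itself.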

    Eddy


    I'm here to help
    • Proposed as answer by ykbharat Sunday, May 6, 2012 11:26 AM
    Wednesday, September 21, 2011 8:06 PM