Position estimate before skeletal tracking is active

  • Question

  • I am facing a situation in which I need a reasonable estimate for the user's position in as little time as possible. It appears to me that currently I can only get or estimate the position once skeletal tracking for the user is active. In practice it often takes too long for the tracker to pick up users.

    A little more background about my setup: I am using three KinectV2s attached to a wall at a height of ~3m and angled down at ~45 degrees (looking down into the space). I combine the observations of the three Kinects by fusing the user trails seen by each camera. This works as long as the skeletal tracker is quick enough to provide estimates of the user's position. I realise that I am likely making things harder for the tracker by placing the sensors at such a height and angle.
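
    The fusion step above can be sketched as follows. This is a minimal illustration, not my actual code: it assumes each camera has a known 4x4 extrinsic (camera-to-world) matrix from calibration, and that per-camera observations of the same user are simply averaged in the world frame.

    ```python
    import numpy as np

    # Hypothetical sketch: fuse per-camera user positions into one
    # world-frame estimate. Each camera's extrinsic is a 4x4 matrix
    # mapping camera coordinates to world coordinates (from calibration).

    def to_world(extrinsic, p_cam):
        """Transform a 3D point from camera coordinates to world coordinates."""
        p = np.append(p_cam, 1.0)       # homogeneous coordinates
        return (extrinsic @ p)[:3]

    def fuse_observations(observations):
        """Average world-frame observations of one user from several cameras.

        observations: list of (extrinsic, point_in_camera_frame) pairs.
        """
        world_points = [to_world(E, p) for E, p in observations]
        return np.mean(world_points, axis=0)
    ```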

    In an older setup I used the KinectV1 sensor and the OpenNI/NITE libraries, with which I could mostly solve this problem. I made use of something similar to the data in BodyIndexFrames, allowing me to iterate over the pixels that belong to the user and estimate their position as soon as NITE was able to produce a segmentation from the depth data. This is in contrast with how (as I understand it) the KinectV2 SDK handles the BodyIndexFrame; it only writes user IDs into this buffer once skeletal tracking is active for that user.
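
    To make concrete what I mean by iterating over user pixels, here is a small sketch (not the Kinect SDK API) that computes a centroid from a BodyIndexFrame-style buffer, where each pixel holds a user index and 255 means "no user", as in the V2 SDK's convention:

    ```python
    import numpy as np

    # Illustrative only: estimate a user's rough position from a
    # BodyIndexFrame-style buffer plus the matching depth frame.
    # body_index: 2D array of user indices (255 = no user).
    # depth: 2D array of depth values in millimetres.

    def user_centroid(body_index, depth, user_id):
        """Return (mean_row, mean_col, mean_depth) for user_id's pixels,
        or None if the user has no pixels in this frame."""
        mask = body_index == user_id
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean(), depth[mask].mean()
    ```

    The (row, col, depth) centroid can then be back-projected to a 3D camera-space point with the sensor intrinsics.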

    Am I missing something here? Does the SDK provide anything that could solve this problem for me? If not, my first thought would be to perform a depth segmentation myself and match my position estimates against those found by the SDK, in order to keep proper track of user IDs.
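
    The matching step I have in mind could look roughly like this; a greedy nearest-neighbour association under an assumed distance threshold, purely as a sketch of the idea:

    ```python
    import numpy as np

    # Hypothetical sketch: associate positions from a custom depth
    # segmentation with positions reported by the SDK, so SDK user IDs
    # can be carried over to my own estimates.

    def match_estimates(my_positions, sdk_positions, max_dist=0.5):
        """Greedy nearest-neighbour matching.

        my_positions:  dict {my_id: np.array([x, y, z])}
        sdk_positions: dict {sdk_id: np.array([x, y, z])}
        Returns {my_id: sdk_id} for pairs closer than max_dist (metres,
        an assumed threshold).
        """
        pairs = sorted(
            (np.linalg.norm(p - q), my_id, sdk_id)
            for my_id, p in my_positions.items()
            for sdk_id, q in sdk_positions.items()
        )
        matches, used_my, used_sdk = {}, set(), set()
        for dist, my_id, sdk_id in pairs:
            if dist <= max_dist and my_id not in used_my and sdk_id not in used_sdk:
                matches[my_id] = sdk_id
                used_my.add(my_id)
                used_sdk.add(sdk_id)
        return matches
    ```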

    Friday, September 11, 2015 9:45 AM