Skeleton detecting areas

  • Question

  • Hi everyone.

    I am trying to make a console application that detects a person and their movement.

    But the Kinect does not detect the skeleton closer than 50 cm.

    Why?

    So I am looking for another way for the Kinect to detect people at that range.

    Please tell me if there is another way.

    Also, I was told the Kinect uses structured light.

    If you know the details, please tell me.

    Thanks.

    Tuesday, July 19, 2011 12:37 AM

Answers

  • The Kinect finds distance by combining the direction of a laser dot as seen from the laser projecting it, the direction of the same dot as seen from the IR sensor, and the known distance between the laser and the sensor. The two directions are two angles of a triangle, and the laser-to-sensor baseline is the side of that triangle common to those two angles. The law of sines then gives the lengths of the other two sides: one is the distance of the dot from the laser, the other is its distance from the sensor. The distance from the sensor is your depth.
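
    As a rough sketch of that law-of-sines step (the baseline and angles below are made-up numbers for illustration, not actual Kinect parameters):

        import math

        def depth_from_triangulation(baseline_m, angle_at_laser, angle_at_sensor):
            # The three angles of the triangle (laser, sensor, dot) sum to pi.
            angle_at_dot = math.pi - angle_at_laser - angle_at_sensor
            # Law of sines: each side divided by the sine of its opposite angle
            # is equal. The baseline is opposite the angle at the dot, and the
            # sensor-to-dot distance is opposite the angle at the laser.
            return baseline_m * math.sin(angle_at_laser) / math.sin(angle_at_dot)

        # Made-up numbers: 7.5 cm baseline, dot seen at 85 degrees from the laser
        # and 88 degrees from the sensor (both measured from the baseline).
        print(depth_from_triangulation(0.075, math.radians(85.0), math.radians(88.0)))  # ~0.61 m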

    It works basically the same as a laser range finder. One big difference is that a laser range finder has only one dot, while the Kinect has about 300k of them, so it needs to be able to tell one dot from another. That's where structured light comes in. The dots are arranged in a pattern of bright and dim dots such that the sensor can determine which dot is which. Once a dot is identified, it is processed the same way as in a laser range finder.
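
    A very loose sketch of the "which dot is which" idea (the toy pattern and the brute-force matching below are purely illustrative, not how the Kinect actually encodes or decodes its pattern):

        import numpy as np

        def locate_window(reference_pattern, observed_window):
            # Slide the observed bright/dim neighborhood over the known projected
            # pattern and return the (row, col) offset with the fewest mismatches.
            rh, rw = reference_pattern.shape
            wh, ww = observed_window.shape
            best_pos, best_err = None, None
            for r in range(rh - wh + 1):
                for c in range(rw - ww + 1):
                    err = np.count_nonzero(reference_pattern[r:r + wh, c:c + ww] != observed_window)
                    if best_err is None or err < best_err:
                        best_pos, best_err = (r, c), err
            return best_pos

        # Toy pattern of bright (1) and dim (0) dots whose local neighborhoods
        # are distinct, so a small window pins down which dots were observed,
        # and therefore which direction the laser projected each one in.
        pattern = np.array([[1, 0, 0, 1, 1, 0],
                            [0, 1, 0, 0, 1, 1],
                            [1, 1, 1, 0, 0, 1],
                            [0, 0, 1, 1, 0, 0]])
        print(locate_window(pattern, pattern[1:4, 1:4]))  # (1, 1)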

    The minimum depth measurement through the SDK is 80 cm, so it's not going to track a skeleton under that, nor is there any way to do so, leaving aside the SkeletalViewer executable distributed with the SDK, which has a lower minimum distance than is available through the SDK. Using that demo, if you hold your hand up to the right of the center of the image, in front of a background that is in range, you'll see a shadow in the depth image. That's the main reason the minimum depth is limited: put an open hand too close to the camera and, on the right, a big section is seen by the IR sensor but not illuminated by the IR laser, while on the left a big part is illuminated by the IR laser but not seen by the IR sensor. There's little practical value in that for the intended use of the SDK, which is to support interaction with the computer, not to be a poor man's 3D scanner.
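
    As a minimal sketch of how you might respect that range limit in your own processing (the 800 mm and 4000 mm cutoffs and the depth values below are assumptions for illustration, not SDK constants):

        import numpy as np

        # Hypothetical depth frame in millimeters (values here are made up).
        depth_mm = np.array([[ 400,  850, 1200],
                             [ 790,  900, 4500]])

        NEAR_LIMIT_MM = 800   # assumed near limit, roughly the 80 cm mentioned above
        FAR_LIMIT_MM = 4000   # assumed far limit

        # Zero out readings outside the reliable range before using them.
        valid = (depth_mm >= NEAR_LIMIT_MM) & (depth_mm <= FAR_LIMIT_MM)
        cleaned = np.where(valid, depth_mm, 0)
        print(cleaned)  # [[   0  850 1200] [   0  900    0]]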

    Tuesday, July 19, 2011 5:19 AM

All replies

  • In addition to what LilBudyWizer said, I'd add that skeleton tracking needs to see most of a person's body in order to recognize it as a human body and decide where the skeleton joints are. Any closer than 80 cm and there won't be enough of the body in the frame to recognize it.
    Tuesday, July 19, 2011 5:28 PM
  • LilBudyWizer

    Thank you for the detailed explanation.

    I understand.

    Tuesday, July 19, 2011 11:53 PM