List of Methods

  • Question

  • Hi, guys. I just learned the basics from the Kinect QuickStart guides/videos, and I noticed that basically everything I need is already there; I just need to find out which methods I have to use.

    I was wondering if there's a list somewhere of all the implemented methods, or something similar, where I could just search/scroll through them all looking for the ones I need. Any ideas?

    Thanks o/
    Sunday, April 29, 2012 6:55 PM

Answers

  • There is no open/closed hand detection in the SDK. You'll need to do some form of image processing on the depth image to achieve that.

    One possible approach is to first determine the location of the hand from a tracked skeleton (SkeletonPoint sp = skeleton.Joints[JointType.HandLeft].Position), map it to the corresponding pixel coordinates in the depth frame (DepthImagePoint dp = depthFrame.MapFromSkeletonPoint(sp)), then look at the other depth pixels in a region surrounding that point, to determine how many pixels in the region are also part of the hand. If this number is relatively high, consider the hand to be open; if it is low, consider the hand to be closed.

    To determine which pixels are "also part of the hand," look for pixels that are at approximately the same depth as that point. (Another approach might simply look for pixels with the same player ID, but I don't recommend that, because if your hand is directly between the camera and the rest of your body, nearly all pixels in the region will have the same player ID!)

    The area of the region you scan should be constant in world space, which means it should cover more pixels when the hand is closer to the camera.
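    As a rough illustration of that scaling, here is a small Python sketch of the pinhole-camera relation between a fixed world-space width and its width in depth-image pixels. The focal length used (~571 px for the 640x480 Kinect depth stream) is an assumption for illustration, not something stated in this thread:

    ```python
    # Sketch: pixel width of a fixed world-space region at a given depth,
    # using the pinhole relation  pixels = focal_px * size_m / depth_m.
    # The focal length below (~571 px for the 640x480 Kinect depth stream)
    # is an assumed approximation, not a value from this thread.

    DEPTH_FOCAL_LENGTH_PX = 571.26  # approximate, 640x480 depth stream

    def region_width_px(region_width_m, depth_m, focal_px=DEPTH_FOCAL_LENGTH_PX):
        """Pixel width covered by a region_width_m-wide region seen at depth_m."""
        return focal_px * region_width_m / depth_m

    # A 15 cm region covers twice as many pixels at 1 m as at 2 m:
    w_near = region_width_px(0.15, 1.0)  # ~86 px
    w_far = region_width_px(0.15, 2.0)   # ~43 px
    ```

    So a scan region that is fixed in world space must grow and shrink in pixel terms as the tracked hand moves toward and away from the camera.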

    Parameters you'll need to consider are: shape and size of the region to scan (15 cm x 15 cm rectangle?), the tolerance in depth difference for pixels within the region (+/- 5 cm?), and the threshold of pixels within the region that indicate a transition from closed to open (30% of the pixels?). The values I've given are just guesses; I haven't coded this, so I don't know for sure. You'd have to try implementing it and tweaking these values to see what works best.
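    Putting the steps above together, the heuristic might be sketched as the following Python function. It operates on a plain 2-D array of depth values in millimetres rather than on the Kinect SDK types; the function name and the parameter values (the region half-width, the ±5 cm tolerance, the 30% threshold) are hypothetical, taken from the guesses above:

    ```python
    # Hypothetical sketch of the open/closed-hand heuristic described above:
    # count depth pixels in a region around the hand joint that lie within a
    # depth tolerance of the hand, and compare the fraction to a threshold.
    # Parameter defaults are the guessed values from the answer (not tested
    # values from the Kinect SDK).

    def hand_is_open(depth_mm, hand_x, hand_y, region_px, tol_mm=50, open_fraction=0.30):
        """depth_mm: 2-D list of depth values in millimetres.
        (hand_x, hand_y): pixel coordinates of the hand joint in the depth frame.
        region_px: half-width of the scan region in pixels (should be larger
        when the hand is closer to the camera, so the region stays constant
        in world space)."""
        height, width = len(depth_mm), len(depth_mm[0])
        hand_depth = depth_mm[hand_y][hand_x]
        in_region = 0
        near_hand = 0
        for y in range(max(0, hand_y - region_px), min(height, hand_y + region_px + 1)):
            for x in range(max(0, hand_x - region_px), min(width, hand_x + region_px + 1)):
                in_region += 1
                if abs(depth_mm[y][x] - hand_depth) <= tol_mm:
                    near_hand += 1
        return near_hand / in_region >= open_fraction
    ```

    On a synthetic frame (a flat background at 2 m with a hand blob at 1 m), a small 3x3 blob inside an 11x11 region falls well below the 30% threshold (closed), while a 9x9 blob exceeds it (open).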

    John
    K4W Dev

    Wednesday, May 2, 2012 10:02 PM

All replies

  • Hi,

    After setting up your project and completing the first steps (the how-to guides for C++ or C# should help with that), you can extend your code using the API reference for C++ or C#.

    Sunday, April 29, 2012 9:30 PM
  • I'm pretty sure you didn't understand my question, so here's an example: which method should I use to detect whether my hand is open or closed, so I can trigger different events based on that?
    Wednesday, May 2, 2012 7:35 PM
  • And I'm going to check your links. I saw some useful things there.
    Wednesday, May 2, 2012 7:37 PM
  • John, I would like to ask about one thing from your answer:

    "To determine which pixels are 'also part of the hand,' look for pixels that are at approximately the same depth as that point."

    By the depth to compare, do you mean the Z distance?

    Friday, May 4, 2012 12:00 PM