Positionally tracking a single point (fingertip)

  • Question

  • I have a feeling what I want to do is so simple, it's hard to find information about it.  My end goal is to track my index finger as I move it around in front of the Kinect.  It will probably need to be close -- 18-24 inches away.  That being said, it's not so important that it's just my finger -- I could develop this application using a glove that had a unique color on the fingertip.  I'm not sure what the best way to go about it is.  

    All I want to do is be able to tell how far away my fingertip is from the sensor, which direction I'm moving it in, and be able to record data about how my finger travelled, how long it stayed in a given area, which direction it moved next, etc., and display a simple 3D arrow on a screen that points to my finger.  I'm not sure if I should use the color-to-depth map or if there is some other way to do it.  

    Again, I don't care about orientation, joint movement, etc.  Plain and simple... how far away is a point from the camera, and how does it move?  If anybody could recommend a good model or starting point for the best technique to achieve this, I would be grateful! 

    Saturday, March 15, 2014 9:25 PM
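
    Once per-frame 3D positions are available (by whatever detection method), the recording side of the question -- direction of travel, dwell time in an area -- is straightforward. A minimal sketch, using plain NumPy on a hypothetical list of (x, y, z) fingertip positions in millimetres (the `motion_report` helper and its `dwell_radius` threshold are illustrative, not part of any SDK):

    ```python
    import numpy as np

    def motion_report(points, dwell_radius=10.0):
        """Given successive 3D fingertip positions (x, y, z in mm), return
        unit direction vectors for each real movement step and count how
        many steps stayed within dwell_radius of the previous position."""
        pts = np.asarray(points, dtype=float)
        deltas = np.diff(pts, axis=0)              # movement between frames
        dists = np.linalg.norm(deltas, axis=1)     # step lengths in mm
        dwell_steps = int(np.sum(dists < dwell_radius))
        # unit direction vectors for the steps that actually moved
        dirs = [tuple(d / n) for d, n in zip(deltas, dists) if n >= dwell_radius]
        return dirs, dwell_steps

    # Hypothetical track: hover in place once, then move right, then up.
    track = [(0, 0, 500), (0, 0, 501), (30, 0, 500), (30, 40, 500)]
    dirs, dwell = motion_report(track)
    print(dwell)   # 1 near-stationary step
    print(dirs)    # unit vectors for the two moving steps
    ```

    The same per-step vectors could drive the on-screen 3D arrow the question describes.
    
    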


All replies

  • The Kinect sensor and SDK do not provide any type of finger tracking functionality. There are third-party SDKs that can consume Kinect data and parse it for you, or you can develop your own computer vision techniques to do that yourself.

    Carmine Sirignano - MSFT

    Monday, March 17, 2014 8:17 PM
  • Thanks for replying, Carmine.  But my question is really much more generic.  I don't really want to track how my finger bends, or even multiple fingers.  I'm really after tracking a single point -- it could even be a color on the tip of a glove, or a color on the tip of a stick.  My end goal would have that point be the tip of my finger, but again, any type of identifiable point is really what I want to be able to follow, determine its distance, and tell whether it is moving left or right, up or down.
    Monday, March 17, 2014 10:58 PM
  • You still need to use the same technique to know where the finger is.

    If you just want rough estimates of the location, you can use skeleton tracking to get the hand point. That gives you the rough location of the hand, and the Skeleton Basics samples do that for you. The issue still is: where are the fingers when looking at that position? This will require that you analyze the depth/IR data to determine the outline of the hand, and more analysis to figure out the finger locations. That is where those libraries help, by doing that work for you.

    Carmine Sirignano - MSFT

    Tuesday, March 18, 2014 5:21 PM
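
    The refinement step described above -- start from the rough hand point and analyze the nearby depth data -- can be sketched in a few lines. This is illustrative Python over a synthetic NumPy depth frame, not actual Kinect SDK calls; the window size and the "fingertip is the nearest pixel" heuristic (which assumes the hand is extended toward the camera) are assumptions:

    ```python
    import numpy as np

    def nearest_point_in_window(depth_mm, cx, cy, half=40):
        """Find the closest valid depth pixel in a window around (cx, cy).

        depth_mm : 2D array of depth readings in millimetres (0 = invalid).
        Returns (x, y, depth) of the nearest pixel -- a common heuristic
        for the fingertip when the hand points at the camera.
        """
        h, w = depth_mm.shape
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        window = depth_mm[y0:y1, x0:x1].astype(float)
        window[window == 0] = np.inf      # ignore invalid readings
        iy, ix = np.unravel_index(np.argmin(window), window.shape)
        return x0 + ix, y0 + iy, int(depth_mm[y0 + iy, x0 + ix])

    # Synthetic 480x640 depth frame: background at 2000 mm, a small
    # "fingertip" blob at 500 mm near the tracked hand point (320, 240).
    depth = np.full((480, 640), 2000, dtype=np.uint16)
    depth[238:243, 318:323] = 500
    x, y, d = nearest_point_in_window(depth, 320, 240)
    print(x, y, d)   # a pixel inside the blob, at 500 mm
    ```

    With real sensor data the hand point would come from skeleton tracking, mapped into depth-image coordinates, and the blob would be the actual fingertip.
    
    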
  • So let's forget any mention of finger(s) for a second...  is there any simple solution at all to track any specific point, however it may be defined... an LED, a specific color, etc.?  Can I tell it to look for a very particular color and then extract the depth information from that?
    Tuesday, March 18, 2014 7:18 PM
  • The only Kinect built-in tracking capability exposed in the SDK is skeletal tracking. Have a look at the Skeleton Basics samples.

    Carmine Sirignano - MSFT

    Thursday, March 20, 2014 6:26 PM
  • I appreciate you outlining exactly what the SDK does and does not provide.  I guess I was looking more for what type of technique I could use to achieve my goal.  I suppose using just the color stream, I could attempt to identify an x,y position that meets a certain criterion or set of criteria, then extract the depth value at the corresponding position in the depth stream.
    • Edited by Turick Saturday, March 22, 2014 3:17 PM
    Saturday, March 22, 2014 3:16 PM
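
    The color-marker technique outlined in the last reply can be sketched as follows. This is a minimal illustration with synthetic NumPy frames; it assumes the color and depth images are already registered pixel-for-pixel (with real Kinect data, the color coordinate must first be mapped into depth space using the SDK's coordinate mapping), and the `track_marker` helper and its tolerance value are made up for the example:

    ```python
    import numpy as np

    def track_marker(color, depth_mm, target, tol=30):
        """Locate pixels close to a target RGB colour and return the
        marker's centroid plus the depth reading there, or None if the
        marker is not visible in this frame."""
        # Sum of absolute channel differences against the target colour
        diff = np.abs(color.astype(int) - np.array(target)).sum(axis=2)
        ys, xs = np.nonzero(diff < tol)
        if len(xs) == 0:
            return None
        cx, cy = int(xs.mean()), int(ys.mean())
        return cx, cy, int(depth_mm[cy, cx])

    # Synthetic frames: grey background, a bright green 6x6 marker at
    # ~(100, 60), 600 mm deep at the marker, 1500 mm elsewhere.
    color = np.full((120, 160, 3), 128, dtype=np.uint8)
    color[57:63, 97:103] = (0, 255, 0)
    depth = np.full((120, 160), 1500, dtype=np.uint16)
    depth[57:63, 97:103] = 600
    print(track_marker(color, depth, target=(0, 255, 0)))  # → (99, 59, 600)
    ```

    Running this per frame yields a stream of (x, y, depth) samples; differencing successive samples gives the direction and dwell information the original question asked for.
    
    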