Using different mapping for handPointer on screen


  • So I've tried to implement a simple interaction with Kinect that allows a box to be triggered when a hand hovers over it. I decided to use Microsoft's new toolkit, which works pretty neatly, but I can't figure out how to change one thing:

    At the moment, the movements of the hand cursor correspond to a relatively small area of movement in physical space. This control scheme is useful for UIs where users cannot see themselves. What I am trying to achieve, though, is to display the userViewer and show the hand cursor at the position where the user's hand appears to be on the screen. I have figured out the calculations to do this without the interaction features, but I was wondering if there is a way to do it and still use the toolkit. I've tried changing the code in Kinect.Toolkit.Controls, but with no luck.

    Any suggestions?

    Saturday, December 14, 2013 5:31 AM

All replies

  • Interactions does this for you. The KinectRegion will display the Kinect cursor (hand) relative to the area covered by the control. Therefore, if the region covers the full screen, the hand will map to that screen space. The cursor is always relative to the region. You may want to review the "Human Interface Guidelines" doc, which covers the PHIZ, the areas of the right/left hands, and how they map.
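    To illustrate the point above, here is a minimal sketch of making the KinectRegion fill the whole window so the hand cursor maps to the full window area. This is a hypothetical setup, assuming the Microsoft.Kinect and Microsoft.Kinect.Toolkit.Controls assemblies from the Kinect for Windows SDK/Toolkit 1.8 are referenced and a sensor is attached; it is not the only way to wire this up.

    ```csharp
    using System.Windows;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Controls;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            // A KinectRegion stretched over the whole window: the hand cursor
            // is then plotted relative to the full window area, not a subregion.
            var region = new KinectRegion
            {
                HorizontalAlignment = HorizontalAlignment.Stretch,
                VerticalAlignment = VerticalAlignment.Stretch
            };

            // Hypothetical hover target; any Kinect-aware control works here.
            region.Content = new KinectTileButton { Content = "Box" };

            this.Content = region;

            // Assumes at least one sensor is connected; production code should
            // use KinectSensorChooser and handle status changes instead.
            region.KinectSensor = KinectSensor.KinectSensors[0];
        }
    }
    ```

    The same layout can of course be declared in XAML; the key design point is simply that the cursor's coordinate space is the region's bounds, so sizing the region controls the mapping.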

    You can always use the Interactions library separately from the controls to write your own interaction layer. To do the different mapping, implement your own IInteractionClient, which provides the interaction hints to the library. The library then parses the skeleton and hint data and hands the hand states back to you in the InteractionFrameReady event. You can then check that data against your interaction client's data to verify that the point corresponds to a point in the UI you care about.
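    As a rough sketch of the approach described above, here is a minimal IInteractionClient, assuming the Microsoft.Kinect.Toolkit.Interaction assembly from Toolkit 1.8. The coordinates the library passes in are normalized interaction-zone coordinates; the hit test below is a placeholder you would replace with your own lookup against your UI layout.

    ```csharp
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Interaction;

    public class MyInteractionClient : IInteractionClient
    {
        public InteractionInfo GetInteractionInfoAtLocation(
            int skeletonTrackingId, InteractionHandType handType, double x, double y)
        {
            var info = new InteractionInfo();

            // Hypothetical hit test: treat the centre quarter of the zone as a
            // pressable/grippable target. Real code would map (x, y) into your
            // own screen mapping and query the controls under that point.
            bool overTarget = x > 0.25 && x < 0.75 && y > 0.25 && y < 0.75;
            info.IsPressTarget = overTarget;
            info.IsGripTarget = overTarget;

            return info;
        }
    }

    // Wiring sketch (in your sensor setup code), assuming depth and skeleton
    // streams are enabled on the sensor:
    //
    //   var stream = new InteractionStream(sensor, new MyInteractionClient());
    //   stream.InteractionFrameReady += OnInteractionFrameReady;
    //
    // Then feed stream.ProcessDepth(...) and stream.ProcessSkeleton(...) from
    // the sensor's DepthFrameReady and SkeletonFrameReady handlers, and read
    // the per-hand states (position, grip, press) out of each interaction
    // frame's UserInfo array in OnInteractionFrameReady.
    ```

    The design point here is that the library only tracks hands and gestures; where those hands land in your UI, and what counts as a target, is entirely up to your IInteractionClient and your frame handler, which is what allows a custom screen mapping.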


    Carmine Sirignano - MSFT

    Tuesday, December 17, 2013 7:37 PM