Kinect for Windows (Depth and Skeletal Data)

  • Question

  • I am working on my thesis project, which is about mimicking the human wrist. I use the Kinect for Windows sensor as the controller and as the input to my microcontroller unit. I have noticed that the depth values I get are too small to capture the range of wrist movement along the x and y axes. This results in inaccurate movement of the robotic wrist, because the range of values I am getting is smaller than the normal range of motion of the human wrist.

    I just want to ask whether there is anything I can do to make the measured range of my wrist equal to or larger than what the human wrist can do, so that the robotic wrist's movement becomes more accurate and copies the human wrist more closely. I really need your help with this. Thank you!

     
    Monday, October 14, 2013 3:55 PM

Answers

All replies

  • Can you provide more detail on what you mean by "the depth value that I had was too small"? Are you looking at the raw skeleton frame/joint values from skeletal tracking (ST), or are you trying to map these to the depth coordinate system?

    One of the issues with depth is that the wrist can be occluded by the hand, depending on the position of the sensor and its view of the forearm/wrist/hand. This results in ST joints becoming "inferred". Is the sensor mounted horizontally, facing the user? If so, looking at the depth, is the wrist visible in the depth stream?


    Carmine Sirignano - MSFT

    Monday, October 14, 2013 10:33 PM
  • I mean, for example, that I use the hand and wrist joints, and in the depth coordinate system there is a corresponding value that depends on the distance of the hand from the sensor. Say that at 1 meter from the sensor the X value is 351 with the wrist in its initial (straight) position; moving my hand to the left gives an X value of 329, and moving it to the right gives 368. Summing the left-to-initial and right-to-initial spans, (351-329) + (368-351) = 39, gives the measured range of the hand. That is obviously less than the actual range the hand can cover: the average/normal range along the x-axis is about 30° to the left plus 25° to the right, i.e. 55°.

    In this case I cannot use the whole angle range of the servo motor, because some angles get skipped along the way. The result is that the robotic wrist does not move accurately; it is very visible on the robotic wrist that some angles are being skipped.

    Is there anything I can do about the depth coordinate values when tracking just my hand and wrist? All I want is a range of values equal to or larger than what the average human hand can do, so that I can mimic the human wrist more accurately. Thanks.
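    [One way to read the problem above: the measured depth-space span (39 units) just needs to be rescaled onto the wrist's real angular range before driving the servo. A minimal sketch of that linear remapping, using the numbers from this post; the function name `rescale` is hypothetical, not part of the Kinect SDK:]

    ```python
    def rescale(value, in_min, in_max, out_min, out_max):
        """Linearly map a value from one span onto another, e.g. a
        measured depth-space X span onto a servo angle span."""
        return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

    # Depth-space X ran 329..368 (39 units) in the example, while the
    # wrist itself sweeps roughly -30..+25 degrees along the x-axis.
    left    = rescale(329, 329, 368, -30.0, 25.0)  # -> -30.0 (full left)
    right   = rescale(368, 329, 368, -30.0, 25.0)  # ->  25.0 (full right)
    neutral = rescale(351, 329, 368, -30.0, 25.0)  # near the straight position
    ```

    [With this mapping the servo always traverses its full commanded range, regardless of how narrow the raw depth-space span is; the trade-off is that depth noise is amplified by the same scale factor, so some smoothing of the joint positions may be needed.]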

    Tuesday, October 15, 2013 5:08 AM
  • You can enable near mode (Kinect for Windows sensor only) to track from as close as 50 cm, and turn on seated tracking to focus skeletal tracking on the upper body. That is all you can change on the sensor side; otherwise it is a matter of how you treat the data from that point.

    Mapping a skeleton point to depth converts a real-world position into an x/y coordinate in a 2D plane. Is that something you really want to do? Keeping the values as real-world distances will give you better results. The key factor is to take into account the angle of the sensor if it is not level.
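    [Following this advice to stay in real-world coordinates, the wrist's bend can be computed directly as the angle at the wrist joint between the forearm and hand segments, which is independent of distance from the sensor. A minimal geometry sketch, assuming three 3D joint positions are already available from skeletal tracking; `angle_deg` is a hypothetical helper, not an SDK call:]

    ```python
    import math

    def angle_deg(a, b, c):
        """Angle in degrees at joint b, formed by 3D points a-b-c
        (e.g. elbow, wrist, hand) in real-world coordinates."""
        v1 = tuple(p - q for p, q in zip(a, b))
        v2 = tuple(p - q for p, q in zip(c, b))
        dot = sum(p * q for p, q in zip(v1, v2))
        n1 = math.sqrt(sum(p * p for p in v1))
        n2 = math.sqrt(sum(p * p for p in v2))
        # Clamp to guard against floating-point drift outside [-1, 1].
        cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
        return math.degrees(math.acos(cos_t))

    # A straight elbow-wrist-hand line gives ~180 degrees;
    # wrist flexion/deviation reduces the angle.
    ```

    [An angle computed this way can be fed to the servo directly, avoiding the shrinking-pixel-range problem entirely.]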

    SkeletonPoint Members
    http://msdn.microsoft.com/en-us/library/microsoft.kinect.skeletonpoint_members.aspx
    - keep in mind 0,0 is the center of the sensor.

    CoordinateMapper.MapSkeletonPointToDepthPoint Method
    http://msdn.microsoft.com/en-us/library/jj883696.aspx

    DepthImagePoint Structure
    http://msdn.microsoft.com/en-us/library/microsoft.kinect.depthimagepoint_members.aspx


    Carmine Sirignano - MSFT

    Tuesday, October 15, 2013 6:21 PM
  • This is what I am trying to do. 

    First, I position my hand 1 meter away from the Kinect sensor.

    Second, I use skeletal tracking to detect my wrist and hand joints.

    Then I wrote a program that gets the positions of my hand and wrist joints, and I use the values the Kinect returns as the range of motion of my hand. THE QUESTION IS: what is that value and what is its unit? Is it the actual range of my hand joints' movement, or just a representation in the Kinect's coordinate system? If it is not the actual range of motion of my hand, how can I get the exact movement/range of my hand joints? Is it possible to get the range of my hand in real space?

    And lastly, where is the origin of the Kinect's coordinates, both for skeletal tracking and for depth mapping?


    Tuesday, October 15, 2013 7:21 PM
  • Skeleton data is derived from depth and is expressed in real-world units (SkeletonPoint coordinates are in meters; the underlying depth values are in millimeters). Given depth noise and IR interference it may not be an exact value, but it will be close. Although skeleton is derived from depth, it has its own coordinate system. Depth is flattened into a 2D plane, so you have to take that into account.
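    [The "flattening" mentioned above is what makes the questioner's measured range shrink with distance: projecting a 3D point onto the 2D depth image divides the lateral offset by the distance. A rough pinhole-camera sketch of that projection onto the 320x240 depth image; the SDK's CoordinateMapper does this precisely, and the focal length below is only an approximation derived from the depth camera's roughly 57-degree horizontal field of view:]

    ```python
    import math

    # Approximate projection of a real-world skeleton point (meters,
    # origin at the sensor) onto the 320x240 depth image.
    DEPTH_W, DEPTH_H = 320, 240
    FOCAL_PX = (DEPTH_W / 2) / math.tan(math.radians(57 / 2))  # ~294 px

    def skeleton_to_depth(x, y, z):
        """Project (x, y, z) in meters to (col, row) depth-image pixels.
        Lateral offsets are divided by z, so the same hand sweep spans
        half as many pixels at 2 m as it does at 1 m."""
        col = DEPTH_W / 2 + (x / z) * FOCAL_PX
        row = DEPTH_H / 2 - (y / z) * FOCAL_PX
        return col, row
    ```

    [This is why working in skeleton space (meters) or in joint angles, rather than in depth pixels, keeps the measured range independent of how far the user stands from the sensor.]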


    Carmine Sirignano - MSFT

    Wednesday, October 16, 2013 10:25 PM