Detect Nao robot in Kinect RRS feed

  • Question

  • I am not sure if this has been tried before, but I am trying to use the Kinect to detect gestures made by the Nao robot.

    I have built a Kinect application, a gesture-based picture viewer, and it detects humans fine (obviously it does!). What I wanted to try, lazy as I am, was to see if I could use some command (say, a voice command) to tell the Nao to do a swipe-right gesture and have my application identify that gesture. The Nao can easily recognize my command and perform the gesture. The problem, however, is that when I put the Nao in front of the Kinect sensor, the Kinect does not track it.

    What I want to know is: is there something fundamental about how the Kinect tracks human body motion that makes it fail when a robot is placed in front of the sensor instead of a human?

    PS: I have kept the Nao at the right distance from the sensor, and I have also checked that the entire robot is within the sensor's field of view.

    Thursday, October 25, 2012 12:56 PM

Answers

  • I am quoting this answer from a similar question I posted on the Robotics Stack Exchange site (still in private beta):

    "You should read the paper published by Microsoft research on the actual algorithm behind the human motion tracking.

    Real-Time Human Pose Recognition in Parts from a Single Depth Image, Shotton et al.: http://research.microsoft.com/apps/pubs/default.aspx?id=145347

    It relies on a large set of labeled training data from human bodies. That is why the Nao cannot be tracked with the same method out of the box. To achieve that, you would need to re-train the algorithm with labeled data of the Nao in different poses."

    This, I believe, is the actual reason why a humanoid robot as small as the Nao cannot be detected by the Kinect.
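
    To make the paper's idea concrete, here is a toy sketch (my own illustration, not Microsoft's code) of the depth-comparison feature that the per-pixel body-part classifier's decision forests are built on. The depth map, probe offsets, and background constant are all invented for the example; the real tracker evaluates millions of such features learned from labeled human depth images, which is exactly the training data the Nao lacks.

    ```python
    # Shotton et al. depth-comparison feature (sketch):
    #   f(I, x) = d(x + u / d(x)) - d(x + v / d(x))
    # The probe offsets u, v are scaled by 1/depth, so the feature
    # responds the same way regardless of how far the body is
    # from the sensor (depth invariance).

    BACKGROUND = 1e6  # large constant for pixels off the body / out of bounds

    def depth_at(depth_image, px, py):
        """Depth lookup that returns a large background value outside the image."""
        if 0 <= py < len(depth_image) and 0 <= px < len(depth_image[0]):
            return depth_image[py][px]
        return BACKGROUND

    def depth_feature(depth_image, x, y, u, v):
        """One depth-comparison feature: the difference between two depth
        probes whose pixel offsets shrink as the subject moves away."""
        d = depth_at(depth_image, x, y)
        ux, uy = int(round(u[0] / d)), int(round(u[1] / d))
        vx, vy = int(round(v[0] / d)), int(round(v[1] / d))
        return (depth_at(depth_image, x + ux, y + uy)
                - depth_at(depth_image, x + vx, y + vy))

    # Toy 5x5 depth map: a "body" at 2 m depth surrounded by background.
    body = [[BACKGROUND] * 5 for _ in range(5)]
    for yy in range(1, 4):
        for xx in range(1, 4):
            body[yy][xx] = 2.0

    # A probe pair straddling the silhouette edge gives a huge response;
    # a pair landing entirely on the body gives a response near zero.
    # The trained forests key on exactly these contrasts.
    print(depth_feature(body, 2, 2, (8.0, 0.0), (-2.0, 0.0)))  # edge: large
    print(depth_feature(body, 2, 2, (2.0, 0.0), (0.0, 2.0)))   # on body: 0.0
    ```

    The point of the sketch: the feature itself is generic, but which features matter and what body part they vote for is entirely learned from labeled human data, so a Nao-shaped silhouette falls outside the training distribution.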


    Best, Karan http://karanjthakkar.wordpress.com

    • Marked as answer by karanjthakkar Tuesday, October 30, 2012 9:45 AM
    Tuesday, October 30, 2012 9:45 AM

All replies

  • From the pictures on the Nao site, it looks like the Nao robot is too small to be recognized by skeletal tracking.
    Thursday, October 25, 2012 9:16 PM
  • So what is the minimum height of a human that the Kinect can track? Is there any documentation on that which I can refer to?
    Friday, October 26, 2012 10:56 AM
  • Did you try running the Kinect SDK samples, e.g. Skeletal Viewer, with the NAO? Do they work?

    Friday, October 26, 2012 2:12 PM
  • I believe (not stating this as an official fact) that the skeletal tracking pipeline is tuned for people 40" (about 1 m) and taller. The Nao, at roughly 23" (58 cm), is just about half that, so I don't think you are going to get any reliable data to work with at that size.
    Sunday, October 28, 2012 2:29 AM
  • New in the May 2012 SDK Release

    • Seated mode skeletal tracking
      Provides the ability to track users’ upper body (10-joint) and overlook the lower body if not visible or relevant to application. In addition, enables the identification of user when sitting on a chair, couch, or other inanimate object

    • Improved skeletal tracking
      In near range, users who are seated or standing can be tracked within 40 cm (16 inches) of the sensor. Plus, the skeletal tracking engine is now faster, making better use of the CPU and scaling of computer resources. In addition, newly added joint orientation information for skeletons is ideal for avatar animation scenarios and simple pose detection

    Ref: http://research.microsoft.com/en-us/collaboration/focus/nui/kinect-windows.aspx

    Monday, October 29, 2012 8:16 PM
  • Is that something you have inferred from your work with the Kinect, or just an educated guess?

    Best, Karan http://karanjthakkar.wordpress.com

    Tuesday, October 30, 2012 9:36 AM