Use Gesture Builder with Depth Data

  • Question

  • Hello,

Do I understand correctly that the machine learning process with Visual Gesture Builder requires the body frame to be recognized? That means I can only record gestures that fit the kinematics of the skeleton.

Is it possible to use that same machine learning technology with the depth frame? For example, could I put a box in front of the Kinect and tell it that it is a box?

    Best regards


    Monday, June 22, 2015 12:44 PM


  • Hello Madera,

You understood correctly. The machine learning algorithm in VGB trains on the body joints, so the gesture recognizer can only be trained with a human body. Any frames that do not have a visible body will be ignored by the trainer.

While you could use a depth-based machine learning algorithm instead, we do not provide one with the SDK. Depth is much trickier to work with and would require far more training, even for human gesture recognition. Simple things such as clothing (long sleeves vs. short) can have a huge impact on the depth signal.

If you would like to track arbitrary objects (such as a box), you can look into implementing blob detection on the depth frame.
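As a rough illustration of the blob-detection idea (this is not part of the Kinect SDK, and the depth values and thresholds below are made up for the example): threshold the depth frame into a foreground mask for a given distance band, then group connected foreground pixels into blobs and report their bounding boxes.

```python
# Minimal blob-detection sketch over a depth frame, assuming depth values
# in millimetres. Thresholds and the synthetic frame are illustrative only.
from collections import deque

def detect_blobs(depth, near=500, far=1500, min_pixels=4):
    """Return bounding boxes (top, left, bottom, right) of 4-connected
    foreground regions whose depth lies in [near, far] millimetres."""
    rows, cols = len(depth), len(depth[0])
    mask = [[near <= depth[r][c] <= far for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS flood fill over the 4-neighbourhood
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_pixels:  # ignore depth-noise specks
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Tiny synthetic 6x8 "depth frame": a box-shaped blob at ~800 mm in front
# of a background wall at ~3000 mm.
frame = [[3000] * 8 for _ in range(6)]
for r in range(1, 4):
    for c in range(2, 6):
        frame[r][c] = 800

print(detect_blobs(frame))  # -> [(1, 2, 3, 5)]
```

A real implementation would run this per frame on the Kinect depth buffer (or use a library detector such as OpenCV's blob/contour functions) and track blob positions over time to recognize object motion.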

    I hope this helps,


    • Proposed as answer by angela.h [MSFT] Wednesday, June 24, 2015 1:16 AM
    • Marked as answer by MaderaMadera Tuesday, July 14, 2015 7:02 AM
    Wednesday, June 24, 2015 1:16 AM