Does the Kinect 2 provide functionality to get the skeleton from captured depth and infrared frames?

  • Question

  • Hello,

    I am using the Kinect sensor for some research and evaluation of hand gestures. Is it possible to load depth and infrared frames previously captured from the Kinect back into the Kinect pipeline and obtain the skeleton model, given that the body frame only needs one depth frame? I would like to evaluate the accuracy and reliability of different hand-pointing gesture recognition algorithms under different conditions by creating test scenarios with the same input stream.

    Thank you very much!

    Kind regards




    • Edited by Hao18 Monday, January 16, 2017 3:56 PM
    Monday, January 16, 2017 3:25 PM

All replies

  • No, there is no API support in the Kinect SDK for performing skeleton tracking on pre-recorded depth frames. Couldn't you record both the depth and the body-tracking data, and afterwards use the body-tracking (skeleton) data associated with each depth frame as the baseline for comparing the evaluated algorithms? (A minimal recording sketch follows this reply.)
    • Proposed as answer by Nikolaos Patsiouras Tuesday, January 17, 2017 9:08 AM
    • Marked as answer by Hao18 Wednesday, January 25, 2017 9:00 AM
    Monday, January 16, 2017 7:24 PM
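
    To illustrate Jan's suggestion, here is a minimal C++ sketch, assuming the Kinect for Windows SDK 2.0 native API (Kinect.h, Kinect20.lib): it logs the SDK's own skeleton joints per frame so they can later serve as ground truth against each recorded depth frame. The output file name and CSV layout are made up for the example.

    ```cpp
    // Minimal sketch: log the SDK's skeleton joints per frame as ground truth.
    // Assumes the Kinect for Windows SDK 2.0 native API (Kinect.h, Kinect20.lib).
    #include <Kinect.h>
    #include <cstdio>

    template <class T> void SafeRelease(T*& p) { if (p) { p->Release(); p = nullptr; } }

    int main()
    {
        IKinectSensor* sensor = nullptr;
        IBodyFrameSource* source = nullptr;
        IBodyFrameReader* reader = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;
        sensor->get_BodyFrameSource(&source);
        source->OpenReader(&reader);

        FILE* log = std::fopen("skeleton_groundtruth.csv", "w");  // hypothetical file name
        if (!log) return 1;
        std::fprintf(log, "time_100ns,body,joint,x,y,z,state\n");

        for (;;)  // poll until the process is stopped externally
        {
            IBodyFrame* frame = nullptr;
            if (FAILED(reader->AcquireLatestFrame(&frame))) continue;  // no new frame yet

            TIMESPAN time = 0;
            frame->get_RelativeTime(&time);  // same clock as the matching depth frame

            IBody* bodies[BODY_COUNT] = {};
            frame->GetAndRefreshBodyData(BODY_COUNT, bodies);
            for (int b = 0; b < BODY_COUNT; ++b)
            {
                BOOLEAN tracked = FALSE;
                if (bodies[b] && SUCCEEDED(bodies[b]->get_IsTracked(&tracked)) && tracked)
                {
                    Joint joints[JointType_Count];
                    bodies[b]->GetJoints(JointType_Count, joints);
                    for (int j = 0; j < JointType_Count; ++j)
                        std::fprintf(log, "%lld,%d,%d,%f,%f,%f,%d\n", (long long)time, b, j,
                                     joints[j].Position.X, joints[j].Position.Y,
                                     joints[j].Position.Z, (int)joints[j].TrackingState);
                }
                SafeRelease(bodies[b]);
            }
            SafeRelease(frame);
        }
    }
    ```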
  • Also, contrary to what one might suspect, depth frames are not the basis for skeleton tracking. Depth, skeleton tracking, and some other data are all produced from the raw IR frames. I haven't done it before, but it might have been possible if you had the IR frames instead; I'm not sure, though, whether Kinect Studio just plays back the recording and the service simply feeds that data to the client without reprocessing it. If so, it's exactly as Jan says: you'd need all the related streams (I think there are more than just those two; the multi-source sketch after this reply shows how to capture them together).
    • Marked as answer by Hao18 Wednesday, January 25, 2017 9:00 AM
    Tuesday, January 17, 2017 9:11 AM
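
    If all related streams do need to be captured together, a multi-source reader keeps them synchronized per frame. This is a minimal sketch, again assuming the Kinect SDK 2.0 native API; the actual recording in the loop body is left as a placeholder.

    ```cpp
    // Sketch: read depth, infrared, and body data from one synchronized
    // multi-source frame, so all related streams can be recorded together.
    // Assumes the Kinect for Windows SDK 2.0 native API.
    #include <Kinect.h>

    template <class T> void SafeRelease(T*& p) { if (p) { p->Release(); p = nullptr; } }

    int main()
    {
        IKinectSensor* sensor = nullptr;
        IMultiSourceFrameReader* reader = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;
        sensor->OpenMultiSourceFrameReader(
            FrameSourceTypes_Depth | FrameSourceTypes_Infrared | FrameSourceTypes_Body,
            &reader);

        for (;;)
        {
            IMultiSourceFrame* frame = nullptr;
            if (FAILED(reader->AcquireLatestFrame(&frame))) continue;

            // Depth: raw 16-bit distance values in millimetres.
            IDepthFrameReference* depthRef = nullptr;
            IDepthFrame* depth = nullptr;
            frame->get_DepthFrameReference(&depthRef);
            if (SUCCEEDED(depthRef->AcquireFrame(&depth)))
            {
                UINT size = 0;
                UINT16* buffer = nullptr;
                depth->AccessUnderlyingBuffer(&size, &buffer);
                // ... record `buffer` together with the IR and body data ...
                SafeRelease(depth);
            }
            SafeRelease(depthRef);

            // The infrared and body streams are fetched the same way via
            // get_InfraredFrameReference(...) and get_BodyFrameReference(...).
            SafeRelease(frame);
        }
    }
    ```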
  • Thanks for the quick reply, Jan. I tried it today, and it worked perfectly with the official samples like BodyBasics. My own code, however, doesn't work with the recorded data. If I play back the data with my own program open, the program doesn't receive any input; it's as if the program freezes. Do I have to integrate special input variables, or is there another solution?
    Wednesday, January 25, 2017 12:04 PM
  • Did you click the Connect to Service button before clicking Play in Kinect Studio?

    You can play a recording in Studio just to view the data, without feeding it to the service.

    When you get data from the sensor, you are actually asking a service process running in the background for data; you never get the data straight from the sensor (since, as I said previously, the data needs processing for skeleton tracking and so on).

    Likewise, Kinect Studio doesn't feed data straight to your application. There is a button you have to click to override the link between the sensor and the service process, so that Studio feeds data to the service process, which then passes it on to your application. Internally, the service process may recognize that the data has already been processed and pipe the input straight to the output, in case you recorded skeleton-tracking data as well. (A small diagnostic sketch follows this reply.)


    • Edited by Nikolaos Patsiouras Wednesday, January 25, 2017 12:26 PM
    • Marked as answer by Hao18 Wednesday, January 25, 2017 1:50 PM
    Wednesday, January 25, 2017 12:26 PM
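
    For completeness, a small diagnostic sketch (Kinect SDK 2.0 native API assumed): it polls for body frames and reports when nothing arrives within a timeout, which is the symptom you see when Studio plays back without Connect to Service. The 2000 ms threshold is an arbitrary example value.

    ```cpp
    // Diagnostic sketch: report when no body frames arrive within a timeout,
    // which is what you observe when Kinect Studio plays a clip without
    // "Connect to Service". Assumes the Kinect SDK 2.0 native API.
    #include <Kinect.h>
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        IBodyFrameSource* source = nullptr;
        IBodyFrameReader* reader = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;
        sensor->get_BodyFrameSource(&source);
        source->OpenReader(&reader);

        ULONGLONG lastFrame = GetTickCount64();
        for (;;)
        {
            IBodyFrame* frame = nullptr;
            if (SUCCEEDED(reader->AcquireLatestFrame(&frame)))
            {
                lastFrame = GetTickCount64();
                // ... normal processing; from the application's point of view,
                // live sensor data and Studio playback look identical here ...
                frame->Release();
            }
            else if (GetTickCount64() - lastFrame > 2000)  // ~2 s without data
            {
                BOOLEAN available = FALSE;
                sensor->get_IsAvailable(&available);  // FALSE while nothing feeds the service
                std::fprintf(stderr, "No frames for 2 s (sensor available: %d)\n", available);
                lastFrame = GetTickCount64();
            }
            Sleep(1);  // avoid a busy loop while polling
        }
    }
    ```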
  • Thank you for the helpful information. I had not activated the visible frame, and that was why there was no input. Now it works perfectly. It is amazing how many tools Kinect Studio and the SDK Browser give us.

    I have another question: do you know of any collision-calculation tools, algorithms, or hints for determining whether the finger-pointing line hits an object, for example when pointing at a bottle? I am thinking of simulating this situation by projecting the bottle onto the plane it stands on and testing the pointing line against that area (a small geometric sketch follows this post).

    Thank you very much in advance!

    Wednesday, January 25, 2017 1:59 PM
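
    A minimal sketch of the projection idea described above: intersect the elbow-to-hand-tip ray with the supporting plane, then test the hit point against a circular footprint around the bottle's base. All coordinates, the plane, and the acceptance radius are hypothetical example values; in practice the ray endpoints would come from the skeleton joints.

    ```cpp
    // Sketch: ray/plane intersection plus a point-in-footprint test.
    // Plain vector math, no Kinect dependency; all values are examples.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // True (with the hit point) if the ray (origin o, direction d) meets the
    // plane (point p0, normal n) in front of the origin.
    static bool rayPlane(Vec3 o, Vec3 d, Vec3 p0, Vec3 n, Vec3& hit)
    {
        float denom = dot(n, d);
        if (std::fabs(denom) < 1e-6f) return false;  // ray parallel to plane
        float t = dot(n, sub(p0, o)) / denom;
        if (t < 0.0f) return false;                  // plane is behind the hand
        hit = { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z };
        return true;
    }

    int main()
    {
        // Hypothetical joint positions defining the pointing ray.
        Vec3 elbow   = { 0.0f, 0.2f, 1.5f };
        Vec3 handTip = { 0.1f, 0.1f, 1.2f };
        Vec3 ray     = sub(handTip, elbow);          // pointing direction

        Vec3 tablePoint  = { 0.0f, -0.5f, 0.0f };    // any point on the table plane
        Vec3 tableNormal = { 0.0f, 1.0f, 0.0f };
        Vec3 bottleBase  = { 0.4f, -0.5f, 0.3f };    // bottle projected onto the plane
        float radius     = 0.06f;                    // acceptance radius around the bottle

        Vec3 hit;
        if (rayPlane(handTip, ray, tablePoint, tableNormal, hit))
        {
            Vec3 delta = sub(hit, bottleBase);
            bool onTarget = dot(delta, delta) <= radius * radius;
            std::printf("hit (%.2f %.2f %.2f) -> %s\n", hit.x, hit.y, hit.z,
                        onTarget ? "pointing at bottle" : "miss");
        }
        return 0;
    }
    ```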
  • Nope, I haven't dealt with object tracking.

    I have some ideas, but nothing good, I'm afraid.
    Wednesday, January 25, 2017 2:26 PM