Accessing the Images Captured by the Kinect (New SDK)

  • Question

  • Since, with the new SDK, the Kinect publishes 30 RGB images per second, I'm trying to find where it publishes these images so that I can do some facial processing on them. However, I cannot find where they are published. Furthermore, I'd like to send the pFaceModel2DPoint array along with each [.bmp?] image, so that every image travels with its facial coordinates. However, Visualize.cpp seems like an entirely different file, so I'm not sure whether doing those two tasks from two separate files will cause problems.

    Any help would be greatly appreciated. I'm not new to software programming, but I am very new to programming with the Kinect, and the API and documentation have been less than helpful with this.

    Thank you!

    Andrea

    Tuesday, June 12, 2012 7:02 PM

Answers

  • If you are new to Kinect, it is best to start with the "basics" samples to get a good understanding of what the Kinect SDK does. They best demonstrate how the APIs work and best-practice usage. The latest SDK uses the same syntax and APIs as v1.0.

    The choice between event mode and polling mode is driven by the requirements of the application; there is no single right way to do it. The samples are only provided to demonstrate the use of the APIs and are by no means the only way. The Face Tracking SDK processes images one at a time (see FTHelper::CheckCameraInput()). As long as the images you are processing are in order, you shouldn't have a problem.

    Understand that color/depth are only one input in this sample. It also uses the skeleton tracker, which only works in live mode. The neck and head points help with tracking (KinectSensor::GetClosestHint) but are not required.

    Thursday, June 14, 2012 9:59 PM

All replies

  • The frames are generated from the Kinect SDK APIs/events. These are wrapped in the KinectSensor class; more specifically, have a look at GotVideoAlert().

    Thursday, June 14, 2012 9:11 PM
  • Thank you for the tip. Should GotVideoAlert() be used in place of the polling model (http://msdn.microsoft.com/en-us/library/hh973076)? It looks like that polling model might work, but I don't know whether the face tracker will generate the x-y coordinates of the face (in the pFaceModel2DPoint array) from each of the polled images. Is that possible?

    Thanks again for your help. Some of these functions aren't very well documented, especially since the new SDK just came out, so wading through the syntax and understanding which functions are best suited for what is tricky.

    Thursday, June 14, 2012 9:18 PM