Storing/Retrieving depth information without first converting to 3D

    Question

  • I'm trying to figure out how to save the pixel data from a DepthImageFrame to a binary file and then later retrieve that data and convert it to 3D coordinates. Saving and loading the pixel data is no problem (see the sketch at the end of this post); that part is easy. The part I'm having trouble with is that once I reload the data, I'm not sure how to go about converting it to 3D.

    In Kinect SDK 1.5 I know that I can use DepthImageFrame.MapToSkeletonPoint(x, y) to get 3D coordinates while the Kinect is running. The problem I'm having is that I want to load my data without having to start the Kinect, and I can't find a static conversion function, nor can I create an instance of DepthImageFrame (it has no public constructors). The only options I see are:

    a.  Save the data as 3D coordinates 

    b.  Start the Kinect and then kludge together a conversion for my loaded data.

    (a) is certainly an option, but I'm recording several thousand frames over the course of an hour, and storing that much float data is too much of a burden. That's why I was hoping to store the raw depth data and reload and convert it at a later date.
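    For reference, the save side looks roughly like this (a sketch; it assumes a short[] copied out of the frame with CopyPixelDataTo, and the file name is illustrative):

    // Rough save sketch (needs System.IO and Microsoft.Kinect).
    // "frame" is a DepthImageFrame obtained from a DepthFrameReady event.
    short[] pixels = new short[frame.PixelDataLength];
    frame.CopyPixelDataTo(pixels);
    using (var writer = new BinaryWriter(File.Open("depthFrame.bin", FileMode.Create)))
    {
        writer.Write(pixels.Length);
        foreach (short p in pixels)
            writer.Write(p);
    }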

    Wednesday, June 13, 2012 6:32 AM


All replies

  • Never mind, I answered my own question. There is another conversion method on KinectSensor that you can use even if you don't run the sensor:

    // get a KinectSensor from the potential sensors, then map one
    // depth pixel (x, y, depth) into 3D skeleton space:
    SkeletonPoint point = sensor.MapDepthToSkeletonPoint(
        DepthImageFormat.Resolution640x480Fps30, x, y, depth);
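    From there, rebuilding the full point cloud from a reloaded frame is just a loop over the buffer (a sketch; assumes the 640x480 format, a short[] of pixels read back from disk, and a sensor object that is constructed but not started):

    // Sketch (needs System.Collections.Generic and Microsoft.Kinect):
    // convert a reloaded 640x480 depth buffer into skeleton-space points.
    var cloud = new List<SkeletonPoint>();
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 640; x++)
        {
            short depth = pixels[y * 640 + x];
            cloud.Add(sensor.MapDepthToSkeletonPoint(
                DepthImageFormat.Resolution640x480Fps30, x, y, depth));
        }
    }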


    Wednesday, June 13, 2012 6:49 AM
  • That's right, but note that the conversion from Depth->Color is sensor dependent (the alignment of the RGB camera and the depth camera varies slightly from device to device), so if you do this conversion on a different sensor, the values may be somewhat incorrect. Depth->Skeleton is sensor invariant at this time.
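    For concreteness, the sensor-dependent conversion is something like the call below (a sketch; x, y, and depth are the same values as in the earlier snippet):

    // Sketch: the Depth->Color mapping bakes in the per-device alignment
    // of the RGB and depth cameras, so applying it to data recorded on a
    // different sensor may be off by a few pixels.
    ColorImagePoint colorPoint = sensor.MapDepthToColorImagePoint(
        DepthImageFormat.Resolution640x480Fps30, x, y, depth,
        ColorImageFormat.RgbResolution640x480Fps30);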

    -Adam Smith [MSFT]

    Wednesday, June 13, 2012 6:23 PM
  • That's a good point.  Does the SDK provide any mechanism to save that additional conversion data from the sensor?
    Wednesday, June 13, 2012 9:17 PM
  • At this time, no, though the request is definitely on our radar. 

    If you really want to do this, what you'll need to do is build a model of the camera mapping by transforming a number of sentinel points (e.g. the 8 "corners" of the view frustum) and then using those to build a frustum mapper for yourself.
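    In code terms, the sampling step might look roughly like this (a sketch; the 800mm/4000mm values assume the default depth range, and the left-shift by 3 assumes the SDK's packed depth pixel format, with the player index in the low 3 bits):

    // Rough sketch: while a sensor is attached, map the 8 corners of the
    // view frustum into skeleton space and persist each pair.
    int[] xs = { 0, 639 };
    int[] ys = { 0, 479 };
    short[] depths = { 800 << 3, 4000 << 3 };   // near/far planes, packed
    foreach (int x in xs)
        foreach (int y in ys)
            foreach (short d in depths)
            {
                SkeletonPoint corner = sensor.MapDepthToSkeletonPoint(
                    DepthImageFormat.Resolution640x480Fps30, x, y, d);
                // store (x, y, d) -> corner; later, interpolate between
                // these eight samples to approximate the full mapping
            }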


    -Adam Smith [MSFT]

    Wednesday, June 13, 2012 9:25 PM
  • Hmmm, that's a thought. Time permitting, I'll give it a try; for now it suffices that I can store a much smaller pixel map and then rebuild the 3D point cloud on another machine. I'll just make sure to label which Kinect goes with which computer for more refined post-processing. :)
    Wednesday, June 13, 2012 9:31 PM