Extracting RGB and Depth data from recorded .xef file

  • Question

  • Hi there,

    I am using the Kinect Developer Preview SDK to record a dataset that includes depth and RGB data. How can I extract the RGB and depth data from the recorded .xef files? Does anyone have a script that reads the .xef format, or is there another way to extract it?

    I urgently need to do this, as I am recording a dataset with the Kinect v2 Developer Preview because it supports RGB data as well.

    Thanks a lot for your help in advance!

    Monday, September 8, 2014 12:21 PM

Answers

  • Replay the playback and in your Kinect enabled application, record the color/depth frames your application receives.

    Carmine Sirignano - MSFT

    Tuesday, September 9, 2014 8:33 PM
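    The suggested approach — record whatever frames your application receives during playback — can be sketched like this (illustrative Python, not the Kinect SDK API; in a real app `save_frame` would be called from your frame-arrived callback while Kinect Studio replays the .xef file):

    ```python
    import os

    def save_frame(buffer: bytes, frame_index: int, out_dir: str, prefix: str) -> str:
        """Write one raw frame buffer to disk; returns the path written."""
        os.makedirs(out_dir, exist_ok=True)
        path = os.path.join(out_dir, f"{prefix}_{frame_index:06d}.raw")
        with open(path, "wb") as f:
            f.write(buffer)
        return path
    ```

    You would register one such handler per stream (e.g. prefix `"color"` and `"depth"`) and convert the raw buffers to PNG/BMP afterwards.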

All replies

  • I'd love to know the answer to this as well!
    Tuesday, September 9, 2014 3:12 PM
  • I'm interested in knowing the answer as well. It would be very useful to be able to save depth and RGB frames in PNG/BMP format.

    Thanks!

    Tuesday, September 9, 2014 6:50 PM
  • This is awful. I don't have direct access to the data that I record with the sensor.

    If I try to do what is suggested, I end up with lots of missing frames; I've tried multithreading, buffering, etc.

    Moreover, RGB and depth are not synchronized. What I ended up doing is saving RGB images in batches: collect as many frames as fit into memory, save them, then replay the file again for the next batch. With a long recording this quickly gets maddening.

    Why not just provide an API to read the .xef file? It is probably somewhere in the framework anyway, since Kinect Studio can read the streams.

    Saturday, November 8, 2014 1:05 PM
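    The batch-and-replay workaround described above can be sketched like this (illustrative Python, not the Kinect SDK; all names here are hypothetical): hold frames in memory up to a byte budget, flush the batch to disk, and note where the next replay pass should resume.

    ```python
    import os

    class BatchRecorder:
        """Collects frames in memory up to a byte budget, then flushes them
        to disk in one go. Frames before `start_index` are skipped, so a
        long recording can be captured over several replay passes."""

        def __init__(self, out_dir, budget_bytes, start_index=0):
            self.out_dir = out_dir
            self.budget = budget_bytes
            self.start = start_index
            self.frames = []          # (frame_index, raw bytes) held in memory
            self.used = 0
            self.next_start = None    # where the next replay pass should begin

        def on_frame(self, index, buffer):
            if index < self.start or self.next_start is not None:
                return  # outside this pass's window
            if self.used + len(buffer) > self.budget:
                self.next_start = index   # resume here on the next replay
                return
            self.frames.append((index, buffer))
            self.used += len(buffer)

        def flush(self):
            os.makedirs(self.out_dir, exist_ok=True)
            for index, buffer in self.frames:
                name = os.path.join(self.out_dir, f"frame_{index:06d}.raw")
                with open(name, "wb") as f:
                    f.write(buffer)
            self.frames.clear()
            self.used = 0
    ```

    After each replay pass, construct the next recorder with `start_index=previous.next_start` and replay the .xef file again.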
  • This recording format is independent of the data you get back from the runtime. The data KStudio records must be re-processed by the runtime to regenerate all the runtime streams (color, depth, IR, body, audio). Depth and color are ALWAYS out of alignment since they come from separate cameras and their data arrives at slightly different times. The runtime handles providing aligned data, just as it does with a real sensor.

    During playback there is the additional overhead of reading from the drive, which could be adding to the issue you are seeing. Check system resources to ensure the hard drive, CPU, and memory are not maxed out during playback (Task Manager > Resource Monitor).


    Carmine Sirignano - MSFT

    Monday, November 10, 2014 11:49 PM
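    Since depth and color arrive at slightly different times, one common workaround when post-processing frames you have already saved yourself is to pair each color frame with the depth frame whose timestamp is closest. A minimal sketch (this covers only temporal pairing; spatial alignment between the two cameras is a separate step the runtime normally performs):

    ```python
    import bisect

    def pair_by_timestamp(color_ts, depth_ts):
        """For each color timestamp, return the index of the depth timestamp
        closest in time. Both lists must be sorted ascending."""
        pairs = []
        for t in color_ts:
            i = bisect.bisect_left(depth_ts, t)
            # candidate neighbours: depth_ts[i-1] and depth_ts[i]
            best = min(
                (j for j in (i - 1, i) if 0 <= j < len(depth_ts)),
                key=lambda j: abs(depth_ts[j] - t),
            )
            pairs.append(best)
        return pairs
    ```

    For example, with color timestamps `[10, 25]` and depth timestamps `[0, 12, 24, 36]`, the color frames pair with depth frames 1 and 2.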