Record video from Kinect v2 RGB feed

  • Question

  • Hello,

    I'll go straight to the point: I want to save the Kinect's color stream into a video file. I have found two ways to do that:


    Use the data stream (as in the ColorBasics sample) to obtain each frame as a WriteableBitmap and then save the frames into a video. For that I use the Emgu library, which has a very simple VideoWriter class; unfortunately its Add(frame) method takes more than 100 ms per image. Since I am sampling at 30 fps (about 33 ms per frame), that leaves me roughly 66 ms behind on every frame, so it is impossible to save the frames as they arrive, and pushing them to a separate thread just fills up memory.


    I'm using the Microsoft.Kinect.Tools API (KStudioClient) to record the stream, so I can play it back later and post-process the data. The problem here is that the sampling rate is not constant: when I view the recording in Kinect Studio it starts fine, but after a few seconds there is only one frame every 1-2 seconds. The KStudioClient runs in a separate thread.

    Please suggest a way to save a color stream from the Kinect v2. There is no clear solution on the web, but it is hard to believe that nobody has managed it.



    Friday, March 10, 2017 2:08 PM

All replies

  • You could try using a thread pool with N threads and round-robin the frames across them when saving (frame X to thread 0, frame X+1 to thread 1, etc.). Then at the end you could join them into one frame collection by using the same round-robin order as the basis for accessing each thread's sub-collection and grabbing the frames in sequence.

    Perhaps you shouldn't use a VideoWriter instance per thread, though.

    Use byte[] buffers and pass them around to the threads, so you only pay the cost of the array copy. Then all you have to do is make sure the work in any single thread stays under 33 ms * (threadCount - 1), which is basically the time it will take for the main thread to come back around and hand a new frame to that same thread.

    Then, once frame acquisition is finished, do the round-robin access and add the frames to the VideoWriter.

    PS: I haven't actually done this.
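
The thread is about C#/.NET, but the round-robin scheme above is language-agnostic. A minimal sketch in Python (the worker count, frame sizes, and the trivial per-frame "processing" are all illustrative, not from the Kinect SDK):

```python
import threading
import queue

NUM_WORKERS = 4
in_queues = [queue.Queue() for _ in range(NUM_WORKERS)]   # one inbox per worker
out_lists = [[] for _ in range(NUM_WORKERS)]              # per-thread sub-collections

def worker(i):
    while True:
        frame = in_queues[i].get()
        if frame is None:            # sentinel: acquisition finished
            break
        processed = bytes(frame)     # stand-in for the real per-frame work
        out_lists[i].append(processed)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

# producer: hand frame k to worker k % NUM_WORKERS (the round robin)
frames = [bytes([k]) * 4 for k in range(10)]   # fake captured frames
for k, frame in enumerate(frames):
    in_queues[k % NUM_WORKERS].put(frame)
for q in in_queues:
    q.put(None)
for t in threads:
    t.join()

# join back into one ordered collection using the same round robin
ordered = [out_lists[k % NUM_WORKERS][k // NUM_WORKERS] for k in range(len(frames))]
assert ordered == frames
```

Because each worker drains its own FIFO inbox, frame order is preserved inside every sub-collection, and the final interleave recovers the global order without any cross-thread synchronization during capture.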

    Friday, March 10, 2017 4:09 PM
  • I cleaned up my code and can now process the frames quite fast, but the program (of course) still depends too much on the PC's power.

    If I do everything sequentially I lose some frames: sometimes processing a frame takes too long and the next frame expires. That is acceptable on my PC (about 5% of frames) but too much on less powerful machines.

    If I process frames in separate threads, performance is good on both PCs, but the frames come out unordered since the threads run concurrently.

    Attempting to enforce an order on the threads seems too heavy: threads start to accumulate in the queue.

    Any suggestion or experience is welcome.

    Up to now I am acquiring color and depth frames, copying the data with CopyConvertedFrameDataToArray and CopyFrameDataToArray, converting the arrays into bitmaps, and using the Accord library to save two videos with the H.264 codec.

    • Edited by AlexGi Friday, March 31, 2017 8:40 AM
    Friday, March 31, 2017 8:38 AM
  • OK, so instead of ordering the execution of the threads, try a thread-agnostic approach: add a frame number to the data (struct { int frameNum; byte[] data; }) and ship it to any thread that is not busy. There, do whatever processing you need, but make sure to keep the frameNum id with the output as well. Then it is just a sorting problem at the end.
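
The tag-and-sort idea can be sketched like this in Python (the struct fields come from the reply above; the fake per-frame work, frame count, and pool size are illustrative, and the real code would be C# against the Kinect SDK):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    frame_num: int      # carried through the pipeline so order can be restored
    data: bytes

def process(tf: TaggedFrame) -> TaggedFrame:
    time.sleep(random.uniform(0, 0.005))    # simulate uneven per-frame work
    return TaggedFrame(tf.frame_num, tf.data.upper())  # stand-in conversion

frames = [TaggedFrame(i, b"frame%d" % i) for i in range(20)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process, tf) for tf in frames]
    # as_completed yields in completion order, which is arbitrary here
    results = [f.result() for f in as_completed(futures)]

# ordering the threads was never needed: sort once by the carried id
results.sort(key=lambda tf: tf.frame_num)
assert [tf.frame_num for tf in results] == list(range(20))
```

The workers never coordinate with each other; the only ordering cost is one sort over the finished collection.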
    • Edited by Nikolaos Patsiouras Friday, March 31, 2017 8:46 AM
    • Marked as answer by AlexGi Friday, March 31, 2017 2:42 PM
    • Unmarked as answer by AlexGi Monday, April 3, 2017 10:44 AM
    Friday, March 31, 2017 8:44 AM
  • Yes,

    thank you very much! I thought this solution would be too heavy memory-wise, but it actually works pretty well. I have a pool of threads performing the conversion and a consumer thread sorting and saving the frames.

    Friday, March 31, 2017 2:42 PM
  • I exulted too early. My struct held the frame id plus the color and depth bitmaps, but the bitmap copy was not done correctly: I was only copying the pointer. If I actually deep-copy every bitmap, the RAM requirement is too high. Unfortunately this is not the right way after all...

    Nice attempt, though.

    Tuesday, April 4, 2017 3:25 PM
  • That's not going to change, though... There's a reason why Kinect Studio recordings are so big on disk; KS also loads the whole recording into RAM, with no streaming involved. Also, why do you need both color and depth? You only mentioned color in the opening post. Of course this will impact memory, even more so if you blow the depth frame up to HD.

    If you really need both, consider using different threads to handle the different streams: have one thread read from the color stream and another from the depth stream, so the workload is divided and happens in parallel.

    But that won't change the fact that you'll need a lot of RAM. There's a reason why machines used for rendering are so powerful and resource-rich.
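
A rough Python sketch of the one-thread-per-stream layout (the "streams" here are stand-in loops, not real Kinect sources, and the bounded queue is an assumption to keep RAM usage flat rather than anything from the SDK):

```python
import threading
import queue

def reader(name, n_frames, out_q):
    # stand-in for a loop pulling frames from one Kinect stream
    for i in range(n_frames):
        out_q.put((name, i, bytes(4)))   # (stream, frame id, payload)
    out_q.put((name, None, None))        # end-of-stream marker

results = {"color": [], "depth": []}
q = queue.Queue(maxsize=8)               # bounded: readers block instead of eating RAM

threads = [threading.Thread(target=reader, args=(s, 5, q)) for s in results]
for t in threads:
    t.start()

done = 0
while done < len(results):               # a single writer drains both streams
    stream, idx, payload = q.get()
    if idx is None:
        done += 1
    else:
        results[stream].append(idx)

for t in threads:
    t.join()
assert results["color"] == list(range(5))
assert results["depth"] == list(range(5))
```

Per-stream order survives because each stream has exactly one producer and the queue is FIFO; the bounded queue trades throughput for a hard cap on buffered frames, which is the opposite trade-off from Kinect Studio's load-everything-into-RAM approach.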

    Wednesday, April 5, 2017 8:03 AM