How to distinguish a real Kinect sensor from an .xef playback source

  • Question

  • Hi there,

    I'm making an application that compares live motion (e.g. body frames) captured by the Kinect sensor with the playback of an .xef file recorded with KStudio. I want them displayed on two different canvases, but my difficulty is knowing whether a frame comes from the sensor or from the .xef file; at the moment all frames display on all canvases simultaneously.

    Thanks in advance for your help.


    Friday, February 27, 2015 8:12 AM

All replies

  • I think KStudio "plugs" the captured data directly into the Kinect Runtime, so your app cannot tell apart frames from one source or the other.

    I don't know your scenario, but maybe you could use some information you "forced" into the recorded frames to distinguish them. Or maybe you could build a workaround using RelativeTimes... Just ideas; I'm not sure whether they are really feasible.

    Friday, February 27, 2015 10:00 AM
  • Hi jmmroldan,

    Thanks for your reply and ideas.
    The StartRelativeTime property is read-only, so I can't put custom information in it.

    Actually, my application is a dance comparison: users can learn from and imitate a pre-recorded video (color and body frames).
    The scenario is quite simple:
     - Record the "teacher"'s frames to an .xef file using the KStudio API in Microsoft.Kinect.Tools.
     - Play this .xef file back in a little window.
     - Display the live captured data in the main window.

    For this, I create two FrameReaders that listen to the FrameArrived event.

    I noticed that the KStudioPlayback and KStudioRecording classes have a method called "GetMetadataForStream", but I don't know how to call it or use its result, and I'm not sure I can retrieve that metadata from the data captured by the Kinect Runtime.

    Friday, February 27, 2015 5:03 PM
  • What is the intent of using the recordings? This is not a feature that can be used in production applications; KStudio can only be used on developer machines.

    As noted in the previous response, there is no way to distinguish between data coming from a Kinect or from KStudio at the application level. Metadata can be stored in a clip (see the KStudio / Play / Metadata section on adding tags), but this is clip-level metadata that wouldn't necessarily be available during application playback.

    If you need a way to compare a known recorded set against live data, there might be a way to do something with VGB (Visual Gesture Builder) if your system is based on body data. As a movement/gesture is performed, you could determine how close it is to the reference gesture. It wouldn't specifically tell you which joints were off, but it would give you a general idea.

    Carmine Sirignano - MSFT

    Friday, February 27, 2015 7:26 PM
  • I think Carmine's remark is an important one: you should use KStudio only in a development environment.

    Having said that, I didn't mean for you to write RelativeTimes; I was thinking about comparing them with known values, but I think the time ranges of RelativeTime would not allow it (it would work if they were absolute timestamps).
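    To make the RelativeTime idea concrete, here is a minimal sketch of pairing a live frame with the nearest recorded frame by per-session relative timestamp. It is written in Python just to show the matching logic; `match_frame`, the tolerance value, and the millisecond lists are illustrative assumptions, not Kinect SDK API:

    ```python
    from bisect import bisect_left

    def match_frame(live_relative_ms, recorded_times_ms, tolerance_ms=34):
        """Return the index of the recorded frame whose relative time is
        closest to the live frame's, or None if nothing is within tolerance.
        Times are milliseconds since each session's own start."""
        i = bisect_left(recorded_times_ms, live_relative_ms)
        candidates = []
        if i > 0:
            candidates.append(i - 1)
        if i < len(recorded_times_ms):
            candidates.append(i)
        best = min(candidates,
                   key=lambda j: abs(recorded_times_ms[j] - live_relative_ms))
        if abs(recorded_times_ms[best] - live_relative_ms) <= tolerance_ms:
            return best
        return None

    # Kinect v2 delivers roughly 30 fps, so frames arrive about every 33 ms;
    # a tolerance of one frame interval pairs each live frame with at most
    # one recorded frame.
    recorded = [0, 33, 66, 100, 133]
    print(match_frame(67, recorded))   # 2
    print(match_frame(450, recorded))  # None
    ```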

    Anyway, I would say that you don't really need .xef data to train users (if they are human), since they will not use any information other than video (color, regular images) to learn. You can do this with a regular video recording (not .xef), which removes the problem entirely. If you still think your users need to see the body overlaid on the color images, you can write an app to generate a video file with this information (search the forum; there are several posts about this).

    Monday, March 2, 2015 9:27 AM
  • Thank you to all for your help.

    I thought I could use KStudio in production since it is published on NuGet.
    Visual Gesture Builder is not a good solution for me, because I need to do some angle calculations between bones.
    Finally, I think I'll serialize the body data and convert the color frames to video, then generate a single file as you proposed; it's better for data comparison.
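    For the record, the angle-between-bones computation and a per-frame serialization format can be sketched like this. Python is used only to show the math and data layout (the joint names mirror Kinect JointType values, but the coordinates and the record shape are made-up examples, not SDK output):

    ```python
    import json
    import math

    def bone_angle(parent, joint, child):
        """Angle in degrees at `joint` between the bones joint->parent and
        joint->child, given 3-D camera-space positions as (x, y, z) tuples."""
        u = tuple(p - j for p, j in zip(parent, joint))
        v = tuple(c - j for c, j in zip(child, joint))
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        cos = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding error
        return math.degrees(math.acos(cos))

    # Hypothetical per-frame record: relative time plus joint positions.
    frame = {
        "relative_time_ms": 1234,
        "joints": {
            "ShoulderRight": (0.30, 0.40, 2.0),
            "ElbowRight":    (0.45, 0.20, 2.0),
            "WristRight":    (0.60, 0.40, 2.0),
        },
    }

    j = frame["joints"]
    elbow = bone_angle(j["ShoulderRight"], j["ElbowRight"], j["WristRight"])

    # One JSON object per frame (e.g. JSON Lines) makes later comparison easy.
    line = json.dumps(frame)
    restored = json.loads(line)
    ```

    Comparing the same bone angle in the live frame and the matched recorded frame then reduces to a simple difference of two numbers per joint.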

    Monday, March 2, 2015 12:36 PM