How can I get every pixel's 3D position, and if I want to save it for offline analysis, what file format should I use?

  • Question

  •     Just as the title says: does the Kinect have any function like ConvertRealWorldToProjective in OpenNI?
    Wednesday, May 7, 2014 8:11 AM

All replies

  • I am not familiar with what the OpenNI method does, but are you asking for world coordinates? We have the coordinate mapper, which maps screen-space points into the camera/world-space/skeleton coordinate system, either as a complete frame or point by point.

    INuiCoordinateMapper Methods

    CoordinateMapper Methods

    MapDepthFrameToSkeletonFrame / MapDepthPointToSkeletonPoint

    How you would save that is up to you and whichever APIs you are using to write the file. There are several code examples in the forum that discuss saving data in the formats you get from the API.
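    For intuition about what such a mapping computes: the core math behind converting a depth pixel to a camera-space point in millimeters is standard pinhole back-projection. A minimal sketch follows; the focal length and image size here are assumed nominal Kinect v1 depth-camera values (320x240 mode), not values read from the SDK, so real code should get the mapping from the coordinate mapper itself:

    ```cpp
    #include <cassert>
    #include <cmath>

    // Assumed nominal Kinect v1 depth intrinsics (320x240 mode); a real
    // application should use the SDK's coordinate mapper, not these constants.
    const float kFocalPx = 285.63f;  // nominal depth focal length, in pixels
    const int   kWidth   = 320;
    const int   kHeight  = 240;

    struct Point3 { float x, y, z; };  // millimeters, camera space

    // Back-project a depth pixel (u, v) with depth in mm to a 3D point.
    Point3 depthPixelToCameraSpace(int u, int v, float depthMm) {
        Point3 p;
        p.z = depthMm;
        p.x = (u - kWidth / 2.0f) * depthMm / kFocalPx;
        p.y = -(v - kHeight / 2.0f) * depthMm / kFocalPx;  // image y grows down
        return p;
    }

    // Forward projection, useful for checking the round trip.
    void cameraSpaceToDepthPixel(const Point3& p, int& u, int& v) {
        u = (int)std::lround(p.x * kFocalPx / p.z + kWidth / 2.0f);
        v = (int)std::lround(-p.y * kFocalPx / p.z + kHeight / 2.0f);
    }
    ```

    The SDK's mapper additionally accounts for lens distortion and the color-to-depth extrinsics, which this sketch ignores.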

    Carmine Sirignano - MSFT

    Wednesday, May 7, 2014 7:10 PM
  • Hi Carmine,

    Thanks for your reply, but I am still wondering how to use the INuiCoordinateMapper methods.

    My aim is to use the RGB image to find a target and then, through the synchronized depth image, get the target's 3D position (x, y, z), where x, y, and z are all in millimeters.

    My question is: which coordinate mapper method should I use after I get the RGB image and depth image, as the Kinect Explorer-D2D sample shows?

    Thursday, May 8, 2014 3:11 PM
  • If you do not have full control of lighting, be very careful of using color for detection. Shadows and lighting temperature can vary the pixel values greatly.

    Any of those methods will do what you want. You can map color to depth, or even to skeleton space for "real world" coordinates. Which one works best is up to you to decide.

    Carmine Sirignano - MSFT

    Thursday, May 8, 2014 7:02 PM
  • Thanks for your advice, but I still have a question.
    I want to process the frames offline. I save the color images as .avi, and I am wondering in which format I can save the depth images without losing precision. Also, can I use the NuiCoordinateMapper methods after I reload the images from the saved files, or must I use them before saving?
    Friday, May 9, 2014 1:58 PM
  • Why not use Kinect Studio? There is no video format that is acceptable for saving depth data; you would have to come up with your own lossless compression.

    As for coordinate mapping, you can load the settings back into the coordinate mapper from the sensor using GetColorToDepthRelationalParameters. If you dump the data to AVI and compression is applied, that will affect your results; you have to ensure a 1:1 copy of the data exactly as it came off the sensor.
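    One simple way to get that 1:1 copy, assuming a 320x240 16-bit depth stream, is to skip video containers entirely and append each raw frame, prefixed by its timestamp, to a single binary file. The record layout below is a made-up minimal container, not a Kinect SDK format:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical minimal container: each record is an 8-byte timestamp
    // followed by the raw 16-bit depth frame, byte-for-byte off the sensor.
    const int kW = 320, kH = 240;

    void writeDepthFrame(std::FILE* f, int64_t timestampMs,
                         const std::vector<uint16_t>& depth) {
        std::fwrite(&timestampMs, sizeof(timestampMs), 1, f);
        std::fwrite(depth.data(), sizeof(uint16_t), depth.size(), f);
    }

    bool readDepthFrame(std::FILE* f, int64_t& timestampMs,
                        std::vector<uint16_t>& depth) {
        if (std::fread(&timestampMs, sizeof(timestampMs), 1, f) != 1)
            return false;
        depth.resize(kW * kH);
        return std::fread(depth.data(), sizeof(uint16_t), depth.size(), f)
               == depth.size();
    }
    ```

    Because nothing is re-encoded, the depth values survive exactly, and the coordinate mapper can be applied to the reloaded frames later.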

    Carmine Sirignano - MSFT

    Friday, May 9, 2014 8:40 PM
  • Because I want the Kinect-captured data to be synchronized with data from another device, for every frame I must set a timestamp provided by that device, or set only the start & stop record timestamps once the Kinect frame rate is a steady 30 fps. But I can't set timestamps using Kinect Studio.

    I have tried saving the data frame by frame as "xxx.bmp" using OpenCV's cv::imwrite(), but the save operation takes much longer than 33 ms. I suppose the reason is the cost of creating/closing a file for every frame, so now I am trying a video format instead.

    I would appreciate it if you could suggest a better solution. I am also wondering why Kinect Studio has no problem saving the data. Could Microsoft provide the source code of Kinect Studio?
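    A common pattern for this problem, sketched below, is to decouple capture from disk I/O: the capture loop only enqueues frames into a queue held in memory, and a single writer thread drains it to one file that stays open for the whole recording, so no per-frame open/close cost lands on the 33 ms loop. The class and names are illustrative, and the actual disk write is stubbed out as a counter:

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Frame { int64_t ts; std::vector<uint16_t> depth; };

    class FrameWriter {
    public:
        FrameWriter() : done_(false), written_(0),
                        worker_(&FrameWriter::run, this) {}
        ~FrameWriter() { finish(); }

        // Called from the capture loop; returns immediately.
        void enqueue(Frame f) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
            cv_.notify_one();
        }
        // Drain remaining frames and stop the writer thread.
        void finish() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_one();
            if (worker_.joinable()) worker_.join();
        }
        int written() const { return written_.load(); }

    private:
        void run() {
            std::unique_lock<std::mutex> lk(m_);
            for (;;) {
                cv_.wait(lk, [&] { return done_ || !q_.empty(); });
                while (!q_.empty()) {
                    Frame f = std::move(q_.front());
                    q_.pop();
                    lk.unlock();
                    ++written_;  // stand-in for the actual fwrite to disk
                    lk.lock();
                }
                if (done_) return;
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<Frame> q_;
        bool done_;
        std::atomic<int> written_;
        std::thread worker_;
    };
    ```

    As long as the disk's sustained throughput can keep up with the average data rate, short I/O stalls are absorbed by the queue instead of dropping frames.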

    Sunday, May 11, 2014 1:18 AM