How to render a clear depth image?

  • Question

  • I am a little confused. I want to get an image like the one at the top, but I can only get something like the one on the bottom.

    I want to render the depth data as a 3D scene in my window, but I can't render it because I don't have any vertices. How can I do that?

    [Images not shown: the desired shaded 3D render (top) and the banded 8-bit depth image (bottom).]
    • Edited by KinectGood Saturday, September 20, 2014 12:25 AM
    Saturday, September 20, 2014 12:21 AM

All replies

  • The bottom image is just the depth data converted to 8-bit pixels. The zebra pattern appears because the depth values (roughly 0-4500, in real-world units of about 1 mm, I believe) wrap around each time they pass 255, since an 8-bit color channel only runs from 0 to 255 (255 being the brightest).
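
    To make that concrete, here is a minimal sketch of the two conversions in C++, assuming a buffer of UINT16 millimeter values as the v2 SDK delivers it; the function names and the 500-4500 mm range are illustrative:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Naive conversion: keeping only the low 8 bits makes the intensity
    // wrap back to 0 every 256 mm -- exactly the zebra banding described above.
    std::vector<uint8_t> DepthToGrayNaive(const std::vector<uint16_t>& depthMm)
    {
        std::vector<uint8_t> gray(depthMm.size());
        for (size_t i = 0; i < depthMm.size(); ++i)
            gray[i] = static_cast<uint8_t>(depthMm[i] % 256);
        return gray;
    }

    // Band-free conversion: scale the sensor's working range (~500-4500 mm)
    // into 0-255 once, instead of letting the value wrap.
    std::vector<uint8_t> DepthToGrayScaled(const std::vector<uint16_t>& depthMm,
                                           uint16_t minMm = 500,
                                           uint16_t maxMm = 4500)
    {
        std::vector<uint8_t> gray(depthMm.size());
        for (size_t i = 0; i < depthMm.size(); ++i)
        {
            uint16_t d = std::clamp(depthMm[i], minMm, maxMm);
            gray[i] = static_cast<uint8_t>(255 * (d - minMm) / (maxMm - minMm));
        }
        return gray;
    }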

    You can use the coordinate mapper to convert the depth data into CameraSpacePoints: 3D vectors relative to the camera, in meters. You can use that data to create a point cloud or a 3D mesh, or to calculate normals and shade an image from the IR camera's point of view, which I believe is how the top image was made.
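
    For reference, a minimal sketch of that mapping against the native v2 API (ICoordinateMapper, MapDepthFrameToCameraSpace, and CameraSpacePoint are SDK types; the wrapper function itself is hypothetical, and error handling is omitted):

    #include <Kinect.h>
    #include <vector>

    // Map a full 512x424 depth frame (mm) to CameraSpacePoints, i.e. 3D
    // positions in meters relative to the sensor. pMapper comes from
    // IKinectSensor::get_CoordinateMapper.
    std::vector<CameraSpacePoint> DepthToCameraSpace(
        ICoordinateMapper* pMapper,
        const UINT16* depthBuffer,
        UINT depthPointCount)
    {
        std::vector<CameraSpacePoint> points(depthPointCount);
        pMapper->MapDepthFrameToCameraSpace(
            depthPointCount, depthBuffer,     // input: one UINT16 per pixel
            depthPointCount, points.data());  // output: one 3D point per pixel
        // Pixels with no valid depth reading come back with non-finite
        // coordinates; filter those out before building a cloud or mesh.
        return points;
    }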

    Saturday, September 20, 2014 10:44 PM
  • I am familiar with 3D math, but is there any article about capturing and rendering a point cloud and generating 3D meshes from it? I know Kinect Fusion can do this, but it's not in the v2 SDK, so I'm planning to implement it on my own.
    Sunday, September 21, 2014 3:20 PM
  • Like I said, in the SDK you can get a point cloud by using the CoordinateMapper to convert the depth data to CameraSpacePoints.

    Visualization is not built into the SDK; that part you will have to implement yourself.
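
    One easy way to get started on that yourself: write the valid CameraSpacePoints out as an ASCII PLY file, which point-cloud viewers such as MeshLab can open directly. A sketch (the helper function is hypothetical, building on the mapping shown earlier in the thread):

    #include <Kinect.h>
    #include <cmath>
    #include <fstream>
    #include <vector>

    // Dump the valid points of a mapped depth frame to an ASCII PLY file.
    void WritePointCloudPly(const std::vector<CameraSpacePoint>& points,
                            const char* path)
    {
        std::vector<CameraSpacePoint> valid;
        for (const CameraSpacePoint& p : points)
            if (std::isfinite(p.X) && std::isfinite(p.Y) &&
                std::isfinite(p.Z) && p.Z > 0.0f)
                valid.push_back(p);

        std::ofstream out(path);
        out << "ply\nformat ascii 1.0\n"
            << "element vertex " << valid.size() << "\n"
            << "property float x\nproperty float y\nproperty float z\n"
            << "end_header\n";
        for (const CameraSpacePoint& p : valid)
            out << p.X << " " << p.Y << " " << p.Z << "\n";
    }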

    Sunday, September 21, 2014 10:40 PM
  • The top image is a 3D mesh built from the depth image. To achieve that efficiently, the work needs to be done in a GPU geometry shader. We are working on providing that in a sample, but there is no ETA at this time.
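
    Until that sample ships, here is a rough CPU sketch of the same idea, under stated assumptions: treat the 512x424 grid of CameraSpacePoints as a regular mesh, emit two triangles per 2x2 block, and skip quads that straddle a depth discontinuity. The function name and the 5 cm threshold are illustrative; a geometry shader would apply the same per-quad logic on the GPU.

    #include <Kinect.h>
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Triangulate a 512x424 grid of CameraSpacePoints into an index list:
    // two triangles per 2x2 quad, skipping quads with invalid depth or a
    // large depth gap, so foreground and background don't get bridged.
    std::vector<uint32_t> TriangulateDepthGrid(
        const std::vector<CameraSpacePoint>& pts,
        int width = 512, int height = 424,
        float maxZGap = 0.05f /* 5 cm, in meters */)
    {
        auto ok = [&](int i) { return std::isfinite(pts[i].Z) && pts[i].Z > 0.0f; };
        std::vector<uint32_t> indices;
        for (int y = 0; y + 1 < height; ++y)
        {
            for (int x = 0; x + 1 < width; ++x)
            {
                int i0 = y * width + x,  i1 = i0 + 1;
                int i2 = i0 + width,     i3 = i2 + 1;
                if (!ok(i0) || !ok(i1) || !ok(i2) || !ok(i3))
                    continue;
                float zMin = std::min({pts[i0].Z, pts[i1].Z, pts[i2].Z, pts[i3].Z});
                float zMax = std::max({pts[i0].Z, pts[i1].Z, pts[i2].Z, pts[i3].Z});
                if (zMax - zMin > maxZGap)
                    continue; // depth discontinuity: leave a hole instead
                indices.insert(indices.end(),
                               { uint32_t(i0), uint32_t(i2), uint32_t(i1),
                                 uint32_t(i1), uint32_t(i2), uint32_t(i3) });
            }
        }
        return indices;
    }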

    Carmine Sirignano - MSFT

    Monday, September 22, 2014 5:03 PM