Which ways do you know to filter the Kinect depth map to get the best possible depth map?

All replies

  • A median filter would probably be a good starting point. For performance reasons, if you're doing it in real time, I'd suggest doing it in a shader, C++ AMP, or some form of SIMD.
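
    A minimal CPU sketch, assuming OpenCV (the helper name is made up; a shader or SIMD port would follow the same pattern). cv::medianBlur accepts 16-bit single-channel depth for kernel sizes 3 and 5:

        // Median-filter a raw Kinect depth frame (CV_16UC1, depth in millimetres).
        #include <opencv2/core.hpp>
        #include <opencv2/imgproc.hpp>

        cv::Mat medianFilterDepth(const cv::Mat& depth16u)
        {
            cv::Mat filtered;
            cv::medianBlur(depth16u, filtered, 5);  // 5x5 median knocks out speckle noise
            return filtered;
        }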

    Wednesday, May 16, 2012 7:32 PM
  • What depth value do undefined pixels have?

    Eisenanstreicher

    Thursday, May 17, 2012 1:44 PM
  • Did you even read any of the documentation? ;D

    They have a value of 0.

    The best method I know to fill missing pixels for DYNAMIC scenes (i.e. where people move in the scene) is to use inpainting that prefers pixels from the background rather than the foreground. See https://code.google.com/p/kinect-depth-map-inpainting-and-filtering/
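
    As a crude illustration of the "prefer background" idea (a hypothetical sketch assuming OpenCV, not the algorithm from the linked project): background pixels are farther away, i.e. have larger depth values, so each hole can be filled from the farthest valid pixel in its neighbourhood:

        // Fill holes (depth == 0) from the farthest valid neighbour, so missing pixels
        // around a person tend to take the background's depth rather than the silhouette's.
        #include <opencv2/core.hpp>
        #include <opencv2/imgproc.hpp>

        void fillHolesPreferBackground(cv::Mat& depth16u)        // CV_16UC1, 0 = undefined
        {
            cv::Mat holes = (depth16u == 0);                     // CV_8UC1 mask of missing pixels
            // A 3x3 grayscale dilation takes the local maximum, i.e. the farthest neighbour;
            // hole pixels are 0 and never win while at least one valid neighbour exists.
            for (int i = 0; i < 100 && cv::countNonZero(holes) > 0; ++i)
            {
                cv::Mat dilated;
                cv::dilate(depth16u, dilated, cv::Mat());
                dilated.copyTo(depth16u, holes);                 // write only into hole pixels
                holes = (depth16u == 0);                         // holes shrink by one pixel per pass
            }
        }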

    However, on my machine it takes about 150 ms per frame, so it's not real-time. One possible speed-up: instead of calling the method with one big mask that contains several connected components (regions where pixels are missing), determine the regions yourself (in OpenCV, using floodfill) and call inpainting in several threads, each call handling just one region. That can get you close to near real-time.
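
    A rough sketch of that decomposition, assuming OpenCV 3+ (cv::connectedComponentsWithStats instead of a manual floodfill pass) and C++11 std::async; the function name and padding values are made up for illustration:

        #include <algorithm>
        #include <future>
        #include <vector>
        #include <opencv2/core.hpp>
        #include <opencv2/imgproc.hpp>
        #include <opencv2/photo.hpp>

        // Inpaint each connected hole region in its own task, then merge the results.
        void inpaintHolesPerRegion(cv::Mat& depth16u)                 // CV_16UC1, 0 = undefined
        {
            cv::Mat holes = (depth16u == 0);                          // CV_8UC1 mask of missing pixels
            cv::Mat labels, stats, centroids;
            int n = cv::connectedComponentsWithStats(holes, labels, stats, centroids, 8);

            struct Patch { cv::Rect roi; cv::Mat mask, filled; };
            std::vector<std::future<Patch>> tasks;
            for (int i = 1; i < n; ++i)                               // label 0 is the valid area
            {
                cv::Rect r(stats.at<int>(i, cv::CC_STAT_LEFT),  stats.at<int>(i, cv::CC_STAT_TOP),
                           stats.at<int>(i, cv::CC_STAT_WIDTH), stats.at<int>(i, cv::CC_STAT_HEIGHT));
                r.x = std::max(0, r.x - 10);  r.y = std::max(0, r.y - 10);   // pad the bounding box
                r.width += 20;  r.height += 20;                              // so inpainting has context
                r &= cv::Rect(0, 0, depth16u.cols, depth16u.rows);
                tasks.push_back(std::async(std::launch::async, [&, i, r]() {
                    Patch p;
                    p.roi  = r;
                    p.mask = (labels(r) == i);                        // only this region's missing pixels
                    // Newer OpenCV accepts 16-bit single-channel input here; older builds need 8-bit.
                    cv::inpaint(depth16u(r), p.mask, p.filled, 5.0, cv::INPAINT_TELEA);
                    return p;
                }));
            }
            std::vector<Patch> patches;
            for (auto& t : tasks) patches.push_back(t.get());         // wait for all tasks to finish
            for (auto& p : patches)                                   // then merge on the calling thread
            {
                cv::Mat dst = depth16u(p.roi);
                p.filled.copyTo(dst, p.mask);
            }
        }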

    There are superb near real-time methods for producing a stable depth image of STATIC scenes. Not that I'd know why one would need that, other than maybe for 3D scene reconstruction, where it is OK to hold the Kinect still for a few seconds. But even then, stuff like KinectFusion works as well as it can with imperfect depth maps anyway...

    Saturday, May 19, 2012 1:24 PM