Depth accuracy tests

  • Question

  • Hi,

    I did some accuracy measurements with a slightly modified depth basics example (see the color-mapping sketch after this list):

    • green pixel = exactly the predefined distance in mm
    • changing to dark blue (in mm steps, up to 5mm) = closer to the camera
    • changing to red (in mm steps, up to 5mm) = other direction
    • black = undefined/out of range
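
    A minimal sketch of such a mapping, assuming the Kinect v2 convention that a raw depth value of 0 marks an invalid pixel (the struct and function names here are mine, not the SDK sample's):

        #include <cstdint>
        #include <cstdlib>

        struct Rgb { uint8_t r, g, b; };

        // Map a measured depth against a reference distance (both in mm)
        // onto the ramp described above: green at the reference, shading
        // toward blue when closer and toward red when farther, clamped at
        // +/- 5 mm; black for invalid pixels.
        Rgb ColorizeDepth(uint16_t depthMm, uint16_t referenceMm)
        {
            if (depthMm == 0)
                return { 0, 0, 0 };               // undefined / out of range

            int delta = (int)depthMm - (int)referenceMm;
            if (delta == 0)
                return { 0, 255, 0 };             // exactly the reference

            int mag = std::abs(delta);
            if (mag > 5) mag = 5;                 // clamp to the 5 mm band
            uint8_t v = (uint8_t)(mag * 51);      // each 1 mm step = 51 levels

            if (delta < 0)
                return { 0, (uint8_t)(255 - v), v };    // closer -> blue
            return { v, (uint8_t)(255 - v), 0 };        // farther -> red
        }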

    First I placed the Kinect at a distance of ~60 cm in front of a door and set the green pixel distance to 60 cm. Here is the result:

    The gap in the center is obviously from reflected IR. The data changes quite often over time (it stays within the given range). I expected a flat surface with values of +/- 1 mm. Looking at the infrared image ... it does not look smooth either, and it shows a pattern similar to the one above:


    Why don't I get a smooth flat surface?

    How can I improve it? Averaging is not an option.

    Next I pointed the camera at a wall (~2 m distance) and got a similar effect (there is a table in the lower area; in any case, it looks like a donut):

    Again ... no flat surface :(

    In the end I went as close as possible to the wall (50 cm) and got a black rectangle (???):

    Could you help me, please? Any suggestions/hints?

    Thanks!

    Wednesday, October 22, 2014 8:20 PM

All replies

  • Interesting findings; it would be good to know the scenarios you are trying to work with. Given that 50 cm is the minimum distance of the sensor, the last image can be explained by your being too close.
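
    For reference, a minimal validity check, assuming the usual Kinect v2 behavior: depth is reported in millimeters, unresolvable pixels come back as 0, and the reliable range is roughly 500-4500 mm. Closer than ~50 cm, whole regions drop to 0, which renders as the black rectangle above:

        #include <cstdint>

        // True only for depth values inside the sensor's reliable range.
        // (The SDK also exposes these bounds per frame source; the constants
        // here are just the commonly cited defaults.)
        inline bool IsValidDepth(uint16_t depthMm)
        {
            return depthMm >= 500 && depthMm <= 4500;
        }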


    Carmine Sirignano - MSFT

    Wednesday, October 22, 2014 8:54 PM
  • Hello. I have encountered this same issue. Using the Kinect to measure a known reference shape, I get errors similar to yours. For example, at 1.2 meters from the reference object I find a constant error of +/- 4 millimeters in the same pattern you see. The use case is 3D scanning that is as precise as possible.

    Also, to add: this is the average of approximately 30 frames of a static scene, so it is not a time-varying property.
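
    For what it's worth, a sketch of that kind of per-pixel averaging (an assumption about the method, not the actual code used), skipping pixels that read 0 in a given frame:

        #include <cstdint>
        #include <vector>

        // Average ~30 depth frames of a static scene per pixel, counting
        // only the frames in which that pixel was valid (non-zero).
        std::vector<float> AverageDepthFrames(
            const std::vector<const uint16_t*>& frames, size_t pixelCount)
        {
            std::vector<double> sum(pixelCount, 0.0);
            std::vector<int> count(pixelCount, 0);

            for (const uint16_t* frame : frames)
                for (size_t i = 0; i < pixelCount; ++i)
                    if (frame[i] != 0) {          // 0 marks an invalid pixel
                        sum[i] += frame[i];
                        ++count[i];
                    }

            std::vector<float> avg(pixelCount, 0.0f);
            for (size_t i = 0; i < pixelCount; ++i)
                if (count[i] > 0)
                    avg[i] = (float)(sum[i] / count[i]);
            return avg;
        }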


    • Edited by John Gunn Thursday, October 23, 2014 3:32 AM
    Thursday, October 23, 2014 2:24 AM
  • Right now I'm evaluating the accuracy of Kinect v1 vs. v2, and later on I will do 3D reconstruction (moving camera -> no averaging).

    Yep, I'm close. But shouldn't the borders also be black if the center is black?!

    Today I will do some more accurate tests...

    Thursday, October 23, 2014 6:28 AM
  • It's the same. Any idea how to fix it?
    Thursday, October 23, 2014 6:29 AM
  • Resurrecting an old thread, sorry.

    Any updates? I'm about to start doing something along these lines, as I've done some tinkering that has shown me significantly larger distortions. To me it looks like something caused by the IR lens having a different thickness at different radial offsets from the optical axis, with the light pulses taking longer to get through the lens. Does this sound reasonable?

    I'm guessing the sensor was factory calibrated/zeroed against a flat surface at a certain distance. If that is the case, this pattern should persist as you move away from the sensor, expanding linearly as a function of radial offset in the length of the ray from the 'true' surface to the measured pixel location.

    If so, then you should be able to measure your own Kinect's distortion map at a few set distances and interpolate it over a world space of any size you want. Then, on every depth frame: project each point into world space, apply the correction from the lookup table, and repeat.
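
    A rough sketch of that idea, simplified to a per-pixel depth correction along the ray rather than a full world-space projection, and assuming the error at each pixel varies linearly with measured depth (all names here are illustrative, not Kinect SDK API):

        #include <cstddef>
        #include <cstdint>

        struct ErrorModel {
            float slope;    // mm of error per mm of measured depth
            float offset;   // extrapolated error at zero depth, in mm
        };

        // Fit a per-pixel linear error model from two averaged captures of
        // a flat wall at known true distances d1 and d2 (all values in mm).
        void FitErrorModel(const float* meas1, const float* meas2,
                           float d1, float d2, size_t n, ErrorModel* model)
        {
            for (size_t i = 0; i < n; ++i) {
                float e1 = meas1[i] - d1;         // error at the near capture
                float e2 = meas2[i] - d2;         // error at the far capture
                float dm = meas2[i] - meas1[i];
                model[i].slope  = (dm != 0.0f) ? (e2 - e1) / dm : 0.0f;
                model[i].offset = e1 - model[i].slope * meas1[i];
            }
        }

        // On every depth frame, subtract the interpolated error per pixel.
        void CorrectFrame(uint16_t* depth, const ErrorModel* model, size_t n)
        {
            for (size_t i = 0; i < n; ++i) {
                if (depth[i] == 0) continue;      // invalid pixel, leave it
                float err = model[i].slope * depth[i] + model[i].offset;
                depth[i] = (uint16_t)(depth[i] - err + 0.5f);
            }
        }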

    If anyone has experienced anything similar I would be grateful to hear more.

    Thanks,

    Phil

    • Edited by Phil Noonan Thursday, December 11, 2014 4:50 PM
    Thursday, December 11, 2014 4:45 PM