Data on typical depth accuracy/precision within the field of view

  • Question

  • I've read a number of papers that attempt to quantify the accuracy/precision of a depth measurement as a function of its 3D position within the field of view. This is typically done by sampling points across a physical planar surface, either face-on or at an angle to the sensor. I'm quite interested in taking such measurements and turning them into a general lookup function for a given x, y, z value, so I can discard unreliable points from point clouds fused from several Kinect sensors. However, the data in the papers I have looked at is not in an easily usable form. What data is available in the public domain?
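
    To make the goal concrete, here is a rough sketch (Python) of the kind of lookup function I have in mind. The error model, coefficients, and field-of-view value are placeholders I made up, to be fitted from planar-target measurements rather than taken as established numbers:

```python
import numpy as np

# Hypothetical error model (all coefficients are placeholders to be fitted
# from planar-target measurements): axial noise grows roughly quadratically
# with depth, and degrades towards the edge of the field of view.
A = 1.5e-3                    # quadratic depth coefficient (1/m), placeholder
EDGE_FACTOR = 2.0             # error multiplier at the FOV edge, placeholder
HALF_FOV = np.radians(35.0)   # approximate half field of view, placeholder

def expected_depth_sigma(x, y, z):
    """Estimated 1-sigma depth error (metres) at camera-space (x, y, z)."""
    sigma_axial = A * z ** 2                 # grows with distance squared
    theta = np.arctan2(np.hypot(x, y), z)    # angle off the optical axis
    penalty = 1.0 + (EDGE_FACTOR - 1.0) * (theta / HALF_FOV) ** 2
    return sigma_axial * penalty

def keep_mask(points, max_sigma=0.02):
    """Boolean mask over an (N, 3) array of fused points: keep those whose
    modelled error is under max_sigma (2 cm here)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return expected_depth_sigma(x, y, z) < max_sigma
```

    A fused cloud (as an (N, 3) array) could then be filtered with cloud[keep_mask(cloud)].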

    Monday, January 29, 2018 1:38 PM

All replies

  • I don't have the background to talk specifics, but from what I've skimmed about such things, the measurements also depend on environmental factors and on the temperature of the sensor. For example, the sensor needs at least 20-45 minutes to reach a steady operating temperature in order to minimize the frame-to-frame deviation/error (the imprecision/inaccuracy, if you will). Any sampling before that stabilization point will be unpredictable and unreliable.
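
    As a trivial illustration of the warm-up point, assuming you control the capture loop (the 45-minute threshold is just the upper end of the figure above):

```python
import time

WARMUP_SECONDS = 45 * 60   # upper end of the reported 20-45 min warm-up

_capture_started = time.monotonic()

def sensor_warmed_up():
    """Crude gate: treat depth frames as unreliable until the sensor has had
    time to reach a steady operating temperature."""
    return time.monotonic() - _capture_started >= WARMUP_SECONDS
```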

    I'm not sure it's a good idea in general to fold data gathered in other environments into a generic function. If you took the measurements yourself, with all the sensors that will take part in your setup and in the environment you'll be working in, the results would be more precise.

    Like I said, I don't have the theoretical background to back this up, but having installed Kinect applications in several places, I always run into different problems. Also, each sensor has its own internal calibration, configured at the factory by MS, but it's not as if all sensors give the same results after factory calibration, nor do they share the same calibration.

    Have you actually noticed a pattern in those measurements, apart from the pincushion distortion that MS has already acknowledged, i.e. that data in the corners are not reliable?
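
    If you do end up discarding by position, a blunt first cut along those lines is to mask out the outermost depth pixels. A minimal sketch, assuming a Kinect v2 depth resolution of 512x424 and a cutoff fraction that you would tune against your own data:

```python
import numpy as np

def central_pixel_mask(width=512, height=424, max_radius_frac=0.9):
    """Boolean mask over the depth image that drops the outermost pixels,
    where pincushion distortion is worst. 512x424 is the Kinect v2 depth
    resolution (use 640x480 for v1); the cutoff fraction is a guess to be
    tuned against your own measurements."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    # Normalised radial distance: 0 at the centre, sqrt(2) in the corners.
    r = np.hypot((xs - cx) / cx, (ys - cy) / cy)
    return r <= max_radius_frac * np.sqrt(2)
```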

    Tuesday, January 30, 2018 10:22 AM