Kinect Fusion filtering

  • Question

  • Sorry for hijacking this thread, but it just fits my own question perfectly. I have a Kinect 60 cm in front of a planar surface (a tabletop). I measured the depth both by taking a screenshot with the "Depth Basics" sample and by using the "Kinect Fusion Explorer". In both cases the results are similar (as expected), but very far from the ~1.5 mm deviation from the original object you mentioned above. In fact I get some kind of circular waves on the surface, probably caused by spherical distortion. On top of that there is massive noise, resulting in a total amplitude of ~1.5 cm around the original surface.

    So I also would be very happy to be able to apply some sort of calibration prior to the fusion process. Is there a (simple) way to do that?

    Greetings from Germany.
    Tuesday, June 4, 2013 6:42 PM

All replies

  • Sorry for hijacking this thread

    Please don't; always create a new post and link to the thread you want to reference. That makes it easier to judge a thread's age from when it was originally created.

    http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/e7346b37-204d-48c1-a204-14157719f653

    As for your issue, you are seeing depth image artifacts:

    http://msdn.microsoft.com/en-us/library/jj131032.aspx#ID4EOC

    You can always apply some type of image smoothing on the data before passing the frame to the Fusion engine (a rough sketch of one way to do that is below). Here is an older thread discussing this:

    http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/40d55ac6-19b7-4331-aca4-7f8cd271b78b
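
    For illustration, here is a minimal sketch of such a smoothing step, assuming the usual raw depth layout from the C++ samples (one unsigned 16-bit millimeter value per pixel, 0 meaning "no reading"). The function name and buffer handling are mine, not part of the SDK; a 3x3 median is just one reasonable choice.

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      // 3x3 median filter on a raw depth frame (values in millimeters,
      // 0 = invalid pixel). Invalid neighbors are skipped so holes are
      // not smeared into valid geometry. Returns a new, filtered buffer.
      std::vector<uint16_t> MedianFilterDepth(const std::vector<uint16_t>& depth,
                                              int width, int height)
      {
          std::vector<uint16_t> out(depth.size(), 0);
          for (int y = 0; y < height; ++y)
          {
              for (int x = 0; x < width; ++x)
              {
                  uint16_t window[9];
                  int n = 0;
                  for (int dy = -1; dy <= 1; ++dy)
                  {
                      for (int dx = -1; dx <= 1; ++dx)
                      {
                          int nx = x + dx, ny = y + dy;
                          if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                              continue;
                          uint16_t d = depth[ny * width + nx];
                          if (d != 0)           // skip invalid pixels
                              window[n++] = d;
                      }
                  }
                  if (n > 0)
                  {
                      std::nth_element(window, window + n / 2, window + n);
                      out[y * width + x] = window[n / 2];
                  }
              }
          }
          return out;
      }

    A median tends to suppress pixel-level depth noise without inventing depth values across object edges the way a box blur would, but note that smoothing only hides noise; it cannot remove a systematic distortion.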

    Tuesday, June 4, 2013 7:39 PM
  • Ok, I'm sorry and won't do it again.

    The point is, I know there is work being done that improves the accuracy of the depth image significantly, to be found here: http://www.ee.oulu.fi/~dherrera/kinect/

    My question would be how to apply those calibration parameters to the image without recoding the whole Fusion example (which is unfortunately way beyond my coding abilities). I'm not after smoothing the image artificially, thereby degrading accuracy; I'd like to improve it by calibrating the depth image.
    I just stumbled over the ~1.5 mm from the original you mentioned (which would be absolutely acceptable for my purposes) and wonder how to achieve that, given the ~1.5 cm I am seeing.
    Wednesday, June 5, 2013 8:34 AM
  • There isn't a way to modify the Kinect's calibration; this is done at the factory. Using the coordinate mapper will project the values into the respective coordinate space (see the sketch below).
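
    For illustration only, projecting a single depth pixel into 3-D camera ("skeleton") space with the SDK 1.x C++ API might look like the sketch below; the helper name is mine, and I am assuming the NuiTransformDepthImageToSkeleton overload that takes a resolution, which expects the millimeter depth shifted left by three bits (the low bits carry the player index):

      #include <Windows.h>
      #include <NuiApi.h>   // Kinect for Windows SDK 1.x

      // Project one 640x480 depth pixel into 3-D skeleton (camera) space.
      // depthMm is the plain millimeter value; the API wants it packed.
      Vector4 DepthPixelTo3D(LONG x, LONG y, USHORT depthMm)
      {
          return NuiTransformDepthImageToSkeleton(
              x, y,
              static_cast<USHORT>(depthMm << 3),  // pack as the SDK expects
              NUI_IMAGE_RESOLUTION_640x480);
      }

    Note this only re-projects existing values; it does not change the factory intrinsics.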

    As for the research paper provided by that link, you will have to contact the author about his process. The values may be part of an image transformation applied to the depth frame as a post-process. Have you looked at the code they provide?

    Wednesday, June 5, 2013 10:55 PM
  • Yes, I have. As I understood it, they take the raw disparity values (provided by libfreenect) and do their own conversion to depth values. I think I could do that using libfreenect and the provided Matlab toolbox, but what I need is a way to get the resulting depth images into Fusion (a sketch of the conversion itself is below).
    Post-processing the depth values prior to the fusion process would surely work as well, but the problem of getting the data into Fusion remains the same.
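
    If I read their paper correctly, the correction is a per-pixel disparity adjustment applied before the disparity-to-depth conversion: dk = d + Ddelta(u,v) * exp(alpha0 - alpha1 * d), then z = 1 / (c1 * dk + c0). The sketch below assumes that model, with c0, c1, alpha0, alpha1 and the per-pixel pattern Ddelta exported from their Matlab toolbox; the function name, units, and the 2047 "no reading" sentinel for raw 11-bit disparities are my assumptions.

      #include <cmath>
      #include <cstdint>
      #include <vector>

      // Convert raw Kinect disparities to corrected depth in millimeters,
      // applying the per-pixel disparity distortion correction first.
      std::vector<uint16_t> DisparityToCorrectedDepthMm(
          const std::vector<uint16_t>& disparity,  // raw 11-bit disparities
          const std::vector<float>& Ddelta,        // spatial pattern, same size
          float c0, float c1, float alpha0, float alpha1)
      {
          std::vector<uint16_t> depthMm(disparity.size(), 0);
          for (size_t i = 0; i < disparity.size(); ++i)
          {
              float d = static_cast<float>(disparity[i]);
              if (d >= 2047.0f)                    // sentinel: no reading
                  continue;
              float dk = d + Ddelta[i] * std::exp(alpha0 - alpha1 * d);
              float zMeters = 1.0f / (c1 * dk + c0);
              if (zMeters > 0.0f)
                  depthMm[i] = static_cast<uint16_t>(zMeters * 1000.0f + 0.5f);
          }
          return depthMm;
      }

    A buffer like this could then stand in for the SDK's own depth values at the point where the Fusion Explorer sample converts its depth frame to the float frame it feeds to the reconstruction engine, which would address the "getting it into Fusion" part without rewriting the whole sample.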

    Thursday, June 6, 2013 9:42 AM