Internal camera parameters and Kinect Fusion

  • Question

  • Hello,

    I am trying to scan a model using Kinect Fusion, but I need to be sure that the resulting model I get from the SDK matches the real-world object exactly.

    I see that we can set internal camera parameters such as fx, fy, cx, and cy in the Kinect SDK. Should I calibrate these parameters and set them myself, or does the Kinect SDK automatically set them to the most appropriate values?

    My other question is: can we also set radial distortion parameters such as k1, k2, k3, and k4, or are they handled automatically by the SDK?
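    For reference, the role these parameters play can be sketched with a standard pinhole model plus radial distortion terms. The focal lengths and principal point below are rough nominal values for the Kinect v1 depth camera, chosen for illustration, not calibrated ones:

    ```python
    def project(point_cam, fx, fy, cx, cy, k1=0.0, k2=0.0, k3=0.0):
        """Project a 3D camera-space point (meters) to pixel coordinates
        using a pinhole model with optional radial distortion terms."""
        X, Y, Z = point_cam
        x, y = X / Z, Y / Z                              # normalized coordinates
        r2 = x * x + y * y
        d = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial distortion factor
        return fx * x * d + cx, fy * y * d + cy

    # A point 10 cm right and 5 cm up at 1 m depth, with distortion disabled:
    u, v = project((0.1, 0.05, 1.0), fx=585.0, fy=585.0, cx=320.0, cy=240.0)
    print(u, v)
    ```

    Getting fx, fy, cx, cy wrong skews the back-projection of every depth pixel, which is why miscalibrated intrinsics show up as systematic, position-dependent error in the scanned model rather than random noise.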

    Tuesday, May 28, 2013 8:41 AM

All replies

  • What do you mean by "exactly"? What is your use case?

    Since Kinect Fusion uses the Kinect's depth data, the depth values are accurate to within 1mm but carry some level of noise. Fusion approximates that depth data over time. Additionally, the resolution of the image itself is a factor: fine detail may not be picked up in a 640x480 scan. The generated object also has its own resolution, determined by the voxel size, which factors in as well. You should be within ~1.5mm of the original object, which may or may not fall within the range acceptable to you.
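    For a sense of scale, here is how the reconstruction-volume settings translate into voxel size and physical extent. The variable names only mirror the SDK's reconstruction parameters conceptually, and the values are just one common configuration, not defaults:

    ```python
    # Hypothetical reconstruction-volume configuration (illustrative values):
    voxels_per_meter = 256.0           # reconstruction resolution
    voxel_count = (384, 384, 384)      # voxels along X, Y, Z

    voxel_size_mm = 1000.0 / voxels_per_meter                 # mm per voxel
    extent_m = tuple(n / voxels_per_meter for n in voxel_count)  # volume size

    print(voxel_size_mm)   # ~3.9 mm per voxel
    print(extent_m)        # a 1.5 m cube
    ```

    Raising voxels_per_meter gives finer detail but shrinks the physical volume you can cover with the same voxel counts (and the same GPU memory), so the voxel size is a real floor on how much fine structure survives into the mesh.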

    Tuesday, May 28, 2013 11:17 PM
  • Having a consistent +/- 1mm error is no problem. But if Kinect Fusion does not take the internal camera parameters into account, the model might have nonuniform, centimeter-level errors, which would cause trouble for our project.

    So, should I set the internal camera parameters myself, or does Kinect Fusion handle them for me?

    Friday, May 31, 2013 4:45 AM
  • Fusion doesn't do any further processing on the data. The depth data already incorporates this information across the full frame. Because Fusion continuously analyzes the incoming depth data, variations should smooth out over time as frames captured from different angles are integrated.
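    The smoothing described here can be sketched as the per-voxel weighted running average that a TSDF-style integration performs; the weight cap and noise level below are illustrative assumptions, not SDK values:

    ```python
    import random

    def integrate(value, weight, observation, max_weight=128):
        """Blend one new observation into the stored running average;
        capping the weight keeps the average responsive to new frames."""
        new_value = (value * weight + observation) / (weight + 1)
        return new_value, min(weight + 1, max_weight)

    random.seed(0)
    value, weight = 0.0, 0
    true_depth = 1.0                                  # meters
    for _ in range(200):
        noisy = true_depth + random.gauss(0, 0.003)   # ~3 mm per-frame noise
        value, weight = integrate(value, weight, noisy)
    print(value)                                      # converges near 1.0
    ```

    Each voxel's stored value is an average over many observations, so uncorrelated per-frame noise shrinks with the number of frames; systematic error from wrong intrinsics, by contrast, does not average out.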

    Friday, May 31, 2013 6:19 PM
  • > Fusion doesn't do any further processing on the data. The depth data already takes into account this information and should take this into account for a full frame. ...

    What do you mean by "this information"? Do you mean the internal camera parameters?

    Tuesday, June 4, 2013 8:54 PM