Help/Criticize my calibration idea

  • General discussion

  • Hey all, 

    This is just a quick question (sort of) about "post-calibration."

    I realized that even though the raw 11-bit depth data is already in millimeters, it would be incorrect to calibrate the 11-bit raw data directly, since I want to plug it back into the skeleton engine to find X, Y, Z in "real world" coordinates. Now, I am well aware that the Kinect was built to play games and is probably not the most accurate sensor out there, but I have seen people calibrate the Kinect to very accurate levels (although they used PrimeSense drivers).

    So I started a calibration test that would help with post-processing of the X, Y, Z values. The first test was obviously depth. Since X and Y both depend on depth, I only measured the depth given by the Kinect and compared it to the real-world, ruler-measured depth; the error appears to grow with the square of the distance from the Kinect. Now my question is: is the method I mentioned above the best? If so, should I continue on to calibrate X (comparing real-world X and virtual X, but also considering depth) and then do Y?
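
    For concreteness, the kind of correction fit I have in mind looks roughly like this: a minimal sketch that least-squares-fits trueDepth ≈ a*z^2 + b*z + c to (Kinect depth, ruler depth) pairs. The measurement pairs below are made-up placeholders, not real data.

    ```csharp
    using System;

    class DepthFit
    {
        static void Main()
        {
            // (Kinect-reported depth, ruler-measured depth) in mm -- hypothetical values.
            double[] z = { 800, 1200, 1600, 2000, 2400 };
            double[] t = { 806, 1213, 1622, 2036, 2453 };

            // Accumulate the normal equations A * x = r for the basis {z^2, z, 1}.
            var A = new double[3, 3];
            var r = new double[3];
            for (int i = 0; i < z.Length; i++)
            {
                double[] phi = { z[i] * z[i], z[i], 1.0 };
                for (int j = 0; j < 3; j++)
                {
                    r[j] += phi[j] * t[i];
                    for (int k = 0; k < 3; k++) A[j, k] += phi[j] * phi[k];
                }
            }

            // Naive Gaussian elimination; fine for a well-conditioned 3x3 system.
            for (int p = 0; p < 3; p++)
                for (int q = p + 1; q < 3; q++)
                {
                    double f = A[q, p] / A[p, p];
                    for (int k = p; k < 3; k++) A[q, k] -= f * A[p, k];
                    r[q] -= f * r[p];
                }

            // Back-substitution for the coefficients a, b, c.
            var x = new double[3];
            for (int p = 2; p >= 0; p--)
            {
                x[p] = r[p];
                for (int k = p + 1; k < 3; k++) x[p] -= A[p, k] * x[k];
                x[p] /= A[p, p];
            }

            Console.WriteLine("corrected(z) = {0:g4}*z^2 + {1:g4}*z + {2:g4}", x[0], x[1], x[2]);
        }
    }
    ```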

     

    Thanks in advance, this forum has been extremely helpful. 

     

    • Changed type by Eddy Escardo-Raffo [MSFT], Friday, July 15, 2011 12:07 AM: "this seems to be more of a discussion starter rather than seeking a specific answer"
    Thursday, July 14, 2011 3:06 AM

All replies

  • Project the depth into skeleton space, then instead of encoding depth as the pixel value, encode x, y, or z. Point it at a wall and see what you get, and play around with that a bit. Likely the first thing you will find is that if you point the depth sensor at a wall you get circles; that's because the routine for converting depth to skeleton space is wrong. Do it correctly and you'll find there are a few values you need, and there's your calibration. I posted how to calculate the focal length and use it to project into 3-space on the general board. The focal length is wrong because the assumed field of view is wrong, so finding the right field of view is one calibration task. Even with the wrong field of view, though, it should allow you to position the camera so that the wall is one solid color.
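
    To make that concrete, here's a minimal pinhole-model sketch of the projection. The 57°/43° field-of-view figures are just the commonly quoted nominal Kinect specs, not calibrated values; tuning them is exactly the calibration task described above.

    ```csharp
    using System;

    static class DepthToSkeleton
    {
        const int Width = 320, Height = 240;          // depth image size
        const double HFovDeg = 57.0, VFovDeg = 43.0;  // nominal field of view

        // Focal lengths in pixels: f = (size / 2) / tan(fov / 2).
        static readonly double Fx = (Width / 2.0) / Math.Tan(HFovDeg * Math.PI / 360.0);
        static readonly double Fy = (Height / 2.0) / Math.Tan(VFovDeg * Math.PI / 360.0);

        // Project depth pixel (u, v) with depth z (mm) to camera-space X, Y (mm).
        public static void Project(int u, int v, double z, out double x, out double y)
        {
            x = (u - Width / 2.0) * z / Fx;
            y = (Height / 2.0 - v) * z / Fy;  // flip so +Y points up
        }
    }
    ```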

    That tells you the sensor is square to the wall. Switch between x, y, and z encoding and you get vertical lines, horizontal lines, or a solid color, respectively. If the floor and a side wall are visible and they are square, then they switch too. The goal in calibration is to get that image to line up with the original depth image, i.e. the corners and edges are in the same place in both images. Adjusting the horizontal field of view zooms horizontally, the vertical field of view vertically. You have three lines to line up.
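
    The encoding itself can be as simple as mapping the chosen coordinate to a gray level over an assumed scene range; on a wall that is square to the sensor, z-encoding should then come out as one solid shade:

    ```csharp
    using System;

    static class CoordinateShading
    {
        // Map one camera-space coordinate (x, y, or z, in mm) to a gray level.
        // min/max bound the range expected in the scene; values outside clamp.
        public static byte Shade(double coord, double min, double max)
        {
            double s = (coord - min) / (max - min);
            return (byte)(255.0 * Math.Max(0.0, Math.Min(1.0, s)));
        }
    }
    ```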

    If you can't get them to line up, then the focal point isn't centered, or the focal axis and the sensor normal aren't coincident, or both; there's additional calibration to be done. The focal point not being centered is easy enough to handle: you just shift the center point around. You wouldn't be able to tell that unless you have five mutually perpendicular surfaces visible, i.e. the end of a room; you can match one corner, but you can't match all four. If you're not axis-aligned it's pretty obvious: you can match two corners, but not the other two.
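
    Shifting the center point around amounts to two extra tunables on top of the projection sketched above; the offset names here are just illustrative:

    ```csharp
    static class PrincipalPoint
    {
        // Principal-point offsets in pixels; both start at zero, i.e. the optical
        // center is assumed to be the image center until calibration says otherwise.
        public static double CxOffset = 0.0, CyOffset = 0.0;

        public static void Project(int u, int v, double z, double fx, double fy,
                                   int width, int height, out double x, out double y)
        {
            double cx = width / 2.0 + CxOffset;
            double cy = height / 2.0 + CyOffset;
            x = (u - cx) * z / fx;
            y = (cy - v) * z / fy;
        }
    }
    ```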

    The depth measurements themselves are just what they are. But this would at least get you projecting them correctly: your virtual space lines up with the depth image from the device, and your relative proportions and angles are the same.

    Thursday, July 14, 2011 9:27 AM
  • Keep in mind that the skeleton Kinect provides can be jumpy sometimes. I cycle through the skeleton structure until I find a long series of matching lengths between frames (within a 100th of an inch), then take those to be my body segment lengths for that person.
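
    A sketch of that stabilization idea: accept a segment length once some number of consecutive frames agree within the tolerance (0.254 mm is roughly the 100th of an inch mentioned above; the frame count is an arbitrary choice):

    ```csharp
    using System;

    class SegmentLengthLock
    {
        readonly double tol;    // frame-to-frame tolerance in mm
        readonly int needed;    // consecutive agreeing frames required
        double last;
        int run;

        public double? Locked { get; private set; }

        public SegmentLengthLock(double tolMm = 0.254, int framesNeeded = 30)
        {
            tol = tolMm;
            needed = framesNeeded;
        }

        // Feed one per-frame length measurement (mm) of the same body segment.
        public void Add(double length)
        {
            if (Locked.HasValue) return;
            run = Math.Abs(length - last) <= tol ? run + 1 : 0;
            last = length;
            if (run >= needed) Locked = length;
        }
    }
    ```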
    Thursday, July 14, 2011 4:47 PM
  • You can also use smoothing to help with the jumpiness of the skeleton positions. (Call NuiTransformSmooth in C++, or set SkeletonEngine.TransformSmooth to true in C#.)
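
    For the C# case, a rough sketch of what that looks like with the beta SDK (assuming `nui` is an initialized Runtime from Microsoft.Research.Kinect.Nui; the parameter values are just starting points to tune, and names may differ between SDK versions):

    ```csharp
    // Assumes the beta SDK (Microsoft.Research.Kinect.Nui) and an initialized
    // Runtime named nui; values are guesses to tune, not recommendations.
    nui.SkeletonEngine.TransformSmooth = true;
    nui.SkeletonEngine.SmoothParameters = new TransformSmoothParameters
    {
        Smoothing = 0.5f,
        Correction = 0.5f,
        Prediction = 0.5f,
        JitterRadius = 0.05f,
        MaxDeviationRadius = 0.04f
    };
    ```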
    Thursday, July 14, 2011 5:25 PM
  • Well, that was a fanciful idea I had, i.e. that you can calibrate without actually measuring anything. I really shouldn't post late at night :) I rather like the idea of shading by x, y, and z; it's a rather cool effect. The aligning-images thing would work between depth and RGB, but not depth to depth; those would always match up. I think playing around with that shading idea would help you get a clear sense of what you want to do to calibrate.
    Thursday, July 14, 2011 7:42 PM