color-depth stereo calibration

  • Question

  • Is there any way to get the transformation matrix between the color camera and depth camera?  I know there are functions to convert between color coordinates and depth space, but I would like to get the actual transformation matrix between these two cameras.

    For Kinect v1, I've done stereo calibration via a chessboard (by covering the IR emitter and using my own IR source). Since Kinect v1 is a structured-light depth camera, calibrating the color camera and the IR camera essentially gives me the relative transformation between these two cameras (a sketch of this procedure with OpenCV follows after this post).

    Now that v2 is a ToF camera, I guess the IR camera has nothing to do with the depth data? Is the depth data generated by the ToF camera directly? In that case, what should I do to get the transformation matrix between the color camera and the depth camera?

    Has anyone done a similar calibration for Kinect v2?

    Thanks!

    Saturday, August 2, 2014 8:37 AM
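    For reference, the chessboard procedure mentioned above can be reproduced with OpenCV. The following is a minimal sketch, not Kinect-specific: it assumes you have captured paired color and IR images of the same chessboard, and the file-name patterns, board dimensions, and square size are placeholders for your own setup.

        # Minimal chessboard stereo calibration sketch (OpenCV, Python).
        # File names and board parameters below are assumptions.
        import glob
        import cv2
        import numpy as np

        PATTERN = (9, 6)   # inner corners per row/column (assumption)
        SQUARE = 0.025     # chessboard square size in meters (assumption)

        # 3D corner positions in the board's own coordinate frame
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

        obj_pts, color_pts, ir_pts = [], [], []
        for c_file, i_file in zip(sorted(glob.glob("color_*.png")),
                                  sorted(glob.glob("ir_*.png"))):
            color = cv2.imread(c_file, cv2.IMREAD_GRAYSCALE)
            ir = cv2.imread(i_file, cv2.IMREAD_GRAYSCALE)
            ok_c, corners_c = cv2.findChessboardCorners(color, PATTERN)
            ok_i, corners_i = cv2.findChessboardCorners(ir, PATTERN)
            if ok_c and ok_i:  # keep only pairs where both detections succeed
                obj_pts.append(objp)
                color_pts.append(corners_c)
                ir_pts.append(corners_i)

        # Calibrate each camera individually first ...
        _, K_c, d_c, _, _ = cv2.calibrateCamera(obj_pts, color_pts,
                                                color.shape[::-1], None, None)
        _, K_i, d_i, _, _ = cv2.calibrateCamera(obj_pts, ir_pts,
                                                ir.shape[::-1], None, None)

        # ... then solve for the relative pose between the two cameras.
        ret, K_c, d_c, K_i, d_i, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, color_pts, ir_pts, K_c, d_c, K_i, d_i,
            color.shape[::-1], flags=cv2.CALIB_FIX_INTRINSIC)
        print("rotation:\n", R, "\ntranslation (m):\n", T)

    Because the color images were passed as the first image set, the returned R and T transform points from the color camera's frame into the IR camera's frame.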

Answers

  • We don't expose a transformation matrix for mapping between the two sensors. However, our IR and depth images are all based on the data from our ToF IR sensors, so the same calibration techniques that worked on the v1 sensor will work on the v2 sensor as well. Just remember that for best accuracy, these calibration values should be generated for each sensor individually.

    Thank you,

    The Kinect Team


    Jesse Kaplan [msft]

    • Proposed as answer by Jesse Kaplan [msft] Monday, August 4, 2014 7:42 PM
    • Marked as answer by RGBD Friday, August 8, 2014 5:01 AM
    Monday, August 4, 2014 7:42 PM

All replies

  • Thanks Jesse for the answer! That's good news.

    Looks like I had some misunderstanding of how Kinect v2 works. Just want to confirm I understand it right now: in this image, the thing in the center is just the IR emitter (it does not see IR); the second camera from the left is the IR camera, from which depth points are generated using ToF principles. In other words, depth points are in the IR camera's frame.

    Kevin

    Tuesday, August 5, 2014 2:14 AM
  • Correct: IR, depth, and BodyIndex all come from the same lens and share the same coordinate system. Body joints are generated from depth, so they are in that same frame, but they are given as distances from the camera plane rather than in pixels (a back-projection sketch follows below).

    Jesse Kaplan [msft]

    Tuesday, August 5, 2014 4:10 AM
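    To make the above concrete: since depth values live in the IR camera's frame, a depth pixel (u, v) with depth z can be back-projected into a 3D camera-space point using the IR intrinsics alone. A minimal pinhole-model sketch; the intrinsic values below are placeholders for your own calibration results, and lens distortion is ignored:

        import numpy as np

        # IR-camera intrinsics from your own calibration (placeholder values)
        fx, fy = 365.0, 365.0  # focal lengths in pixels (assumption)
        cx, cy = 256.0, 212.0  # principal point of the 512x424 depth image (assumption)

        def depth_pixel_to_point(u, v, z_m):
            """Back-project depth pixel (u, v) with depth z_m (meters) into
            a 3D point in the IR/depth camera's coordinate frame."""
            x = (u - cx) * z_m / fx
            y = (v - cy) * z_m / fy
            return np.array([x, y, z_m])

        # Example: pixel (300, 200) observed at 1.5 m
        print(depth_pixel_to_point(300, 200, 1.5))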
  • I also need to perform camera calibration, so I captured images separately with the RGB and IR cameras, ran them through the MATLAB Camera Calibrator app, and got the intrinsic and extrinsic parameters of each camera.

    So now my question is: should I perform stereo calibration of the Kinect to get the relative position of the two cameras? I am confused because the depth camera already gives me depth values, so I just need to map the depth and RGB images properly. Do I need stereo calibration for this? (A registration sketch follows after this post.)

    So sorry if my questions are confusing.

    Thanks,

    Ali

    Tuesday, September 2, 2014 9:00 PM
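    Regarding the question above: mapping your own depth data into the RGB image does require the relative pose between the two cameras, which is exactly what stereo calibration provides (the SDK's built-in mapping functions otherwise do this for you). A minimal sketch of that registration step, assuming R and t map points from the depth/IR frame into the color frame and K_color is the color intrinsic matrix from your calibration; all values below are placeholders and lens distortion is ignored:

        import numpy as np

        # Placeholders: substitute your own calibration results.
        K_color = np.array([[1050.0, 0.0, 960.0],
                            [0.0, 1050.0, 540.0],
                            [0.0, 0.0, 1.0]])  # color intrinsics (assumption)
        R = np.eye(3)                           # depth->color rotation (assumption)
        t = np.array([0.052, 0.0, 0.0])         # depth->color translation, meters (assumption)

        def depth_point_to_color_pixel(p_depth):
            """Map a 3D point in the depth camera's frame to a color-image
            pixel: apply the stereo extrinsics, then project with the color
            intrinsics (pinhole model)."""
            p_color = R @ p_depth + t           # into the color camera's frame
            uvw = K_color @ p_color             # pinhole projection
            return uvw[:2] / uvw[2]             # (u, v) in color pixels

        print(depth_point_to_color_pixel(np.array([0.1, 0.0, 1.5])))

    Each back-projected depth point lands at a (u, v) position in the color image, which gives the depth-to-RGB registration directly.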