Calibrating the Kinect depth data with an external RGB camera to enable close-up shots

  • Question

  • Hi,

    I am currently trying to use the Kinect for Xbox to perform dimension measurement on an object as part of a research project. However, the Kinect requires the object to be placed far away to allow accurate depth measurement, while I need a close-up RGB shot of the object for post-processing. Is there any way I could mount a normal camera on top of the Kinect to zoom in on the object and calibrate the Kinect's depth data against this RGB image? If I use the Kinect data directly, the object appears very small and I am unable to do any image processing on it successfully.

    Vivek

    Saturday, February 11, 2012 4:50 AM

Answers

  • It's certainly possible - you'll have two RGB cameras (yours and the Kinect's) viewing the same scene.  They will have different fields of view, magnification factors, resolutions, white balance/gain, etc., but theoretically you should be able to determine the mapping between the two, and the Kinect SDK itself provides the mapping from the depth stream to the video stream.

    I'm sure there are academic whitepapers on the topic, though I cannot personally point you at them.  One technique might be for you to write a calibration program - say, hold a square card of a known color at a known distance, then do it again at another known distance.  A rectangular field of solid color should be easy to "find" programmatically in both images, and with two known depths I believe you'll have enough data to calculate the two frustums and create a mapping.  Hrm... this sounds like a fun idea for a side project...

    In any case, the output of this would be a simple mapping from color->color, and then you can use the MapXXX methods in the Kinect SDK to put depth values into the picture.


    -Adam Smith [MSFT]

    • Marked as answer by vivu91 Saturday, February 11, 2012 12:53 PM
    Saturday, February 11, 2012 7:12 AM

All replies
  • Thank you for the inputs. I will try it out and let you know what I am able to do. Thanks!

    Vivek

    Saturday, February 11, 2012 12:54 PM
  • You might also want to check into OpenCV, which has the necessary calibration functions built in.  You would likely be looking for the stereo calibration routines.  I would be interested in seeing how you implement the tech if you end up releasing it with a paper later on...  Good luck!
    Saturday, February 11, 2012 1:48 PM
  • I am now doing similar work to yours.

    I am studying 3-D view synthesis with Kinect. I set up three RGB cameras with a Kinect sensor beneath them.

    The center RGB camera is the ground truth, so I need the depth information from the Kinect mapped to the left and right cameras, and then use them to synthesize the center view.

    Maybe we can discuss it together. My e-mail is "s7531234s@gmail.com"

    Welcome to contact me!



    • Edited by 夏飄雪 Sunday, February 12, 2012 2:33 PM
    Sunday, February 12, 2012 2:32 PM
  • Thank you, I will check out OpenCV. Have you used it for an application like this?


    Vivek

    Saturday, February 18, 2012 8:42 AM