Removing lens distortion and camera parameters

  • General discussion

  • Hi,

    My goal is to remove the distortion of the camera lens and get undistorted, aligned color and depth frames. With the parameters exposed by the SDK I think it is possible to remove the distortion from the depth frame, but not from the color frame (the intrinsic camera parameters and distortion coefficients are missing). Is there a mapping function I could use for that?
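
    For reference, this is how the depth-side parameters can be read today (a minimal sketch, assuming the C++ API; error handling omitted):

    #include <Kinect.h>

    IKinectSensor* sensor = nullptr;
    ICoordinateMapper* mapper = nullptr;
    CameraIntrinsics intrinsics = {};

    GetDefaultKinectSensor(&sensor);
    sensor->Open();
    sensor->get_CoordinateMapper(&mapper);

    // Focal lengths, principal point and 2nd/4th/6th order radial distortion
    // coefficients of the depth camera. There is no equivalent call for the
    // color camera, which is exactly the gap I am asking about.
    mapper->GetDepthCameraIntrinsics(&intrinsics);

    Note that the intrinsics reportedly read back as all zeros until the depth stream has delivered a few frames.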

    Although it has been requested numerous times, my hope is that you can expose the full camera parameters. With the Kinect v2 SDK we already get the depth parameters, and I don't want to wait for a v3 to get the color parameters. It would also be a blast to know the transformation between the two spaces.

    So many hours are put into recalculating information that is already stored in the device. Even third-party camera drivers already DO EXPOSE it: https://github.com/OpenKinect/libfreenect2/issues/41   The Intel RealSense SDK also exposes all the camera parameters and gives developers all the access they need. So why not with Kinect?

    Best,

    Markus

    Saturday, April 18, 2015 1:03 PM

All replies

    As you have stated, the team is aware of the request. The SDK was designed to provide a core set of functionality, and we recommend that people use the Coordinate Mapper, since it abstracts away the lower-level knowledge that the majority of our developer base does not need. Typically, the reason for not exposing something at that lower level is to prevent developers from taking a dependency on something that might change and break their expectations.
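
    For the common cases, a minimal sketch of that path looks like this (C++ API; depthBuffer is assumed to hold a 512x424 UINT16 depth frame, error handling omitted):

    #include <Kinect.h>
    #include <vector>

    // Map a full depth frame to 3D camera-space points; the lens model
    // stays inside the runtime.
    const UINT depthPointCount = 512 * 424;
    std::vector<CameraSpacePoint> cameraPoints(depthPointCount);
    mapper->MapDepthFrameToCameraSpace(depthPointCount, depthBuffer,
                                       depthPointCount, cameraPoints.data());
    // Each CameraSpacePoint is an undistorted metric X/Y/Z position, which
    // is why the raw intrinsics can be treated as an implementation detail.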

    In the case of academics and advanced users who need lower-level access, as you said, there are ways for you to get the data. Some opt to calibrate the cameras themselves, since you will still need the extrinsic data, and you can find information on that on the Internet. One such example is here: https://threeconstants.wordpress.com/2014/11/09/kinect-v2-depth-camera-calibration
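
    As an illustration of that route, here is a sketch using OpenCV (not part of the Kinect SDK; capturedViews is a placeholder for grayscale frames of a printed checkerboard grabbed from the color camera):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    const cv::Size boardSize(9, 6);    // inner corners of the checkerboard
    const float squareSize = 0.025f;   // square edge length in meters

    // 3D corner positions on the flat board, reused for every view.
    std::vector<cv::Point3f> board3d;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board3d.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    for (const cv::Mat& gray : capturedViews)
    {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, boardSize, corners))
        {
            imagePoints.push_back(corners);
            objectPoints.push_back(board3d);
        }
    }

    cv::Mat cameraMatrix, distCoeffs;  // fx, fy, cx, cy and k1, k2, p1, p2, k3
    std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics
    cv::calibrateCamera(objectPoints, imagePoints, capturedViews[0].size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);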


    Carmine Sirignano - MSFT

    Tuesday, April 21, 2015 6:00 PM
  • In my case I am selling a product that gives severely disabled people access to a computer and games. You can get a better picture of the conditions of my typical users here: KinesicMouse user stories

    I just cannot ask these people to print a checkerboard and perform a proper calibration of their device. They even need support installing and placing the camera.

    I also don't understand your backwards-compatibility argument. If you change the camera parameters via a firmware update and they are accessed through an API function, it should not break much. The depth parameters are there; why not the color ones?

    I apologize for my insistence, but when you try to build something that really makes a difference in people's lives and you are stuck like this, it just makes you rage.

    Best,
    Markus


    Thursday, April 23, 2015 10:30 AM
  • Hi,

    To follow up on this subject, here is how I get an undistorted depth frame with known camera parameters in depth frame resolution (512x424).

    1.) Acquire the depth and color frames through IMultiSourceFrameReader.

    2.) Use ICoordinateMapper::MapDepthFrameToColorSpace to create a color picture matching the depth frame resolution (see the sketch after this list).

    3.) Undistort the depth frame and the color frame from step 2 using the precalculated look-up table below.
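
    For step 2, this is roughly what I do (my variable names; mapper, depthBuffer and colorBuffer come from the acquisition code in step 1, error handling omitted):

    // Build a color image in depth resolution (512x424) by sampling the
    // 1920x1080 BGRA color frame at the mapped positions.
    const UINT depthPointCount = 512 * 424;
    std::vector<ColorSpacePoint> colorPoints(depthPointCount);
    mapper->MapDepthFrameToColorSpace(depthPointCount, depthBuffer,
                                      depthPointCount, colorPoints.data());

    std::vector<UINT32> registeredColor(depthPointCount, 0);
    for (UINT i = 0; i < depthPointCount; ++i)
    {
        // Invalid depth pixels map to -infinity; the bounds check skips them.
        const float cx = colorPoints[i].X + 0.5f;
        const float cy = colorPoints[i].Y + 0.5f;
        if (cx >= 0.0f && cx < 1920.0f && cy >= 0.0f && cy < 1080.0f)
            registeredColor[i] = colorBuffer[int(cy) * 1920 + int(cx)]; // nearest neighbor
    }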

    The code below generates a map of DepthSpacePoints: for every pixel of the undistorted frame it stores the corresponding coordinates in the distorted frame (the original Kinect depth frame). So when you loop through your undistorted frame in depth frame resolution, the table tells you where to sample in the distorted frame.

    // Refresh the look-up table for the undistortion.
    const double rd2o = mDepthCameraIntrinsics->RadialDistortionSecondOrder;
    const double rd4o = mDepthCameraIntrinsics->RadialDistortionFourthOrder;
    const double rd6o = mDepthCameraIntrinsics->RadialDistortionSixthOrder;

    for (unsigned int x = 0; x < DEPTH_IMAGE_WIDTH; ++x)
    {
        for (unsigned int y = 0; y < DEPTH_IMAGE_HEIGHT; ++y)
        {
            // Normalized image coordinates of the pixel in the undistorted frame.
            double dx = (static_cast<double>(x) - mDepthCameraIntrinsics->PrincipalPointX) / mDepthCameraIntrinsics->FocalLengthX;
            double dy = (static_cast<double>(y) - mDepthCameraIntrinsics->PrincipalPointY) / mDepthCameraIntrinsics->FocalLengthY;

            // Radial distortion model: scale by (1 + k2*r^2 + k4*r^4 + k6*r^6).
            double r2 = dx * dx + dy * dy;
            double scale = 1.0 + r2 * (rd2o + r2 * (rd4o + r2 * rd6o));
            double dsp_x = dx * scale;
            double dsp_y = dy * scale;

            // The depth space point in the src (distorted) picture.
            DepthSpacePoint new_dsp;
            new_dsp.X = float(dsp_x * mDepthCameraIntrinsics->FocalLengthX + mDepthCameraIntrinsics->PrincipalPointX);
            new_dsp.Y = float(dsp_y * mDepthCameraIntrinsics->FocalLengthY + mDepthCameraIntrinsics->PrincipalPointY);

            unsigned int index_dst = (y * DEPTH_IMAGE_WIDTH) + x;
            mDepthUndistortionLUT[index_dst] = new_dsp;
        }
    }
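
    Applying the table (step 3) is then a simple gather. A sketch with nearest-neighbor sampling (bilinear would be smoother); distortedDepth and undistortedDepth are assumed to be 512x424 UINT16 buffers, and the registered color image from step 2 can be remapped the same way:

    for (unsigned int i = 0; i < DEPTH_IMAGE_WIDTH * DEPTH_IMAGE_HEIGHT; ++i)
    {
        // Sample the distorted frame at the coordinates stored in the LUT.
        const float sx = mDepthUndistortionLUT[i].X + 0.5f;
        const float sy = mDepthUndistortionLUT[i].Y + 0.5f;
        if (sx >= 0.0f && sx < float(DEPTH_IMAGE_WIDTH) && sy >= 0.0f && sy < float(DEPTH_IMAGE_HEIGHT))
            undistortedDepth[i] = distortedDepth[unsigned(sy) * DEPTH_IMAGE_WIDTH + unsigned(sx)];
        else
            undistortedDepth[i] = 0; // no valid source pixel near the border
    }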


    As you can see, this leaves me with aligned frames in depth resolution, as opposed to the much better color resolution. Going the other way round would require the camera parameters of the color camera.

    Best,

    Markus



    Monday, April 27, 2015 5:34 PM
Hi,

    Isn't the depth value from the source depth frame needed for the calculation of X and Y?

    I see that you calculate X and Y from the intrinsics of the camera and the u and v of the original frame, but I cannot see any use of the depth value at u and v of the depth frame.

    Wednesday, February 24, 2016 10:27 PM