Math behind Coordinate Mapper


  • I need to display data obtained from the Kinect as a colored point cloud. Everything works fine, except for the shift between color and depth maps.

    The CoordinateMapper is great for that, but let's assume that, for the sake of optimisation, I'd like to do this computation in the shader stage. As the SDK is closed-source, there's no way I can look at the code and reproduce the algorithm.

    So, my goal here is to understand exactly what calculations the Mapper performs to obtain color space coordinates from a depth point.

    I'm quite sure it would involve data like the sizes and FOVs of both sensors, but I don't have these (a sketch of the standard two-camera model involved is given after this post).

    Has anyone already attempted something like this?

    Tuesday, September 16, 2014 2:54 AM
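    A minimal sketch of the standard two-camera model such a mapping would follow (the same model described in the calibration link further down the thread). The color intrinsics and the depth-to-color rotation R and translation t are hypothetical placeholders here, since the SDK does not expose them:

    #include <cstdint>

    // Hypothetical constants: the SDK does not expose the color intrinsics or the
    // depth-to-color extrinsics (R, t), so these would have to come from your own calibration.
    struct Intrinsics { float fx, fy, cx, cy; };   // focal lengths and principal point, in pixels

    // Map one depth pixel (u, v) with raw depth in millimetres to color pixel coordinates,
    // using the standard pinhole two-camera model (lens distortion omitted for brevity).
    void DepthPixelToColorPixel(float u, float v, uint16_t depthMm,
                                const Intrinsics& depthK, const Intrinsics& colorK,
                                const float R[9], const float t[3],
                                float& uColor, float& vColor)
    {
        // 1. unproject the depth pixel to a 3D point in the depth camera frame (metres)
        float z = depthMm * 0.001f;
        float x = (u - depthK.cx) / depthK.fx * z;
        float y = (v - depthK.cy) / depthK.fy * z;

        // 2. transform the point into the color camera frame: P_color = R * P_depth + t
        float xc = R[0]*x + R[1]*y + R[2]*z + t[0];
        float yc = R[3]*x + R[4]*y + R[5]*z + t[1];
        float zc = R[6]*x + R[7]*y + R[8]*z + t[2];

        // 3. project with the color camera intrinsics
        uColor = xc / zc * colorK.fx + colorK.cx;
        vColor = yc / zc * colorK.fy + colorK.cy;
    }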

All replies

  • I would also be interested in having these parameters, or the projection matrices.

    I was thinking about calibrating the IR and RGB cameras myself but it would be cool if someone has the data already available.

    Tuesday, September 16, 2014 10:03 AM
  • On the coordinate mapper there is a function to dump the depth/color table, which you can pass to your shader. You can also just pass the mapped frame values to the shader as a resource, combined with the camera-specific values from the FrameDescription for the source. While we cannot provide the specifics of our code, to ensure no one takes dependencies on it, you can look for Internet posts on how to do calibration between the different camera spaces (a sketch of both approaches is given after this reply).

    http://nicolas.burrus.name/index.php/Research/KinectCalibration


    Carmine Sirignano - MSFT



    Tuesday, September 16, 2014 7:29 PM
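    A minimal sketch of the two approaches described above, assuming the table function meant is ICoordinateMapper::GetDepthFrameToCameraSpaceTable, the standard 512x424 depth frame, an already-obtained ICoordinateMapper* m_pMapper and a raw depth buffer pDepthBuffer (error handling omitted):

    #include <Kinect.h>
    #include <vector>

    void UploadMappingData(ICoordinateMapper* m_pMapper, const UINT16* pDepthBuffer)
    {
        // Option 1: dump the depth-to-camera-space unprojection table once and
        // upload it to the GPU as a lookup buffer/texture for the shader.
        UINT32 tableCount = 0;
        PointF* table = nullptr;            // one (x, y) unprojection factor per depth pixel
        HRESULT hr = m_pMapper->GetDepthFrameToCameraSpaceTable(&tableCount, &table);
        // ... copy tableCount * sizeof(PointF) bytes into a GPU resource here ...
        CoTaskMemFree(table);               // the table is allocated by the SDK

        // Option 2: let the SDK map the whole depth frame to color space on the CPU
        // and pass the resulting buffer to the shader as a resource.
        const UINT depthPointCount = 512 * 424;
        std::vector<ColorSpacePoint> colorPoints(depthPointCount);
        hr = m_pMapper->MapDepthFrameToColorSpace(
            depthPointCount, pDepthBuffer,          // raw 16-bit depth values
            depthPointCount, colorPoints.data());   // one color-space (x, y) per depth pixel
    }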
  • The coordinate mapper depth/color table is not a real solution for my problem, as it means asking the SDK to do all the calculation and then passing the huge resulting dataset to the shader, instead of simply doing all the calculation in the shader.

    Thanks for the link; it contains the formulas I was looking for.
    However, all 18 intrinsics provided correspond to the Kinect v1, so I guess I'll have to figure out what the Kinect v2 equivalents would be.

    I'm sure I wouldn't be the only one happy to have helper functions in the SDK that grant access to the device-specific constants used by the coordinate mapper.

    Wednesday, September 17, 2014 5:17 AM
  • Just want to add that this is something I am looking for as well. The API should expose device-specific constants which can then be bundled with the video streams and used to dynamically map the depth points to color space during playback.
    Thursday, October 23, 2014 2:35 AM
  • Hi!

    I have tried the classical approach to reimplementing the mapper function from depth image to point cloud, using the intrinsics given by the Kinect SDK 2:

    // m_pKinectSensor is an already-initialised IKinectSensor*
    hr = m_pKinectSensor->get_CoordinateMapper(&m_pMapper);
    CameraIntrinsics ci;
    hr = m_pMapper->GetDepthCameraIntrinsics(&ci);

    std::cout << "intrinsics"
              << " FX " << ci.FocalLengthX << " FY " << ci.FocalLengthY
              << " CX " << ci.PrincipalPointX << " CY " << ci.PrincipalPointY << std::endl;
    std::cout << "r2 " << ci.RadialDistortionSecondOrder
              << " r4 " << ci.RadialDistortionFourthOrder
              << " r6 " << ci.RadialDistortionSixthOrder << std::endl;

    I've received the constants. Then, using Matlab, I've tried to recalculate the point cloud and compare it with the point cloud from the Kinect SDK mapper:

    fx = 366.382; fy = 366.382; cx = 255.719; cy = 210.346;
    k2 = 0.0877461; k4 = -0.265252; k6 = 0.0944819;
    X = zeros(8,1);

    %projection correction
    %M,N - coordinates on the image, depth is the raw uint16 value at (M,N)
    x = (M-cx)/fx;
    y = (N-cy)/fy;
    X(1) = x*depth*0.001;
    X(2) = -y*depth*0.001;

    %distortion correction
    r2 = x^2+y^2;
    p = 1+k2*r2+k4*r2^2+k6*r2^4;
    x = x*p;
    y = y*p;
    X(3) = x*depth*0.001;
    X(4) = -y*depth*0.001;


    At the end I calculated the difference between my point cloud and the Kinect SDK mapper point cloud (a sketch of how the reference cloud can be obtained is given after this reply). The average difference was 0.006 m!

    Maybe I'm doing something wrong, or the Kinect SDK intrinsics are wrong.

    Any suggestions will be appreciated! Thanks in advance!

    Sunday, July 5, 2015 11:42 AM
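    For the comparison, a minimal sketch of how the SDK reference cloud can be obtained, assuming the same m_pMapper as in the snippet above and a raw 512x424 depth buffer (error handling kept minimal):

    #include <Kinect.h>
    #include <vector>

    // Map a full 512x424 depth frame to camera space with the SDK mapper and keep the
    // result as the reference point cloud (metres, same convention as the Matlab code).
    std::vector<CameraSpacePoint> ReferenceCloud(ICoordinateMapper* m_pMapper,
                                                 const UINT16* pDepthBuffer)
    {
        const UINT depthPointCount = 512 * 424;
        std::vector<CameraSpacePoint> cloud(depthPointCount);
        HRESULT hr = m_pMapper->MapDepthFrameToCameraSpace(
            depthPointCount, pDepthBuffer,      // raw 16-bit depth values (mm)
            depthPointCount, cloud.data());     // one (X, Y, Z) in metres per depth pixel
        if (FAILED(hr))
            cloud.clear();
        return cloud;
    }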
  • Hmm, your distortion correction doesn't look right; you should have a look at "the inverse mapping" here:

    http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html

    An implementation is given in the calibration toolbox.

    Sunday, July 5, 2015 7:20 PM
  • There was a mistake in the undistortion:

    I've corrected the code according to the OpenCV undistort source:

    x = (M-cx)/fx;
    y = (N-cy)/fy;

    %distortion correction, following the OpenCV undistort source:
    %  double icdist = (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2)/(1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2);
    %  k2 = k[0], k4 = k[1], k6 = k[4]

    x0 = x;
    y0 = y;
    for j = 1:4
        r2 = x^2+y^2;
        p = 1/(1+k2*r2+k4*r2^2+k6*r2^3);
        x = x0*p;
        y = y0*p;
    end
    xc = x*depth*0.001;
    yc = -y*depth*0.001;

    But I'm still getting an average error of 0.0042 m :(



    Monday, July 6, 2015 7:36 AM
  • I've figured out the problem: it was the Matlab index shift (Matlab indices are 1-based).

    This Matlab code calculates the point coordinates with an error of 1e-8 (sub-millimetre); maybe someone will find it useful (a C++ version is sketched after this reply):

    fx = 366.3814870652361;
    fy = 366.3814866524901;
    cx = 255.7188008480681;
    cy = 210.3457972491764;
    
    k2 = 0.087746424445;
    k4 = -0.26525282252;
    k6 = 0.09448294462;
    
    %projection correction (the -1 is for Matlab's 1-based indexes; drop it for 0-based C++)
    x = (M-cx-1)/fx;
    y = (N-cy-1)/fy;
    
    %distortion correction
    x0= x;
    y0= y;
    for j=1:4
        r2 = x^2+y^2;
        p = 1/(1+k2*r2+k4*r2^2+k6*r2^3);
        x=x0*p;
        y=y0*p;
    end
    
    X = x*depth*0.001;
    Y = -y*depth*0.001;
    Z = depth*0.001;


    Monday, July 6, 2015 10:31 AM
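    For anyone who, like the original poster, wants to move this calculation onto the GPU, here is a C++ sketch of the same recipe; it assumes the CameraIntrinsics values returned by GetDepthCameraIntrinsics earlier in the thread and 0-based pixel indices (so the Matlab -1 offset disappears):

    #include <Kinect.h>

    struct Point3 { float x, y, z; };

    // Unproject one depth pixel (column m, row n, raw depth in millimetres) to a
    // camera-space point in metres, following the Matlab recipe above.
    Point3 DepthToCameraSpace(int m, int n, UINT16 depthMm, const CameraIntrinsics& ci)
    {
        // projection correction: pixel -> normalized image coordinates
        float x = (m - ci.PrincipalPointX) / ci.FocalLengthX;
        float y = (n - ci.PrincipalPointY) / ci.FocalLengthY;

        // distortion correction: iterative inverse of the radial model
        const float x0 = x, y0 = y;
        for (int i = 0; i < 4; ++i)
        {
            float r2 = x * x + y * y;
            float p  = 1.0f / (1.0f + ci.RadialDistortionSecondOrder * r2
                                    + ci.RadialDistortionFourthOrder * r2 * r2
                                    + ci.RadialDistortionSixthOrder  * r2 * r2 * r2);
            x = x0 * p;
            y = y0 * p;
        }

        // back-project using the depth value (mm -> m); Y is flipped as in the Matlab code
        const float z = depthMm * 0.001f;
        return Point3{ x * z, -y * z, z };
    }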