Math behind Coordinate Mapper
General discussion

I need to display data obtained from the Kinect as a colored point cloud. Everything works fine, except for the shift between color and depth maps.
The CoordinateMapper is great for that, but let's assume that, for the sake of optimisation, I'd like to do this computation in a shader. As the SDK is closed-source, there's no way I can go look at the code and reproduce the algorithm.
So, my goal here is to understand exactly what calculations the Mapper performs to obtain color space coordinates from a depth point.
I'm fairly sure it would involve data like the sizes and FOVs of both sensors, but I don't have these.
Has anyone already attempted something like this?
 Changed type Carmine Si [MSFT] (Microsoft employee), Tuesday, September 16, 2014 7:37 PM
All replies


On the coordinate mapper there is a function to dump the depth/color table, which you can pass to your shader. You can also just pass the mapped frame values to the shader as a resource, combined with the camera-specific values from the FrameDescription for the source. While we cannot provide the specifics of our code, to ensure no one takes dependencies on it, you can look for Internet posts on how to do calibration between the different camera spaces.
http://nicolas.burrus.name/index.php/Research/KinectCalibration
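The table approach above can be sketched as follows. This is a hedged illustration, not SDK code: it assumes a per-pixel table of unit-depth ray multipliers (the kind of data `ICoordinateMapper::GetDepthFrameToCameraSpaceTable` dumps), and all names below are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: assume each table entry stores the (x, y) multipliers
// of a unit-depth ray for that depth pixel. Camera-space coordinates are then
// simply (x*d, y*d, d), where d is the depth in metres. Names here are
// illustrative, not the SDK's API.
struct PointF2 { float x, y; };
struct CameraPoint { float x, y, z; };

CameraPoint depthToCamera(const std::vector<PointF2>& table,
                          int width, int u, int v, uint16_t depthMm) {
    const float d = depthMm * 0.001f;          // raw depth is in millimetres
    const PointF2 ray = table[v * width + u];  // per-pixel unit-depth ray
    return { ray.x * d, ray.y * d, d };
}
```

Uploading such a table once as a shader resource lets the per-frame work reduce to one fetch and one multiply per pixel.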
Carmine Sirignano [MSFT]
 Edited by Carmine Si [MSFT] (Microsoft employee), Tuesday, September 16, 2014 7:34 PM

The coordinate mapper depth/color table is not a real solution for my problem, as it means asking the SDK to do all the calculation and then passing the huge resulting dataset to the shader, instead of simply keeping all the calculation in the shader.
Thanks for the link, it contains the formulas I was looking for.
However, all 18 intrinsics provided correspond to the Kinect v1, so I guess I'll have to figure out what the Kinect v2 equivalents would be. I'm sure I wouldn't be the only one happy to have helper functions for this in the SDK that would grant access to the device-specific constants used by the coordinate mapper.


Hi!
I have tried the classical approach to reimplement the mapper function from depth image to point cloud, using the intrinsics given by Kinect SDK 2:
hr = m_pKinectSensor->get_CoordinateMapper(&m_pMapper);
CameraIntrinsics ci;
hr = m_pMapper->GetDepthCameraIntrinsics(&ci);
std::cout << "intrinsics"
          << " FX " << ci.FocalLengthX << " FY " << ci.FocalLengthY
          << " CX " << ci.PrincipalPointX << " CY " << ci.PrincipalPointY;
std::cout << " r2 " << ci.RadialDistortionSecondOrder
          << " r4 " << ci.RadialDistortionFourthOrder
          << " r6 " << ci.RadialDistortionSixthOrder;
I've received the constants. Then, using MATLAB, I've tried to recalculate the point cloud and compare it with the point cloud from the Kinect SDK mapper:
fx = 366.382; fy = 366.382;
cx = 255.719; cy = 210.346;
k2 = 0.0877461; k4 = 0.265252; k6 = 0.0944819;
X = zeros(8,1);
% projection correction
% M,N - coordinates on the image; depth is the raw uint16 value at (M,N)
x = (M-cx)/fx;
y = (N-cy)/fy;
X(1) = x*depth*0.001;
X(2) = y*depth*0.001;
% distortion correction
r2 = x^2 + y^2;
p = 1 + k2*r2 + k4*r2^2 + k6*r2^4;
x = x*p;
y = y*p;
X(3) = x*depth*0.001;
X(4) = y*depth*0.001;
At the end I calculated the difference between my point cloud and the Kinect SDK mapper's point cloud. The average difference was 0.006m!
Maybe I'm doing something wrong, or the Kinect SDK intrinsics are corrupted.
Any suggestions will be appreciated! Thanks in advance!


There was a mistake in the undistortion.
I've corrected the code according to the OpenCV undistort source:
x = (M-cx)/fx;
y = (N-cy)/fy;
% distortion correction, per OpenCV undistort:
% double icdist = (1 + ((k[7]*r2 + k[6])*r2 + k[5])*r2)
%              / (1 + ((k[4]*r2 + k[1])*r2 + k[0])*r2);
% with k2 = k[0], k4 = k[1], k6 = k[4]
x0 = x; y0 = y;
for j = 1:4
    r2 = x^2 + y^2;
    p = 1/(1 + k2*r2 + k4*r2^2 + k6*r2^3);
    x = x0*p;
    y = y0*p;
end
xc = x*depth*0.001;
yc = y*depth*0.001;
But I'm still getting an average error of 0.0042m :(
 Edited by Victor Kulikov Monday, July 6, 2015 7:37 AM

I've figured out the problem. The problem was the MATLAB index shift (MATLAB arrays are 1-based).
This MATLAB code calculates the point coordinates with an error of 1e-8 (sub-millimeter); maybe someone will find it useful:
fx = 366.3814870652361; fy = 366.3814866524901;
cx = 255.7188008480681; cy = 210.3457972491764;
k2 = 0.087746424445; k4 = 0.26525282252; k6 = 0.09448294462;
% projection correction (the -1 is for MATLAB's 1-based indexes; for C++ it is 0)
x = (M-cx-1)/fx;
y = (N-cy-1)/fy;
% distortion correction
x0 = x; y0 = y;
for j = 1:4
    r2 = x^2 + y^2;
    p = 1/(1 + k2*r2 + k4*r2^2 + k6*r2^3);
    x = x0*p;
    y = y0*p;
end
X = x*depth*0.001;
Y = y*depth*0.001;
Z = depth*0.001;
 Edited by Victor Kulikov Monday, July 6, 2015 11:24 AM
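For readers who want this on the C++/shader side, the final MATLAB routine above can be sketched in C++ like so (0-based pixel indices, so the MATLAB "-1" offset disappears). The intrinsics are the values dumped earlier in this thread; they are device-specific, so treat the constants as placeholders for your own sensor's dump. Function and struct names are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of the iterative undistortion + back-projection from the thread:
// pixel (M, N) with raw depth in millimetres -> camera-space point in metres.
struct Point3 { double x, y, z; };

Point3 depthPixelToCameraSpace(int M, int N, uint16_t depthMm) {
    // Intrinsics dumped from GetDepthCameraIntrinsics for one sensor;
    // substitute the values from your own device.
    const double fx = 366.3814870652361, fy = 366.3814866524901;
    const double cx = 255.7188008480681, cy = 210.3457972491764;
    const double k2 = 0.087746424445, k4 = 0.26525282252, k6 = 0.09448294462;

    // Projection: pixel -> normalized image coordinates.
    const double x0 = (M - cx) / fx, y0 = (N - cy) / fy;

    // Iterative undistortion (a few fixed-point iterations, as in the post).
    double x = x0, y = y0;
    for (int j = 0; j < 4; ++j) {
        const double r2 = x * x + y * y;
        const double p =
            1.0 / (1.0 + k2 * r2 + k4 * r2 * r2 + k6 * r2 * r2 * r2);
        x = x0 * p;
        y = y0 * p;
    }

    const double d = depthMm * 0.001;  // millimetres -> metres
    return { x * d, y * d, d };
}
```

Since the per-pixel undistortion depends only on (M, N), it can also be precomputed once into a lookup texture, which is essentially what the SDK's depth-to-camera-space table gives you.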