Asked by:
CoordinateMapper.GetDepthFrameToCameraSpaceTable camera intrinsic
Question

With CoordinateMapper.GetDepthFrameToCameraSpaceTable I get a table with an entry for each depth point to use in the calculation. The basis for the calculation is the camera intrinsic parameters. I have other camera parameters from a camera calibration. My questions: how can I use these parameters to build a table like the one from CoordinateMapper.GetDepthFrameToCameraSpaceTable, or
how can I calculate the camera space points with new camera intrinsic values?
Many thanks in advance for your help.
Kind regards, Thilo
Monday, January 5, 2015 11:27 AM
All replies

Depth and camera space use the same coordinate system, so you don't need this table at all. The only thing required to go from depth to camera space is to invert the projection. The coordinate mapper will do this for you, but if you want to do it yourself, create the inverse projection matrix. The information needed is in the depth frame description (height/width/focal length).
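For reference, the unprojection described here can be sketched in a few lines (Python for illustration only; the focal length and principal point values below are made-up stand-ins, not the sensor's, and distortion is ignored):

```python
# Sketch: unproject a depth pixel to camera space with a plain pinhole
# model, ignoring lens distortion. Intrinsic values here are invented;
# the real ones come from the depth frame description / SDK intrinsics.

def depth_to_camera_space(u, v, depth_m, fx, fy, cx, cy):
    """Invert the pinhole projection for one depth pixel (depth in metres)."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return (x, y, depth_m)

# A pixel at the principal point maps straight down the optical axis:
point = depth_to_camera_space(256, 212, 1.5, fx=365.0, fy=365.0,
                              cx=256.0, cy=212.0)
```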
Carmine Sirignano - MSFT
 Proposed as answer by Carmine Si - MSFT (Microsoft employee), Monday, January 5, 2015 9:19 PM
Monday, January 5, 2015 9:19 PM 
Hello Carmine,
thank you for the answer.
Here are the intrinsic parameters from the sensor and the first 5 values from CoordinateMapper.GetDepthFrameToCameraSpaceTable (starting at the left corner).
I have now spent hours trying to reproduce one of the values in the table (below) without success. Is it possible to get the formula used for the calculation? Or any description?
kind regards Thilo
FocalLengthX: 365.0953
FocalLengthY: 365.0953
PrincipalPointX: 254.9407
PrincipalPointY: 205.5790
RadialDistortionFourthOrder: 0.28324810
RadialDistortionSecondOrder: 0.09906865
RadialDistortionSixthOrder: 0.09757799
[0] X: 0.7564058, Y: 0.609950244
[1] X: 0.752449334, Y: 0.6091492
[2] X: 0.7485153, Y: 0.6083601
[3] X: 0.744603336, Y: 0.6075827
[4] X: 0.7407133, Y: 0.606817067
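One way to see how far a plain pinhole unprojection lands from the posted table values (a Python sketch; the mirrored convention `(cx - u)/fx` is my assumption, and the SDK's exact distortion handling is not public, so this will not reproduce the table entry exactly):

```python
# Sketch only: the SDK's exact distortion formula is not documented, so
# this does NOT reproduce the table value; it just measures how far a
# plain (mirrored) pinhole unprojection lands from the first entry.

fx = fy = 365.0953                 # intrinsics as posted above
cx, cy = 254.9407, 205.5790

u, v = 0, 0                        # top-left depth pixel
x_pinhole = (cx - u) / fx          # mirrored pinhole, no distortion
y_pinhole = (cy - v) / fy          # (mirroring is an assumption)

table_x, table_y = 0.7564058, 0.609950244   # first table entry above
ratio = table_x / x_pinhole        # ~1.08: the gap the radial
                                   # distortion correction must account for
```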
 Edited by THBrueckner Tuesday, January 6, 2015 5:14 PM
 Proposed as answer by Aaron Bryden Thursday, March 5, 2015 6:10 AM
Tuesday, January 6, 2015 2:03 PM 
Why can't you use the CoordinateMapper::MapDepthFrameToCameraSpace function? The result will give you a lookup table of CameraSpacePoints for each depth x/y value. http://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.mapdepthframetocameraspace.aspx
As I stated, there is no need to map depth to camera space, since it is the same coordinate system. Are you trying to map depth to color space?
Carmine Sirignano - MSFT
Tuesday, January 6, 2015 6:36 PM 
Hello Carmine,
I use more than 4 Kinects for a scanning system. I have to calibrate the Kinects, and for the extrinsic calibration I need the corrected intrinsic parameters. MapDepthFrameToCameraSpace uses only the sensor's intrinsic parameters.
Thank you for your help.
 Edited by THBrueckner Tuesday, January 6, 2015 8:19 PM
Tuesday, January 6, 2015 7:36 PM 
Our mapping functions already account for the radial distortion. Providing you the internal details of our implementation would be no different than using the function yourself. We don't give out internal details, to ensure there are no external dependencies and so that, if changes are required, they can be made at a future time without affecting a dependency we were not aware of.
If you require a more accurate system than what is provided, then you are going into an area that is beyond our SDK. You would have to calibrate your own system and build out your own tables. There are a lot of other websites that discuss this process.
As you noted, you already have the intrinsic values; you can find code in the C++ sample for Fusion Explorer where we use them.
Carmine Sirignano - MSFT
Wednesday, January 7, 2015 7:54 PM 
Carmine,
We can retrieve the intrinsic parameters from the CoordinateMapper through the use of the function CoordinateMapper.GetDepthCameraIntrinsics
The intrinsic parameters returned contain 3 distortion correction coefficients, but the documentation does not explain how to interpret those coefficients. Actually, the documentation explains absolutely nothing about any of the intrinsic parameters returned by this function.
So, what I get from your answer is that: yes, you are willing to give us the intrinsic parameters, but you don't want to tell us how to use them. It's like talking to someone in a foreign language without giving them a translation dictionary, because you are afraid of how they would interpret your words… it makes no sense.
I would suggest that either you remove this function from the SDK in the future, or you give programmers an explanation of how to use these parameters. Giving us tools without explanation is just frustrating and pointless; how can we use a tool without knowing how it works?
Why not simply document the intrinsic parameters, since you already have them accessible? I am sure that the time needed to properly document these parameters would be less than the time you must spend on forums answering people's questions about camera calibration…
Wednesday, March 4, 2015 8:49 PM 
And I would add that explaining to people how to calibrate a camera is not the same as giving them the details of your application's implementation.
I would also add that your mapping functions are available only through a Kinect server, which means that a Kinect must be plugged in for your functions to be usable. What if someone wants to develop offline?
Being able to retrieve the intrinsic parameters and use them to properly convert points from depth space to camera space without the need to have a Kinect plugged in is a quite useful feature. But maybe that is also beyond the scope of the SDK?
Wednesday, March 4, 2015 8:52 PM 
William,
Check out http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction for a description of the distortion coefficients. Note that the calibration provided doesn't include any tangential distortion. Any use of the parameters provided by Microsoft is an advanced use case specific to the application, and they can't cover all of these in the documentation. Nonetheless, many of us are grateful that they expose this information. The parameters are properly named, and you can use OpenCV, PCL, your own code, or other libraries to make use of them.
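The radial-only model from the linked article can be sketched as follows (Python; treating RadialDistortionSecondOrder, FourthOrder, and SixthOrder as k1, k2, k3 respectively is my assumption, not something the SDK documentation confirms):

```python
def distort(x_u, y_u, k1, k2, k3):
    """Map an undistorted normalized point to its distorted position
    using the radial-only Brown model (no tangential terms)."""
    r2 = x_u * x_u + y_u * y_u
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x_u * scale, y_u * scale

# Illustrative point on the x axis, using the coefficients posted above
# (assuming SecondOrder = k1, FourthOrder = k2, SixthOrder = k3):
xd, yd = distort(0.5, 0.0, k1=0.09906865, k2=0.28324810, k3=0.09757799)
```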
As far as converting points from depth space to camera space, I would recommend saving and using the depth frame to camera space table, which is well documented. This table has already taken the intrinsics into account. Please note that the table and the intrinsics are unique to each individual Kinect, so you will need to keep track of which depth frame came from which Kinect and the associated table if you plan to do this.
Thursday, March 5, 2015 6:13 AM 
Thanks for the reply Carmine.
I was thinking of using the depth frame to camera space table for the conversion, but then I realised it would only work one way; with the table I can convert from depth space to camera space, but I cannot convert back from camera space to depth space.
As for the distortion correction, I already use the model explained in the wiki article you provided. Actually, my problem is that when I compare the results obtained from converting a depth frame to camera space through the conversion table, I don't get the same results as when I apply the intrinsic coefficients. I get an average error of 2 pixels (i.e., I convert a depth frame to camera space with CoordinateMapper.MapDepthFrameToCameraSpace, then use the intrinsic parameters (including radial distortion correction) to convert back from camera space to the depth frame, then compute the distance between the original depth frame points and the new points, and I get an average error of 2 pixels).
Now I am trying to find the source of this error. If you tell me the DepthFrameToCameraSpaceTable contains coefficients that include tangential distortion correction, that might explain the error. It is also quite possible that I have a mistake in my code somewhere, but first I need to properly understand how the DepthFrameToCameraSpaceTable is computed and what each intrinsic parameter represents so I can validate my code, hence the need for documentation.
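The round-trip check described here can be sketched as follows (Python; the camera-space point and starting pixel are illustrative stand-ins rather than SDK output, so the error value computed below says nothing about the real sensor, it only shows the shape of the test):

```python
# Sketch of the round-trip check: project a camera-space point back to
# depth pixels with the intrinsics + radial distortion, then measure the
# pixel error against the original pixel. Values are illustrative only.

import math

def project(x, y, z, fx, fy, cx, cy, k1, k2, k3):
    """Camera space -> depth pixel, radial-only Brown distortion model."""
    xn, yn = x / z, y / z                          # normalized coordinates
    r2 = xn * xn + yn * yn
    s = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return cx + fx * xn * s, cy + fy * yn * s

u0, v0 = 300.0, 220.0
# Pretend the SDK mapped (u0, v0) at 1.5 m to this camera-space point:
cam = (0.19, 0.06, 1.5)
u1, v1 = project(*cam, fx=365.0953, fy=365.0953,
                 cx=254.9407, cy=205.5790,
                 k1=0.09906865, k2=0.28324810, k3=0.09757799)
err = math.hypot(u1 - u0, v1 - v0)   # reprojection error in pixels
```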
Thursday, March 5, 2015 12:51 PM