How to get extrinsic (depth) camera parameters

• Question

• Hey folks!

Thanks to Channel 9 and this forum, I was able to get a 3D point cloud from the Kinect (using the MapDepthToSkeletonPoint method).

My final goal, however, is to pick objects from a conveyor with a robot. The Kinect is mounted above the conveyor, looking vertically down at it, which should tell me what kinds of objects are there (from their outlines and heights) and where (X, Y, Z) they are. (Then the conveyor starts moving, the robot grabs the objects, ...)

So my current problem is that I now need a rotation & translation matrix (or is there something easier?) from the Kinect camera coordinate system to real-world coordinates. I have not settled on any reference object so far; a chessboard, or (I guess this could also do the trick) three elevated points alongside the conveyor (like four screw heads) in a fixed, known, square arrangement.
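One common way to get exactly such a rotation & translation from measured reference points (not something from this thread; a sketch assuming you can measure at least three non-collinear points, e.g. the screw heads, both in the Kinect frame via MapDepthToSkeletonPoint and in conveyor/world coordinates with a ruler) is the Kabsch/SVD rigid-alignment method:

```python
import numpy as np

def rigid_transform(cam_pts, world_pts):
    """Find R (3x3 rotation) and t (3-vector) such that
    world ~= R @ cam + t, given corresponding points.

    cam_pts, world_pts: (N, 3) arrays, N >= 3, points not collinear.
    """
    ca, cw = cam_pts.mean(axis=0), world_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (cam_pts - ca).T @ (world_pts - cw)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ ca
    return R, t
```

With more than three points this gives the least-squares best fit, so measurement noise on individual screw heads averages out. OpenCV (and therefore EMGU, its C# wrapper) offers comparable functionality, but the math above is small enough to port to C# directly.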

Does anyone have an idea how to obtain the parameters for such a transformation matrix? I searched the web for Kinect and extrinsic camera parameters without any really relevant leads. Meanwhile I stumbled upon EMGU, but I'm not sure whether it contains the functions I would need or delivers the results I need.

Since I have only very basic knowledge of C++, I chose to do this project in C# (it made an easier impression on me) and have built myself a pretty neat framework around my application over the last few weeks. I would therefore appreciate any solution that is also based on C#, though I also have access to Matlab and its many toolboxes.

Any hints or recommendations are very, very much appreciated, since my project's time span is approaching its end and I am still far, far away from the expected results (surprise, surprise).

best regards,

me :)

Friday, July 13, 2012 10:23 PM

All replies

• The simplest way to express the transformation performed by MapDepthPointToSkeletonPoint is:

zSkeleton = (double)depth * 0.001;
xSkeleton = (((double)x / (double)width) - 0.5) * zSkeleton * 1.12032;
ySkeleton = (0.5 - ((double)y / (double)height)) * zSkeleton * 0.84024;

...where:

• x and y are coordinates of a pixel in the depth frame
• depth is the depth (in millimeters) at that pixel
• width and height are the dimensions of the depth frame (e.g., 640×480, 320×240, etc.)

The multipliers in the x and y formulae are constants derived from the Kinect depth camera's field-of-view angle.
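As a sanity check on those constants (my own aside, not part of John's reply): assuming the nominal Kinect v1 depth field of view of roughly 58.5° horizontal by 45.6° vertical, each multiplier is approximately 2·tan(FOV/2), and the formulas can be sketched as:

```python
import math

# Assumed nominal Kinect v1 depth-camera field of view, in degrees
H_FOV_DEG = 58.5
V_FOV_DEG = 45.6

# 2 * tan(FOV / 2) reproduces the multipliers from the reply above
x_mult = 2.0 * math.tan(math.radians(H_FOV_DEG) / 2.0)  # ~1.120
y_mult = 2.0 * math.tan(math.radians(V_FOV_DEG) / 2.0)  # ~0.841

def depth_to_skeleton(x, y, depth_mm, width, height):
    """The depth-pixel -> skeleton-space mapping from the reply,
    with depth in millimeters and the result in meters."""
    z = depth_mm * 0.001
    xs = ((x / width) - 0.5) * z * x_mult
    ys = (0.5 - (y / height)) * z * y_mult
    return xs, ys, z
```

For example, the center pixel of a 640×480 frame at 2000 mm maps to (0, 0, 2.0), i.e. straight ahead of the camera, two meters out.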

John
K4W Dev

Thursday, July 19, 2012 7:44 PM
• Thanks for the reply!

So the intrinsic camera parameters (focal length, ...) are already accounted for in the depth frame?

Saturday, July 21, 2012 6:41 PM
• If you're asking whether the values in the depth frame are distances measured (perpendicular) to the plane of the camera, rather than (diagonally) to the center point of the camera, then the answer is yes.

John
K4W Dev

Monday, July 30, 2012 11:53 PM
• If you're asking whether the values in the depth frame are distances measured (perpendicular) to the plane of the camera, rather than (diagonally) to the center point of the camera, then the answer is yes.

John
K4W Dev

Thanks for the answer, but that was not really the question I had in mind.

Every camera has some optical aberrations, like distortion. Are such aberrations present in the depth frame, or are they already corrected by the Kinect itself?

Saturday, September 8, 2012 10:01 AM