# Please share a function for the current Kinect SDK that returns an array of all points' coordinates (XYZ) in the current depth map frame

• Monday, March 19, 2012 4:29 PM

See title.

Something like

```
public <suitable type> coordinates(<suitable parameters>)
{
    //
}
```

with the option of using only every n-th point, to speed up processing.

Eisenanstreicher

### All Replies

• Monday, March 26, 2012 4:39 PM

What's the problem? Why can't anyone share the function?

Eisenanstreicher

• Friday, May 04, 2012 3:14 AM

It would be great if such a function were already built into the SDK! Conversion from depths to world XYZ coordinates would seem to be such an important feature, I can't believe this hasn't been done already.

From some of the videos I've seen on YouTube, it's obvious that some people have already found a solution. But I don't know how accurate or kludgy their solutions are.

Given a few parameters for the depth camera (such as FOV) it should be possible to write an approximate function. A proper solution will take into account the unique calibration of each Kinect (which I think can be found in the firmware somewhere).

• Sunday, May 06, 2012 11:29 PM

Here's what I came up with:

```
// Depth camera characteristics (Kinect depth stream at 320x240)
const float depthWidth      = 320.0f;
const float depthHeight     = 240.0f;
const float depthWidthHalf  = depthWidth / 2.0f;
const float depthHeightHalf = depthHeight / 2.0f;
const float depthHFOV = 57.0f; // horizontal field of view, degrees
const float depthVFOV = 43.0f; // vertical field of view, degrees
const float depthH = tanf( (depthHFOV / 2.0f) * ( M_PI / 180.0f ) );
const float depthV = tanf( (depthVFOV / 2.0f) * ( M_PI / 180.0f ) );

// lineNumber and pixelNumber are measured from the image center,
// i.e. in [-depthHeightHalf, depthHeightHalf] and [-depthWidthHalf, depthWidthHalf].
Vec3f realWorld( float depth, int lineNumber, int pixelNumber ) {
    Vec3f position;
    position.x = depth * depthH * (pixelNumber / depthWidthHalf);
    position.y = depth * depthV * (lineNumber / depthHeightHalf);
    position.z = depth;
    return position;
}
```

The realWorld function returns a world-space XYZ coordinate for a single depth-map pixel, given the depth value and the pixel's Y and X indices.

• Tuesday, May 08, 2012 12:07 PM

Todd, it looks like you've been watching the Channel 9 intro videos. I concur that it looks about right; I've seen some camera-to-world coordinate equations for Kinect, and that looks like it will work, or is close enough to adapt.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Sherlock Holmes. "Speak softly and carry a big stick." - Theodore Roosevelt. "Fear leads to anger, anger leads to hate, hate leads to suffering." - Yoda. Blog - http://jefferycarlsonblog.blogspot.com/

• Tuesday, May 15, 2012 9:38 AM

It looks like it's part of the SDK now.

```
Vector4 NuiTransformDepthImageToSkeleton(
    LONG lDepthX,
    LONG lDepthY,
    USHORT usDepthValue
);
```

http://msdn.microsoft.com/en-us/library/nuiskeleton.nuitransformdepthimagetoskeleton

• Tuesday, May 15, 2012 1:46 PM

I need it as well. I'm trying to get a point cloud but still don't have what I need. If I find x, y, z, can I create a 3D scene? Or maybe just write a file with coordinates, to plot in MATLAB?

• Thursday, May 17, 2012 5:31 PM

Todd is right: convert the depth image to skeleton space, then take the coordinates from it. That's probably what I would have done.


• Saturday, May 19, 2012 1:11 PM

Yes, and to make it more efficient (rather than calling this function 640×480 times), you'll discover that the `NuiTransformDepthImageToSkeleton` function is implemented in the header files. That lets you derive your own implementation that does the "mass conversion", i.e. a function that iterates over all pixels more efficiently, using pre-computed values in the inner loops to speed everything up.

• Tuesday, May 22, 2012 2:08 PM

I just checked my sources, hope it helps:

```
/// Calculates the point cloud from the depth map, i.e. fills the already
/// allocated x, y and z float arrays (pointed to by pXCoordinates,
/// pYCoordinates and pZCoordinates).
void coordinateTransformToPointCloud( USHORT* depthMapInMillimeters, int width, int height,
                                      float* pXCoordinates, float* pYCoordinates, float* pZCoordinates )
{
    // Calculation taken from the SDK header, see constant
    // NUI_CAMERA_DEPTH_NOMINAL_INVERSE_FOCAL_LENGTH_IN_PIXELS:
    //   float zCoord =  depth / 1000.0f;
    //   float xCoord =  (xPixelCoord - 0.5f*imageWidth)  * (320.0f/imageWidth)  * 3.501e-3f * zCoord;
    //   float yCoord = -(yPixelCoord - 0.5f*imageHeight) * (240.0f/imageHeight) * 3.501e-3f * zCoord;

    float zCoord;
    float preCompFactorX = (320.0f/width)  * 3.501e-3f;
    float preCompFactorY = (240.0f/height) * 3.501e-3f;
    int imageWidthHalf  = width  >> 1;
    int imageHeightHalf = height >> 1;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            zCoord = *depthMapInMillimeters++ / 1000.0f; // millimeters to meters
            *pZCoordinates++ = zCoord;
            *pXCoordinates++ = (x - imageWidthHalf)  * preCompFactorX * zCoord;
            *pYCoordinates++ = (y - imageHeightHalf) * preCompFactorY * zCoord * -1.0f;
        }
    }
}
```

• Tuesday, May 22, 2012 6:49 PM

So I could use that function with a point cloud SDK to make 3D models? If I understand correctly, that is the purpose of point clouds and Kinect.


• Tuesday, May 22, 2012 6:59 PM

Digital Diligence, can you team up with me to create a CodePlex project for 3D import with Kinect, using Microsoft's Kinect SDK v1? Or maybe v1.5, since it's getting ready to come out. It would be a lot easier with v1.5, because you can save depth data to file.


• Tuesday, May 22, 2012 10:14 PM

Well, keep in mind that 3D models "constructed" from a single depth frame really just show the (incomplete) front side of the object. For truly reconstructing objects, the best approach is KinectFusion; look for it on YouTube. The Point Cloud Library (PCL) has an open-source implementation of that approach.

I barely worked with 3D reconstruction at all. I only had this function already written for an entirely different purpose: a project that used a time-of-flight depth camera and required, for each depth pixel, the corresponding point in 3D metric space.

Given this method, I think it's not too hard to convert the format into something the Point Cloud Library understands, and then you can use it to visualize and work with 3D models reconstructed from single depth frames. But then again, it's probably better to just build the bridge between acquiring depth data with the Kinect SDK and PCL in general, so that their algorithms can work with the data (I suppose their implementation is tailored to OpenNI?).

• Tuesday, May 22, 2012 10:55 PM

I heard it might be possible to create a PCL (Point Cloud Library) wrapper in C++ and use it with the Kinect SDK, but I need to know how to remove the OpenNI parts.


• Wednesday, May 23, 2012 10:11 AM

• Wednesday, May 23, 2012 12:17 PM

Thanks, richart. I didn't realize that ReconstructMe got it working with Kinect drivers as well. I will wait, since the Google Groups discussion says that a GUI commercial version will be available in a few months.


• Wednesday, November 28, 2012 1:09 AM

Hi Digital Diligence, I want to get the point cloud XYZ and the RGB information, because I want to draw the Kinect data in OpenGL. Can you help me?

I also want to get the point clouds from the Kinect with SDK 1.6, but I don't know how to set the parameters in the function NuiTransformDepthImageToSkeleton, or in the function NuiImageGetColorPixelCoordinatesFromDepthPixel. Besides, can I achieve alignment between the depth image and the color image?

This problem has confused me for a few days. Thanks for your help.

Thanks!

• Wednesday, November 28, 2012 1:10 AM

Hi Todd_9, I want to get the point cloud XYZ and the RGB information, because I want to draw the Kinect data in OpenGL. Can you help me?

I also want to get the point clouds from the Kinect with SDK 1.6, but I don't know how to set the parameters in the function NuiTransformDepthImageToSkeleton, or in the function NuiImageGetColorPixelCoordinatesFromDepthPixel. Besides, can I achieve alignment between the depth image and the color image?

This problem has confused me for a few days. Thanks for your help.

Thanks!