How to measure distance in mm between skeletal point data
Question

Hi all,
I'm looking to measure the distance between two points from the skeletal data. I understand that I need to convert the skeletal points to depth data points using
sensor.MapSkeletonPointToDepth()
so that I can get depth information for the points; it's how I then calculate the distance between those points that I'm stuck on.
So, for example, I would like to grab the left elbow joint and left wrist joint from the skeletal information and use these as the starting points. The calculated distance between the two should stay roughly the same regardless of rotation etc. (I'm not too worried about jitter or the accuracy being slightly out).
I thought it might be simple, so I tried a function I found on the web to calculate the distance between two points in 3D space. After using it I soon decided that the points returned from the SDK depth information don't appear to behave like a standard 3D space (this could very easily be my misunderstanding). The distance function I tried is below; I was passing elbow.x, elbow.y and elbow.depth directly as x1, y1, z1, and wrist.x, wrist.y and wrist.depth as the x2, y2, z2 parameters.
Could anybody point me in the right direction please?
Distance Function
/// <summary>
/// Finds the distance between two points in 3D space.
/// </summary>
/// <param name="x1">The x-axis coordinate of the first point</param>
/// <param name="y1">The y-axis coordinate of the first point</param>
/// <param name="z1">The z-axis coordinate of the first point</param>
/// <param name="x2">The x-axis coordinate of the second point</param>
/// <param name="y2">The y-axis coordinate of the second point</param>
/// <param name="z2">The z-axis coordinate of the second point</param>
/// <returns>The Euclidean distance between the two points</returns>
public static double Distance3D(double x1, double y1, double z1,
                                double x2, double y2, double z2)
{
    // d = √((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)

    // Take x2 - x1, then square it
    double part1 = Math.Pow(x2 - x1, 2);
    // Take y2 - y1, then square it
    double part2 = Math.Pow(y2 - y1, 2);
    // Take z2 - z1, then square it
    double part3 = Math.Pow(z2 - z1, 2);
    // Add the three parts together
    double underRadical = part1 + part2 + part3;
    // The square root of the sum is the distance
    return Math.Sqrt(underRadical);
}
Thanks in advance
Wes
Answers

Your distance formula is correct.
Don't map the points to depths. The skeleton points are already coordinates in 3D space with the origin at the Kinect sensor; Microsoft has done that work for you. So, for example, use elbow.X, elbow.Y, elbow.Z. The depth coordinates describe what the depth camera is seeing: at an x, y position on the camera's image, you can get the depth.
 Marked as answer by Wes Potter Thursday, April 19, 2012 12:25 PM
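The accepted approach can be sketched as follows. Note that `SkeletonPoint` here is a hypothetical stand-in struct so the snippet compiles without the Kinect SDK installed; the real type (with `X`, `Y`, `Z` in meters) comes from the SDK itself:

```csharp
using System;

// Hypothetical stand-in for the Kinect SDK's SkeletonPoint
// (X, Y, Z in meters, origin at the sensor's depth camera).
public struct SkeletonPoint
{
    public float X, Y, Z;
    public SkeletonPoint(float x, float y, float z) { X = x; Y = y; Z = z; }
}

public static class JointDistance
{
    // Euclidean distance between two skeleton points, converted to millimeters.
    public static double DistanceMm(SkeletonPoint a, SkeletonPoint b)
    {
        double dx = b.X - a.X;
        double dy = b.Y - a.Y;
        double dz = b.Z - a.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz) * 1000.0; // meters -> mm
    }

    public static void Main()
    {
        // Elbow and wrist both ~1 m from the sensor, 0.30 m apart vertically.
        var elbow = new SkeletonPoint(0.10f, 0.20f, 1.00f);
        var wrist = new SkeletonPoint(0.10f, -0.10f, 1.00f);
        Console.WriteLine(DistanceMm(elbow, wrist)); // roughly 300 mm
    }
}
```

Because both joints are expressed in the same sensor-centered 3D frame, this distance stays roughly constant as the arm rotates, which is what the question asks for.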
All replies


Aeronick is essentially correct. You should do your distance calculations in SkeletonPoint space.
A SkeletonPoint is already expressed in 3D space. The units are in meters, expressed relative to an origin located at the center of the sensor's depth camera. For example, if there is a joint exactly centered 1 meter in front of the camera, its SkeletonPoint will be (0.0, 0.0, 1.0).
The main reason for using SkeletonPointToDepth is to find out where the joint appears in the depth sensor's bitmap. For example, if your depth image has a resolution of 320x240 pixels, the same point centered 1 meter in front of the camera would map to a DepthImagePoint with X=160, Y=120, and Depth=1000 (millimeters). The conversion for an off-center SkeletonPoint is more involved, because its corresponding X and Y coordinates in the depth image will vary depending on its distance from the camera. (The closer you get to the camera, the closer to the edge of the frame you will appear to be.)
John
K4W Dev 
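John's mapping can be sketched with a simple pinhole projection. The focal length and image size below are illustrative assumptions, not the sensor's real calibration (the SDK's MapSkeletonPointToDepth does the actual conversion), but the sketch shows why an off-center point's pixel coordinates depend on its distance:

```csharp
using System;

public static class DepthProjection
{
    // Illustrative pinhole model: project a skeleton-space point (meters,
    // origin at the depth camera) onto a depth image. fx/fy are assumed
    // focal lengths in pixels, not the Kinect's real calibration values.
    public static (int X, int Y, int DepthMm) MapToDepth(
        double x, double y, double z,
        int width = 320, int height = 240, double fx = 280.0, double fy = 280.0)
    {
        int px = (int)Math.Round(width / 2.0 + fx * (x / z));
        int py = (int)Math.Round(height / 2.0 - fy * (y / z)); // image Y grows downward
        return (px, py, (int)Math.Round(z * 1000.0));          // depth in millimeters
    }

    public static void Main()
    {
        // A point centered 1 m in front of the camera lands at the image center.
        Console.WriteLine(MapToDepth(0.0, 0.0, 1.0));   // (160, 120, 1000)
        // The same lateral offset maps further from the center as z shrinks.
        Console.WriteLine(MapToDepth(0.2, 0.0, 2.0).X); // 160 + 280 * 0.1 = 188
        Console.WriteLine(MapToDepth(0.2, 0.0, 0.5).X); // 160 + 280 * 0.4 = 272
    }
}
```

This is also why distances should not be computed on DepthImagePoint coordinates: X and Y are pixels whose physical scale changes with depth, while SkeletonPoint coordinates are uniformly in meters.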
Hi John and Aeronick,
Could you please explain with some examples?
For example: consider a scenario where I want to calculate the height of a person standing in front of the Kinect. The user's head and ankle skeleton points are (0, 1, 0) and (0, -1, 0) respectively, at a 320x240 depth resolution. The user is 1 meter away from the Kinect camera, so the depth is 1000 millimeters.
In the above scenario, with Head (0, 1, 0), Ankle (0, -1, 0), depth 1000 mm and 320x240 resolution, could you please help me calculate the user's actual height?
Thanks in advance
Gowri!

Thanks, aeronick! A slight oversight on my part when reading the docs there! :) I removed the code that mapped to the depth data, and the app now seems to be doing what it should.
Regards
Wes
