Kinect for Windows SDK and XNA framework

  • Question

  • Hi

 I am trying to show a 3D sphere on the screen at the same location as my hand as shown in a video. I am using the Kinect for Windows SDK to get the video input and the skeleton joint information for the hand location, and the XNA framework to display the 3D sphere. Has anyone tried something like this?


    Tuesday, November 15, 2011 10:17 AM

All replies

  • Certainly seems possible.  Were you running into any issues?
    Wednesday, November 16, 2011 4:45 AM
  • Thanks Brian. I was able to get it working, but the ball is never positioned exactly at the hand coordinates. There is always some varying offset between them: the ball follows the hand as it moves, but the two stay some distance apart.

    Briefly, here is what I am doing:

    I am overlaying the video from Kinect on the window using the Texture2D class. I created a SpriteBatch with this texture and invoked its Draw function, and it works. However, I need to be able to view the video as-is; right now the video looks hazy because of the tint applied to the texture. It would be great if there were a way to get a clear video as well.

    I am setting the coordinates of the sphere equal to the right-hand coordinates. Then I create a translation matrix from the sphere's coordinates and multiply the world matrix by this translation matrix. Finally I call the primitive's Draw method, passing the world, view, and projection matrices.
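    The world-matrix step described above can be sketched in Python for illustration (plain lists standing in for XNA's row-major Matrix type; `create_translation` mirrors what `Matrix.CreateTranslation` produces):

```python
# Minimal sketch of the translation step described above, in Python for
# illustration. XNA uses row-major matrices with row vectors, so a
# translation matrix stores (tx, ty, tz) in its last row and the world
# matrix is combined as world = world * translation.

def create_translation(tx, ty, tz):
    """4x4 row-major translation matrix, as Matrix.CreateTranslation builds."""
    return [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [tx,  ty,  tz,  1.0],
    ]

def mat_mul(a, b):
    """Row-major 4x4 matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(point, m):
    """Transform a 3D point (as a row vector with w = 1) by matrix m."""
    x, y, z = point
    return tuple(x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i]
                 for i in range(3))

# Place a sphere at a hand position of (0.3, 0.5, 1.8) meters: rebuild the
# world matrix from identity each frame, then apply the translation.
identity = create_translation(0.0, 0.0, 0.0)
world = mat_mul(identity, create_translation(0.3, 0.5, 1.8))
print(transform((0.0, 0.0, 0.0), world))  # sphere centre lands at the hand
```

    One thing worth checking in this setup: if the translation is multiplied into the same world matrix on every frame (`world *= translation`) rather than rebuilding the matrix from identity each frame, the translations accumulate and the sphere drifts away from the hand.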

    Please let me know if there are other ways to achieve the desired effect.



    Thursday, November 17, 2011 11:45 AM
  • Are you using the Smoothing function?


    Just call this line at the NUI initialization code:

    kinectObject.SkeletonEngine.TransformSmooth = true;

    And things should already look a lot better.

    You can use TransformSmoothParameters to fine-tune the smoothing. Apply the parameters using

    kinectObject.SkeletonEngine.SmoothParameters = transformSmoothParameters; 
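    The idea behind the joint smoothing can be sketched as a simple exponential filter, in Python for illustration. The SDK's actual filter is more sophisticated (it is tuned through TransformSmoothParameters), but the effect is similar: each raw joint reading is blended with the previous smoothed value, trading a little latency for much less jitter.

```python
# Illustration only: a simple exponential filter over a jittery joint
# coordinate. Not the SDK's real algorithm, just the underlying idea.

def smooth(raw_positions, smoothing=0.5):
    """Blend each raw reading with the running smoothed value.

    smoothing = 0 passes raw data through; values near 1 smooth heavily.
    """
    smoothed = []
    prev = raw_positions[0]
    for p in raw_positions:
        prev = smoothing * prev + (1.0 - smoothing) * p
        smoothed.append(prev)
    return smoothed

# A jittery hand X coordinate (meters): the filtered track varies far less.
raw = [0.30, 0.34, 0.29, 0.33, 0.31]
print(smooth(raw))
```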




    • Edited by Erik Kramer Friday, November 18, 2011 1:34 PM
    Friday, November 18, 2011 1:31 PM
  • Hello Erik, thanks for helping. I made the change you suggested and set TransformSmooth = true, but I am still facing the same issue as described above.



    Saturday, November 19, 2011 10:28 AM
  • Could you post some source code? I will look into it and maybe I can come up with a solution.




    Monday, November 21, 2011 8:23 AM
  • Hello Erik, thanks for offering to help.

    Here is the code that sets the sphere's coordinates equal to the hand coordinates:

    spheres[i].Position.X = Joint.RightHandPosition.X;
    spheres[i].Position.Y = Joint.RightHandPosition.Y;
    spheres[i].Position.Z = Joint.RightHandPosition.Z;

    worldTranslation = Matrix.CreateTranslation(spheres[i].Position);
    worldX *= worldTranslation;
    currentPrimitive.Draw(worldX, view, projection, spheres[i].Color, false);

    Here is the code I am using to draw the Kinect image in the XNA app window:

    PlanarImage p = KinectImage; // the RGB frame obtained from Kinect
    texture1 = new Texture2D(this.GraphicsDevice, p.Width, p.Height);
    texture1.SetData(p.Bits); // upload the frame bytes before drawing
    SpriteBatch spriteBatch = new SpriteBatch(this.GraphicsDevice);

    spriteBatch.Begin();
    spriteBatch.Draw(texture1, new Rectangle(0, 0, 1280, 1024), Color.GhostWhite);
    spriteBatch.End();


    Please let me know if there are better ways of achieving the same functionality.




    Thursday, November 24, 2011 1:53 PM
  • Ravi

    What kind of graphics card are you using?




    Friday, November 25, 2011 4:54 PM
  • Hi Ravi, have you looked into the different coordinate spaces?  Skeleton data is in meters, whereas image data is in pixels obviously.  In order to get from skeleton positioning to image positioning, the Skeleton Viewer sample uses this function (see the sample or this walkthrough):

    private Point getDisplayPosition(Joint joint)
    {
        float depthX, depthY;
        nui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out depthX, out depthY);
        depthX = Math.Max(0, Math.Min(depthX * 320, 320)); // scale to 320x240 depth space
        depthY = Math.Max(0, Math.Min(depthY * 240, 240));

        int colorX, colorY;
        ImageViewArea iv = new ImageViewArea();
        // only ImageResolution.Resolution640x480 is supported at this point
        nui.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(ImageResolution.Resolution640x480, iv, (int)depthX, (int)depthY, (short)0, out colorX, out colorY);

        // "skeleton" here is the image element the joints are drawn over
        return new Point((int)(skeleton.Width * colorX / 640.0), (int)(skeleton.Height * colorY / 480.0));
    }

    As you can see it goes from skeleton -> depth and then depth -> color.  Do you have something similar in your code? 
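    The scaling and clamping in that pipeline can be illustrated in Python. Here `depth_pixel` stands in for `SkeletonToDepthImage` output scaling (the SDK returns normalized coordinates in roughly [0, 1]), and the depth-to-color step is approximated as a plain 2x upscale; the real `GetColorPixelCoordinatesFromDepthPixel` also corrects for the physical offset between the depth and color cameras, which this sketch ignores.

```python
# Sketch of the skeleton -> depth -> color -> screen coordinate pipeline.
# The depth->color step is a simplification (2x upscale, no camera-offset
# correction), so real results will differ by a few pixels.

def clamp(value, lo, hi):
    return max(lo, min(value, hi))

def depth_pixel(norm_x, norm_y):
    """Normalized skeleton-space coords -> 320x240 depth-image pixels."""
    return clamp(norm_x * 320, 0, 320), clamp(norm_y * 240, 0, 240)

def color_pixel(depth_x, depth_y):
    """Depth pixels -> 640x480 color pixels (approximation: 2x upscale)."""
    return int(depth_x) * 2, int(depth_y) * 2

def display_position(norm_x, norm_y, screen_w, screen_h):
    """Color pixels -> final screen coordinates, as in getDisplayPosition."""
    cx, cy = color_pixel(*depth_pixel(norm_x, norm_y))
    return int(screen_w * cx / 640.0), int(screen_h * cy / 480.0)

# A hand at the middle of the frame lands at the middle of a 1280x1024 view:
print(display_position(0.5, 0.5, 1280, 1024))  # -> (640, 512)
```

    Skipping any one of these stages (for example, feeding raw skeleton meters straight into screen space) produces exactly the kind of constant-but-varying offset described earlier in the thread.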

    • Proposed as answer by ykbharat Sunday, May 6, 2012 12:13 PM
    Friday, November 25, 2011 8:42 PM
  • Thanks Bob for the suggestion. I will try this approach and get back.



    Monday, November 28, 2011 1:54 PM