Kinect JointOrientation and BoneRotation Matrix questions

  • Question

  • I am currently working on a project where I need to use WPF 3D to render a piece of 3D content. The content consists of a mesh which is linked to a Kinect skeleton via smooth skinning. I have made progress using just the positional data but not much using the rotational (BoneRotation) data added in SDK Version 1.5.

    In the attached pictures I have just a couple of joints on screen: hipcenter (black), hipright (red), and kneeright (blue). I am attempting to get the joints moving relative to each other using the following code:

    foreach (BoneOrientation bone in boneCollection)
    {
        Point3D originPoint = new Point3D(0, 1, 0);

        Matrix3D hipCenterMatrix = skeleton.GetRelativeJointMatrix("hipcenter");
        Matrix3D hipRightMatrix = skeleton.GetRelativeJointMatrix("hipright");
        Matrix3D kneeRightMatrix = skeleton.GetRelativeJointMatrix("kneeright");

        Matrix3D hipRightComposite = Matrix3D.Multiply(hipRightMatrix, hipCenterMatrix);
        Matrix3D kneeRightComposite = Matrix3D.Multiply(kneeRightMatrix, hipRightComposite);

        Point3D position;
        switch (bone.EndJoint)
        {
            case JointType.HipCenter:
                position = Point3D.Multiply(originPoint, hipCenterMatrix);
                skeletonContent.Children.Add(CreateSkeletonJoint(position, Colors.Black).Content);
                break;
            case JointType.HipRight:
                position = Point3D.Multiply(originPoint, hipRightComposite);
                skeletonContent.Children.Add(CreateSkeletonJoint(position, Colors.Red).Content);
                break;
            case JointType.KneeRight:
                position = Point3D.Multiply(originPoint, kneeRightComposite);
                skeletonContent.Children.Add(CreateSkeletonJoint(position, Colors.Blue).Content);
                break;
        }
    }

    // Set the ModelVisual3D content
    this.Content = skeletonContent;

    I have looked at the MSDN links on Kinect joint orientation, but I found the information there is not particularly clear. I have also looked at the XNA Avateering demo; while it is well commented, the code is not particularly clear about how it deals with the rotation matrices (particularly because a lot of constraining and filtering code to prevent model collisions is mixed into the manipulation of the rigged model used in it).

    My questions are as follows:

    What exactly do the joint orientation matrices represent? Do they contain any positional data?

    Should I be using the joint orientation data alone to deform the mesh, or should I be using some combination of joint orientation and the Kinect position data for each joint?

    How should I be using the joint rotation matrices to deform a mesh, as following the matrices through to/from the hipcenter doesn't appear to be working? I can see that part of the rotation is correct in the images below, but I can't seem to get them to chain together.

    Thanks for reading

    -Brett

    Wednesday, July 11, 2012 10:57 AM


All replies

  • I thought I would mention the kmotion link, because the author has made the source code available and the last poster has updated it with joint orientation and bone orientation code, which he has used to make movements much more fluid. Here's the link:

    http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/2bfff4a4-0d2d-40a3-ae65-8299f65bec8c

    Remember to post on that thread and ask the last poster to send you his code; then you can adapt it into yours and figure out where you're going wrong.


    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://jefferycarlsonblog.blogspot.com/

    • Proposed as answer by The Thinker Wednesday, July 11, 2012 7:27 PM
    Wednesday, July 11, 2012 7:26 PM
  • Thanks Thinker, I have just put a post up now. Separate to your reply, would you know of any other resources on joint orientation aside from the MSDN documentation? My root problem is an inadequate understanding of joint orientation; I can't figure out the magic incantation to get it to work because of this!

    Thanks

    -Brett

    Thursday, July 12, 2012 9:22 AM
  • I have one version in my email from the last poster; let me check it tonight and forward it to you. Post your email here and I can forward it on.

    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://jefferycarlsonblog.blogspot.com/

    Thursday, July 12, 2012 12:41 PM
  • Answered inline.

    What exactly do the joint orientation matrices represent? Do they contain any positional data?

    Joint orientations do not contain positional data. There are two kinds of rotations: hierarchical and absolute. They are in DCM (Direction Cosine Matrix) format: [xx, xy, xz; yx, yy, yz; zx, zy, zz]. Each row of the matrix represents one axis of the bone coordinate system (expressed in its parent bone's coordinate system for the hierarchical rotations, or in the camera coordinate system for the absolute rotations). The +Y vector always lies along the bone direction.
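
    For example, in SDK 1.5 both forms hang off the skeleton's BoneOrientations collection, indexed by the bone's end joint. Here is a minimal, untested sketch of reading the rows out (assuming the Microsoft.Kinect Matrix4 struct and WPF's Vector3D):

    // Hierarchical rotation of the bone that ENDS at HipRight, expressed
    // in its parent (HipCenter) bone's coordinate system.
    Matrix4 m = skeleton.BoneOrientations[JointType.HipRight].HierarchicalRotation.Matrix;

    // Each row is one axis of the child bone's frame; +Y runs along the bone.
    var xAxis = new Vector3D(m.M11, m.M12, m.M13);
    var yAxis = new Vector3D(m.M21, m.M22, m.M23);  // the bone direction
    var zAxis = new Vector3D(m.M31, m.M32, m.M33);

    // The camera-space equivalent uses AbsoluteRotation instead:
    Matrix4 abs = skeleton.BoneOrientations[JointType.HipRight].AbsoluteRotation.Matrix;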

    Should I be using the joint orientation data alone to deform the mesh, or should I be using some combination of joint orientation and the Kinect position data for each joint?

    Technically, just the root hip center position, the joint orientations, and the bone lengths should be sufficient to deform the mesh. If you are using this information to deform a mesh (avatar), the bone lengths will be fixed for that particular avatar, so all you need are the bone orientations and the hip center root joint position to translate the avatar.
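
    If you instead want bone lengths measured from the tracked skeleton (e.g. to match the player rather than a fixed avatar), a small sketch, assuming the SDK 1.5 Joints collection:

    // Bone length between two tracked joints, in meters (SkeletonPoint units).
    static double BoneLength(Skeleton skeleton, JointType start, JointType end)
    {
        SkeletonPoint a = skeleton.Joints[start].Position;
        SkeletonPoint b = skeleton.Joints[end].Position;
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }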

    Relating to skinning the mesh: in the Avateering sample, I'd recommend you ignore the constraining and filtering code; it is present only to improve the visual appearance of the avateering. The code you should concentrate on is the retargeting functions in AvateeringXNA.cs (BuildJointHierarchy, RetargetMatrixHierarchyToAvatarMesh, and SetJointTransformation), which convert the Kinect SDK skeleton orientations to another model mesh skeleton, and then UpdateWorldTransforms and UpdateSkinTransforms in AvatarAnimator.cs, which take these converted orientations and apply them to the avatar skeleton bones to calculate the final bone and skin positions by multiplying through the hierarchy.

    Reference for skinning avatars: looking at how the skinning is done on the CPU might also help. The XNA CPU Skinning sample is a good place to start, as it demonstrates how the bone orientations are applied to the mesh vertices to deform the mesh in XNA: http://create.msdn.com/en-US/education/catalog/sample/cpu_skinning

    Skinning for Avateering: the avatar mesh bone offset/bone length is used in the skinning. The first step in the Update function in AvatarAnimator.cs is to set the combined bind pose orientations and bone positions from the avatar mesh skeleton every frame. This default orientation is then replaced by the calculated bone orientation in the ReplaceBoneMatrix function, which is called for each bone from SetJointTransformation in AvateeringXNA.cs.


    How should I be using the joint rotation matrices to deform a mesh, as following the matrices through to/from the hipcenter doesn't appear to be working? I can see that part of the rotation is correct in the images below, but I can't seem to get them to chain together.

    If you are just trying to re-calculate the skeleton joint positions from bone lengths and *hierarchical* orientations, you will likely need to perform operations something like this: start with the root hip center position, and multiply the XNA +Y/up vector by the orientation of the hip center to set the root orientation of the whole skeleton. Then multiply the calculated +Y bone vector of the hip center (i.e. the previous bone's +Y vector) by the orientation of the next bone, and translate by that next bone's length (which you either have from your avatar mesh or need to calculate from the joint positions), repeating for each bone in the chain. Don't forget that orientations are stored at the end joint of a bone (so the hip center to hip left bone orientation is stored at hip left). A sketch of this chaining is below.
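
    To make that recipe concrete, here is a minimal, untested sketch in WPF terms, matching the Matrix3D code in the question. chainOrder, boneLengths, DrawJoint, and hipCenterPosition are hypothetical placeholders you would supply; the Matrix4-to-Matrix3D copy assumes both types are row-major:

    // Copy a Kinect SDK Matrix4 into a WPF Matrix3D.
    static Matrix3D ToMatrix3D(Matrix4 m)
    {
        return new Matrix3D(m.M11, m.M12, m.M13, m.M14,
                            m.M21, m.M22, m.M23, m.M24,
                            m.M31, m.M32, m.M33, m.M34,
                            m.M41, m.M42, m.M43, m.M44);
    }

    // Rebuild joint positions from the root position, *hierarchical*
    // rotations, and bone lengths. WPF Matrix3D uses row vectors, so each
    // child's local rotation multiplies on the LEFT of the accumulated
    // parent chain (child applied first, then the parents).
    Point3D position = hipCenterPosition;       // tracked root joint position
    Matrix3D chain = Matrix3D.Identity;
    foreach (JointType endJoint in chainOrder)  // e.g. HipCenter, HipRight, KneeRight
    {
        Matrix3D local = ToMatrix3D(
            skeleton.BoneOrientations[endJoint].HierarchicalRotation.Matrix);
        chain = Matrix3D.Multiply(local, chain);

        // The bone runs along +Y in its own frame; rotate it into camera
        // space and step from the parent joint to this end joint. The root
        // bone has zero length, so HipCenter contributes only orientation.
        Vector3D bone = chain.Transform(new Vector3D(0, boneLengths[endJoint], 0));
        position += bone;
        DrawJoint(position, endJoint);
    }

    Note the left-multiplication: with the row-vector convention, composing child-then-parent is the same ordering as the hipRightComposite and kneeRightComposite products in the question; what the question's code is missing is the translation by the bone length at each step.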

    Thanks,
    Ashok Jallepalli,
    Program Manager, Kinect for Windows Team


    • Edited by ashok j Friday, July 13, 2012 11:41 PM
    • Marked as answer by BrettLawless Wednesday, July 18, 2012 9:14 AM
    Friday, July 13, 2012 11:40 PM
  • Thanks for that detailed reply; it resolves a lot of the confusion I was experiencing. I'll post an update when I get back to this aspect of the project. I'm taking a day or two off for mental defragging.

    Thanks,

    Brett

    Monday, July 16, 2012 10:07 AM
  • enfusion, I have sent you the updated code from kmotion, but I'm interested in the updated source myself and will use it later.

    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://jefferycarlsonblog.blogspot.com/

    Tuesday, July 17, 2012 12:03 PM
  • Dear Ashok,

    I am new to Kinect and XNA, so understanding the Avateering sample looks a little complicated. I am trying to create a virtual dress app with the Kinect.

    Can you please suggest how to do this, starting from the Avateering sample code? I just want to show the current user in the video frame and draw the dress on the video at the right place on the skeleton.

    What are the basic things I need to do? It would be great if you could give your pointers here.

    Looking forward to your reply.


    Regards,
    Jayakumar Natarjan
    Click Here :Blog

    Monday, July 30, 2012 2:17 PM
  • Hi guys,

    A day or two off took a while longer than expected, as I had other work to get done. With the new information from Ashok, I proceeded to create the skeleton using the relative rotation information. For each bone I translated by the bone length on the Y-axis and then rotated using the Kinect relative rotations (chaining these rotations and translations from the hipcenter out to each joint).

    So basically each joint has a rotation and a translation matrix (a transform group), and I joined the transform groups from the hipcenter until I reached the desired bone; a rough sketch of the idea follows below.
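
    For readers following along, a rough, hypothetical sketch of one such per-bone transform group (ToMatrix3D is the Matrix4-to-Matrix3D copy from the sketch in Ashok's answer; ParentOf, boneLength, hipPos, and jointVisual are placeholders for your own hierarchy table, bone measurements, root position, and visual):

    // Per bone: translate up +Y by the bone length, then rotate by the
    // hierarchical rotation into the parent's frame. Walking from the end
    // joint back toward the root appends the parent transforms afterwards,
    // matching the first-to-last order WPF applies a Transform3DGroup in.
    var group = new Transform3DGroup();
    for (JointType jt = endJoint; jt != JointType.HipCenter; jt = ParentOf(jt))
    {
        group.Children.Add(new TranslateTransform3D(0, boneLength[jt], 0));
        group.Children.Add(new MatrixTransform3D(
            ToMatrix3D(skeleton.BoneOrientations[jt].HierarchicalRotation.Matrix)));
    }
    // Finish with the root orientation and the hip center's tracked position.
    group.Children.Add(new MatrixTransform3D(
        ToMatrix3D(skeleton.BoneOrientations[JointType.HipCenter].HierarchicalRotation.Matrix)));
    group.Children.Add(new TranslateTransform3D(hipPos.X, hipPos.Y, hipPos.Z));
    jointVisual.Transform = group;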

    Here's a pic of one of the skeletons being generated.

    Thanks for the help,

    Brett

    Wednesday, September 5, 2012 3:46 PM
  • Can you provide a sample for the same?

    Regards,
    Jayakumar Natarjan
    Click Here :Blog

    Wednesday, October 17, 2012 12:36 PM
  • The Avateering sample is now updated, Jayakumar. You should try playing around with the code and drawing clothes over top of the human (avatars allow easy clothing changes compared to other 3D models).


    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://www.computerprofessions.co.nr


    • Edited by The Thinker Thursday, October 18, 2012 12:36 PM
    Thursday, October 18, 2012 12:36 PM
  • Hi there,

    I was wondering if you could help me with something, please. I understand the principle of chaining the rotations, but I am confused about the very beginning of the process.

    From my understanding, the rotation given at the HipCenter is the rotation needed to transform the Kinect coordinate system (X to the left, Y up, and Z away from the camera) into a coordinate system where the hip bone follows the Y axis, X points to the player's left, and Z points to the camera (...ish, depending on whether your body is slightly side-on, etc.). Is this correct?

    Did you do what Ashok suggested and multiply the HipCenter position by the HipCenter orientation? I'm confused about why to do that. Is it to put that coordinate in the new coordinate system with its origin at the HipCenter?

    Do you then draw a line (length determined by the Spine coords minus the HipCenter coords) up the Y-axis? Then draw a line for the child bone along this Y-axis and rotate by the child's orientation... then draw a line along this new Y-axis and rotate, etc.?

    Some unrelated questions: 

    - do you know if the same technique applies to using quaternions instead of the matrices? 

    - in what circumstances would you use the absolute orientations? (My understanding is that if I wanted to draw a line to represent any of the joints, I would just draw it along the Kinect coordinate Y-axis and apply its rotation?)


    Thank you very much in advance,

    Laura

    Monday, October 22, 2012 7:11 PM
  • I know that the coordinates of the joints are more precise now, because I tested the updated source code from the above link emailed to me from Ashok.

    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://www.computerprofessions.co.nr

    Wednesday, October 24, 2012 9:49 PM
  • The Thinker, could you send me the updated kmotion code, which takes advantage of the joint rotation data provided in SDK 1.5? I want to explore it to understand the concepts better.

    My goal now is to get a BVH file from Kinect raw data, and I hope that code will help me. After reading lots of posts I realized that it's very complicated to calculate the rotations of all the joints in Euler coordinates, but in SDK 1.5 the rotation data is available through the API, isn't it?

    Thanks in advance.

    Friday, February 8, 2013 8:41 PM
  • I will have to look for it; post your email here and I will forward it on to you. If I do not respond for a while, send a query to my personal email: jefferycarlson@gmail.com. Sometimes college gets me caught up, so if you email in April I might not respond until the second week of May because of finals.

    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://www.computerprofessions.co.nr

    Friday, February 8, 2013 10:32 PM
  • Sent an email to you. Thank you very much.
    Sunday, February 10, 2013 1:33 PM
  • I haven't received an answer from you yet.

    Maybe my letter was placed in your spam folder. My email: alexs555@yandex.ru

    I'm looking forward to taking a look at what has already been done with Kinect joint rotations, so as not to reinvent the wheel.

    Tuesday, February 12, 2013 5:51 PM
  • You can always go to akira's thread and bug the guy there, but I think my reply might have gone into your junk email. I will try sending it directly to your email; I will have to look for it, though.

    Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth. - "Sherlock holmes" "speak softly and carry a big stick" - theodore roosevelt. Fear leads to anger, anger leads to hate, hate leads to suffering - Yoda. Blog - http://www.computerprofessions.co.nr

    Tuesday, February 12, 2013 6:49 PM
  • Thanks for this useful information, but could you kindly tell me the coordinate frames of the human arm (shoulder, elbow, wrist, and hand)? Can you send me a figure illustrating that?

    Thanks,

    Asmaa Harfoush

    Monday, July 29, 2013 10:57 AM
  • This is already documented in the SDK docs.

    http://msdn.microsoft.com/en-us/library/hh973073.aspx

    Carmine Sirignano - MSFT

    Monday, July 29, 2013 6:01 PM
  • Thank you, but that page doesn't illustrate the coordinate frames of the wrist and the hand, which is what I really need.

    Asmaa Harfoush

    Tuesday, July 30, 2013 12:33 PM
  • It follows the same pattern: right-handed coordinates, Z facing out, where Y is the heading direction. We don't provide twist-deformation-type rotations.

    Carmine Sirignano - MSFT


    Tuesday, July 30, 2013 5:32 PM
  • Thanks for your efforts, but I am still confused about the coordinates you described; are they for the wrist? Also, I understood from the earlier answers that 'The +Y vector always lies along the bone direction'; do you mean the same thing in your answer?

    Thanks a lot,

    Asmaa Harfoush
    Tuesday, July 30, 2013 7:59 PM
  • The orientation applies to all joints. Yes, it is the same meaning.


    Carmine Sirignano - MSFT

    Wednesday, July 31, 2013 12:23 AM
  • Thanks a lot.

    Asmaa Haroush

    Wednesday, July 31, 2013 8:22 AM
  • I wonder if I can use markers with the Kinect?

    I want to be able to trace the motion of a human arm while fencing. If I can put markers at the tip of the weapon and on the joints of the arm, and then get the positions of those joints and the weapon tip, my problem will be solved.

    Thanks in advance

    Asmaa Harfoush

    Wednesday, July 31, 2013 5:50 PM
  • Hi Brett,

    I'm looking for some help doing similar things with the Kinect skeleton data in 3D space. I need to work out the offset of each joint from its parent joint (starting with the hip center as the root). I'm not sure how to go about doing this; do you know how I'd calculate the joint offsets? Do you have sample code for what you achieved in this post?

    Many thanks!

    Tuesday, October 15, 2013 9:55 PM
  • Hi,

    Kindly, can you tell me exactly where the origin of the Kinect's coordinate frame is located? Can you also tell me its (x, y, z) offsets from the outer frame of the Kinect?

    Thanks in advance

    Saturday, January 25, 2014 3:19 PM