Applying a transformation matrix to Kinect v2 body data

  • General discussion

  • Hi,

  I have to apply a transformation matrix to each body tracked by Kinect v2. My application receives body data from two sensors on different systems and merges them, but when merging I get displaced joint positions. To merge correctly I have to apply a transformation matrix to each body so that joint positions are translated from one coordinate space into the other. My current code is based on Kinect for Windows 1.8, where I can read the sensor's acceleration, but Kinect for Windows 2.0 has no accelerometer property. Please suggest what to do in this situation.

  My current (v1.8) code is:

    // Build the transformation matrix for this sensor.
    // Read the sensor's accelerometer (Kinect for Windows 1.8 only).
    Vector4 acceleration = kinect.AccelerometerGetCurrentReading();

    double kinectYaw = 0.0;                        // yaw about the vertical axis, if known
    Point3D kinectPosition = new Point3D(0, 0, 0); // sensor position in the shared space

    // Rotation that aligns the measured gravity vector with "down" (0, -1, 0).
    // findRotation is a helper of mine that returns the rotation between two vectors.
    Matrix3D gravityBasedKinectRotation = findRotation(
        new Vector3D(acceleration.X, acceleration.Y, acceleration.Z),
        new Vector3D(0, -1, 0));

    AxisAngleRotation3D yawRotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), -kinectYaw);
    RotateTransform3D yawTrans = new RotateTransform3D(yawRotation);
    TranslateTransform3D transTrans = new TranslateTransform3D((Vector3D)kinectPosition);

    // Yaw first, then the gravity-based tilt correction, then the translation.
    Matrix3D masterMatrix = Matrix3D.Multiply(
        Matrix3D.Multiply(yawTrans.Value, gravityBasedKinectRotation),
        transTrans.Value);

    skeletonTransformation = masterMatrix;  // This may need a lock, but that would require a separate lock object


    Wednesday, November 19, 2014 10:20 AM

All replies

  • If you are trying to adjust for the camera angle, you need to use the FloorClipPlane. This does require that the sensor is mounted in a way that lets it determine the floor plane. If the sensor is not mounted horizontally with a downward tilt, you will have to determine the angle/position from physical measurements.

    The alternative may be to put your own accelerometer on the devices and read data off that.
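    To illustrate: the Kinect v2 FloorClipPlane is a Vector4 whose (x, y, z) is the floor normal in camera space and whose w is the camera height above the floor, so the tilt correction that the v1.8 accelerometer code computed can be derived from it instead. A minimal NumPy sketch of that idea (the helper name and the example plane are mine, not part of the SDK):

```python
import numpy as np

def rotation_from_floor_plane(floor_plane):
    """Build a rotation that levels the camera, given the Kinect v2
    FloorClipPlane (x, y, z, w): (x, y, z) is the floor normal in camera
    space, w is the camera height above the floor in metres."""
    n = np.array(floor_plane[:3], dtype=float)
    n /= np.linalg.norm(n)          # floor normal as seen by the camera
    up = np.array([0.0, 1.0, 0.0])  # world "up" we want the normal mapped onto
    axis = np.cross(n, up)
    s = np.linalg.norm(axis)        # sin of the tilt angle
    c = np.dot(n, up)               # cos of the tilt angle
    if s < 1e-9:                    # already level (or exactly upside down)
        return np.eye(3) if c > 0 else -np.eye(3)
    axis /= s
    # Rodrigues' rotation formula: R = I + sin(a) K + (1 - cos(a)) K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# A camera pitched down about 10 degrees reports a tilted floor normal:
plane = (0.0, np.cos(np.radians(10)), np.sin(np.radians(10)), 1.2)
R = rotation_from_floor_plane(plane)
print(R @ np.array(plane[:3]))   # maps the floor normal back onto (0, 1, 0)
```

    Applying R to every joint position then removes the sensor's tilt, playing the same role as the gravity-based rotation in the v1.8 code above.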

    Carmine Sirignano - MSFT

    Wednesday, November 19, 2014 7:39 PM
  • Thanks Carmine,
         Here we are placing the two sensors opposite each other, with the subject standing in the middle between them.
    Now I have to merge both sensors' skeleton data into a central position, but as you can see in the picture it does not come out in the correct position. Please share your thoughts on how to merge the two sensors' skeleton data into a central position, so that I can view a 360-degree rotation of the skeleton without joints being inferred.


    Thursday, November 20, 2014 1:22 PM
  • The body tracking isn't perfect, especially if one sensor is seeing the back of someone. Have you considered filtering with something like RANSAC? Looking at your pictures, sensor 2's shoulders and arms look very different from sensor 1's.

    I would start off more simply to test the code, by having both sensors look at the same person from the same viewpoint. If that works and you can successfully align the joint positions, I would then go on to rotating the sensors.

    Hope this helps.

    Thursday, November 20, 2014 2:16 PM
  • If you're trying to merge data from multiple sensors you'll first have to determine a way to calibrate them into a single coherent 3D space and derive full transform matrices for both sensors.

    Then you can use those to transform all joints into that coherent space, and then you can start thinking about a clever algorithm for merging them.
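    To make that concrete, here is a minimal NumPy sketch of the "transform all joints into a coherent space" step; the sensor placement and the joint values are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

def make_transform(R, t):
    """Compose a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_world(T, joints):
    """Map an (N, 3) array of joint positions through a 4x4 transform."""
    homo = np.hstack([joints, np.ones((len(joints), 1))])
    return (homo @ T.T)[:, :3]

# Hypothetical setup: sensor 2 faces sensor 1 from 4 m away, i.e. it is
# yawed 180 degrees and translated 4 m along sensor 1's z-axis.
R2 = np.diag([-1.0, 1.0, -1.0])          # 180-degree rotation about Y
T2 = make_transform(R2, [0.0, 0.0, 4.0])
joint_s2 = np.array([[0.1, 1.0, 2.0]])   # a joint as sensor 2 sees it
print(to_world(T2, joint_s2))            # the same joint in sensor 1's space
```

    Once every joint from both sensors lives in the same space like this, merging (e.g. averaging matching joints) becomes a separate, well-defined step.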

    Also keep in mind that the skeleton tracking algorithm is designed to work with bodies facing the camera, so when seeing someone from the back the data may not be accurate.

    I believe this is why both sensor streams above look so different.
    Keep in mind this is quite an involved and advanced project to tackle.

    Btw Carmine, even if the floor cannot be seen, the FloorClipPlane vector will still contain a valid direction, presumably derived from the internal accelerometer.


    Thursday, November 20, 2014 4:10 PM
  • Hi Brekel,

    What is the best way to calibrate the sensor data into a single coherent 3D space?

    Do you have any suggestion or algorithm to implement this?

    Thanks in advance



    Wednesday, December 3, 2014 12:09 PM
  • Using an Iterative Closest Point Algorithm can work in certain cases, maybe you can use Kinect Fusion's implementation of this to help.
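    For reference, a self-contained NumPy sketch of a naive ICP loop: brute-force nearest-neighbour matching plus a Kabsch least-squares rigid fit, repeated until the clouds line up. This is illustrative only, not Kinect Fusion's implementation, and the test data is synthetic:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid rotation R and translation t mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=30):
    """Naive ICP: match each source point to its nearest target point by brute
    force, fit a rigid transform, repeat. O(N*M) per step, illustration only."""
    src = source.copy()
    for _ in range(iters):
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[dists.argmin(axis=1)]   # nearest-neighbour correspondences
        R, t = kabsch(src, matched)
        src = src @ R.T + t
    return kabsch(source, src)   # net transform from the original source cloud

rng = np.random.default_rng(1)
target = rng.normal(size=(40, 3))
a = np.radians(2)                                # small misalignment
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + [0.03, -0.02, 0.01]     # slightly rotated, shifted copy
R, t = icp(source, target)
print(np.abs(source @ R.T + t - target).max())   # small residual after alignment
```

    Note that plain ICP only converges from a rough initial alignment, which is why the intrinsic/extrinsic calibration steps come first.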


    Wednesday, December 3, 2014 2:28 PM
  • Hi Brekel,

    Thank you for your input.

    Yes, I am following you. I am implementing it with the following steps:

    1. Calibrate using intrinsic parameters

    2. Calibrate using extrinsic parameters

    3. Align the point cloud data

    4. Run the Iterative Closest Point algorithm
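    The extrinsic step (2) can be sketched with a least-squares rigid fit: because both sensors label the same body joints, the Kinect joint types themselves provide the correspondences, so no nearest-neighbour search is needed. A NumPy sketch with made-up joint positions:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares (Kabsch) rigid transform mapping joint set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Hypothetical joints of one body seen by both sensors, in the same order,
# so index i in both arrays is the same Kinect joint type.
joints_s1 = np.array([[0.0, 1.7, 2.0],    # head
                      [0.0, 1.4, 2.0],    # spine
                      [-0.2, 1.5, 1.95],  # left shoulder
                      [0.2, 1.5, 2.0]])   # right shoulder
yaw = np.radians(180)                     # sensor 2 faces sensor 1, 4 m away
R_true = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
joints_s2 = joints_s1 @ R_true.T + [0.0, 0.0, 4.0]

R, t = fit_rigid(joints_s2, joints_s1)    # estimated extrinsics of sensor 2
aligned = joints_s2 @ R.T + t
merged = 0.5 * (joints_s1 + aligned)      # simple per-joint average
print(np.abs(merged - joints_s1).max())   # ~0: both views now agree
```

    In practice you would fit over many frames and only use joints both sensors track with high confidence, since inferred joints will pollute the fit.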

    Please let me know if I missed anything here.

    If you have any sample of the Iterative Closest Point algorithm, that would save my life.



    Thursday, December 4, 2014 7:14 AM