How to change the coordinate system of one Kinect sensor to another Kinect sensor

  • Question

  • Hi,

    I have placed one Kinect sensor v2 opposite to me and another Kinect sensor v2 adjacent to me, so that the two Kinect sensors are at an angle of 90 degrees. I want to transform the adjacent sensor's coordinate system into the opposite Kinect sensor's coordinate system.

    Please suggest an answer.

    Thanks


    kirubha


    • Edited by KrupaKine Sunday, September 13, 2015 12:16 PM
    Sunday, September 13, 2015 12:15 PM

All replies

  • You need to perform stereo calibration. There are many ways to do this, but they generally all involve finding 4 or more points that exist in both image spaces.

    Matlab: http://uk.mathworks.com/help/vision/examples/stereo-calibration-and-scene-reconstruction.html

    OpenCV: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

    Points to note: flip your Kinect images, as they come mirrored by default from the sensor. Also decide whether to use the colour or the IR images: the colour camera has higher resolution, but the active illumination of the IR lasers can saturate a black-and-white checkerboard pattern, and the IR camera shares the same space as the depth data.

    If you want to do this very accurately, you will also need to carefully calibrate the intrinsic camera parameters.
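    Once four or more corresponding 3D points have been identified in both sensors' camera spaces, the rigid transform between the two coordinate systems can be estimated with the standard Kabsch/SVD (orthogonal Procrustes) method. Below is a minimal NumPy sketch of that step only; the point arrays are synthetic stand-ins for real matched Kinect camera-space points:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ src @ R.T + t.

    src, dst: (N, 3) arrays of corresponding 3D points (N >= 3,
    not all collinear), e.g. checkerboard corners seen by both sensors.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: points in the "adjacent" sensor's frame mapped into the
# "opposite" sensor's frame by a known 90-degree rotation plus a translation.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true

R, t = rigid_transform(src, dst)
```

    Applying the recovered `R` and `t` to every camera-space point from the secondary sensor then expresses it in the primary sensor's coordinate system. With real, noisy measurements the recovery is least-squares rather than exact, so more than the minimum 4 points helps.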

    • Proposed as answer by Phil Noonan Monday, September 14, 2015 7:55 AM
    • Unproposed as answer by Phil Noonan Tuesday, September 15, 2015 6:32 AM
    Monday, September 14, 2015 7:55 AM
  • Thanks for your reply.

    Here I am trying to transform the secondary Kinect v2 sensor's 3D point-cloud coordinate system into the primary Kinect v2 sensor's coordinate system, which is adjacent to me. If I use stereo calibration, will it yield good results?

    Please suggest a way to transform one coordinate system into the other.

    kirubha

    Monday, September 14, 2015 12:39 PM
  • If you want to use an offline method to simply realign two point clouds, you can use a program like Meshlab http://meshlab.sourceforge.net/. It has an Iterative Closest Point alignment algorithm, and there are many guides online on how to use it.

    If you want to perform alignment while obtaining data in real time, you can look at the multiKinect sample in the Kinect v1 SDK 1.8 http://www.microsoft.com/en-gb/download/details.aspx?id=40278. It shows how to project from the Fusion volume into a point cloud depending on each Kinect's position.

    I would recommend starting with Meshlab.
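    For reference, the core of the point-to-point ICP scheme that tools like Meshlab implement can be sketched in a few lines of NumPy. This is a toy illustration, not Meshlab's actual code, and it assumes the two clouds already overlap substantially and start roughly aligned:

```python
import numpy as np

def best_rigid(src, dst):
    """Kabsch/SVD: R, t minimizing ||src @ R.T + t - dst|| for paired points."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def icp(src, dst, iters=20):
    """Point-to-point ICP: repeatedly match each src point to its nearest
    dst point, then re-estimate and accumulate the rigid transform."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R_i, t_i = best_rigid(cur, matched)
        cur = cur @ R_i.T + t_i
        R, t = R_i @ R, R_i @ t + t_i               # compose transforms
    return R, t

# Demo: a regular grid of points, offset by a small known rigid motion.
g = np.arange(4.0)
src = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 64 grid points
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.03, 0.02])
dst = src @ R_true.T + t_true

R_est, t_est = icp(src, dst)
```

    The brute-force nearest-neighbour search is O(N²); real implementations use a k-d tree. Note that ICP only refines an alignment, which is why a rough initial guess (or the stereo-calibration result above) matters.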

    • Proposed as answer by Phil Noonan Tuesday, September 15, 2015 6:32 AM
    Tuesday, September 15, 2015 6:32 AM
  • Note that most, if not all, Iterative Closest Point algorithms rely on a good percentage of overlap between the two point clouds being aligned. Having your sensors far apart will probably not work, or will give poor results.

    Brekel

    Tuesday, September 15, 2015 9:18 AM
    Moderator