Kinect Fusion initial camera pose with stationary camera scanning

  • Question

  • I am attempting to scan an object placed on a rotation stage with a stationary camera positioned some distance away, looking down on the stage. I want my world origin to coincide with the center of the rotation stage, and the reconstruction volume to sit on top of the rotation stage, aligned with its axes at 0 rotation. I have the sensor pose matrix relative to the stage/world origin available (i.e. CameraToWorldTransform). I am starting the Fusion scan at 0 stage rotation and going all the way around. I want a vertex that lands at the center of the stage in the exported mesh to have the coordinate (0, 0, 0).

    My question is how to correctly call the ResetReconstruction() method to achieve this. I have tried passing my sensor pose matrix as the initialWorldToCameraTransform parameter and leaving the default worldToVolumeTransform, but this has no effect unless UseCameraPoseFinder is off. With UseCameraPoseFinder disabled, the orientation of the reconstruction volume does get aligned with the stage axes; however, the resulting mesh has a large Z offset: the vertex at the stage origin has a large Z value. Perhaps I need to alter the worldToVolumeTransform to center the volume on the world origin. Any suggestions would be much appreciated.
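
    For reference, a rough sketch of the kind of call being discussed (managed Kinect Fusion API; the volume settings and pose values below are placeholders, not actual calibration data):

        using Microsoft.Kinect.Fusion;

        // Sketch only: placeholder volume settings; pose values are stand-ins.
        int voxelsX = 384, voxelsY = 384, voxelsZ = 384;
        var volumeParameters = new ReconstructionParameters(256.0f, voxelsX, voxelsY, voxelsZ);

        // World-to-camera pose = inverse of the calibrated CameraToWorldTransform
        // (identity used here only as a placeholder).
        Matrix4 worldToCamera = Matrix4.Identity;

        Reconstruction volume = Reconstruction.FusionCreateReconstruction(
            volumeParameters, ReconstructionProcessor.Amp, -1, worldToCamera);

        // Start from the default world-to-volume transform and adjust its
        // translation row (voxel units) so the world origin lands where it is
        // wanted inside the volume; here it is pushed half the volume depth
        // along Z purely as an example.
        Matrix4 worldToVolume = volume.GetCurrentWorldToVolumeTransform();
        worldToVolume.M43 += voxelsZ / 2.0f;

        volume.ResetReconstruction(worldToCamera, worldToVolume);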


    • Edited by simfero Friday, December 12, 2014 7:04 AM
    Thursday, December 11, 2014 9:49 PM

Answers

  • The 0,0,0 of reconstruction space is going to align with the first depth frame. The reconstruction volume uses a right-handed coordinate system where Z goes into the world. You would have to set up your camera in a way that makes the center point align with the top left of the depth frame.

    The only way to correct for that is, after you have built your reconstruction, to rebase the vertices by some offset, which can be calculated as an average of the points.
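
    As an illustration of that rebasing step (a sketch only; it assumes the exported vertices are available as Microsoft.Kinect.Fusion.Vector3 values, e.g. from the mesh returned by CalculateMesh()):

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Kinect.Fusion;

        // Sketch: recentre exported mesh vertices by subtracting their centroid,
        // i.e. the "average of points" offset described above.
        static List<Vector3> RebaseVertices(IReadOnlyCollection<Vector3> vertices)
        {
            float cx = vertices.Average(v => v.X);
            float cy = vertices.Average(v => v.Y);
            float cz = vertices.Average(v => v.Z);

            return vertices
                .Select(v => new Vector3 { X = v.X - cx, Y = v.Y - cy, Z = v.Z - cz })
                .ToList();
        }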


    Carmine Sirignano - MSFT

    Friday, December 12, 2014 7:23 PM

All replies

  • Simfero, did you get it working, and how did you construct and pass the transformation matrix to ResetReconstruction()?
    Monday, December 14, 2015 7:23 AM
  • Yes, I eventually figured it out. My problem had to do with differences in coordinate systems and the order in which rotations were carried out. Beyond that, I just had to set initialWorldToCameraTransform to the right matrix and it worked.
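
    For illustration, the conversion involved might look roughly like this (a sketch only; it assumes the calibrated CameraToWorldTransform is a rigid pose held in a Kinect Fusion Matrix4, row-major with translation in M41/M42/M43, and simply inverts it to obtain the world-to-camera matrix that ResetReconstruction() expects):

        using Microsoft.Kinect.Fusion;

        // Sketch: invert a rigid camera-to-world pose (rotation in the top-left
        // 3x3 block, translation in M41/M42/M43) to get world-to-camera.
        // In this row-vector convention the inverse of [R | t] is [R^T | -t*R^T].
        static Matrix4 InvertRigidPose(Matrix4 m)
        {
            Matrix4 inv = Matrix4.Identity;

            // Transpose the 3x3 rotation block.
            inv.M11 = m.M11; inv.M12 = m.M21; inv.M13 = m.M31;
            inv.M21 = m.M12; inv.M22 = m.M22; inv.M23 = m.M32;
            inv.M31 = m.M13; inv.M32 = m.M23; inv.M33 = m.M33;

            // New translation = -t * R^T.
            inv.M41 = -(m.M41 * inv.M11 + m.M42 * inv.M21 + m.M43 * inv.M31);
            inv.M42 = -(m.M41 * inv.M12 + m.M42 * inv.M22 + m.M43 * inv.M32);
            inv.M43 = -(m.M41 * inv.M13 + m.M42 * inv.M23 + m.M43 * inv.M33);

            return inv;
        }
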
    Tuesday, December 15, 2015 6:21 AM