Calculating world/room space pose transformation using AlignPointClouds

  • General discussion

  • I am attempting to track an object with a stationary camera. I am able to isolate the object from the scene, and the initial world/object-to-camera transform is available to me. To do this, I am trying to use the generic FusionDepthProcessor.AlignPointClouds() function to align two point clouds. This works fine when the observedToReferenceTransform parameter is set to Identity (I am tracking small motions), but of course the resulting transform is then the camera's relative pose, expressed in the camera-local frame of reference. I would instead like to obtain the object's relative transform in world/room coordinates, with the origin placed at the nominal object location. The documentation for AlignPointClouds() states that one must use Reconstruction.CalculatePointCloud() as the input reference frame to AlignPointClouds(), but I am not creating a volume. Any suggestions on how to achieve this would be much appreciated. (A sketch of the call sequence in question appears after the documentation excerpt below.)

    Relevant AlignPointClouds documentation:

    /// To calculate the frame-to-model pose transformation between point clouds calculated from 
    /// new depth frames with DepthFloatFrameToPointCloud and point clouds calculated from an 
    /// existing Reconstruction volume with CalculatePointCloud (e.g. from the previous frame),
    /// pass the CalculatePointCloud image as the reference frame, and the current depth frame 
    /// point cloud from DepthFloatFrameToPointCloud as the observed frame. Set the 
    /// <paramref name="observedToReferenceTransform"/> to the previous frame's calculated camera
    /// pose that was used in the CalculatePointCloud call.
    /// Note that here the current frame point cloud will be in the camera local frame of
    /// reference, whereas the raycast points and normals will be in the global/world coordinate
    /// system. By passing the <paramref name="observedToReferenceTransform"/> you make the 
    /// algorithm aware of the transformation between the two coordinate systems.
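
    For reference, here is a minimal sketch of the frame-to-frame call sequence described above, assuming the Microsoft.Kinect.Fusion .NET API (Kinect SDK 1.8/2.0). The class and method names, the image sizes, and the worldFromCamera/Multiply/Invert names mentioned in the comments are illustrative assumptions, not SDK members; exact overloads can differ between SDK versions.

    using Microsoft.Kinect.Fusion;

    static class FrameToFrameSketch
    {
        // Aligns the current depth frame against the previous frame's point cloud,
        // starting from an Identity guess. On success the result is the relative
        // camera motion, expressed in the camera-local frame of reference.
        public static Matrix4 AlignFrameToFrame(
            FusionFloatImageFrame depthFloat,          // current frame, already converted to a depth float frame
            FusionPointCloudImageFrame referenceCloud, // point cloud built from the previous frame
            int width, int height)
        {
            var observedCloud = new FusionPointCloudImageFrame(width, height);
            var deltaFromReference = new FusionColorImageFrame(width, height); // optional residual visualization

            // Current frame point cloud, in camera-local coordinates.
            FusionDepthProcessor.DepthFloatFrameToPointCloud(depthFloat, observedCloud);

            // Align observed (current) to reference (previous), starting from Identity.
            Matrix4 observedToReference = Matrix4.Identity;
            bool converged = FusionDepthProcessor.AlignPointClouds(
                referenceCloud,
                observedCloud,
                FusionDepthProcessor.DefaultAlignIterationCount,
                deltaFromReference,
                ref observedToReference);

            if (!converged)
            {
                return Matrix4.Identity; // tracking failed for this frame; caller decides how to recover
            }

            // The delta above is camera-relative. Expressing it in world/room coordinates
            // amounts to conjugating it with the known world-from-camera transform:
            //   deltaWorld = worldFromCamera * observedToReference * inverse(worldFromCamera)
            // (the multiplication order depends on the row/column-vector convention in use).
            // Matrix4 has no multiply/inverse members, so those helpers would have to be
            // written separately; they are not part of the SDK.
            return observedToReference;
        }
    }

    Note that without a model or volume to align against, small per-frame alignment errors accumulate over time.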


    Tuesday, March 3, 2015 5:43 AM

All replies

  • If you are not using a reconstruction then you cannot use the Fusion library. Fusion isn't designed for general-purpose point cloud alignment; it will always base the calculation on a world space of identity, where the origin aligns with the top left of the first depth frame and Z goes into the reconstruction space.

    If you need a general point cloud library, you might want to look at PCL (the Point Cloud Library) as an alternative.
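
    For comparison, here is a minimal sketch of the reconstruction-backed (frame-to-model) usage that the AlignPointClouds documentation quoted in the question describes, again assuming the Microsoft.Kinect.Fusion .NET API. The class and method names and the volume parameters below are illustrative placeholders, not values from the thread.

    using Microsoft.Kinect.Fusion;

    static class VolumeTrackingSketch
    {
        // Creates a volume. The initial world-to-camera transform supplied here is
        // how the world frame can be placed somewhere other than the default
        // (identity at the first camera pose).
        public static Reconstruction CreateVolume(Matrix4 initialWorldToCamera)
        {
            var parameters = new ReconstructionParameters(256f, 384, 384, 384); // voxels per meter, X/Y/Z voxel counts
            return Reconstruction.FusionCreateReconstruction(
                parameters, ReconstructionProcessor.Amp, -1, initialWorldToCamera);
        }

        // Frame-to-model tracking as in the quoted documentation: the reference cloud
        // is raycast from the volume at the previous pose (world coordinates), the
        // observed cloud comes from the current depth frame (camera-local), and the
        // previous pose seeds observedToReferenceTransform so the algorithm knows how
        // the two coordinate systems relate.
        public static Matrix4 TrackAgainstVolume(
            Reconstruction volume,
            FusionFloatImageFrame depthFloat, // current frame, already converted to a depth float frame
            Matrix4 previousWorldToCamera,
            int width, int height)
        {
            var referenceCloud = new FusionPointCloudImageFrame(width, height);
            var observedCloud = new FusionPointCloudImageFrame(width, height);

            volume.CalculatePointCloud(referenceCloud, previousWorldToCamera);
            FusionDepthProcessor.DepthFloatFrameToPointCloud(depthFloat, observedCloud);

            Matrix4 pose = previousWorldToCamera;
            bool converged = FusionDepthProcessor.AlignPointClouds(
                referenceCloud,
                observedCloud,
                FusionDepthProcessor.DefaultAlignIterationCount,
                null,         // delta-from-reference visualization not needed here
                ref pose);

            // The SDK samples use the updated transform as the new world-to-camera
            // pose on success; on failure, keep the previous pose.
            return converged ? pose : previousWorldToCamera;
        }
    }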


    Carmine Sirignano - MSFT

    Tuesday, March 3, 2015 6:07 PM