MapDepthFrameToCameraSpace using DepthCameraIntrinsics Problem

  • Question

  • Recently, I have been trying to map a depth frame to camera space offline using the depth camera intrinsics. However, my results do not match those of the CoordinateMapper API when I use the following method.

    // Normalized image coordinates relative to the principal point
    // (image y is flipped so that camera-space Y points up)
    float ux = (src_x - cameraParams.PrincipalPointX) / cameraParams.FocalLengthX;
    float uy = (cameraParams.PrincipalPointY - src_y) / cameraParams.FocalLengthY;
    // Radial distortion factor: 1 + k2*r^2 + k4*r^4 + k6*r^6
    float r = sqrt(ux*ux + uy*uy);
    float cof_xy = 1.0f + cameraParams.RadialDistortionSecondOrder*pow(r, 2)
                        + cameraParams.RadialDistortionFourthOrder*pow(r, 4)
                        + cameraParams.RadialDistortionSixthOrder*pow(r, 6);
    // Scale by the measured depth to get camera-space X and Y
    float dest_x = ux*cof_xy*src_depth;
    float dest_y = uy*cof_xy*src_depth;

    where src_x and src_y are the pixel coordinates in the depth image, and dest_x and dest_y are my camera-space X and Y results, in the same coordinate system that the CoordinateMapper API uses. From what I have tested, the results are not the same as the API's.

    For example:

                   X                Y
    API    3.52004266    1061.93689
    My     3.54410505    1069.19617

    Has anyone worked with this before? Any suggestions or thoughts?

    MSDN forum reply

    Monday, March 2, 2015 4:24 PM