ICoordinateMapper Consistency?

  • Question

  • Do the results of ICoordinateMapper ever change at run time for a given device (ignoring unusual cases such as the device being disconnected)?

    I would like to build a UV field so that I can map between the Color and Depth frames in both directions. Building the UV field is relatively expensive to do every frame; would it be fine to populate it once per sensor open?

    Tuesday, September 30, 2014 8:30 PM

All replies

  • The correct way to implement this would be to subscribe to the CoordinateMappingChanged event. You don't want to do it right at sensor open, because the data is read from the device and will not be available until slightly after open. Also, if you play back data from Kinect Studio, you will get a change event (we fault in the coordinate mapping parameters from the sensor that made the original recording).
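
    The caching pattern described here can be sketched as a minimal, self-contained model. A boolean flag stands in for the CoordinateMappingChanged notification (the real API, assumed here, is ICoordinateMapper::SubscribeCoordinateMappingChanged from Kinect.h); the point is only that the expensive UV-field build runs once per change, not once per frame:

    ```cpp
    #include <iostream>

    // Minimal model: the flag plays the role of the CoordinateMappingChanged
    // event, which fires once shortly after sensor open (and again on playback).
    struct MapperModel {
        bool mappingChanged = true;
    };

    // Rebuild the UV table only when the event has fired; returns how many
    // rebuilds were actually performed across `frames` frames.
    int CountUvRebuilds(MapperModel& mapper, int frames) {
        int rebuilds = 0;
        for (int f = 0; f < frames; ++f) {
            if (mapper.mappingChanged) {
                ++rebuilds;                   // expensive UV-field build happens here
                mapper.mappingChanged = false;
            }
            // per-frame work reuses the cached UV field
        }
        return rebuilds;
    }

    int main() {
        MapperModel mapper;
        std::cout << CountUvRebuilds(mapper, 100) << "\n";  // prints 1, not 100
        return 0;
    }
    ```

    With the real SDK, the handler for the subscribed event would set the equivalent of this flag and the per-frame loop would stay unchanged.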

    Chris White _MSFT_

    Thursday, October 2, 2014 6:25 AM
  • What would be the best way to get translation-corrected images?

    i.e. to get the image from the Color stream that lies (more or less) on top of the Depth stream, and vice versa?

    Why does the ICoordinateMapper::MapColorFrameToDepthSpace method take depth frame data rather than colour frame data (i.e. the data acquired from the underlying buffer of the Color frame)?

    Thursday, October 2, 2014 5:00 PM
  • Similarly how do I use the MapColorFrameToDepthSpace Method?

    I keep receiving "E_INVALIDARG One or more arguments are invalid.".

    Thursday, October 2, 2014 8:13 PM
  • Similarly how do I use the MapColorFrameToDepthSpace Method?

    I keep receiving "E_INVALIDARG One or more arguments are invalid.".

    OK, figured it out: the call needs specific sizes for the depthDataPointCount and depthPointCount parameters.
    Thursday, October 2, 2014 9:09 PM
  • I have the same problem.

    I can't understand what "depthDataPointCount" means. 

    Could anyone tell me the difference between "depthDataPointCount" and "depthPointCount"?

    Examples using MapColorFrameToDepthSpace would also be helpful.

    Thanks.

    Wednesday, October 29, 2014 7:10 AM
  • Here is a snippet from an earlier discussion regarding the method MapColorFrameToDepthSpace(), originally mentioned Dec. 18, 2013:

    "The API as provided incorrectly states the parameters as:

    public:
    HRESULT MapColorFrameToDepthSpace(
         UINT depthDataPointCount,
         _In_reads_(depthDataPointCount)UINT16 *depthFrameData,
         UINT depthPointCount,
         _Out_writes_all_(depthPointCount)DepthSpacePoint *depthSpacePoints
    )

    When I passed in parameters as specified here (depthPointCount = 512*424), every point in the color frame returned a not found depth point.  However, when changing the third parameter to be the number of color points in the frame the mapping works as desired.  The correct API should be:

    public:
    HRESULT MapColorFrameToDepthSpace(
         UINT depthPointCount,
         _In_reads_(depthPointCount)UINT16 *depthFrameData,
         UINT colorPointCount,
         _Out_writes_all_(colorPointCount)DepthSpacePoint *depthSpacePoints
    )

    where depthSpacePoints is the same size as colorPointCount (1920*1080).  As before, depthPointCount is equal to the number of depth points (512*424) and depthFrameData is of the same size.  Now the points in the color frame are correctly mapped to a depth coordinate.  Note as specified elsewhere that the extreme left and right of the color frame do not return a depth point due to the aspect ratio difference in the two cameras.  "

    ----------------------

    Then on Oct. 6, 2014, Chris White from MSFT provided the following response:

    "I understand why the API parameter names are misleading, and will work to get the documentation updated. 

    The parameter name is technically the correct one from an API design perspective... That parameter is defining the number of depth space points which will be returned by the mapping function. 

    It is a constraint that the value passed in for that parameter must match the number of color pixels to be mapped.  Unfortunately that constraint is not clearly communicated in the documentation."


    Wednesday, October 29, 2014 1:08 PM
  • Thank you for your kind reply; I finally found the solution.

    As mentioned, even though the SDK documentation may use technically correct terms, it is hard to understand. I would like to see a more detailed description of the input/output parameters (e.g. depthDataPointCount, depthPointCount).

    Here is my final working source code.

    m_pDepthCoordinates = new DepthSpacePoint[1920*1080];

    HRESULT hr = m_pCoordinateMapper->MapColorFrameToDepthSpace(512*424, (UINT16*)m_depthImage, 1920*1080, m_pDepthCoordinates);

    If the memory allocated for depthSpacePoints is too small (e.g. m_pDepthCoordinates = new DepthSpacePoint[512*424];), MapColorFrameToDepthSpace returns E_INVALIDARG.

    Thursday, October 30, 2014 12:14 AM