What is the meaning of the result of INuiCoordinateMapper::GetColorToDepthRelationalParameters?

  • Question

  • I wish to calculate a view projection matrix for 3D rendering from the point of view of the Kinect's color camera, in order to overlay the display. Unfortunately, INuiCoordinateMapper has no function to convert from single points in the depth image to points in the color frame. If it existed, I could project the corners of the depth frame into color-frame space and use the results to approximate a projection for the color frame. So the only remaining possibility seems to be the data returned by GetColorToDepthRelationalParameters; however, there is no reference for the meaning of this function's result. Could a Microsoft employee enlighten me?
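    For concreteness, here is roughly the matrix I am trying to build. This is only a sketch under my own assumptions: a pinhole model using the SDK's nominal color focal length (NuiImageCamera.h gives 531.15 pixels at 640x480), hard-coded here so the snippet stands alone.

    ```cpp
    // Nominal focal length of the Kinect color camera at 640x480, in pixels
    // (the SDK header names this NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS;
    // hard-coded so the sketch compiles without NuiImageCamera.h).
    const float kColorFocalPx = 531.15f;

    // Build an OpenGL-style, column-major perspective projection matrix for a
    // pinhole camera with the given focal length, image size and clip planes.
    void colorCameraProjection(float focalPx, float width, float height,
                               float zNear, float zFar, float out[16])
    {
        for (int i = 0; i < 16; ++i) out[i] = 0.0f;
        out[0]  = 2.0f * focalPx / width;            // x scale from horizontal FOV
        out[5]  = 2.0f * focalPx / height;           // y scale from vertical FOV
        out[10] = -(zFar + zNear) / (zFar - zNear);  // map z into clip space
        out[14] = -2.0f * zFar * zNear / (zFar - zNear);
        out[11] = -1.0f;                             // perspective divide on -z
    }
    ```

    Note this assumes the principal point sits at the image center, which is exactly the kind of detail I was hoping to pull out of the relational parameters.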
    Thursday, March 14, 2013 11:13 AM

Answers

  • Could you use MapColorFrameToDepthFrame and then just examine the points you're interested in? It performs the mapping for the entire frame, but if you need to call it only once (which seems to be the case, based on the scenario you describe), the expense of a single call shouldn't be prohibitive.
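    For example, something along these lines — a sketch only, with the SDK's NUI_DEPTH_IMAGE_POINT stubbed by a local struct so the snippet is self-contained (the actual MapColorFrameToDepthFrame call appears in the comment):

    ```cpp
    // Stand-in for the SDK's NUI_DEPTH_IMAGE_POINT, stubbed here so this
    // compiles without NuiApi.h. The real call that fills the array would be
    // roughly:
    //   mapper->MapColorFrameToDepthFrame(NUI_IMAGE_TYPE_COLOR,
    //       NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480,
    //       640 * 480, depthPixels, 640 * 480, depthPoints);
    struct DepthImagePoint { long x; long y; unsigned short depth; };

    // The output array holds one depth point per color pixel in row-major
    // order, so looking up a single color point is just index arithmetic.
    DepthImagePoint depthPointForColorPixel(const DepthImagePoint* depthPoints,
                                            int colorWidth, int colorX, int colorY)
    {
        return depthPoints[colorY * colorWidth + colorX];
    }
    ```

    That way you pay for the full-frame mapping once, then pick out only the corner points you care about.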


    John | Kinect for Windows development team

    Friday, March 15, 2013 10:55 PM

All replies

  • I believe the method you seek is CoordinateMapper.MapDepthPointToColorPoint.

    The data returned by GetColorToDepthRelationalParameters is a binary serialization of the internal state of the coordinate mapper, and is not documented. It exists so that you can instantiate a CoordinateMapper without having the specific Kinect unit that originally captured your depth and color data attached. For example, say you wanted to capture depth and color data and then post-process that data on another computer. You could use the saved relational-parameters blob to construct a CoordinateMapper at post-processing time; that CoordinateMapper would produce results identical to those of a CoordinateMapper obtained directly from the original sensor.
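    A sketch of that round trip, treating the blob as opaque bytes (the SDK 1.7 calls GetColorToDepthRelationalParameters and NuiCreateCoordinateMapperFromParameters appear only in the comment; the code itself just moves the bytes through a file):

    ```cpp
    #include <cstdio>

    // Round-trip an opaque parameter blob through a file. In a real program
    // the data would come from GetColorToDepthRelationalParameters on the
    // capture machine and be fed to NuiCreateCoordinateMapperFromParameters
    // on the post-processing machine; here we only save and restore it.
    bool saveBlob(const char* path, const void* data, unsigned long count)
    {
        FILE* f = fopen(path, "wb");
        if (!f) return false;
        bool ok = fwrite(&count, sizeof count, 1, f) == 1 &&
                  (count == 0 || fwrite(data, 1, count, f) == count);
        fclose(f);
        return ok;
    }

    // Returns the number of bytes restored into buffer, or 0 on failure.
    unsigned long loadBlob(const char* path, void* buffer, unsigned long capacity)
    {
        FILE* f = fopen(path, "rb");
        if (!f) return 0;
        unsigned long count = 0;
        if (fread(&count, sizeof count, 1, f) != 1 || count > capacity)
            count = 0;
        else if (fread(buffer, 1, count, f) != count)
            count = 0;
        fclose(f);
        return count;
    }
    ```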


    John | Kinect for Windows development team

    Thursday, March 14, 2013 8:04 PM
  • Sorry, I now notice that in my original question I said "convert from single points in the depth image to points in the color frame". I meant to say "convert from single points in the color frame to points in the depth frame". That way I can project the corners of the color frame into skeleton space and so determine the frustum. Apologies again for the confusion, but MapDepthPointToColorPoint is not what I need; I need MapColorPointToDepthPoint, which does not exist. I was hoping to use those internal parameters to perform the mapping manually.
    Friday, March 15, 2013 12:36 PM