CoordinateMapper MapColorFrameToDepthSpace and MapColorFrameToCameraSpace methods

  • Question

  • The method names imply we would pass color data in (byte), but the inbound parameters are declared as depth data (ushort).

    Is there a way to get clarification about these methods, and whether any changes are planned?

    Monday, January 19, 2015 9:09 PM

All replies

  • The function simply provides a lookup table that you can then use in your application to map a color point value to a particular depth/camera point. The coordinate space for the camera is defined in the documentation here:

    As for the depth frame, this is the 3D world space projected onto a plane of the sensor. The coordinate mapper basics sample provides code on how you can use the lookup table within your own application.
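    As a rough sketch of producing that lookup table (variable names here are illustrative, not from the sample, and this assumes the Kinect v2 SDK with a sensor attached):

        // One raw depth frame, and one DepthSpacePoint per color pixel.
        ushort[] depthData = new ushort[depthWidth * depthHeight];
        DepthSpacePoint[] colorMappedToDepthPoints =
            new DepthSpacePoint[colorWidth * colorHeight];

        depthFrame.CopyFrameDataToArray(depthData);

        // Fills the table so that colorMappedToDepthPoints[colorIndex]
        // holds the depth-frame x,y for that color pixel, or (-inf, -inf)
        // when no depth pixel corresponds to it.
        sensor.CoordinateMapper.MapColorFrameToDepthSpace(
            depthData, colorMappedToDepthPoints);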

    Carmine Sirignano - MSFT

    Tuesday, January 20, 2015 6:22 PM
  • Thank you. But I'm not sure how your response addresses my question.

    What doesn't make sense is that these methods imply you are mapping color data to depth data, but inbound parameters are depth data not color data.

    Are the methods names misleading or are the method parameters incorrectly declared in the API?

    Tuesday, January 20, 2015 8:01 PM
  • The naming is unfortunate, but the result is a lookup table. You need to take the resulting table, as demonstrated in the sample, and look up the x,y offset to get the color pixel value. Every depth/camera pixel x,y in that table will have a resulting CameraSpacePoint/DepthSpacePoint value (some x/y offset into the color frame). Look that x,y up in the color frame, as long as it falls within the bounds of the color frame.

    This is demonstrated in the sample:

    // Loop over each row and column of the color image
    // Zero out any pixels that don't correspond to a body index
    for (int colorIndex = 0; colorIndex < colorMappedToDepthPointCount; ++colorIndex)
    {
        float colorMappedToDepthX = colorMappedToDepthPointsPointer[colorIndex].X;
        float colorMappedToDepthY = colorMappedToDepthPointsPointer[colorIndex].Y;

        // The sentinel value is -inf, -inf, meaning that no depth pixel corresponds to this color pixel.
        if (!float.IsNegativeInfinity(colorMappedToDepthX) &&
            !float.IsNegativeInfinity(colorMappedToDepthY))
        {
            // Make sure the depth pixel maps to a valid point in color space
            int depthX = (int)(colorMappedToDepthX + 0.5f);
            int depthY = (int)(colorMappedToDepthY + 0.5f);

            // If the point is not valid, there is no body index there.
            if ((depthX >= 0) && (depthX < depthWidth) && (depthY >= 0) && (depthY < depthHeight))
            {
                int depthIndex = (depthY * depthWidth) + depthX;

                // If we are tracking a body for the current pixel, do not zero out the pixel
                if (bodyIndexDataPointer[depthIndex] != 0xff)
                {
                    continue;
                }
            }
        }

        bitmapPixelsPointer[colorIndex] = 0;
    }

    Carmine Sirignano - MSFT

    Wednesday, January 21, 2015 7:30 PM
  • Thank you, Carmine. I am familiar with the coordinate mapping sample. My question was more about clarifying purpose and documentation; I wanted to make sure we understood correctly before explaining this to clients. Would the following be a more accurate description (and possible method rename)?

    Uses the depth frame data to map the entire frame from color space to depth space.
    ICoordinateMapper::MapDepthFrameColorSpaceToDepthSpace Method

    Uses the depth frame data to map the entire frame from color space to camera space.
    ICoordinateMapper::MapDepthFrameColorSpaceToCameraSpace Method

    Thursday, January 22, 2015 3:49 PM
  • It would be more accurate to say it "provides the lookup table/mapping of color values in relation to the DepthSpace pixels/CameraSpacePoints".
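    The camera-space variant has the same shape; as a sketch (illustrative names, Kinect v2 SDK assumed):

        // One CameraSpacePoint (x, y, z in meters) per color pixel.
        CameraSpacePoint[] colorMappedToCameraPoints =
            new CameraSpacePoint[colorWidth * colorHeight];
        mapper.MapColorFrameToCameraSpace(depthData, colorMappedToCameraPoints);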

    Carmine Sirignano - MSFT

    Thursday, January 22, 2015 6:52 PM
  • I understand the use of the function MapColorFrameToDepthSpace as shown in the Coordinate Mapper Basics sample and as described above. For every (x,y) pixel in the color frame, it provides a mapping (X,Y) onto the depth frame.

    But I wonder why it should take pDepthBuffer as an input at all. This mapping should be a constant irrespective of the depth frame data. Wouldn't it suffice just to provide a simple map between the color and depth frames?

    For example, if I have two depth buffers, one full of values 600 and one full of values 2500, MapColorFrameToDepthSpace should return the same map between the color and depth frames in both cases.

    Isn't this true?
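    One way to check this directly would be to map two synthetic depth buffers and compare the resulting tables (a sketch against the SDK; variable names are illustrative):

        ushort[] near = new ushort[depthWidth * depthHeight];
        ushort[] far = new ushort[depthWidth * depthHeight];
        for (int i = 0; i < near.Length; ++i) { near[i] = 600; far[i] = 2500; }

        var mapNear = new DepthSpacePoint[colorWidth * colorHeight];
        var mapFar = new DepthSpacePoint[colorWidth * colorHeight];
        mapper.MapColorFrameToDepthSpace(near, mapNear);
        mapper.MapColorFrameToDepthSpace(far, mapFar);

        // If the mapping were independent of depth, every entry of
        // mapNear and mapFar would be identical.
        bool identical = true;
        for (int i = 0; i < mapNear.Length; ++i)
        {
            if (mapNear[i].X != mapFar[i].X || mapNear[i].Y != mapFar[i].Y)
            {
                identical = false;
                break;
            }
        }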


    Thursday, June 11, 2015 5:14 PM