MapDepthFrameToColorFrame does not properly align

  • Question

  • Hi, I'm having a problem with the KinectSensor.MapDepthFrameToColorFrame function. I don't know whether this applies only to the Kinect for Xbox 360 (I don't have a Kinect for Windows yet), but the mapped color image does not line up with the depth image exactly. I'm calling this function on every depth image update, but the mapped color image is misaligned from the depth image by about 5–10 pixels. I'm using 640x480 at 30 FPS for both the color and depth streams. The coding environment is C#. Any suggestions or help will be appreciated. Thanks
    Sunday, February 5, 2012 4:04 AM


All replies

  • The mapped frame is the depth frame mapped to color space, which should align with the color image, not the depth image.
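
    Put another way, the ColorImagePoint for depth sample i tells you where that sample lands in the color image, so the overlay should be drawn into a buffer with the color image's dimensions. A rough sketch (overlayPixels, colorWidth, and colorHeight are illustrative names for fields sized to the color stream):

    sensor.MapDepthFrameToColorFrame(sensor.DepthStream.Format, depthPixelData,
        sensor.ColorStream.Format, colorMapPoints);

    for (int i = 0; i < depthPixelData.Length; ++i)
    {
        ColorImagePoint p = colorMapPoints[i];

        // mapped coordinates can fall slightly outside the color frame; skip those
        if (p.X < 0 || p.X >= colorWidth || p.Y < 0 || p.Y >= colorHeight)
            continue;

        // strip the player index bits to get the distance in millimeters
        short depth = (short)(depthPixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth);

        overlayPixels[p.Y * colorWidth + p.X] = depth;
    }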

    Monday, February 6, 2012 4:31 PM
  • Hi David,

    Thank you for the quick response. I have changed my code to map the depth frame to the color frame instead of the other way around, but I'm still getting misalignment, especially for fingers that are around 1 meter away from the Kinect.

    Here's my code. Would you mind letting me know what I'm doing wrong?

    void DepthImageReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        if (mode == DisplayMode.Color || depthImageReady)
            return;

        using (DepthImageFrame imageFrame = e.OpenDepthImageFrame())
        {
            if (imageFrame == null)
                return;

            int tooNearDepth = sensor.DepthStream.TooNearDepth;
            int tooFarDepth = sensor.DepthStream.TooFarDepth;
            int unknownDepth = sensor.DepthStream.UnknownDepth;

            int alpha = 255 << 24;                   // fully opaque

            int tooFarColor = alpha | 0x000000FF;    // blue
            int tooNearColor = alpha | 0x00FF0000;   // red
            int unknownColor = alpha | 0x0000FF00;   // green

            if (depthPixelData == null)
            {
                depthPixelData = new short[imageFrame.PixelDataLength];
                colorMapPoints = new ColorImagePoint[depthPixelData.Length];
            }

            imageFrame.CopyPixelDataTo(depthPixelData);

            if (mode == DisplayMode.Overlay)
            {
                sensor.MapDepthFrameToColorFrame(sensor.DepthStream.Format, depthPixelData,
                    sensor.ColorStream.Format, colorMapPoints);
            }

            short distance = 0;
            byte grayscale = 0;
            int depthIndex = 0;
            int x = 0;
            int y = 0;

            // depthData is a class-level int[] holding the output ARGB pixels;
            // iterate over the source depth pixels (was depthData.Length)
            for (int i = 0; i < depthPixelData.Length; ++i)
            {
                // strip the player index bits to get the distance in millimeters
                distance = (short)(depthPixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth);

                if (mode == DisplayMode.Overlay)
                {
                    // clamp both coordinates: mapped points can land outside the
                    // frame (assumes depth and color are both 640x480, so the
                    // depth frame's Width/Height also describe the color frame)
                    x = Math.Min(Math.Max(colorMapPoints[i].X, 0), imageFrame.Width - 1);
                    y = Math.Min(Math.Max(colorMapPoints[i].Y, 0), imageFrame.Height - 1);

                    depthIndex = (y * imageFrame.Width) + x;
                }
                else
                {
                    depthIndex = i;
                }

                if (distance == tooFarDepth)
                    depthData[depthIndex] = tooFarColor;
                else if (distance == tooNearDepth)
                    depthData[depthIndex] = tooNearColor;
                else if (distance == unknownDepth)
                    depthData[depthIndex] = unknownColor;
                else
                {
                    grayscale = CalculateIntensityFromDepth(distance);
                    depthData[depthIndex] = alpha | (grayscale << 16) | (grayscale << 8) | grayscale;
                }
            }
        }
    }

    And here's an example image of the depth frame overlaid on the color frame. The white boundary of the hand shape does not exactly match the hand shape in the color frame.

    [Image: finger misalignment]

    Tuesday, February 7, 2012 1:17 AM
  • Have you looked at using the AllFramesReady event in v1?

    AllFramesReady delivers matching color, depth, and/or skeleton frames. Using separate callbacks for depth and color means the two frames you compare may come from slightly different capture times. Alternatively, you could use a polling approach to pull both color and depth frames at the same time.
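
    A minimal sketch of the event-based approach (depthPixelData, colorPixelData, and colorMapPoints here are assumed to be fields already sized to the two streams):

    sensor.AllFramesReady += SensorAllFramesReady;

    void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        {
            // either frame can be null if it was skipped; wait for the next event
            if (depthFrame == null || colorFrame == null)
                return;

            depthFrame.CopyPixelDataTo(depthPixelData);
            colorFrame.CopyPixelDataTo(colorPixelData);

            // both frames come from the same capture instant, so the
            // depth-to-color mapping is consistent between them
            sensor.MapDepthFrameToColorFrame(depthFrame.Format, depthPixelData,
                colorFrame.Format, colorMapPoints);

            // ... render the overlay ...
        }
    }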

    Wednesday, February 8, 2012 9:01 PM
  • Thanks for the response! I didn't realize there was an AllFramesReady event in v1; I was using the old way of getting the depth and color frames separately. However, I still see a small misalignment (3–5 pixels). Is it a known issue that the frames won't align perfectly?

    Wednesday, February 8, 2012 9:16 PM
  • Using AllFramesReady ensures that you are comparing frames that were captured at the same point in time. That said, "pixel accuracy" when visualizing depth data is a bit of a misnomer: depth data represents distance. This post explains that a bit more:
    http://social.msdn.microsoft.com/Forums/en-US/kinectsdknuiapi/thread/06c8d108-8819-47f0-b40b-5ddd81fcb250

    Conversion of a depth sample to a pixel in the color image may not be 100% exact. What I think you are seeing is a bit of IR shadowing: when the depth camera cannot determine a distance, that pixel is reported as unknown.
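
    If that's the case, one thing you could try (just a sketch of the idea) is to skip unknown-depth pixels inside your overlay loop, so the IR shadow stays transparent rather than being drawn as part of the hand:

    short distance = (short)(depthPixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth);

    // pixels inside the IR shadow come back as UnknownDepth; skipping them
    // keeps the shadow from being drawn as part of the silhouette
    if (distance == sensor.DepthStream.UnknownDepth)
        continue;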

    Thursday, February 9, 2012 1:23 AM
  • Understood. Thanks!!
    Thursday, February 9, 2012 1:29 AM