Kinect for Windows v2 depth to color image misalignment

  • Question

  • Currently I am developing a tool for the Kinect for Windows v2 (similar to the sensor in the Xbox One). I followed some examples and have a working sample that shows the camera image, the depth image, and an image that maps the depth to the RGB image using OpenCV. However, my hand is duplicated when doing the mapping, and I think it is due to something wrong in the coordinate mapper part.

    Here is an example of it:


    Here is a code snippet of the function that creates the mapped image (the "rgbd" frame in the example):

    void KinectViewer::create_rgbd(cv::Mat& depth_im, cv::Mat& rgb_im, cv::Mat& rgbd_im){
        // Map every depth pixel to its corresponding coordinate in the color image.
        HRESULT hr = m_pCoordinateMapper->MapDepthFrameToColorSpace(cDepthWidth * cDepthHeight, (UINT16*)depth_im.data, cDepthWidth * cDepthHeight, m_pColorCoordinates);
        rgbd_im = cv::Mat::zeros(depth_im.rows, depth_im.cols, CV_8UC3);
        if (FAILED(hr)) return;
        double minVal, maxVal;
        cv::minMaxLoc(depth_im, &minVal, &maxVal);
        for (int i = 0; i < cDepthHeight; i++){
            for (int j = 0; j < cDepthWidth; j++){
                UINT16 d = depth_im.at<UINT16>(i, j);
                // Keep only depth values inside the [min_z, max_z] band (percentages of the maximum depth).
                if (d > 0 && d < maxVal * (max_z / 100) && d > maxVal * min_z / 100){
                    ColorSpacePoint colorPoint = m_pColorCoordinates[i * cDepthWidth + j];
                    int colorX = (int)(floor(colorPoint.X + 0.5));
                    int colorY = (int)(floor(colorPoint.Y + 0.5));
                    // Sample the color image only if the mapped point falls inside its bounds.
                    if ((colorX >= 0) && (colorX < cColorWidth) && (colorY >= 0) && (colorY < cColorHeight))
                    {
                        rgbd_im.at<cv::Vec3b>(i, j) = rgb_im.at<cv::Vec3b>(colorY, colorX);
                    }
                }
            }
        }
    }
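
    For reference, the class members this snippet relies on are declared roughly like this (typed from memory, following the Kinect SDK samples; the sizes are the standard Kinect v2 frame resolutions):

    static const int cDepthWidth  = 512;    // Kinect v2 depth/IR frame size
    static const int cDepthHeight = 424;
    static const int cColorWidth  = 1920;   // Kinect v2 color frame size
    static const int cColorHeight = 1080;

    ICoordinateMapper* m_pCoordinateMapper;  // obtained from IKinectSensor::get_CoordinateMapper
    ColorSpacePoint*   m_pColorCoordinates;  // allocated as new ColorSpacePoint[cDepthWidth * cDepthHeight]
    float min_z, max_z;                      // near/far thresholds as percentages of the maximum depth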


    Does anyone have a clue how to solve this? How can I prevent this duplication?

    Thanks in advance

    Thursday, September 11, 2014 1:29 PM

All replies

  • From your screenshot I believe the coordinate mapper is working properly. The duplication is a result of the color and infrared cameras being physically offset from each other. Each camera can see parts of the scene that the other can't.

    The larger "duplicate" hand is essentially being projected onto the back wall. The color camera can't see the same part of the wall that the infrared camera can, so the depth pixels are sampling the color from both the hand and that part of of the wall.

    You can test this by moving your hand closer to the wall, or putting a piece of paper directly behind your hand. The double image will be greatly reduced because it's not being "projected" as far.

    As for preventing the duplication, that is tricky. You would need a way to figure out which pixels are being sampled twice, or check for overlap in the depth image from the color camera's point of view (see the sketch below). Afterwards you also have to decide what to do with the pixels that the color camera simply can't see. Do you make them transparent, or have the app remember the color values from previous frames and reconstruct them?

    Hope that helps somewhat!
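
    To make the overlap check concrete, here is a minimal sketch of the z-buffer idea (untested; the function name is just illustrative, it reuses the member names from your snippet, and it assumes depth values are in millimetres). For every color pixel it keeps only the closest depth sample that maps there; depth pixels that lose the test are occluded from the color camera's point of view and stay black instead of picking up the hand's color:

    void KinectViewer::create_rgbd_occlusion_aware(cv::Mat& depth_im, cv::Mat& rgb_im, cv::Mat& rgbd_im){
        HRESULT hr = m_pCoordinateMapper->MapDepthFrameToColorSpace(cDepthWidth * cDepthHeight, (UINT16*)depth_im.data, cDepthWidth * cDepthHeight, m_pColorCoordinates);
        rgbd_im = cv::Mat::zeros(depth_im.rows, depth_im.cols, CV_8UC3);
        if (FAILED(hr)) return;

        // Pass 1: build a z-buffer in color space, initialised to "far away".
        cv::Mat zbuf(cColorHeight, cColorWidth, CV_16UC1, cv::Scalar(0xFFFF));
        for (int i = 0; i < cDepthHeight; i++){
            for (int j = 0; j < cDepthWidth; j++){
                UINT16 d = depth_im.at<UINT16>(i, j);
                if (d == 0) continue;
                ColorSpacePoint cp = m_pColorCoordinates[i * cDepthWidth + j];
                int cx = (int)(floor(cp.X + 0.5));
                int cy = (int)(floor(cp.Y + 0.5));
                if (cx < 0 || cx >= cColorWidth || cy < 0 || cy >= cColorHeight) continue;
                if (d < zbuf.at<UINT16>(cy, cx)) zbuf.at<UINT16>(cy, cx) = d;
            }
        }

        // Pass 2: sample color only for depth pixels that won (or nearly won) the z-test.
        const UINT16 tolerance = 30;  // assumed slack in mm for noise and rounding; tune to taste
        for (int i = 0; i < cDepthHeight; i++){
            for (int j = 0; j < cDepthWidth; j++){
                UINT16 d = depth_im.at<UINT16>(i, j);
                if (d == 0) continue;
                ColorSpacePoint cp = m_pColorCoordinates[i * cDepthWidth + j];
                int cx = (int)(floor(cp.X + 0.5));
                int cy = (int)(floor(cp.Y + 0.5));
                if (cx < 0 || cx >= cColorWidth || cy < 0 || cy >= cColorHeight) continue;
                if (d <= zbuf.at<UINT16>(cy, cx) + tolerance)
                    rgbd_im.at<cv::Vec3b>(i, j) = rgb_im.at<cv::Vec3b>(cy, cx);
            }
        }
    }

    Because the color frame (1920x1080) is much larger than the depth frame (512x424), most z-buffer cells only ever receive a single sample, so in practice you will want to splat each depth sample into a small neighbourhood of the z-buffer (or build the buffer at a lower resolution) so that the occluded wall samples actually compete with the hand samples.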

    • Proposed as answer by sam598 Thursday, September 11, 2014 8:55 PM
    Thursday, September 11, 2014 8:55 PM
  • I ran into a similar issue while working on a 3D scan app.

    Mine is also a Kinect v2 for Windows.

    I noticed a strong disparity between RGB and depth in the preview, which is not the case on the Microsoft website.

    I can't upload an image because of the account restriction.

    Friday, June 30, 2017 12:15 AM
  • Same here: http://imgur.com/a/Xhp6j

    • Edited by cp_sn Wednesday, July 12, 2017 7:25 PM
    Wednesday, July 12, 2017 7:24 PM