Coordinate Mapping

  • Question

  • I want to create a depth image the same size as the color image, such that only the color locations with a valid depth pixel have an intensity value. I used the following code and obtained the result below. My question is: why isn't there a black band around the resulting depth image? The depth image is much smaller than the color image, so I expect the valid depth pixels (512x424) to sit centered in the final image below (which is 1920x1080). Am I doing something wrong in the mapping?

        HRESULT hr = m_ptrCoordinateMapper->MapColorFrameToDepthSpace(
            numDepthWidth * numDepthHeight, (UINT16*)ptrDepthBuffer,
            numColorWidth * numColorHeight, m_ptrColorToDepthCoordinates);
        if (SUCCEEDED(hr))
        {
            m_mappedColorToDepth = cv::Scalar(0);                          // clear the output image to zero
            for (int row = 0; row < numColorHeight; ++row)                 // loop over output (color-sized) pixels
            {
                for (int col = 0; col < numColorWidth; ++col)
                {
                    int colorIndex = row * numColorWidth + col;            // index into the mapped points
                    DepthSpacePoint p = m_ptrColorToDepthCoordinates[colorIndex];

                    // Negative infinity means there is no valid color-to-depth
                    // mapping for this pixel, so we skip it (it stays zero).
                    if (p.X != -std::numeric_limits<float>::infinity() &&
                        p.Y != -std::numeric_limits<float>::infinity())
                    {
                        int depthX = static_cast<int>(p.X + 0.5f);         // round to the nearest depth location
                        int depthY = static_cast<int>(p.Y + 0.5f);

                        if ((depthX >= 0 && depthX < numDepthWidth) && (depthY >= 0 && depthY < numDepthHeight))
                        {
                            UINT16 depthValue = ptrDepthBuffer[depthX + (depthY * numDepthWidth)];
                            m_mappedColorToDepth.at<UINT16>(row, col) = depthValue;
                        }
                    }
                }
            }
        }
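
    As a quick sanity check on the output (a minimal sketch reusing the buffers above; it needs <algorithm>, <limits>, and <cstdio>), one can measure the bounding box of the color pixels that received a valid mapping; if minRow comes out 0 and maxRow comes out numColorHeight - 1, the valid depth region really does span the full color height:

        // Bounding box of color pixels with a valid color-to-depth mapping.
        const float negInf = -std::numeric_limits<float>::infinity();
        int minRow = numColorHeight, maxRow = -1;
        int minCol = numColorWidth,  maxCol = -1;
        for (int row = 0; row < numColorHeight; ++row)
        {
            for (int col = 0; col < numColorWidth; ++col)
            {
                const DepthSpacePoint& p = m_ptrColorToDepthCoordinates[row * numColorWidth + col];
                if (p.X != negInf && p.Y != negInf)
                {
                    minRow = std::min(minRow, row); maxRow = std::max(maxRow, row);
                    minCol = std::min(minCol, col); maxCol = std::max(maxCol, col);
                }
            }
        }
        printf("valid region: rows %d..%d, cols %d..%d\n", minRow, maxRow, minCol, maxCol);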

    Wednesday, April 29, 2015 7:29 PM

All replies

  • The depth and color cameras have different fields of view, and the result will not be the same for every sensor. Each unit will be slightly different from the next, since the lenses can vary slightly.
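
    For reference, a minimal sketch (assuming the initialized ICoordinateMapper* m_ptrCoordinateMapper from the question) that reads back the per-sensor depth camera intrinsics, which is where this unit-to-unit variation shows up:

        // Per-sensor calibration values; these differ slightly between units.
        CameraIntrinsics intrinsics = {};
        HRESULT hr = m_ptrCoordinateMapper->GetDepthCameraIntrinsics(&intrinsics);
        if (SUCCEEDED(hr))
        {
            printf("focal length:   fx=%.2f fy=%.2f (depth pixels)\n",
                   intrinsics.FocalLengthX, intrinsics.FocalLengthY);
            printf("principal pt:   cx=%.2f cy=%.2f\n",
                   intrinsics.PrincipalPointX, intrinsics.PrincipalPointY);
            printf("radial distort: k2=%f k4=%f k6=%f\n",
                   intrinsics.RadialDistortionSecondOrder,
                   intrinsics.RadialDistortionFourthOrder,
                   intrinsics.RadialDistortionSixthOrder);
        }

    (On some setups the intrinsics read back as all zeros until depth frames have started streaming, so query them after the first frames arrive.)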

    Carmine Sirignano - MSFT

    Thursday, April 30, 2015 10:22 PM
  • Thank you Carmine. I understand that the color and IR cameras have different fields of view.

    However, since depth returns a 512x424 image and color returns a 1920x1080 image, the mapped region should have no way of spanning the full 1080-pixel height (y-axis) of the picture above. The width of the picture makes sense, since 512 depth pixels can sit centered within the 1920-pixel color width.

    The only way I can see getting the image above is if the depth image effectively covers more height than the returned 512x424 suggests. That would account for the height of the image, since more color pixels would then overlap the IR field of view. Is this reasoning correct, or am I missing some of the mathematics of the color-to-depth mapping? Can anyone expand on this math?
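
    As a rough check on exactly this question, here is a back-of-the-envelope sketch using commonly cited Kinect v2 field-of-view figures (about 70.6x60 degrees for depth and 84.1x53.8 degrees for color; these numbers are an assumption, not from this thread, and the simple pinhole model below ignores lens distortion and the physical offset between the two cameras):

        #include <cmath>
        #include <cstdio>

        int main()
        {
            const double pi = 3.14159265358979323846;
            // Half-extent of the view at unit distance for a given FOV (pinhole model).
            auto halfTan = [pi](double fovDeg) { return std::tan(fovDeg * pi / 360.0); };

            // Fraction of the color frame the depth FOV covers along each axis.
            double horiz = halfTan(70.6) / halfTan(84.1);   // ~0.79
            double vert  = halfTan(60.0) / halfTan(53.8);   // ~1.14

            std::printf("horizontal: depth covers ~%.0f of 1920 color px\n", horiz * 1920);
            std::printf("vertical:   depth covers ~%.0f vs 1080 color px\n",  vert * 1080);
            return 0;
        }

    The vertical ratio coming out above 1 is consistent with the missing top/bottom band: under these assumed figures, the depth camera sees a taller slice of the scene than the color camera does.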

    Thanks

    Friday, May 1, 2015 1:58 PM
  • The coordinate mapper APIs do a best fit. This was a request from the Developer Preview program we ran, where you would have had to do that fill-in yourself; it is done for you now.

    Carmine Sirignano - MSFT

    Tuesday, May 5, 2015 6:54 PM
  • On my sensor, the 512x424 depth image overlaps the top and bottom of the color image when mapped. So the color coordinate for the depth pixel at x=1, y=1 would be something like x=980, y=-200; since that maps to a negative y, the depth stream's height overlaps the color height.

    You'll notice that in your image the depth stream doesn't fill to the left or right either.

    So if you upsample the depth stream by 3 (which is what the image above looks like), 512x424 becomes 1536x1272, while color is 1920x1080: the aspect ratios don't line up.
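
    One way to see this on your own sensor (a minimal sketch reusing the m_ptrCoordinateMapper from the question; the 2000 mm depth is an arbitrary choice, since the mapping depends on depth) is to map the depth-frame corners into color space:

        // Map the four corners of the 512x424 depth frame into color space at a
        // nominal depth of 2000 mm. Corner y-values below 0 or above 1079 show
        // the depth FOV spilling past the color frame vertically.
        DepthSpacePoint corners[4] = {
            { 0.0f, 0.0f }, { 511.0f, 0.0f }, { 0.0f, 423.0f }, { 511.0f, 423.0f }
        };
        for (int i = 0; i < 4; ++i)
        {
            ColorSpacePoint cp = { 0.0f, 0.0f };
            if (SUCCEEDED(m_ptrCoordinateMapper->MapDepthPointToColorSpace(corners[i], 2000, &cp)))
            {
                printf("depth (%.0f, %.0f) -> color (%.1f, %.1f)\n",
                       corners[i].X, corners[i].Y, cp.X, cp.Y);
            }
        }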

    Wednesday, May 20, 2015 12:28 AM