MapColorFrameToDepthFrame in Kinect for Windows SDK v1.6 Coordinate Mapper

  • Question

  • Hello Everyone:

    I was trying to map a color point to a depth point and found the MapColorFrameToDepthFrame() function in the Kinect for Windows SDK documentation. That is where the confusion started.

    This function takes a parameter named depthPixels of type DepthImagePixel[], and I can't tell whether it is an input parameter or an output one. Shouldn't mapping from color take a color frame parameter instead of a depth one?

    Thanks for helping me puzzle this out.

    Best,
    Tony Chen

    Wednesday, November 7, 2012 11:00 PM

Answers

  • Hi There:

    I found the prototype of this function in its header, <NuiSensor.h>. It is declared as follows:

    HRESULT STDMETHODCALLTYPE MapColorFrameToDepthFrame(
                /* [in] */ NUI_IMAGE_TYPE eColorType,
                /* [in] */ NUI_IMAGE_RESOLUTION eColorResolution,
                /* [in] */ NUI_IMAGE_RESOLUTION eDepthResolution,
                /* [in] */ DWORD cDepthPixels,
                /* [size_is][in] */ NUI_DEPTH_IMAGE_PIXEL *pDepthPixels,
                /* [in] */ DWORD cDepthPoints,
                /* [size_is][out][in] */ NUI_DEPTH_IMAGE_POINT *pDepthPoints)

    Obviously pDepthPixels serves as an input parameter. I have also noticed that all the mapping functions in the coordinate mapper take this parameter, such as MapColorFrameToSkeletonFrame, MapDepthFrameToColorFrame and the others.

    I have tested MapColorFrameToDepthFrame by passing in the depth pixels retrieved from NuiImageFrameGetDepthImagePixelFrameTexture, which converts the packed USHORT depth values to NUI_DEPTH_IMAGE_PIXEL format, and I got the expected result as NUI_DEPTH_IMAGE_POINT values.

    It turns out that the color frame itself is unnecessary for the conversion, even though the function conceptually maps the entire color frame into depth space; I have verified that it also runs without the color camera. I think the algorithm behind the scenes simply projects the passed depth pixels into the target space. For MapColorFrameToDepthFrame it works backwards: it treats the depth pixels as if they came from the corresponding color frame and then guarantees that each color pixel gets a matching depth point.
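    To make that concrete, here is how I picture the output being laid out: one NUI_DEPTH_IMAGE_POINT per color pixel, row by row, so finding the depth pixel behind a given color pixel is just an index computation. This is only my own sketch of the idea; the helper name and the 640x480 sizes are assumptions, not something taken from the documentation:

    #include <NuiApi.h>  // NUI_DEPTH_IMAGE_POINT

    // Assumes depthPoints was filled by MapColorFrameToDepthFrame for a
    // 640x480 color frame: one NUI_DEPTH_IMAGE_POINT per color pixel.
    LONG DepthAtColorPixel(const NUI_DEPTH_IMAGE_POINT* depthPoints, int cx, int cy)
    {
        const int colorWidth = 640;
        const NUI_DEPTH_IMAGE_POINT& pt = depthPoints[cy * colorWidth + cx];

        // pt.x / pt.y are coordinates in the depth image and pt.depth is the
        // distance in millimeters; a depth of 0 means no valid sample maps here.
        return pt.depth;
    }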

    My problem has been solved. Hope this information helps.

    Regards,
    T. Chen


    • Edited by T. Chen Thursday, November 8, 2012 8:03 PM
    • Marked as answer by T. Chen Thursday, November 8, 2012 8:19 PM
    Thursday, November 8, 2012 7:41 PM

All replies

  • Bump! I would like to know how this method works as well.

    Are the depth data arguments filled with data to be passed to the method, or should they be filled by the method? (The single indirection suggests they are arguments and not returned data.) And where does the method get the color data from?

    I've looked in the GreenScreen sample, but it still uses the old NuiImageGetColorPixelCoordinatesFromDepthPixel, and a Google search only yields this one forum post and a lot of links to the MSDN Kinect documentation, which really isn't up to par and is downright faulty in some cases.

    Any help would be most appreciated.

    /asger

    Thursday, November 8, 2012 3:54 PM
  • How did you initialize the function? Can you post an example? 
    Thursday, November 8, 2012 9:36 PM
  • Hi Koffiman:

    To use the coordinate mapper in the NUI APIs, you first have to obtain an INuiCoordinateMapper instance, much like you instantiate the INuiSensor class. INuiSensor has a member function named NuiGetCoordinateMapper; call it and you get the mapper.

    Here is part of my code as an example:

    NUI_IMAGE_FRAME depthFrame;

    // Grab the next depth frame from the stream.
    if (FAILED(m_pNuiSensor->NuiImageStreamGetNextFrame(m_hDepthStream, dwMilliseconds, &depthFrame)))
        return;

    BOOL bNearMode;
    INuiFrameTexture* depthTex;

    // Get the frame data as extended depth pixels (NUI_DEPTH_IMAGE_PIXEL).
    if (FAILED(m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(m_hDepthStream, &depthFrame, &bNearMode, &depthTex)))
        return;

    NUI_LOCKED_RECT lockedRectDepth;

    depthTex->LockRect(0, &lockedRectDepth, NULL, 0);
    if (lockedRectDepth.Pitch != 0)
    {
        INuiCoordinateMapper* pMapper;
        NUI_DEPTH_IMAGE_POINT* depthPoints = new NUI_DEPTH_IMAGE_POINT[640 * 480];

        // The mapper comes from the sensor itself.
        m_pNuiSensor->NuiGetCoordinateMapper(&pMapper);

        // One depth point is written for each pixel of the color frame.
        pMapper->MapColorFrameToDepthFrame(
                NUI_IMAGE_TYPE_COLOR,
                NUI_IMAGE_RESOLUTION_640x480,
                NUI_IMAGE_RESOLUTION_640x480,
                640 * 480, (NUI_DEPTH_IMAGE_PIXEL*)lockedRectDepth.pBits,
                640 * 480, depthPoints);

        // ... use depthPoints here ...

        delete[] depthPoints;
        pMapper->Release();
    }

    depthTex->UnlockRect(0);
    m_pNuiSensor->NuiImageStreamReleaseFrame(m_hDepthStream, &depthFrame);

    Please do not use this code directly; it has been simplified to show the idea. m_pNuiSensor is a pointer to an INuiSensor object.
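    As a follow-up for the GreenScreen question above: at the point marked "use depthPoints here" in the snippet, you could, for example, build a per-color-pixel player mask. This is only a rough sketch of my own; the mask buffer, the loop and the 640x480 sizes are assumptions, not something taken from an SDK sample:

    // Sketch: mark which color pixels fall on a tracked player, using the
    // depth points produced by MapColorFrameToDepthFrame and the extended
    // depth pixels from the locked rect.
    const int width  = 640;
    const int height = 480;
    const NUI_DEPTH_IMAGE_PIXEL* depthPixels =
            (const NUI_DEPTH_IMAGE_PIXEL*)lockedRectDepth.pBits;

    BYTE* playerMask = new BYTE[width * height];

    for (int i = 0; i < width * height; ++i)
    {
        const NUI_DEPTH_IMAGE_POINT& pt = depthPoints[i];

        // Skip color pixels that have no usable depth sample behind them.
        if (pt.depth == 0 || pt.x < 0 || pt.x >= width || pt.y < 0 || pt.y >= height)
        {
            playerMask[i] = 0;
            continue;
        }

        // A non-zero player index means this color pixel shows a tracked player.
        playerMask[i] = (depthPixels[pt.y * width + pt.x].playerIndex != 0) ? 1 : 0;
    }

    // ... composite the color frame against a background using playerMask ...

    delete[] playerMask;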

    Regards,
    T. Chen






    • Edited by T. Chen Thursday, November 8, 2012 10:12 PM
    Thursday, November 8, 2012 10:07 PM