How to use MapColorFrameToSkeletonFrame

  • Question

  • Hello everyone,

    I'm using the Kinect Face Tracking Visualization project to do some interesting work. Once the Kinect tracks a face, it gives me a color image and some feature points, and I want to get the corresponding skeleton-space points for these feature points.

    I noticed there's an interface method, MapColorFrameToSkeletonFrame, which meets my need, but I'm having trouble using it:

    First, I added a new member, INuiCoordinateMapper* m_pMapper;, to the SingleFace class. Next I need to instantiate m_pMapper so I can call m_pMapper->MapColorFrameToSkeletonFrame() to get the final results. I didn't know how to instantiate it until I found that the INuiSensor interface has a method, NuiGetCoordinateMapper, which can do this. But now I need to instantiate INuiSensor, which is a tough question because the Face Tracking Visualization project doesn't use INuiSensor.

    So my questions are:

    1) Is there any way to map a color frame to a skeleton frame?

    2) If I use MapColorFrameToSkeletonFrame, how can I instantiate m_pMapper?

    3) If it must be instantiated through INuiSensor, how?

    Thanks a lot for your help.

    Best,

    Kang

    Wednesday, September 18, 2013 3:44 PM

Answers

  • Whether you have one sensor or multiple, use the enumeration method to get the instance of the sensor you are using.

    HRESULT CColorBasics::CreateFirstConnected()
    {
        INuiSensor * pNuiSensor;
        HRESULT hr;
    
        int iSensorCount = 0;
        hr = NuiGetSensorCount(&iSensorCount);
        if (FAILED(hr))
        {
            return hr;
        }
    
        // Look at each Kinect sensor
        for (int i = 0; i < iSensorCount; ++i)
        {
            // Create the sensor so we can check status, if we can't create it, move on to the next
            hr = NuiCreateSensorByIndex(i, &pNuiSensor);
            if (FAILED(hr))
            {
                continue;
            }
    
            // Get the status of the sensor, and if connected, then we can initialize it
            hr = pNuiSensor->NuiStatus();
            if (S_OK == hr)
            {
                m_pNuiSensor = pNuiSensor;
                break;
            }
    
            // This sensor wasn't OK, so release it since we're not using it
            pNuiSensor->Release();
        }
        return hr;
    }
    
    You need to do this because coordinate mapping is specific to each sensor, due to per-sensor calibration.
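
    Regarding NuiCreateCoordinateMapperFromParameters: as far as I know, its pData blob is meant to come from a mapper you previously serialized with INuiCoordinateMapper::GetColorToDepthRelationalParameters (e.g. for offline playback), so it does not remove the need for a sensor the first time around. A rough sketch, assuming pMapper is a mapper you already obtained from a sensor:

        // Sketch only: serialize the calibration blob from an existing mapper,
        // then recreate a mapper from it later without the sensor present.
        ULONG cbData = 0;
        void* pData = NULL;
        hr = pMapper->GetColorToDepthRelationalParameters(&cbData, &pData);
        // ... save cbData bytes at pData, e.g. to disk ...

        INuiCoordinateMapper* pRestoredMapper = NULL;
        hr = NuiCreateCoordinateMapperFromParameters(cbData, pData, &pRestoredMapper);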

    Carmine Sirignano - MSFT

    Thursday, September 19, 2013 6:09 PM

All replies

  • You get the coordinate mapper from the INuiSensor object you instantiated.

    http://msdn.microsoft.com/en-us/library/nuisensor.inuisensor.nuigetcoordinatemapper.aspx

    INuiSensor* pNuiSensor;
    // after pNuiSensor is initialized, call
    
    INuiCoordinateMapper* pMapper;
    hr = pNuiSensor->NuiGetCoordinateMapper( &pMapper );
    // check your hr value for S_OK

    To map color to skeleton, use:
    http://msdn.microsoft.com/en-us/library/jj883689.aspx

    There is a Coordinate Mapping Basics sample in the 1.8 SDK.
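
    A rough sketch of the call itself, to give the idea (assumes pMapper was obtained as above and pDepthPixels points to a NUI_DEPTH_IMAGE_PIXEL buffer from the depth stream; the 640x480 resolutions are just an example):

        const DWORD cDepthPixels = 640 * 480;     // entries in pDepthPixels
        const DWORD cSkeletonPoints = 640 * 480;  // one output entry per color pixel
        Vector4* pSkeletonPoints = new Vector4[cSkeletonPoints];

        hr = pMapper->MapColorFrameToSkeletonFrame(
            NUI_IMAGE_TYPE_COLOR,
            NUI_IMAGE_RESOLUTION_640x480,   // color resolution
            NUI_IMAGE_RESOLUTION_640x480,   // depth resolution
            cDepthPixels,
            pDepthPixels,
            cSkeletonPoints,
            pSkeletonPoints);

        // On S_OK, pSkeletonPoints[y * 640 + x] holds the skeleton-space
        // point for color pixel (x, y); look up the entries at your face
        // feature points' color coordinates.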


    Carmine Sirignano - MSFT

    Wednesday, September 18, 2013 8:34 PM
  • Hi Carmine,

    Thanks for your help. As you suggest, I need an INuiSensor object, but my project has just one Kinect, and I found the following in MSDN:

    INuiSensor Interface

    References multiple Kinect sensors. If you are using only one Kinect sensor, use the functions in NUI Functions instead of implementing this interface.

    So I just used those NUI Functions instead of the INuiSensor interface; it turned out that when I used INuiSensor, the program ended up with a runtime error.

    Following the rule of using the NUI Functions with only one Kinect sensor, I found a function,

    HRESULT NuiCreateCoordinateMapperFromParameters(
             ULONG dataByteCount,
             void *pData,
             INuiCoordinateMapper **ppCoordinateMapper
    )

    which can also get the coordinate mapper, but I don't know how to set the parameters. Can you help me? Thanks a lot.

    Kang


    Thursday, September 19, 2013 3:08 PM
  • Hi Carmine, 

    Thanks for your help; I've solved the coordinate mapper problem. But while using the MapColorFrameToDepthFrame method of the INuiCoordinateMapper interface, I have some questions about the output.

    HRESULT MapColorFrameToDepthFrame(
             NUI_IMAGE_TYPE eColorType,
             NUI_IMAGE_RESOLUTION eColorResolution,
             NUI_IMAGE_RESOLUTION eDepthResolution,
             DWORD cDepthPixels,
             NUI_DEPTH_IMAGE_PIXEL *pDepthPixels,
             DWORD cDepthPoints,
             NUI_DEPTH_IMAGE_POINT *pDepthPoints
    )

    In this method, the last parameter, pDepthPoints, is the output, which stores the corresponding depth for each pixel location. In my implementation the results are a little confusing. For example, if my color image is 640*480, the output pDepthPoints values may look like this:

    Most entries of pDepthPoints are

    {x=-2147483648 y=-2147483648 depth=0 ...}

    and only some entries hold reasonable values like

    {x=383 y=242 depth=1796 ...}

    Based on these results, I can only tell the depth for a small number of pixel locations; the remaining locations are unknown. Are they all equal to zero, or is there anything I can do to get depth info for all the color-image pixels?

    Thanks a lot.

    Best

    Kang.

    Monday, September 23, 2013 7:14 PM
  • That is the correct behavior. Given the way depth works, there is not a 1:1 relationship between depth/skeletal and color; not every color pixel has a depth value. If you need that, map depth to color instead. This makes more sense visually: when you map depth to the color frame, you will notice the mapped area doesn't fill the screen (see the Coordinate Mapping Basics sample).
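
    To tell the two cases apart in code, check for the sentinel coordinates you observed. A minimal, self-contained sketch (using a stand-in struct, since the real NUI_DEPTH_IMAGE_POINT comes from the SDK headers):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Stand-in for the SDK's NUI_DEPTH_IMAGE_POINT (whose fields are 32-bit LONGs).
    struct DepthPoint { int32_t x; int32_t y; int32_t depth; };

    // Color pixels with no depth sample come back with sentinel coordinates
    // (INT32_MIN, i.e. -2147483648) and a depth of 0.
    bool HasValidDepth(const DepthPoint& p)
    {
        return p.x != INT32_MIN && p.y != INT32_MIN && p.depth != 0;
    }

    int main()
    {
        DepthPoint unmapped = { INT32_MIN, INT32_MIN, 0 };  // typical "no data" entry
        DepthPoint mapped   = { 383, 242, 1796 };           // typical valid entry
        assert(!HasValidDepth(unmapped));
        assert(HasValidDepth(mapped));
        return 0;
    }
    ```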


    Carmine Sirignano - MSFT

    Monday, September 23, 2013 11:00 PM
  • Thanks very much for your answer, but it turns out I didn't get the expected results. I want to align the color image and the depth image, so I used MapColorFrameToDepthFrame to do the calibration. I assumed the depth data returned by the function would be aligned with the color image, but it isn't. After comparing the depth data used as input with the returned depth data, I find they're pretty similar, except that the returned depth data has many zeros, which cut the whole depth image up like a chessboard.

    I'm not sure why it is like this; maybe I don't understand the function completely. Can you help me figure out where I may have made a mistake, or is there another way to align the color image and the depth image?

    Thanks a lot.

    Kang

    Wednesday, September 25, 2013 10:12 PM
  • The sensor and coordinate mapper have already been calibrated. If you go through the CoordinateMapper, the results you get already take this into account. If you are mapping depth to color, the values should already be aligned.
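
    For the depth-to-color direction, the call looks roughly like this (a sketch; assumes pMapper and a NUI_DEPTH_IMAGE_PIXEL buffer pDepthPixels at 640x480):

        NUI_COLOR_IMAGE_POINT* pColorPoints = new NUI_COLOR_IMAGE_POINT[640 * 480];

        hr = pMapper->MapDepthFrameToColorFrame(
            NUI_IMAGE_RESOLUTION_640x480,   // depth resolution
            640 * 480,                      // entries in pDepthPixels
            pDepthPixels,
            NUI_IMAGE_TYPE_COLOR,
            NUI_IMAGE_RESOLUTION_640x480,   // color resolution
            640 * 480,                      // entries in pColorPoints
            pColorPoints);

        // On S_OK, pColorPoints[i] is the color pixel that depth pixel i
        // projects onto, so every depth pixel gets a color coordinate.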

    If you are trying to do something with external sources (e.g. a projector), then you have to do your own calibration. These links should give you the basic idea:

    http://labs.manctl.com/rgbdemo/index.php/Documentation/TutorialProjectorKinectCalibration

    http://www.camara-lucida.com.ar/tutorials/calibration


    Carmine Sirignano - MSFT

    Friday, September 27, 2013 11:43 PM
  • Hi Kang,

    How did you resolve the coordinate mapper problem? I'm in the same situation, since I have to map some points of the face tracking result from RGB to depth coordinates. I cannot run the MapColorFrameToDepthFrame method. How can I retrieve NUI_DEPTH_IMAGE_PIXEL from an IFTImage?

    Thanks in advance,


    Andrea

    Tuesday, November 26, 2013 4:34 PM
  • The byte buffer pointer is accessed via the ::GetBuffer() method (native C++). If you are using the managed wrapper, you may not have access to it. Ideally, you need to create a NUI_DEPTH_IMAGE_PIXEL array yourself based on the depth frame you acquired from the sensor.

    If you are doing this in C++, use NuiImageFrameGetDepthImagePixelFrameTexture.
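
    Roughly, that path looks like this (a sketch following the SDK samples; m_hDepthStream is the handle returned by NuiImageStreamOpen):

        NUI_IMAGE_FRAME imageFrame;
        hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_hDepthStream, 0, &imageFrame);

        BOOL nearMode = FALSE;
        INuiFrameTexture* pTexture = NULL;
        hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(
            m_hDepthStream, &imageFrame, &nearMode, &pTexture);

        NUI_LOCKED_RECT lockedRect;
        pTexture->LockRect(0, &lockedRect, NULL, 0);
        NUI_DEPTH_IMAGE_PIXEL* pDepthPixels =
            reinterpret_cast<NUI_DEPTH_IMAGE_PIXEL*>(lockedRect.pBits);

        // ... pass pDepthPixels to the coordinate mapper here ...

        pTexture->UnlockRect(0);
        pTexture->Release();
        m_pNuiSensor->NuiImageStreamReleaseFrame(m_hDepthStream, &imageFrame);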


    Carmine Sirignano - MSFT

    Tuesday, November 26, 2013 7:14 PM