HD image for face recognition

  • Question

  •  Nikolai Smolyanskiy [MSFT] says:

    "You can use your depth to color mapping function, for example you can use external RGB HD camera to improve face tracking at the distance, BUT you must have Kinect camera attached at all times :-) Not sure about the plans to expose Kinect's calibration params. IMHO it would be useful."

    Is there any sample of this? If I attach the HD data to an IFTImage like this:

    m_pColorIFTImg->Attach( 1280, 1024, pColorBuffer, FTIMAGEFORMAT_UINT8_B8G8R8X8, 5120 );

    and then 

     FT_SENSOR_DATA sensorData(  m_pColorIFTImg, m_pDepthIFTImg ); //depth from Kinect, color from HD
     hrFT = m_pFT->StartTracking( &sensorData, NULL, NULL, m_pFTResult );

    Will it work?

    Is there another way?

    Thanks,

    Michael


    Monday, June 4, 2012 5:22 PM

Answers

  • Yes, it will work, but you need to provide your own function that maps depth pixels to color pixels. The pointer to this function is passed to the face tracker on its initialization. To write this function you will need to calibrate your HD camera against the Kinect depth camera, which should not be too hard given the depth data from Kinect.
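    As a sketch of what such a mapping function has to compute: back-project the depth pixel to a 3D point, transform it into the color camera's frame, and project it into the HD image. All calibration values below are hypothetical placeholders you would have to measure for your own rig, and the rotation between the cameras is assumed to be identity for brevity; the Face Tracking SDK's actual callback signature may also differ from this helper.

    ```cpp
    #include <cassert>

    // Hypothetical calibration values -- measure these for your own rig.
    // Depth camera intrinsics (focal lengths, principal point), in pixels.
    static const float DEPTH_FX = 571.0f, DEPTH_FY = 571.0f;
    static const float DEPTH_CX = 320.0f, DEPTH_CY = 240.0f;
    // HD color camera intrinsics (for a 1280x1024 image).
    static const float COLOR_FX = 1050.0f, COLOR_FY = 1050.0f;
    static const float COLOR_CX = 640.0f,  COLOR_CY = 512.0f;
    // Extrinsic translation from the depth camera to the color camera, in mm
    // (a real calibration also yields a 3x3 rotation, omitted here).
    static const float TX = 25.0f, TY = 0.0f, TZ = 0.0f;

    // Map one depth pixel (x, y, depth in mm) to HD color pixel coordinates.
    void MapDepthToColor(int depthX, int depthY, unsigned short depthMm,
                         long* pColorX, long* pColorY)
    {
        // 1. Back-project the depth pixel to a 3D point in the depth frame.
        float z = static_cast<float>(depthMm);
        float x = (depthX - DEPTH_CX) * z / DEPTH_FX;
        float y = (depthY - DEPTH_CY) * z / DEPTH_FY;

        // 2. Transform into the color camera frame (translation only here).
        float cx3 = x + TX, cy3 = y + TY, cz3 = z + TZ;

        // 3. Project into the HD color image.
        *pColorX = static_cast<long>(COLOR_FX * cx3 / cz3 + COLOR_CX);
        *pColorY = static_cast<long>(COLOR_FY * cy3 / cz3 + COLOR_CY);
    }
    ```

    Tools such as OpenCV's stereo calibration can recover the intrinsics and the depth-to-color rotation/translation from checkerboard images seen by both cameras.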

    • Marked as answer by stereosphere Tuesday, June 5, 2012 10:15 PM
    Tuesday, June 5, 2012 8:47 PM

All replies

  • Following up for more details on this. I would appreciate some clarification:

    1. The custom mapping function appears to be called pixel by pixel rather than returning a whole frame of lookup values. Is that true? If so, a later SDK improvement that allowed whole-frame lookup results would be appreciated, and more performant.
    2. I do not see any direct association between the FaceTracking codebase and a specific INuiSensor, so I am uncertain that the current FaceTracking codebase uses the calibration values stored in each Kinect. How would the FaceTracking code know which of the two Kinects attached to my Windows PC to retrieve calibration values from?
    3. If the answer to #2 is that depth<->color are not aligned using the calibration values, could I improve the recognition speed/accuracy of FaceTracking by providing FT_SENSOR_DATA with color and depth frames that have already been aligned using MapDepthFrameToColorFrame() or NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution()?
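    On point 1, one workaround available today is to precompute a lookup table once per depth frame, so that whatever per-pixel mapping callback the tracker invokes reduces to an array read. The sketch below assumes a hypothetical `MapOnePixel` helper standing in for your real per-pixel mapping (calibration-based or derived from the NUI coordinate-mapping APIs):

    ```cpp
    #include <vector>
    #include <cassert>

    struct ColorPoint { long x; long y; };

    // Stand-in for your real per-pixel depth-to-color mapping.
    // Here it just applies a fixed horizontal shift for illustration.
    static void MapOnePixel(int dx, int dy, unsigned short depthMm,
                            long* cx, long* cy)
    {
        (void)depthMm;  // a real mapping uses the depth value
        *cx = dx + 10;
        *cy = dy;
    }

    class DepthToColorLut
    {
    public:
        DepthToColorLut(int width, int height)
            : m_width(width), m_height(height),
              m_table(static_cast<size_t>(width) * height) {}

        // Called once when a new depth frame arrives: O(width * height).
        void Rebuild(const unsigned short* depthMm)
        {
            for (int y = 0; y < m_height; ++y)
                for (int x = 0; x < m_width; ++x)
                {
                    int i = y * m_width + x;
                    MapOnePixel(x, y, depthMm[i],
                                &m_table[i].x, &m_table[i].y);
                }
        }

        // Called per pixel by the tracker's mapping callback: O(1).
        ColorPoint Lookup(int x, int y) const
        {
            return m_table[y * m_width + x];
        }

    private:
        int m_width;
        int m_height;
        std::vector<ColorPoint> m_table;
    };
    ```

    This keeps the per-pixel callback cheap at the cost of one full-frame pass per depth frame, which is usually a good trade when the tracker queries many pixels.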


    --Dale

    Thursday, August 15, 2013 1:12 AM