SDK 1.6 Depth to Color point CoordinateMapper question

  • Question

  • Hi All, I seem to be having some issues with the new CoordinateMapper class. I'm using MapDepthPointToColorPoint, which returns the Y values correctly, but the X values appear slightly off: the result is shifted slightly left of the true centre when the subject surface is not flat to the sensor (a sphere, for example).

    I assumed that the mapping would simply adjust the position to account for the differing resolutions and the camera/sensor offset, but it appears to be doing something more complex that involves the depth value, since the error appears to decrease as distance increases.

    Any chance of some additional information on how it works?

    P.S. As a minor note, the web documentation for this method describes the input parameters in the wrong order (IntelliSense is fine), e.g.

    MapDepthPointToColorPoint (DepthImageFormat, DepthImagePoint, ColorImageFormat)

    rather than the correct order:

    MapDepthPointToColorPoint (ColorImageFormat, DepthImageFormat, DepthImagePoint)

    Monday, January 7, 2013 4:26 PM

Answers

  • Ravi -

    Upon preliminary investigation, it appears that you have found a bug in the Kinect for Windows runtime. If you attempt to access the KinectSensor.CoordinateMapper property prior to calling the KinectSensor.Start() method, the CoordinateMapper will not have been properly initialized. As a result, the math performed by the CoordinateMapper will not produce the correct results.

    Fortunately, the workaround should be easy: Defer all uses of the CoordinateMapper until after KinectSensor.Start() has been called.

    As for the cropping issue at the corners: The fields of view of the color and depth cameras are not identical. Based on the screenshot you provided, your left hand was likely not entirely within the depth camera's field of view, even though it may have been visible in the color image. To compensate for this, you may need to crop/resize the color image down to just the portion that intersects with the depth image.

    A quick way to do this may be to sample the depth at each of the four corners, map each of these depth points to color space (using CoordinateMapper.MapDepthPointToColorPoint), and then crop the color image to a bounding rectangle based on the color points returned. Keep in mind that when the depth data for a pixel isn't available (Depth == 0), MapDepthPointToColorPoint will not give meaningful results. Depending on how many of the corners have Depth == 0, it may not be possible to form a bounding rectangle using this technique.
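    That corner-sampling approach can be sketched as follows. The mapping function here is a hypothetical placeholder for CoordinateMapper.MapDepthPointToColorPoint (the real transform comes from the sensor's calibration), and the corner depth values are made up:

```python
# Sketch of the corner-sampling crop described above: map each depth-image
# corner into color space, skip corners with no depth data (depth == 0),
# and build a bounding rectangle from the valid mapped points.
# map_depth_to_color is a hypothetical stand-in for
# CoordinateMapper.MapDepthPointToColorPoint, NOT the real API.

def map_depth_to_color(x, y, depth_mm):
    # Placeholder mapping: a fixed y offset plus a depth-dependent x shift,
    # standing in for the SDK's calibrated transform.
    return (x + 7000 // depth_mm, y + 16)

def color_crop_rect(corner_depths):
    """corner_depths: depth (mm) at each depth-image corner, keyed by (x, y).
    Returns (left, top, right, bottom) in color space, or None if no
    corner has valid depth."""
    mapped = [map_depth_to_color(x, y, d)
              for (x, y), d in corner_depths.items()
              if d != 0]            # depth == 0 means "unknown": skip it
    if not mapped:
        return None
    xs = [p[0] for p in mapped]
    ys = [p[1] for p in mapped]
    return (min(xs), min(ys), max(xs), max(ys))

corners = {(0, 0): 2000, (639, 0): 2000, (0, 479): 0, (639, 479): 1800}
rect = color_crop_rect(corners)   # → (3, 16, 642, 495); (0, 479) was skipped
```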


    John | Kinect for Windows development team

    Wednesday, March 20, 2013 9:53 PM
  • Yes, the coordinate mapping between depth and color does depend on the depth. How are you initializing the DepthImagePoint when you make the call?
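    For intuition on that depth dependence: the depth and color cameras sit a small distance apart, so the x-shift between the two images behaves roughly like a stereo disparity, proportional to 1/depth. A toy sketch with made-up numbers (not the Kinect's actual calibration):

```python
# Toy stereo model of why depth-to-color mapping depends on depth.
# The focal length and baseline are made-up numbers, NOT the Kinect's
# actual calibration; they only illustrate the 1/depth behaviour.

FOCAL_PX = 525.0     # hypothetical focal length, in pixels
BASELINE_M = 0.025   # hypothetical depth-to-color camera offset, in metres

def color_x_offset(depth_m):
    """Horizontal shift (pixels) between depth and color images at a given depth."""
    return FOCAL_PX * BASELINE_M / depth_m

near = color_x_offset(1.0)  # 13.125 px at 1 m
far = color_x_offset(3.0)   # 4.375 px at 3 m: the error shrinks with distance
```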


    John | Kinect for Windows development team

    Tuesday, March 19, 2013 11:23 PM

All replies

  • Hi George,

    I am having the same problem you faced. I am using the GreenScreen example from SDK 1.6 and 1.7. I have noticed that if you initialize ColorToDepthRelationalParameters of the CoordinateMapper, the depth mask and RGB image remain the same size but the mapping becomes offset. I have tried to find out more about this read-only collection, but so far have not found further information.

    Please update this thread if you have managed to work around it.

    Regards,

    Ravi Lodhiya


    • Edited by RLodhiya Tuesday, March 19, 2013 5:07 PM
    Tuesday, March 19, 2013 5:05 PM
  • Hi John,

    Thanks for the reply. Please find both images attached below. I was playing with the green-screen example; my resolution for color and depth is 640x480, and my image size (the visual on the screen) is also 640x480. As you can see in the first picture, the mask has skewed a bit at the upper-left corner, and the same thing happens all around the mask. I was trying to achieve the full image without any data loss from my color image, but somehow it is not working.

    When I added the following line to my code in the Window_Loaded event, I was able to get the depth mask and color mask in the Image (view), but now my mask is not mapped correctly, and I have no idea why this is happening.

    The line I added, before the Kinect starts in the mentioned event, is "this.MyPara = this.sensor.CoordinateMapper.ColorToDepthRelationalParameters;". Please note that MyPara is never used anywhere else in the code. It is just a read-only collection, and I don't understand why it is impacting my code. Any help on this is much appreciated.

    In essence, I want to achieve the green-screen example without any loss at the corners and edges of the mask, and the mask should match the color data and depth data without compromising the surroundings.

    Any thoughts or idea? Please share.

    Regards,

    Ravi Lodhiya


    • Edited by RLodhiya Wednesday, March 20, 2013 9:46 AM typo and correction
    Wednesday, March 20, 2013 9:26 AM
  • John, Ravi.

    In my case I was doing some object detection using the EMGU CV wrapper, then mapping the detected object onto the colour frame. At the time I used an arbitrary depth value for the DepthImagePoint being mapped, and that was the error. I fixed it by finding the centre point of the detected object (in depth space), mapping that to world (or skeleton) space to get the depth value, and then mapping that to the correct colour-space point.
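    In outline, that fix looks like the following sketch; the mapping function and the depth look-up are hypothetical placeholders, not the real SDK calls:

```python
# Outline of the fix described above: instead of mapping with an arbitrary
# depth, look up the true depth at the detected object's centre and use
# that when mapping to colour space. map_depth_to_color is a hypothetical
# placeholder, NOT the real CoordinateMapper API.

def object_centre(bbox):
    """Centre of a detection bounding box (x, y, w, h) in depth space."""
    x, y, w, h = bbox
    return (x + w // 2, y + h // 2)

def map_depth_to_color(x, y, depth_mm):
    # Placeholder: an x-shift proportional to 1/depth, as in a stereo pair.
    return (x + 7000 // depth_mm, y)

depth_frame = {(320, 240): 1500}    # measured depth (mm) at the centre pixel
cx, cy = object_centre((300, 220, 40, 40))
true_depth = depth_frame[(cx, cy)]
good = map_depth_to_color(cx, cy, true_depth)  # uses the measured depth
bad = map_depth_to_color(cx, cy, 4000)         # arbitrary depth: x is wrong
```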

    Ravi, with regard to green screen, I did do some work on it (I was thinking of using it with something like Skype just for fun :-) ). Looking at the images above, have you considered looking at pixels that are returning an "unknown" value rather than a true depth (these occur on the left and especially in the hair!), and then doing something creative rather than just ignoring them? For example, where a row segment of pixels is unknown, copy the leftmost and rightmost valid pixels to infill the unknown segment. Just an idea.
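    That infill idea might look something like this sketch, with 0 standing in for the sensor's unknown-depth value:

```python
# Sketch of the row-segment infill suggested above: for each run of
# unknown pixels (value 0), copy in the nearest valid neighbour from
# the left or right end of the run. Purely illustrative.

def infill_row(row, unknown=0):
    out = list(row)
    n = len(out)
    i = 0
    while i < n:
        if out[i] != unknown:
            i += 1
            continue
        j = i
        while j < n and out[j] == unknown:
            j += 1                        # [i, j) is a run of unknown pixels
        left = out[i - 1] if i > 0 else None
        right = out[j] if j < n else None
        if left is None and right is None:
            return out                    # whole row is unknown: nothing to copy
        for k in range(i, j):
            # fill from whichever valid side is nearer (left wins ties)
            if right is None or (left is not None and k - (i - 1) <= j - k):
                out[k] = left
            else:
                out[k] = right
        i = j
    return out

filled = infill_row([5, 0, 0, 0, 9])   # → [5, 5, 5, 9, 9]
```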

    P.S. Also look at using Parallel.For when processing the depth image (around line 236); if you have multiple processors available, it makes a big difference for older machines like mine.

    Wednesday, March 20, 2013 10:33 AM
  • Hi John,

    Thanks for the reply; your help is greatly appreciated, and it really helped me. I am now getting the right-sized image for the green-screen example, as you suggested, with manually defined depth points mapped to color points. I still need to live with the assumption that the Kinect is not going to move; if I do change it, I need to recalibrate to get the right results.

    Now, just out of further curiosity, can you help me understand what exactly ColorToDepthRelationalParameters does? Any help is much appreciated.

    Regards,

    Ravi.

    P.S.: The MSDN documentation is not so helpful in these instances.


    • Edited by RLodhiya Friday, March 22, 2013 8:31 AM amendment in question.
    Friday, March 22, 2013 8:00 AM