strange shadowing on depth image

  • Question


    Hey,

While using the depth sensor of the Kinect (both with the "Sample Skeletal Viewer" and with the "Working with Depth Data" sample code from Channel 9), I noticed that all depth information has strange shadows and appears doubled horizontally. See the example image.

    (This is just one hand.) Is this a defective Kinect sensor? I didn't notice such problems in any sample videos.

    greetings






    Saturday, June 18, 2011 2:22 AM

Answers

  • (from Eric, a dev on the Kinect PC team)

Not defective, but very hard to understand when you're looking at what appears to be two hands. There is a distance between the IR emitter (the laser) and the depth camera. Anything that the depth camera can see, but that the IR emitter's light doesn't bounce off of, gets flagged as a "shadow zone" or "unknown" area. The sensor just doesn't know the depth there, so it flags it as zero. Your application should ignore these shadow regions.

    Actually, it can be quite useful to know that you're in the shadow zone in some scenarios, because then you know you've hit the edge of something! But in any case, the closer you get to the camera, the more of this shadow-type double image you'll see. It's just the way it works with this specific type of hardware setup.

    In the second image, your hand is "too close" in the first place, which is why it's blue (it has a value of 0), and the shadow zone also gets marked as 0. One is "too close", the other is "unknown shadow zone".
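In code, the "ignore these shadow regions" advice amounts to masking out the zero values before doing anything else with the frame. A minimal sketch in Python (the SDK samples are C#, and the nested-list frame layout and helper name here are stand-ins, not SDK API):

```python
def mask_unknown_depth(depth_frame):
    """Return a copy of the frame with 0 ("shadow zone" / "too close" /
    "unknown") pixels replaced by None so downstream code can skip them."""
    return [[None if d == 0 else d for d in row] for row in depth_frame]

# Toy 3x2 frame in millimetres; 0 means the sensor had no reading there.
frame = [
    [820, 0, 910],
    [0, 1500, 1480],
]
masked = mask_unknown_depth(frame)
```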

    weird, eh?

     


    ~ Eric R., Senior Dev, MS Research You can probably figure out more about me if you try!
    • Marked as answer by ignorator Saturday, June 18, 2011 1:52 PM
    Saturday, June 18, 2011 4:46 AM

All replies

Hi,

    I have the same problem, but I'm quite sure it was OK at the beginning.

    Could it be that the problem occurred after installing the Windows Phone Developer Tools?

    I installed them because of XNA (the Sabre demo).

    greetings


    Tuesday, June 21, 2011 7:19 PM
  • Hi there!

I have the very same problem!

    Yesterday "CameraFundamentals" from Channel 9 went fine, no problems at all.

    Today I have this weird shadow.

    I have to say I'm not so convinced by the shadow explanation, because I don't see any difference from yesterday (I tried different light settings and so on).

    Is there any advice on how to get rid of the shadow?

    Thursday, July 21, 2011 11:00 AM
As Eric said above, the shadow artifact is expected given the distance between the IR emitter and the depth camera. The closer you are to the Kinect device, the more pronounced the shadow will be.
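The distance dependence can be sketched with ordinary stereo-parallax geometry: the shadow on the background is the parallax between the emitter's and the camera's viewpoints. The baseline and focal length below are rough, assumed values for the Kinect v1, not figures from the SDK:

```python
BASELINE_M = 0.075   # assumed emitter-to-camera separation, metres
FOCAL_PX = 580       # assumed depth-camera focal length, pixels

def shadow_width_px(occluder_m, background_m):
    """Approximate image-space width of the depth shadow that a foreground
    object at occluder_m casts on a background at background_m."""
    return BASELINE_M * FOCAL_PX * (1.0 / occluder_m - 1.0 / background_m)

near = shadow_width_px(0.8, 3.0)  # hand close to the sensor: wide shadow
far = shadow_width_px(2.0, 3.0)   # same hand farther away: narrow shadow
```

The `1/z` terms are why the effect blows up near the sensor's minimum range.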

    Does that make sense?
    Eddy


    I'm here to help
    Thursday, July 21, 2011 7:49 PM
I think I understand this problem better now, and I'll try to explain it the way I see it, and how I think it can be corrected.

    When building a 2D image from depth data, keep in mind that depth data is measured from a central focal point. As things get farther away, those measurements lie on a whole other plane. And since the Kinect depth sensor has a range between 800mm and 4000mm, there are 3200 possible planes.

    The Kinect depth sensor measures from that central focal point out into space, but at an angle that depends on which pixel it's reading. It basically casts a ray from the center of the sensor; if the ray intersects something, it returns the depth. Think of a gear-drive sprinkler you might find in your front yard. The water can only go so far, but as the sprinkler rotates, it might appear to go farther at one angle than another when measured from a line drawn parallel to the sprinkler; in fact it always shoots the same distance, so it traces a curved pattern.

    In the case of the Kinect, the way I see it, the depth values returned are the distance from the focal point to the intersected object along the ray. Each pixel of data is at an angle to the focal point, so if you use the values returned from the Kinect directly, you have not compensated for that angle. I think this is similar to crossing your eyes. However, if you factor in the angle and recalculate the true depth before converting to 2D, you should be able to get rid of that shadow, or most of it anyway.

    I've done it in 3D, and it was there that I realized the problem. The algorithm to correct it is very simple, but before I post it, I need to verify it in 2D (I'm pretty sure it's correct). Here is a link to a 3D mesh I created after I recalculated the true x, y, z coordinates in 3D space, treating the raw data more like polar coordinates. If you look at it closely, it should explain why depth data creates a shadow.

https://skydrive.live.com/embedphoto.aspx/DevStuff2/corrected.png?cid=ee56281807d66e46&sc=photos

     

I think that, knowing all this, I would just build my own bitmap using the corrected values instead of using PlanarImage. But that's just me, and a theory.
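The angle correction described above can be sketched with a standard pinhole camera model. This follows the post's assumption that the raw value is distance along the viewing ray (the actual sensor may already report planar depth, so treat this as an illustration of the idea, not of the SDK); the focal length and optical centre are assumed numbers for a 640x480 frame:

```python
import math

FOCAL_PX = 580.0        # assumed depth-camera focal length, pixels
CX, CY = 320.0, 240.0   # assumed optical centre of a 640x480 frame

def ray_depth_to_xyz(u, v, ray_mm):
    """Treat ray_mm as distance along the viewing ray through pixel (u, v),
    as the post assumes, and recover planar (x, y, z) in millimetres."""
    dx = (u - CX) / FOCAL_PX          # per-pixel ray direction, x slope
    dy = (v - CY) / FOCAL_PX          # per-pixel ray direction, y slope
    z = ray_mm / math.sqrt(1.0 + dx * dx + dy * dy)   # planar depth
    return (dx * z, dy * z, z)

center = ray_depth_to_xyz(320, 240, 1000.0)   # on-axis: no correction needed
edge = ray_depth_to_xyz(640, 240, 1000.0)     # off-axis: planar z shrinks
```

At the image centre the ray and the optical axis coincide, so the value passes through unchanged; toward the edges the correction grows with the angle, which is the "crossing your eyes" effect the post describes.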


    Friday, July 22, 2011 2:36 AM
  • Do you mind sharing the 3d source code with us once you have verified it in 2d? :)
    Friday, May 18, 2012 3:52 AM