Kinect for Windows and precision/calibration

  • Question

  • Hello, and sorry for bringing up a topic that may already have been discussed in the past.

    So I installed a fresh SDK 1.5.0 on my Win7 64-bit machine and plugged in a Kinect for Xbox (the only one I had at the time), and it seemed to work OK (unfortunately I have no screenshots from when it did).
    After a couple of hours of playing around with the SDK examples, the calibration seemed to start drifting: the black shadow in the color-with-depth examples got much larger and weirder. We had another Kinect for Xbox here; I swapped it in, and the results were even worse: an even larger black shadow, and the images no longer seemed to match.

    So we got a Kinect for Windows. It arrived today, I tested it with the SDK examples, and nothing improved over the previous situation.
    Here is what I get: http://tinypic.com/r/v5fmh0/6   ( http://oi47.tinypic.com/v5fmh0.jpg )
    I would like to ask: does this look right to you? If not, how do I solve this problem? Since I just mounted a brand-new unit and still have the issue, it's obviously not a per-device calibration problem (and besides, the Kinect for Windows cannot be recalibrated). Would uninstalling/reinstalling the SDK help?

    Thanks

    Friday, August 3, 2012 12:34 PM

All replies

  • Here's another, clearer example of the problem:

    http://tinypic.com/r/2db3vo7/6   (  http://oi49.tinypic.com/2db3vo7.jpg  )

    You can see that the color image is not particularly well registered; the object in front (me) ends up mapped onto the wall (the fingers and the top of the head).

    Friday, August 3, 2012 12:49 PM
  • Did you change the sample at all, or are you just running it from the toolkit?
    Wednesday, August 8, 2012 11:31 PM
  • Just straight out of the toolkit; not even recompiled. And it does the same with the other samples, as well as with Kinect Studio.
    As I was describing, it happens with 3 different Kinects (one for Windows and 2 for Xbox) and on 2 different machines (though on the second I tried only one Kinect; maybe I should try this other one as well).
    Thursday, August 9, 2012 3:48 PM
  • This is, as it happens, expected.  What you're seeing is a combination of factors: 

    The way in which the Kinect sensor detects depth is to project a pattern of IR dots into the world (the IR emitter is at the far left as you're looking at the sensor) and then read them using an IR-sensitive camera (far right).  Because the emitter and receiver are not coincident, there are portions of the scene visible to the depth sensor that fall in the IR shadow.  Similarly, the color camera (center) can see things that are occluded for the depth sensor, and vice versa.  The image displayed in DepthWithColor-D3D is a composite of these data streams, so there will be blacked-out areas wherever the IR was shadowed or the depth camera was occluded.  Then, the way our color-mapping math works, two depth points that are collinear from the perspective of the color camera will return the same color pixel.  This is why you see magic fingertips on the whiteboard, for example - a line drawn from the point on the whiteboard through your real finger passes through the color camera.
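
    To make the collinearity point concrete, here is a small numerical sketch. This is a toy pinhole model of my own, not the SDK's actual calibration math; the focal length and depth-to-color baseline are made-up illustrative values.

```python
# Toy pinhole model illustrating the collinearity effect described above.
# The focal length and depth-to-color baseline are assumed numbers,
# not the Kinect's actual calibration values.

F = 525.0         # focal length in pixels (assumed)
BASELINE = 0.025  # horizontal offset between depth and color cameras, meters (assumed)

def color_pixel_u(x, z, f=F, baseline=BASELINE):
    """Horizontal color-image coordinate of the world point (x, z),
    as seen by a color camera sitting `baseline` meters from the
    depth camera's origin."""
    return f * (x - baseline) / z

# A fingertip 0.5 m away and a whiteboard point 2 m away, chosen so both
# lie on the same ray through the color camera's center of projection:
finger = (BASELINE + 0.1, 0.5)
wall   = (BASELINE + 0.4, 2.0)  # same ray, scaled by 4

# Both depth samples map to the same color pixel, so the whiteboard point
# "steals" the finger's color - the magic-fingertips artifact.
print(color_pixel_u(*finger), color_pixel_u(*wall))
```

    Any depth sample on that shared ray fetches the identical color pixel, which is why the finger's color smears onto the whiteboard behind it.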

    OK, great, but this seems pretty exaggerated, yes?  That's because this particular sample uses a perspective projection, which causes shadows cast onto walls behind you to be drawn larger (true to life, of course) than they would appear in an orthographic projection.

    Hopefully this makes sense - happy to drill in further if there's more we can explain here.


    -Adam Smith [MSFT]

    Thursday, August 9, 2012 6:57 PM
  • Thanks for the clarification, Adam.

    So you mean this much shadow on the right side of the depth image is also supposed to be there?
    http://tinypic.com/r/2nvumb6/6 (  http://oi46.tinypic.com/2nvumb6.jpg  )

    Monday, August 13, 2012 6:00 PM
  • Yup.  What you're seeing is the infrared shadow that your hand is casting against the wall. 

    If it helps you visualize this: If you're holding a flashlight at the exact location of your eye, you won't see any shadows, but if you hold it a few inches to the side, you'll see shadows that look very much like the image you shared.  Because our IR emitter is not at the exact location of the depth sensor, the sensor can see shadows on one side of the objects casting them.  And, the depth sensor cannot determine the depth of anything not illuminated by the IR pattern.
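
    The flashlight analogy can be sketched with similar triangles. This is a back-of-the-envelope model, and the 7.5 cm emitter-to-camera baseline is an assumed figure, not an official spec:

```python
# Similar-triangles sketch of the flashlight analogy: an emitter offset
# from the camera casts a shadow band on the back wall that the camera
# can see but the emitter cannot illuminate. The 7.5 cm baseline is an
# assumption, not an official spec.

BASELINE = 0.075  # meters between IR emitter and depth camera (assumed)

def shadow_band_width(z_occluder, z_wall, baseline=BASELINE):
    """Width (meters) of the wall region behind an occluder edge that
    receives no IR pattern - and therefore no depth - as seen from the camera."""
    return baseline * (z_wall - z_occluder) / z_occluder

# A hand 0.5 m from the sensor in front of a wall 2 m away leaves a
# band roughly 22 cm wide with no depth data:
print(round(shadow_band_width(0.5, 2.0), 3))
```

    Note how quickly the band grows as the occluder moves closer to the sensor relative to the wall, which matches the large shadows seen when sitting half a meter from the device.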


    -Adam Smith [MSFT]

    Tuesday, August 14, 2012 8:24 PM
  • Hi Adam, thanks for answering, and sorry for not giving up, but I really feel there's something wrong with my setup.
    It's as if the IR shadow gets duplicated.

    To explain what I mean, look at these two images:

    http://tinypic.com/r/a32m1w/6 (  http://oi47.tinypic.com/a32m1w.jpg  )

    Here you can see, especially at the top of my head, that there appear to be two overlapping shadows.

    http://tinypic.com/r/64f30n/6 (  http://oi50.tinypic.com/64f30n.jpg  )

    And here you can see that my two fingers become four in the shadows. You say this is all fine, but I don't know; it just looks wrong to me.

    Thursday, August 16, 2012 7:12 AM
  • I think, perhaps, you might do better playing around with Kinect Explorer, at least for this issue.  The perspective transform in the 3D sample greatly enlarges the shadows compared to an orthographic projection.  If you can take a look at KE for a bit and send screenshots of whatever raises questions, I may be able to describe things better.  Alternately, I readily acknowledge that I may still be "missing it" - in which case a demonstration from another perspective may help me out. :)


    -Adam Smith [MSFT]

    Thursday, August 16, 2012 3:27 PM
  • Hi Adam, this is another example of what looks strange to me:

    http://tinypic.com/r/a2ai50/6 ( http://oi46.tinypic.com/a2ai50.jpg  )

    Here I see, in the depth image from KinectExplorer, my depth silhouette in gray and the IR shadow (darkest gray).

    However, you can see that in the 3D viewer of the attached Kinect Studio both of these shapes contribute to a sort of "larger IR shadow", and you can see that my fingers, for example, are duplicated. It somehow does not look normal. Does the same happen for you? Could you maybe post an image of a similar setup? I assume mine shouldn't be hard to recreate (if it's even needed): I'm sitting about half a meter from the Kinect, horizontally aligned with it, and it sits on a shelf 20-30 cm above my head.

    Edit:

    I've been searching the internet for videos of the Kinect in action, and it seems to me that in every case the shadow is only on one side (as in the depth mask image), so I don't understand why the 3D viewer shows it on both sides.
    I feel there's something wrong with the drivers or something, but I just don't know what.

    • Edited by tedturbo Monday, August 20, 2012 11:38 AM
    Monday, August 20, 2012 9:05 AM