Kinect v2 scanning

  • Question

  • Hello,

    I scanned a couple of things using the Kinect v2 Fusion software and the models are not accurate. Even though the nose of the original model is not curvy, you can see below that the scanned nose is not accurate.

     

    I also scanned a flat plastic box with some black electrical tape on it. As you can see, the scanned model is no longer a flat surface, especially where the black tape and the dark spots are on the box.

    There is also small stray geometry around the models that doesn't actually exist. Because of these problems, 3D scanning with the Kinect v2 doesn't generate an accurate model and isn't feasible for our purposes. Any idea how to overcome these issues, and is there an update coming to the SDK that fixes them? These errors were not there with the Kinect v1. We can't use the v2 for our application, and since the v1 is being phased out, is production of the Kinect v1 being handed over to another company? Without the Kinect v1 we can't proceed further.

    Thanks,

    Santosh

    Wednesday, January 14, 2015 7:51 PM

All replies

  • Time-of-flight depth sensors are all, to a greater or lesser degree, sensitive to the texture of the surface they are imaging. When you have mixed surfaces with different reflectance (Lambertian or specular, etc.), effects such as these appear. I'm no expert on how to fix these issues, but there does seem to be a lot of literature in this area, as well as on other issues such as multipath, where the emitted photon bounces off multiple surfaces before it is detected by the IR sensor.

    This paper seems quite promising, as it comes out of MSR and runs in real time at frame rate:

    http://research.microsoft.com/apps/pubs/default.aspx?id=232079

    http://arxiv.org/abs/1403.5919

    As for the curvature of the bridge of the nose, this sounds like a combination of a bad surface (the shiny, specular mannequin head) and difficult geometry, i.e. the bridge of the nose scattering photons all over the place rather than back to the IR sensor. I have a similar head model and have covered it in medical tape, but it still isn't a perfect imaging surface.

    You can get quite good models out of the v2 if you scan skin; however, it is often difficult to quantify how good the model is without comparing it to a CT scan of the person obtained at the same time, or to another gold-standard scanning technique.

    The noise around the edge of the model could be removed with a spatial or temporal filter on the depth data (a rough sketch of one such filter follows after this reply). What range are you scanning these objects at? A sweet spot that I've found for reducing this kind of noise is around 1 m. The way Fusion integrates the data into the truncated signed distance function (TSDF) volume should also get rid of this noise if you rotate the object around enough.

    Hope this helps

     

     
    Thursday, January 15, 2015 3:38 PM
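
As a rough illustration of the temporal filtering mentioned above, here is a minimal Python/NumPy sketch. It assumes you have already pulled the 512x424 depth frames (uint16 millimetres, with 0 meaning "no reading") out of the SDK; the window size, the range clamp, and the helper names are illustrative choices, not part of the Kinect SDK or of Fusion.

```python
import warnings
from collections import deque

import numpy as np

# Kinect v2 depth frames are 512x424 uint16 values in millimetres; 0 means "no reading".

class TemporalDepthFilter:
    """Median-filter each depth pixel over the last few frames, ignoring dropouts."""

    def __init__(self, window=5):
        self.frames = deque(maxlen=window)

    def push(self, depth_mm):
        """Add one depth frame (uint16 array, millimetres) and return a filtered copy."""
        self.frames.append(depth_mm.astype(np.float32))
        stack = np.stack(self.frames)      # shape: (n_frames, 424, 512)
        stack[stack == 0] = np.nan         # zeros are dropouts, not real depth
        with warnings.catch_warnings():
            warnings.simplefilter("ignore", category=RuntimeWarning)  # all-NaN slices
            filtered = np.nanmedian(stack, axis=0)
        return np.nan_to_num(filtered).astype(np.uint16)  # all-NaN pixels back to 0


def clip_to_range(depth_mm, near_mm=500, far_mm=1500):
    """Drop readings outside the range where Fusion tends to behave best (~0.5-1.5 m)."""
    out = depth_mm.copy()
    out[(out < near_mm) | (out > far_mm)] = 0
    return out
```

The idea is to run each incoming depth frame through the filter (and optionally the range clamp) before handing it to Fusion, so that flickering edge pixels and out-of-range returns never make it into the TSDF volume.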
  • Hey Phil,

    Thanks for the paper. If it is such a common problem, I wonder why Microsoft did not fix the issue before releasing the SDK. I tried scanning people too, and their noses tend to come out pointier than they actually are. I usually scan in the 50 cm to 1 m range.

    Thanks,

    Santosh

    Thursday, January 15, 2015 7:14 PM
  • Hello,

    These methods use the raw TOF measurement vectors. How can I access this information to correct for these issues? Also, are there any ideas or hints for avoiding or correcting these issues after getting the depth data from the sensor?

    Thanks,

    Santosh


    Thursday, January 22, 2015 7:42 PM
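
As far as I'm aware, the public Kinect v2 SDK does not expose the raw per-pixel TOF measurements those papers work with; it only provides the computed depth frame and the active-infrared frame. One practical post-acquisition workaround is to treat depth pixels as untrustworthy where the IR return is weak (dark, absorbing surfaces such as the black tape) or where the depth jumps sharply (flying pixels around silhouettes). The sketch below is a Python/NumPy illustration of that idea; the threshold values and function names are assumptions to be tuned for your scene, not anything defined by the SDK.

```python
import numpy as np

def mask_low_reflectance(depth_mm, ir_frame, ir_threshold=200):
    """Zero out depth pixels whose active-IR return is weak (dark/absorbing surfaces)."""
    out = depth_mm.copy()
    out[ir_frame < ir_threshold] = 0   # 0 is what the SDK uses for "no valid depth"
    return out


def reject_flying_pixels(depth_mm, max_jump_mm=60):
    """Invalidate pixels on steep depth discontinuities (object silhouettes),
    where mixed and multipath returns are most likely."""
    d = depth_mm.astype(np.int32)
    dx = np.abs(np.diff(d, axis=1))     # difference to the right-hand neighbour
    dy = np.abs(np.diff(d, axis=0))     # difference to the neighbour below
    jump = np.zeros(d.shape, dtype=bool)
    jump[:, :-1] |= dx > max_jump_mm
    jump[:, 1:]  |= dx > max_jump_mm
    jump[:-1, :] |= dy > max_jump_mm
    jump[1:, :]  |= dy > max_jump_mm
    out = depth_mm.copy()
    out[jump] = 0
    return out
```

Invalidated pixels are set to 0, which the SDK already uses to mark missing depth, so frames cleaned this way can be passed on to the reconstruction in place of the raw ones.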