Fusion problem

  • Question

  • Hi,

    I am trying to acquire a 360-degree model of a human head using the new Fusion Explorer. The problem is that, even when rotating the head very slowly, limiting the range of acquisition and so on, the model ends up with a lot of errors. What am I doing wrong?

    Thanks in advance!

    Wednesday, October 1, 2014 1:13 AM

All replies

  • Are you moving the sensor, or is the person rotating? The Fusion technology is based on where the sensor is in the real world, and it calculates new camera positions to integrate the camera data from that point. Therefore, the subject being scanned needs to stand still, and you need to move around the person.

    Be sure to take note of the two depth views (on the right of the window) to ensure you don't get too close and don't move too fast, to prevent losing tracking. See the first part of the video we recorded back in July:

    Carmine Sirignano - MSFT

    Monday, October 6, 2014 5:31 PM
  • I noticed the same problem.

    With a Kinect V1, I used Fusion to successfully 3D scan an object with colors. Now I try to scan the same object using the same technique, and I noticed two significant differences:

    1- Tracking is much less reliable. I have to move the Kinect much slower.

    2- The colors that have been scanned in previous poses but that are not currently being seen by the Kinect V2 are messed up.

    I tried changing the parameters and switched the Camera Pose Finder ON and OFF, to no avail. The Camera Pose Finder is pretty much useless: at some point, the object starts "dancing" around. I can successfully scan the 3D shape without the Camera Pose Finder, but the colors are wrong except for the last view.

    The object is about 0.5 m × 0.5 m × 0.5 m and has irregular shapes (it's a stuffed dog). I get between 5 and 15 FPS. My video card is an NVIDIA Quadro K1100M with 2 GB of memory.

    I noticed a similar color problem with V1, but much more rarely. Any suggestions for scanning around an object?

    By the way, moving the Kinect around the object or rotating the object are the same thing from the Kinect point of view as long as the Fusion volume contains only the object that is rotated.

    Saturday, November 8, 2014 1:44 PM
  • Here are examples of 3D meshes with colors obtained with the Fusion Explorer using the Kinect V1 and Kinect V2. The problem with the Kinect V2 is the white colors that appear on the sides of the object that are not currently seen by the Kinect. The colors as viewed by the Kinect at the final pose are perfectly OK.

    Meshes of stuffed dogs obtained with Kinect V1 and V2

    I don't know if it is related to the problem shown above, but I notice that if I create a point cloud using the functions MapDepthFrameToColorSpace and then MapDepthFrameToCameraSpace, the colors in the resulting point cloud are not correctly registered (as shown by the stuffed-dog colors showing up on the back wall in the left image below). However, if I use the function MapColorFrameToCameraSpace, the colors are correctly registered in the point cloud.

    I do not understand why the two-function approach creates a color registration problem. However, that problem puts the wrong colors at the edges of the object, and during a Fusion scan, when the Kinect is moved around the object, the wrong colors (from the back wall, for example) accumulate on the sides of the object not currently seen by the Kinect. During the Fusion process with the Kinect V2, I can see that the edges of the objects in the Captured Surface Color image show a white line at the side edges of the stuffed dog.

    Saturday, November 8, 2014 10:24 PM
  • I will assume you are running the released version of the SDK; if not, please update.

    There was a bug in the sample code that I thought was fixed (I have asked the team to confirm offline). The (0, 0) pixel value was being used for some values because of a rounding/casting error, the effect of which is what you described.
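
    To illustrate the kind of bug described (a hypothetical Python sketch, not the SDK's actual sample code — the function names and image layout here are made up for illustration):

    ```python
    import math

    # Hypothetical sketch of a rounding/casting bug of the kind described
    # above (illustration only, not the SDK's sample code). The v2
    # coordinate mapper marks depth pixels with no color correspondence
    # using -infinity coordinates.

    def sample_color_buggy(color_img, x, y):
        # Emulates a C-style cast that clamps invalid/negative coordinates
        # to index 0: every unmappable depth pixel then silently picks up
        # the color of pixel (0, 0).
        xi = int(x) if math.isfinite(x) and x > 0 else 0
        yi = int(y) if math.isfinite(y) and y > 0 else 0
        return color_img[yi][xi]

    def sample_color_fixed(color_img, x, y):
        # Reject unmappable or out-of-frame coordinates instead of
        # sampling a bogus pixel.
        if not (math.isfinite(x) and math.isfinite(y)):
            return None
        h, w = len(color_img), len(color_img[0])
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < w and 0 <= yi < h):
            return None
        return color_img[yi][xi]
    ```

    With the buggy variant, every depth pixel the mapper could not project ends up painted with whatever color happens to sit at the top-left of the color frame, which matches the "wrong colors accumulating on unseen sides" symptom.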

    As for the tracking: on GPUs getting low frame rates, you will have to adjust the scanning volumes and depths. This will improve things greatly, as demonstrated by the 3D scanning incorporated in the 3D Builder application in the Windows Store.

    Carmine Sirignano - MSFT

    Tuesday, November 11, 2014 1:34 AM
  • I'm using the latest SDK, as far as I know: 2.0.1410.1900.
    Tuesday, November 11, 2014 3:07 PM
  • Hey Macdub,

    I'm curious to know how you were able to obtain a clean model from Kinect V1 as you have shown. The problems you have on V2 are there for me on V1 too. I was wondering if you are doing a whole 360-degree scan of the doll with V1, and what settings you are using for the scan.



    Thursday, November 13, 2014 8:33 PM
  • The stuffed dog was scanned over 360° with V1. I noticed some color problems when scanning with V1, especially when scanning an object over 360° (see the image below). The brown stripe in the black is approximately where the scan was started and stopped. The problem with V2 is with any color that is not currently seen by the Kinect.

    The settings were nothing special: just confine the volume as much as possible to the dog itself, with the highest resolution possible. The dog was rotated (it is important that nothing static is seen by the Kinect, so the volume definition matters), and then I moved the Kinect above the dog to scan the top.

    Fusion worked pretty well in general. We have been able to scan large volumes using robotics (see the video), by using the robot position values instead of the position calculated from the change of view. I think that scanning an object over 360° is more difficult. The V2 color problem would probably not be obvious when scanning a scene. Scanning an object over 360° from the interior (like the fuselage in the video) also created color problems. This was solved by scanning over three different volumes (~120° each) and adding the volumes together afterwards. This approach also helped with resolution: each volume could use 3× more memory than if we had scanned the whole volume at once.
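
    The memory arithmetic behind the three-volume split can be sketched like this (volume_stats is a made-up helper, not an SDK function, and the bytes-per-voxel figure is an assumption for illustration):

    ```python
    def volume_stats(voxels_per_meter, vx, vy, vz, bytes_per_voxel=8):
        # Physical extent (in meters) of a Fusion reconstruction volume,
        # and its GPU memory footprint in MB. bytes_per_voxel = 8 is an
        # assumed figure for illustration only.
        extent_m = (vx / voxels_per_meter,
                    vy / voxels_per_meter,
                    vz / voxels_per_meter)
        mem_mb = vx * vy * vz * bytes_per_voxel / 1024 ** 2
        return extent_m, mem_mb

    # A single 512^3 volume at 256 voxels/m spans a 2 m cube. Covering
    # only ~1/3 of the scene per volume lets the same memory budget buy
    # proportionally finer voxels.
    ```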


    • Edited by macdub Thursday, November 13, 2014 9:45 PM
    Thursday, November 13, 2014 9:39 PM
  • Hey,

    The problem with my Kinect V1 is that the depth and colour images are not perfectly aligned (see the image below); because of this, the colour in the model is all messed up. Don't you have this problem? I started the scan from the front and slowly rotated the model clockwise until the left side was captured, then rotated anti-clockwise to capture the right side, and stopped the scan on the right side. The left side of the model is all messed up, as it was scanned first, but the right side looks pretty good.

    Friday, November 14, 2014 6:23 PM
  • With the Kinect V1, I had similar color problems, but I cannot tell you exactly why. As I said before, rotating the object (or moving the Kinect around the object) creates that kind of problem, while scanning only one side of something from only a few angles creates fewer problems.

    Here are the tips I found helpful for making an acceptable 3D colored scan:

    1. Don't keep the object too close to the Kinect. The closer to the Kinect, the worse the registration between the color and depth cameras. I used a distance between 1.0 and 1.5 m. Farther is even better.

    2. Don't leave the scan going too long on one side of the object. Actually, it is better to move continuously, as fast as the tracking allows. Better not to lose tracking, though.

    3. Don't go over 360°. As soon as the scan is completed, stop the reconstruction.

    4. If you rotate the object, make sure that nothing is ever seen that is not rotating with the object (floor, wall, or your hand).

    Monday, November 17, 2014 4:34 PM
  • Thanks for the suggestions. I tried them, but no luck; the model is still messed up.
    Tuesday, November 18, 2014 2:13 PM
  • To make point 3 of my previous post more explicit: try to scan in one direction only (counterclockwise or clockwise), even if you don't scan over 360°. I saw that scanning the same areas more than once definitely messes up the colors.
    Monday, December 1, 2014 6:49 PM
  • Hi,

    To begin with, I must say I am very impressed by Kinect v2.

    I have been trying to get the best possible scans with the Kinect and the Fusion Explorer app on my Lenovo Yoga 2 pro (GPU = Intel HD Graphics 4400). I have noticed the following:

    - Camera tracking fails often ("Kinect Fusion camera tracking failed. Align the camera to the last tracked position"), and the higher the "voxels per meter" and "volume voxel resolution" settings are, the more failures I get. I can reproducibly manage entire scans of a person or an object at low settings only. Could you please explain what those parameters do in some detail? I can't find much detail in the documentation or the jumpstart videos. Would I have fewer camera tracking failures with a better GPU? The Intel HD Graphics 4400 is well under your specs, but I would like to understand the results I get.

    - The best results I have had were with the camera pose finder off. With the camera pose finder off, moving the camera around or turning the object on a stool appear to be equivalent; am I correct?

    Many Thanks,


    Saturday, December 6, 2014 5:32 PM
  • Yeah, a 4400 is a little underpowered for Fusion at interactive rates. Fusion works best when the motion between frames is small, as the algorithm only has to iterate a few times to converge to the best registration. With a fast GPU this assumption holds and tracking performs well; if the GPU is slower, you need to make sure the motion between frames is reduced.
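
    A rough back-of-the-envelope way to see this (the 2-degree inter-frame budget is an assumed number for illustration, not an SDK figure):

    ```python
    # Rough sketch: how frame rate limits sweep speed. Assume tracking
    # stays reliable while inter-frame rotation is under ~2 degrees
    # (assumed number for illustration, not an SDK figure).

    def max_sweep_deg_per_s(fps, max_interframe_deg=2.0):
        return fps * max_interframe_deg

    # At 30 FPS a ~60 deg/s sweep keeps frame-to-frame motion small;
    # at 5 FPS the budget drops to ~10 deg/s, so a full 360-degree
    # orbit needs at least 36 seconds.
    ```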
    Sunday, December 7, 2014 1:43 PM