Kinect Fusion Color

  • Question

  • Hello All,

    I'm trying to use Kinect Fusion to scan objects with colour. While I'm actually capturing, the colour looks good, but when I move the Kinect around the object to capture the complete surface, the colour captured at the start no longer looks right. As you can see from the pictures below, I started scanning from the right side of the object and slowly moved the Kinect around it to capture the left side as well. When I first started on the right side, the colour looked fine in the Kinect Fusion Explorer window, but once I moved the Kinect to the left side, the colour on the right side was messed up. I was wondering if anyone could help me with this, as capturing the data with colour is important for me.

    Thanks,

    Santosh

    Friday, October 4, 2013 4:47 PM


All replies

  • Which application were you using? If you use Kinect Explorer, what did the image look like while you were scanning that area? Relative to where you started, are you in the same spot? Did you ever lose tracking? What frame rate do you get when running the tool?

    The color is not a texture; it comes from color values stored on the vertices that make up the polygons. After you have exported the model you can clean some of this up in another application that is capable of editing these values.
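
    If you want to see exactly what Fusion produced, one option is to export the mesh as an ASCII PLY (if your build of the Explorer sample offers that format) and inspect the per-vertex colours directly. The sketch below is a minimal check along those lines; it assumes a simple per-line vertex layout of position, optional normal, then RGB, rather than fully parsing the property list in the header.

    // Minimal sketch: inspect per-vertex colours in an ASCII PLY exported from
    // Kinect Fusion Explorer. Assumes each vertex line is
    //   x y z [nx ny nz] red green blue
    // a robust parser should honour the property list declared in the header.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main(int argc, char** argv)
    {
        if (argc < 2) { std::cerr << "usage: plycolors mesh.ply\n"; return 1; }
        std::ifstream in(argv[1]);
        std::string line;
        size_t vertexCount = 0;
        bool hasNormals = false;

        // Scan the header for the vertex count and whether normals are present.
        while (std::getline(in, line))
        {
            if (line.rfind("element vertex", 0) == 0)
                vertexCount = std::stoul(line.substr(15));
            else if (line.find("property float nx") != std::string::npos)
                hasNormals = true;
            else if (line == "end_header")
                break;
        }

        // Read each vertex record: position, optional normal, then RGB colour.
        for (size_t i = 0; i < vertexCount && std::getline(in, line); ++i)
        {
            std::istringstream ss(line);
            float x, y, z, nx, ny, nz;
            int r, g, b;
            ss >> x >> y >> z;
            if (hasNormals) ss >> nx >> ny >> nz;
            ss >> r >> g >> b;
            if (i < 5)   // print the first few vertices as a sanity check
                std::cout << "v" << i << ": pos(" << x << "," << y << "," << z
                          << ") colour(" << r << "," << g << "," << b << ")\n";
        }
        return 0;
    }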


    Carmine Sirignano - MSFT

    Friday, October 4, 2013 11:48 PM
  • Hello Carmine,

    I'm using the Kinect Fusion Explorer-WPF. While I'm scanning the model, the colours on it look good, but after I scan the other side, the colours on the side I scanned first get messed up. I did not lose tracking and the frame rate was around 10. I even tried scanning the model with multiple repetitions, but still no luck.

    I have to automate the application, so the colours need to be accurate when I finish scanning and export the model; I can't do any manual steps to edit the colours.

    Thanks,

    Santosh

    Monday, October 7, 2013 12:58 PM
  • That is a pretty low frame rate, and it could explain artifacts from data that was missed during your scan. Try a dedicated DirectX 11 GPU such as an NVidia 6600 series or higher, or an AMD ATI 6500+ or better. There is re-localization logic that may explain why it seemed you never dropped tracking, but that could in turn affect what you are seeing.

    Carmine Sirignano - MSFT

    Tuesday, October 8, 2013 9:25 PM
  • Hello Carmine,

    We purchased a GeForce GTX 780, and now I get a frame rate of around 20 fps with colour and 30 fps without. I still have the same issue. I need a high-resolution model with good colour, so the settings I'm using are:

    Volume Max Integration Weight: 1000

    Volume Voxels Per Meter: 768

    Volume Voxels Resolution: X Axis, Y Axis, Z Axis = 640, 640, 640

    Depth Threshold: Min = 0.35m and Max = 1m

    Near Mode, Capture Color, Kinect View, Use Camera Pose Finder are enabled.
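
    For reference, these settings imply a reconstruction volume of roughly 0.83 m per axis (640 / 768) and about 262 million voxels; the quick arithmetic below sketches that. The bytes-per-voxel figures are an assumption to sanity-check against the SDK documentation, not a confirmed number.

    // Back-of-envelope check on the volume settings above.
    #include <cstdio>

    int main()
    {
        const double voxelsPerMeter = 768.0;
        const long long resX = 640, resY = 640, resZ = 640;

        // Physical extent covered by the volume on each axis, in metres.
        const double extentX = resX / voxelsPerMeter;   // ~0.83 m
        const double extentY = resY / voxelsPerMeter;
        const double extentZ = resZ / voxelsPerMeter;

        const long long voxels = resX * resY * resZ;    // ~262 million

        // Assumed storage per voxel: 2 bytes TSDF/weight + 4 bytes colour.
        const double gib = voxels * 6.0 / (1024.0 * 1024.0 * 1024.0);

        std::printf("extent per axis: %.2f x %.2f x %.2f m\n", extentX, extentY, extentZ);
        std::printf("voxels: %lld, approx GPU memory: %.2f GiB\n", voxels, gib);
        return 0;
    }

    Even a rough figure like that is useful for judging whether the colour volume fits comfortably in the card's memory alongside everything else.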

    Thanks,

    Santosh

    Friday, October 11, 2013 2:00 PM
  • How close are you getting to the target object? Fusion, given the depth/resolution/noise factors, will get you close to a quality output, but it may require additional post-processing in third-party applications. You have to find the right balance between performance and quality. If you lower the voxel resolution to 512, do you see a significant difference in the mesh density of the exported model? Have you tried the multi-static camera sample?

    You may also want to add clutter and follow the Tips section in the documentation to help with re-localization; for example, masking out background pixels, as sketched after the list below.
    http://msdn.microsoft.com/en-us/library/dn188670.aspx

    Tips
    • Add clutter at different depths to scenes when environment scanning to improve problematic tracking.
    • Mask out background pixels to only focus on the object when scanning small objects with a static sensor.
    • Don’t move the sensor too fast or jerkily.
    • Don’t get too close to objects and surfaces you are scanning – monitor the Kinect depth image.
    • As we only rely on depth, illumination is not an issue (it even works in the dark).
    • Some objects may not appear in the depth image as they absorb or reflect too much IR light – try scanning from different angles (especially perpendicular to the surface) to reconstruct.
    • If limited processing power is available, prefer smaller voxel resolution volumes and faster/better tracking over high resolution volumes (slow) and worse tracking.
    • If surfaces do not disappear from the volume when something moves, make sure the sensor sees valid depth behind it – if there is 0 depth in the image, it does not know that it can remove these surfaces, as it is possible something may be very close to the sensor, inside the minimum sensing distance, occluding the view.
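
    As a rough illustration of the background-masking tip, something along these lines can be applied to each raw depth frame before it is handed to Fusion. The frame layout (16-bit depth in millimetres, row-major) is an assumption about your capture code rather than anything Fusion-specific.

    // Sketch of the "mask out background pixels" tip: zero any depth sample
    // outside the band of interest so only the target object contributes to
    // the reconstruction.
    #include <cstdint>
    #include <vector>

    void maskDepthFrame(std::vector<uint16_t>& depthMm,
                        uint16_t minMm, uint16_t maxMm)
    {
        for (auto& d : depthMm)
        {
            // Fusion treats 0 as "no data", so clearing a pixel removes it
            // from integration rather than placing a surface there.
            if (d < minMm || d > maxMm)
                d = 0;
        }
    }

    // Example: keep only samples between 0.35 m and 1.0 m, matching the
    // depth thresholds used earlier in this thread.
    // maskDepthFrame(frame, 350, 1000);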


    Carmine Sirignano - MSFT


    Monday, October 14, 2013 7:52 PM
  • Hello Carmine,

    I always monitor the depth data and make sure that I'm not too close to the object. I also tried a resolution of 512, but I still have the colour issue. I haven't tried the multi-static camera sample. It looks like I cannot get a high-resolution colour model out of Fusion, so I was wondering: can I capture the model without colour in the Fusion sample, save the colour images separately, and do post-processing for texture mapping? Is it possible to do something of this sort? If it's possible, could you please point me in the right direction?
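
    Something like the sketch below is what I have in mind, assuming I log the camera pose Fusion is tracking for each colour frame I save, along with the colour camera intrinsics; all the types and names here are hypothetical, not Kinect SDK APIs. The idea is to project each mesh vertex into a saved colour image and sample a colour for it.

    // Rough sketch: colour a Fusion mesh after the fact by projecting each
    // vertex into a saved colour image using the logged camera pose and a
    // pinhole intrinsics model.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Mat4 { float m[4][4]; };              // row-major world-to-camera transform
    struct Intrinsics { float fx, fy, cx, cy; }; // colour camera pinhole model
    struct ColorFrame { int width, height; std::vector<uint8_t> rgb; };  // 3 bytes/px

    // Project a world-space vertex into the image; returns false if it falls
    // outside the frame or behind the camera.
    bool projectVertex(const std::array<float, 3>& v, const Mat4& worldToCam,
                       const Intrinsics& k, const ColorFrame& f,
                       std::array<uint8_t, 3>& outRgb)
    {
        // Transform the vertex into camera space.
        float x = worldToCam.m[0][0]*v[0] + worldToCam.m[0][1]*v[1] + worldToCam.m[0][2]*v[2] + worldToCam.m[0][3];
        float y = worldToCam.m[1][0]*v[0] + worldToCam.m[1][1]*v[1] + worldToCam.m[1][2]*v[2] + worldToCam.m[1][3];
        float z = worldToCam.m[2][0]*v[0] + worldToCam.m[2][1]*v[1] + worldToCam.m[2][2]*v[2] + worldToCam.m[2][3];
        if (z <= 0.0f) return false;

        // Pinhole projection into pixel coordinates.
        const int u  = static_cast<int>(k.fx * x / z + k.cx);
        const int vp = static_cast<int>(k.fy * y / z + k.cy);
        if (u < 0 || u >= f.width || vp < 0 || vp >= f.height) return false;

        const std::size_t idx = 3 * (static_cast<std::size_t>(vp) * f.width + u);
        outRgb = { f.rgb[idx], f.rgb[idx + 1], f.rgb[idx + 2] };
        return true;
    }

    In practice I suppose this would also need an occlusion test (for example, comparing against the depth frame) and blending of samples from several views, which third-party mesh tools can help with.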

    Thanks,

    Santosh

    Tuesday, October 15, 2013 2:50 PM