Color segregation for background removal

  • General discussion

  • Hi,

    I would like to modify the Kinect Explorer tutorial (WPF) to add color segregation. The user should enter a range of color values (e.g. RGB values), and only pixels (from the depth and color streams) within the specified range should be stored, displayed, and exported as an STL or OBJ file.

    So, the aim is to distinguish between the object and the background. The object always has a uniform color (with only small color variations).

    What is the best way to modify the tutorial to add the desired feature? My first guess is that some lines of code have to be changed in the MapColorToDepth method, but I don't know if this is the right entry point. Maybe there is an alternative (and easier) way to implement this feature?
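    To make the idea concrete, the per-pixel test I have in mind would look roughly like this (a sketch only; `InColorRange`, the buffer names, and the range variables are mine, not from the sample):

    ```csharp
    // Sketch: names (InColorRange, colorPixels, depthPixels, rMin...) are
    // illustrative, not from the Kinect Explorer sample.
    private static bool InColorRange(byte[] bgra, int index,
        byte rMin, byte rMax, byte gMin, byte gMax, byte bMin, byte bMax)
    {
        // Kinect color frames are BGRA: blue, green, red, alpha.
        byte b = bgra[index];
        byte g = bgra[index + 1];
        byte r = bgra[index + 2];
        return r >= rMin && r <= rMax
            && g >= gMin && g <= gMax
            && b >= bMin && b <= bMax;
    }

    // Inside the per-pixel loop of MapColorToDepth: invalidate both the depth
    // sample and the color pixel when the color falls outside the range, so
    // only the object survives into the exported mesh.
    for (int i = 0; i < depthPixels.Length; ++i)
    {
        if (!InColorRange(colorPixels, i * 4, rMin, rMax, gMin, gMax, bMin, bMax))
        {
            depthPixels[i] = 0;              // 0 = invalid depth
            colorPixels[i * 4 + 0] = 0;      // blank the color pixel too
            colorPixels[i * 4 + 1] = 0;
            colorPixels[i * 4 + 2] = 0;
        }
    }
    ```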

    Best regards

    Matthias     

    Sunday, March 29, 2015 5:21 PM

All replies

  • How to segment the body is demonstrated in the CoordinateMappingBasics sample. Kinect does not provide 3D mesh model data unless you scan and export using Fusion. Depending on your requirements, this will require more work on your part to implement.

    Creating 3D mesh information from the depth data requires mapping the depth to world space, which gives you a true world-space representation of the depth data. Using frameworks like openFrameworks, Cinder, or Unity you can then build a mesh from the depth points, depending on your requirements. That is outside the scope of what the SDK provides.
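    As a rough illustration of the depth-to-world mapping step (a sketch assuming the Kinect SDK 2.0 managed API; variable names are placeholders):

    ```csharp
    // Sketch (Kinect SDK 2.0 managed API; variable names are placeholders).
    // Convert a raw depth frame into 3D camera-space points; a mesh can then
    // be built from these points in openFrameworks, Cinder, Unity, etc.
    ushort[] depthData = new ushort[depthWidth * depthHeight];
    CameraSpacePoint[] cameraPoints = new CameraSpacePoint[depthData.Length];

    depthFrame.CopyFrameDataToArray(depthData);
    sensor.CoordinateMapper.MapDepthFrameToCameraSpace(depthData, cameraPoints);

    // Each cameraPoints[i] holds (X, Y, Z) in meters. Points produced from
    // invalid depth come back with non-finite coordinates and should be
    // skipped when triangulating the mesh.
    ```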


    Carmine Sirignano - MSFT

    Monday, March 30, 2015 5:39 PM
  • Thanks for the information. I'm actually using the Fusion framework as presented in the Kinect Explorer example; I assumed this would be easier to handle.

    Do you perhaps know an alternative way to separate the object from the background using the Kinect Fusion framework? The (3D) object is quite flat, with a small curvature but sharp edges (a curved plate). It always has a uniform color, as described above. The background is rough and differently colored.

    To distinguish between object and background, the depth-plane (clipping-plane) slider is not sufficient, because artefacts from the background always remain at the edges of the plate.

    One option would be to export the STL mesh and remove the artefacts in additional software, but this is complex and time-consuming (and has to be done by hand every time).
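    An automated alternative I am considering is to mask the depth buffer with the color test before Fusion ever sees the frame (a sketch; all names here are mine):

    ```csharp
    // Sketch (all names illustrative): invalidate depth samples whose mapped
    // color lies outside the object's color range before the frame is passed
    // to Fusion, so background artefacts at the plate's edges never enter
    // the reconstruction volume.
    for (int i = 0; i < rawDepth.Length; ++i)
    {
        int c = i * 4;                     // BGRA layout in mappedColor
        byte b = mappedColor[c];
        byte g = mappedColor[c + 1];
        byte r = mappedColor[c + 2];
        bool inRange = r >= rMin && r <= rMax
                    && g >= gMin && g <= gMax
                    && b >= bMin && b <= bMax;
        if (!inRange)
        {
            rawDepth[i] = 0;               // invalid depth is ignored by Fusion
        }
    }
    ```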

    Tuesday, March 31, 2015 7:37 PM
  • Fusion provides a way to clip the depth at a certain distance. There are also scanning techniques that keep unwanted objects out of the volume: for example, place the object you want to scan in the middle of a turntable. That is just technique, no code required; it simply takes practice.
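    With the SDK 2.0 managed Fusion API, this clipping happens when the raw depth is converted to the float frame that Fusion integrates (a sketch; the 0.5 m / 1.0 m band below is only an example, and the variable names are placeholders):

    ```csharp
    // Sketch (Kinect SDK 2.0, Microsoft.Kinect.Fusion; the clip band is an
    // example, not a recommendation). Depth values outside
    // [minDepthClip, maxDepthClip] are treated as invalid and never enter
    // the reconstruction volume.
    reconstruction.DepthToDepthFloatFrame(
        rawDepth,           // ushort[] raw depth frame data
        depthFloatFrame,    // FusionFloatImageFrame to receive the result
        0.5f,               // minDepthClip in meters
        1.0f,               // maxDepthClip in meters
        false);             // mirrorDepth
    ```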

    Carmine Sirignano - MSFT

    Thursday, April 2, 2015 12:50 AM