Kinect Fusion Explorer - Export the mesh centered on the 3D model

  • Question

  • Hello,


    I'm exploring Microsoft's KinectFusion using the Kinect Fusion Explorer sample, written in C#, from the Kinect for Windows SDK 1.8 Developer Toolkit. I noticed that after scanning and exporting a model to a file, the actual scanned object ends up far away from the [0,0,0] coordinate. I believe this distance from [0,0,0] to the object is the distance from the Kinect to the object I was scanning. It looks like this:

    [screenshot: the exported mesh, with the scanned object offset far from the (0,0,0) origin]

    I want the center of the exported model to be the center of the scanned object, so that the axes pass through the middle of the object. This shift needs to happen programmatically, so fixing the issue by hand in a 3D modeling program won't work.

    I tried looking at:

    1. The ColorMesh and Mesh classes, to see if they offer some way to manipulate the model-space transform of the object or to query where the center of the scanned object is, but they don't seem to offer any function of that sort.
    2. ColorReconstruction, which holds the volume before it gets turned into a mesh, but I did not find a function that would simply translate the volume or report where the center of the scanned object is.


    I also thought about collecting all the vertices and calculating the center of the mesh myself, but given that scanned models like the one above have 300K+ vertices, this may be computationally expensive.

    Please let me know if you have any ideas on any of the below, preferably in C#:

    • how to change or move an object's transform so that it sits at the object's center;
    • how to quickly find the center of a scanned scene (maybe the toolkit provides some GPU access?);
    • or ideas on how to cut down the vertex count of a scanned object from KinectFusion. 300K vertices for a 30 cm x 30 cm x 10 cm object like the one above seems a tad overkill.

    Thanks!

    Saturday, November 16, 2013 2:44 AM

Answers

  • The calculated positions for the resulting mesh are based on the Fusion volume that is scanned. The origin (0,0,0) of the volume is at the top front left corner (screen-space Z heading in). Fusion is not aware of the objects themselves; that knowledge would have to come from the user. You would have to re-base the vertex positions around an arbitrary point yourself. You can do this in MeshLab.
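
    If you need to do it in code rather than MeshLab, a minimal C# sketch of the re-basing step could look like this (untested; it assumes the 1.8 toolkit's ColorMesh.GetVertices() and Vector3 types, and a 'center' point you pick yourself):

        // Sketch: translate every vertex so that 'center' becomes the new origin.
        // Assumes Microsoft.Kinect.Toolkit.Fusion's ColorMesh and Vector3 types.
        using System.Collections.Generic;
        using Microsoft.Kinect.Toolkit.Fusion;

        static List<Vector3> RebaseVertices(ColorMesh mesh, Vector3 center)
        {
            var rebased = new List<Vector3>();
            foreach (Vector3 v in mesh.GetVertices())
            {
                // Subtracting the chosen point moves the model so that
                // the point ends up at (0,0,0) in the exported file.
                rebased.Add(new Vector3 { X = v.X - center.X, Y = v.Y - center.Y, Z = v.Z - center.Z });
            }
            return rebased; // write these out in place of the original vertices
        }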

    Carmine Sirignano - MSFT

    Tuesday, November 19, 2013 4:02 AM

All replies

  • Carmine,

    Thanks for the information that the 0,0,0 point is at the front left corner of the Kinect viewspace.

    Fusion indeed does not know of objects. But it does know of 3D points and triangle meshes made from those points. So, user knowledge would not be needed.

    Again, I can't do this using a 3D modeler like MeshLab, since I need to do it programmatically in code. Do you mean that MeshLab has an SDK where I could call a function that would do this efficiently? I tried to find such a thing, but could not.

    I tried to iterate through all the points, find the min/max along each axis, take the midpoint as the center, and re-base the vertex positions myself by brute force, roughly as in the sketch below, but for 300K points it became computationally expensive.
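
    The pass looked roughly like this (a sketch only; 'mesh' stands for my ColorMesh, assuming its GetVertices() method):

        // Sketch: find the axis-aligned bounding box of the mesh and take
        // its midpoint as the center point to re-base around.
        float minX = float.MaxValue, minY = float.MaxValue, minZ = float.MaxValue;
        float maxX = float.MinValue, maxY = float.MinValue, maxZ = float.MinValue;

        foreach (Vector3 v in mesh.GetVertices())
        {
            if (v.X < minX) minX = v.X;  if (v.X > maxX) maxX = v.X;
            if (v.Y < minY) minY = v.Y;  if (v.Y > maxY) maxY = v.Y;
            if (v.Z < minZ) minZ = v.Z;  if (v.Z > maxZ) maxZ = v.Z;
        }

        var center = new Vector3
        {
            X = (minX + maxX) / 2f,
            Y = (minY + maxY) / 2f,
            Z = (minZ + maxZ) / 2f
        };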

    Do you have any suggestions for changing the center point with the Kinect SDK while the data is still a ColorMesh, for quickly finding the center point of a mesh, or for cutting down the vertex count?

    Tuesday, November 19, 2013 6:48 PM
  • Interpreting the object space is something Fusion is not equipped to do. If you are scanning a fixed area, you can make a quick guesstimate of the center point of that area and apply the offset to your vertex values. Keep in mind that once you have exported the vertex values, 3D modeling programs are better equipped for model-space work.

    If you are already iterating over all the points, you can throw more processors at it using Parallel.For:

    http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.for(v=vs.110).aspx
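
    For example, here is a sketch of the bounding-box pass using the thread-local Parallel.For overload (untested; it assumes you have copied the vertices into a Vector3[] first):

        // Sketch: each worker thread folds vertices into its own partial
        // bounding box (localInit/body); the partial boxes are then merged
        // under a lock (localFinally).
        using System;
        using System.Threading.Tasks;

        struct Box { public float MinX, MinY, MinZ, MaxX, MaxY, MaxZ; }

        static Box EmptyBox()
        {
            return new Box
            {
                MinX = float.MaxValue, MinY = float.MaxValue, MinZ = float.MaxValue,
                MaxX = float.MinValue, MaxY = float.MinValue, MaxZ = float.MinValue
            };
        }

        static Box FindBounds(Vector3[] verts)
        {
            object gate = new object();
            Box total = EmptyBox();

            Parallel.For(0, verts.Length,
                () => EmptyBox(),              // localInit: a fresh box per thread
                (i, state, box) =>             // body: fold one vertex in
                {
                    Vector3 v = verts[i];
                    if (v.X < box.MinX) box.MinX = v.X;  if (v.X > box.MaxX) box.MaxX = v.X;
                    if (v.Y < box.MinY) box.MinY = v.Y;  if (v.Y > box.MaxY) box.MaxY = v.Y;
                    if (v.Z < box.MinZ) box.MinZ = v.Z;  if (v.Z > box.MaxZ) box.MaxZ = v.Z;
                    return box;
                },
                box =>                         // localFinally: merge into the total
                {
                    lock (gate)
                    {
                        total.MinX = Math.Min(total.MinX, box.MinX);
                        total.MinY = Math.Min(total.MinY, box.MinY);
                        total.MinZ = Math.Min(total.MinZ, box.MinZ);
                        total.MaxX = Math.Max(total.MaxX, box.MaxX);
                        total.MaxY = Math.Max(total.MaxY, box.MaxY);
                        total.MaxZ = Math.Max(total.MaxZ, box.MaxZ);
                    }
                });

            return total; // the midpoint of this box is your center estimate
        }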

    The alternative to that is GPU computation, which is not part of Kinect but can be done with C++ AMP. Lowering the resolution of the scan will result in a lower vertex count, but you will also get a lower-quality scan.
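
    On the resolution point: the vertex count is driven by the parameters you create the volume with, so a coarser volume means fewer vertices. For illustration (signature and values per my reading of the 1.8 toolkit; treat it as a starting point, not a recommendation):

        // Sketch: a coarser volume crosses the surface with fewer voxels,
        // so the exported mesh has fewer vertices (and less detail).
        using Microsoft.Kinect.Toolkit.Fusion;

        // 256 voxels per meter over a 384^3 volume; halving voxels-per-meter
        // roughly halves the linear resolution of the scan.
        var volumeParameters = new ReconstructionParameters(256f, 384, 384, 384);

        ColorReconstruction volume = ColorReconstruction.FusionCreateReconstruction(
            volumeParameters,
            ReconstructionProcessor.Amp,   // GPU (C++ AMP) processing
            -1,                            // default GPU device index
            Matrix4.Identity);             // initial world-to-camera transform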


    Carmine Sirignano - MSFT

    Tuesday, November 19, 2013 10:32 PM