Merging multiple depth maps from multiple Kinect v2s?

  • Question

  • Hi, 

    I am going to work with KinectFusion for a project, and I need to use only the depth data from two or three Kinect v2 sensors. Can anyone suggest the best way to merge multiple depth maps of the same scene (captured from different angles by different sensors) and reconstruct the 3D scene?

    It's been a while since the last update on using multiple Kinect v2 sensors on a single machine. Has there been any progress? I have gone through the forum and found nothing about support for multiple Kinect v2s on a single machine (using SDK 2.0). There is libfreenect2, which might work. Does anyone have any insight on this?

    Thanks in advance.


    • Edited by mskhan89 Thursday, March 10, 2016 9:42 AM
    Wednesday, March 9, 2016 4:52 PM

All replies

  • If you have projectors, then roomalivetoolkit would be a good place to start.

    https://github.com/Kinect/RoomAliveToolkit

    If you don't, then you need some other method of determining the position of each Kinect in the scene. If your sensors are going to remain static, you could run an offline ICP using the depth points from each Kinect, or you could use a checkerboard pattern and do a simple stereo calibration. From there it depends on what you want to do. Reconstruct a full 360° model without having to rotate the object all the way around? You can tell Fusion to integrate the depth maps from each Kinect into the same volume simply by passing the worldToCamera transform for each Kinect. Overlapping regions will most probably have some artifacts, but that's fairly unavoidable with ToF cameras.
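    The idea of integrating several sensors into one volume rests on each camera having a known rigid transform into a shared world frame. A minimal, self-contained sketch of that principle (plain C++, not SDK code; the `cam2ToWorld` pose below is a made-up calibration result, a second Kinect rotated 180° about Y and placed 2 m along +Z facing the first):

    ```cpp
    #include <array>
    #include <cstdio>

    // A 4x4 rigid transform stored row-major, and a 3D point.
    using Mat4 = std::array<std::array<double, 4>, 4>;
    using Vec3 = std::array<double, 3>;

    // Apply a homogeneous rigid transform (rotation + translation) to a point.
    Vec3 transformPoint(const Mat4& m, const Vec3& p) {
        Vec3 out{};
        for (int r = 0; r < 3; ++r)
            out[r] = m[r][0] * p[0] + m[r][1] * p[1] + m[r][2] * p[2] + m[r][3];
        return out;
    }

    int main() {
        // Hypothetical calibration for a second Kinect: rotated 180 degrees
        // about the Y axis, placed 2 m along +Z, facing the first camera.
        // (This is the cameraToWorld transform, the inverse of worldToCamera.)
        Mat4 cam2ToWorld = {{
            {{-1, 0,  0, 0}},
            {{ 0, 1,  0, 0}},
            {{ 0, 0, -1, 2}},
            {{ 0, 0,  0, 1}},
        }};

        // A depth sample roughly 1 m in front of camera 2 lands near z = 1 in
        // world space, the same region camera 1 sees directly, so both depth
        // maps can be integrated into one shared volume.
        Vec3 pCam2 = {0.1, 0.0, 1.0};
        Vec3 pWorld = transformPoint(cam2ToWorld, pCam2);
        std::printf("world point: (%g, %g, %g)\n", pWorld[0], pWorld[1], pWorld[2]);
        return 0;
    }
    ```

    In the actual SDK, this per-camera transform is what you hand to Fusion when integrating each sensor's depth frame, after you have estimated it via ICP or checkerboard calibration as described above.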

    For multiple cameras on one PC, libfreenect2 would be the place to start. If you want to stay in the SDK, then you will need to use sockets and multiple PCs, as in RoomAlive.

    Thursday, March 10, 2016 9:26 AM
  • Thank you so much, Noonan.

    In fact, I am planning to build a collision avoidance system using KinectFusion; my sensors will remain static but the object will rotate. What do you think?

    Even if I use libfreenect2, is it possible to make use of RoomAlive or KinectFusion? The libfreenect2 description is ambiguous on this point. Can you please clarify?




    • Edited by mskhan89 Thursday, March 10, 2016 10:04 AM
    Thursday, March 10, 2016 10:01 AM
    I haven't played with RoomAlive much, but you can easily copy the libfreenect2 arrays into the Kinect Fusion image types. For example, if depthFrame is an OpenCV Mat holding a depth frame from libfreenect2, then you can create a depth float image for Fusion by substituting something like this:

    hr = NuiFusionDepthToDepthFloatFrame(
        reinterpret_cast<UINT16*>(depthFrame.data),  // raw depth pixels from libfreenect2
        m_paramsCurrent.m_cDepthWidth,
        m_paramsCurrent.m_cDepthHeight,
        m_pDepthFloatImage,                          // output depth float frame for Fusion
        m_paramsCurrent.m_fMinDepthThreshold,
        m_paramsCurrent.m_fMaxDepthThreshold,
        m_paramsCurrent.m_bMirrorDepthFrame);
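    Conceptually, that conversion takes raw UINT16 depth in millimeters, scales it to float meters, and zeroes samples outside the min/max thresholds so Fusion ignores them. A hedged, self-contained sketch of that idea (illustration only; `depthToDepthFloat` is a hypothetical helper, not the SDK implementation, which also handles mirroring and writes into a NUI_FUSION_IMAGE_FRAME):

    ```cpp
    #include <cstdint>
    #include <vector>

    // Convert raw Kinect depth (UINT16 millimeters) to float meters,
    // zeroing invalid samples and samples outside [minDepthM, maxDepthM].
    std::vector<float> depthToDepthFloat(const std::vector<uint16_t>& depthMm,
                                         float minDepthM, float maxDepthM) {
        std::vector<float> out(depthMm.size(), 0.0f);
        for (size_t i = 0; i < depthMm.size(); ++i) {
            float meters = depthMm[i] * 0.001f;   // mm -> m
            if (meters >= minDepthM && meters <= maxDepthM)
                out[i] = meters;                  // keep in-range samples
            // zero (invalid) or out-of-range samples stay 0
        }
        return out;
    }

    int main() {
        // Four samples: invalid (0), too near, in range, too far.
        std::vector<uint16_t> raw = {0, 200, 1500, 9000};
        std::vector<float> f = depthToDepthFloat(raw, 0.5f, 4.5f);
        // f is approximately {0, 0, 1.5, 0}
        return 0;
    }
    ```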

    Collision avoidance sounds fun; I would take a look at what HTC/Valve are doing with the Vive and their Chaperone system.

    https://www.youtube.com/watch?v=vnciEkUDnhs


    Thursday, March 10, 2016 11:45 AM
  • Thank you for your prompt reply. 
    Thursday, March 10, 2016 12:15 PM