Kinect Fusion Preview

  • Question

  • Hello guys,

    I want to use Kinect Fusion, but the version released in the SDK for Windows captures only a small part of the scene, without being able to produce a continuous 3D reconstruction of the surroundings as advertised in Microsoft's videos (e.g.

    While searching I found the following video: . The user says that "we need an invitation from MS to get access to the Kinect Fusion preview". How do I apply for this? Where can I find it? Is there any open-source version of the code that Microsoft uses for Kinect Fusion in its advertised demos?

    A similar version exists in PCL (KinFu), but there are lots of restrictions on using it (NVIDIA, CUDA, C++, etc.), and judging from the released videos it is not as good as Microsoft's.

    Thank you in advance,


    Wednesday, October 7, 2015 2:37 PM

All replies

  • Have you tried moving the Kinect around?

    Try the Kinect Fusion Explorer samples (D2D or WPF) in the SDK and experiment with changing the volume size and resolution. Make sure you run the samples on a computer with a good graphics card, as Kinect Fusion requires a lot of GPU memory.

    Also take care with non-static scenes.

    Thursday, October 8, 2015 9:58 AM
  • Thank you very much for the answer!

    Yes, I have tried all of the above. The 3D reconstruction is limited to a bounded area. I can enlarge it by adjusting the resolution and so on, but I cannot, for example, scan a whole room. Is there any way to overcome this limitation?

    Thank you,


    Thursday, October 8, 2015 10:58 AM
  • I'm no Fusion expert, but I believe it's a tradeoff between GPU memory, voxel size, and voxel count. A GPU with more memory may give you an advantage.

    Btw, the "we need an invitation from MS" reference comes from posts during the pre-release SDKs. In the first few versions Fusion wasn't included yet, but you could get an early version if you asked. Since the release, Fusion has been rolled into the SDK.


    Thursday, October 8, 2015 11:22 AM
  • As Brekel mentioned, there is a tradeoff between resolution and volume size, caused by Fusion requiring a lot of GPU memory.

    KinFu Large Scale gets around this problem by detecting when the Kinect is close to the edge of the current volume and then swapping that part of the data from the GPU out to system memory, thereby freeing up GPU memory.

    Voxel hashing is another large-scale fusion algorithm. It stores the sparse volume data efficiently, meaning it doesn't have to do many swaps between system and GPU memory. The fewer the swaps, the less chance you have of drifting volumes. It works well, and I've got it running with both Kinect v1 and v2.

    There are more of these types of algorithms out there; I think researchers at TU München have some nice ones, but I don't have links, sorry. They will all require a bit of work to get up and running and won't work "out of the box" like the samples in the SDK.

    Thursday, October 8, 2015 12:29 PM