Reducing Computational Requirement to Process Point Cloud

  • General discussion

  • The Kinect 2.0 for Windows has the following minimum specs:

      • 64-bit (x64) processor.
      • Physical dual-core 3.1 GHz (2 logical cores per physical core) or faster processor.
      • Dedicated USB 3.0 controller.
      • 4 GB of RAM.
      • Graphics card that supports DirectX 11.
      • Windows 8 or 8.1, or Windows Embedded 8.

    These specs essentially rule out integration with any type of mobile robotic platform. In order to use the Kinect 2.0 for object identification, the minimum specs need to be reduced. Is anyone aware of a way to dynamically scale the number of points in the point cloud that the Kinect processes, in order to reduce the computational requirements for things like large-object detection or mobile applications? Alternatively, are there ways to redefine the number of points per frame in the Kinect firmware?
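    One client-side workaround (not a firmware setting) is to decimate the depth frame after acquisition, before back-projecting it into a point cloud: keeping every Nth pixel along each axis cuts the point count by roughly N². The sketch below simulates this on a random 512x424 depth frame; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are placeholder values, not calibrated Kinect v2 parameters.

    ```python
    import numpy as np

    def decimate_depth(depth, stride=2):
        """Keep every `stride`-th pixel in both axes, reducing the
        point count by roughly stride**2."""
        return depth[::stride, ::stride]

    def depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0, stride=1):
        """Back-project a (decimated) depth frame into an Nx3 point cloud.
        Intrinsics are illustrative placeholders, not calibrated values."""
        d = decimate_depth(depth, stride)
        h, w = d.shape
        # Pixel coordinates must be scaled back to full-resolution positions
        u = np.arange(w) * stride
        v = np.arange(h) * stride
        uu, vv = np.meshgrid(u, v)
        z = d.astype(np.float32)
        x = (uu - cx) * z / fx
        y = (vv - cy) * z / fy
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Kinect v2 depth frames are 512x424 pixels, ~0.5-4.5 m range in mm
    frame = np.random.randint(500, 4500, size=(424, 512), dtype=np.uint16)
    full = depth_to_points(frame)               # 217088 points
    quarter = depth_to_points(frame, stride=2)  # 54272 points (~1/4)
    ```

    Note this only thins the cloud after the runtime has already produced the depth frame; it does not lower the sensor's own processing requirements.
    
    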

    Friday, December 5, 2014 3:08 AM

All replies

  • There is no support for this, since the depth information is generated from the IR data acquired by the sensor. Depth is computed by the runtime, and to ensure minimal latency this requires a GPU to process that amount of IR data.

    For that type of embedded system, you may want to look into a structured-light sensor (Occipital Structure, Kinect v1) or some other depth sensor better suited to low-power platforms.

    Carmine Sirignano - MSFT

    Friday, December 5, 2014 3:46 AM