Why is the CPU utilization of Kinect for Windows v2 so high?

  • Question

  • The CPU of our laptop is a 3rd-gen Core i7, and we are only reading the depth and color frames.

    There are two processes related to the SDK: the Kinect Service and the application itself.

    The CPU utilization of the Kinect Service is about 12%, and that of the application is about 18%.

    Why?

    Will you optimize the SDK later?

    BTW, we want to use the Kinect v2 on an Atom E3845 or Celeron 2980U, so we really care about performance.

    Thanks a lot.

    Tuesday, July 22, 2014 2:40 PM

Answers

  • Not all of the Kinect processing is performed on the GPU, and we still need non-trivial amounts of CPU in Kinect Service.

    We also have this recommendation because of the work we do in client apps upon request (color conversion, coordinate mapping, ...) and to give the app enough room to do interesting work. For example: we've seen machines that can run our "Basics" color sample just fine, but if the app tries to iterate over the color pixels it causes significant frame drops (because the app is still using an old frame and so can't receive the new ones when they come in).


    Jesse Kaplan [msft]

    Wednesday, July 23, 2014 3:50 PM
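The frame-drop behavior described above can be pictured as a single-slot handoff: the service can deliver a new frame only if the app has released the previous one. The sketch below is not the SDK's actual mechanism, just a minimal toy model of that described behavior; `FrameSlot`, `deliver`, and `release` are hypothetical names.

```cpp
#include <cassert>

// Toy model of a single-slot frame queue: the service side can hand over
// a new frame only when the app side has released the previous one.
// A frame that arrives while the slot is still occupied is counted as dropped.
struct FrameSlot {
    bool occupied = false;  // app is still holding a frame
    int dropped = 0;        // frames lost because the slot was busy

    bool deliver() {        // service side: try to hand over a new frame
        if (occupied) { ++dropped; return false; }
        occupied = true;
        return true;
    }
    void release() { occupied = false; } // app side: done with the frame
};
```

In this model, an app that iterates slowly over pixels while holding the frame keeps `occupied` true across arrivals, so subsequent frames are dropped, matching the symptom described in the answer.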

All replies

  • A Kinect Service CPU utilization of about 12% is what we expect, and you should not expect it to drop significantly before the RTM release of the SDK.

    In the app, the Kinect APIs use almost no CPU resources unless the app calls APIs that perform work. The most common of these are the color conversions (ColorFrame.CopyConvertedFrameDataToArray) and use of the CoordinateMapper to map entire frames or a large number of pixels. Other APIs like Face, HD Face, VGB, and Fusion also use significant in-app processing power, but they are used much less commonly than the former. All of these technologies are already highly tuned, and we do not expect additional performance improvements in the RTM release of our SDK.

    The minimum specs currently require a 3.1 GHz or faster dual-core machine (see the developer section of the FAQ). We do expect this to drop before release, but it is unlikely to drop low enough to support those Atom or Celeron machines.

    Please let us know if you have any other questions.

    Thanks,

    The Kinect Team


    Jesse Kaplan [msft]

    Tuesday, July 22, 2014 3:44 PM
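To see why full-frame color conversion is expensive, consider the arithmetic any YUY2-to-BGRA conversion has to perform: at 1920×1080 and 30 fps that is roughly 62 million pixels per second. The sketch below is not the SDK's implementation, just an illustrative scalar version assuming the standard BT.601 integer conversion; the SDK does this work for you inside the copy-converted calls.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp an intermediate value into the 0..255 byte range.
static uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

// Convert packed YUY2 (Y0 U Y1 V = two pixels in four bytes) to BGRA
// using BT.601 integer math -- roughly the per-pixel work any
// YUY2 -> BGRA conversion must do, shown here in plain scalar form.
void yuy2ToBgra(const uint8_t* src, uint8_t* dst, int pixelPairs) {
    for (int i = 0; i < pixelPairs; ++i) {
        int y0 = src[0], u = src[1], y1 = src[2], v = src[3];
        int d = u - 128, e = v - 128;                         // chroma offsets
        for (int y : {y0, y1}) {                              // two pixels share U/V
            int c = 298 * (y - 16);
            dst[0] = clamp8((c + 516 * d + 128) >> 8);        // B
            dst[1] = clamp8((c - 100 * d - 208 * e + 128) >> 8); // G
            dst[2] = clamp8((c + 409 * e + 128) >> 8);        // R
            dst[3] = 255;                                     // A (opaque)
            dst += 4;
        }
        src += 4;
    }
}
```

Even at a handful of integer operations per pixel, this loop alone accounts for hundreds of millions of operations per second on a full-resolution color stream, which is why the replies below suggest avoiding the conversion entirely when possible.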
  • Thanks for your reply.

    We will check our application code.

    And can you tell us when we can get the RTM release of the SDK?

    We want to capture the input depth and color frames, and design a detection-and-tracking algorithm to process them. We can optimize our algorithm to fit the CPU. Is it possible to do this on an Atom or Celeron?

    After all, we have installed the SDK on the Atom E3845, and the depth sample launches.

    Tuesday, July 22, 2014 4:01 PM
  • The only hard requirements we have are USB 3.0, Windows 8 or 8.1 (for their improved USB 3.0 support), and a DX11-capable GPU. Everything else, including the CPU, is just a recommendation and supportability statement. If you build your app for a machine and find that it runs on that machine and meets your performance requirements, then you can use that machine for your app.

    We haven't announced an RTM date for the SDK other than "this summer".

    Thanks,

    The Kinect Team


    Jesse Kaplan [msft]

    Tuesday, July 22, 2014 4:09 PM
  • Hi, you mentioned that a DX11-capable GPU is a hard requirement. Can you tell us what the CPU is used for? Computing in the Kinect Service? Computing in APIs called by apps? Or just rendering?
    Wednesday, July 23, 2014 2:34 PM
  • Depth + color: conversion from the raw color format (YUY2) to an RGB format such as BGRA using the CopyConvertedFrameDataToArray API is the big CPU consumer (as Jesse indicated). If you can use the YUY2 format directly, e.g. the Y levels only for detection in color space, that would help on slow processors. Jesse also mentioned coordinate mapping.

    Performance on slow CPUs: last year's Atom went 64-bit quad-core, so organizing threads will help. 64-bit gives measurable improvements for imaging-type applications, although unfortunately many OEMs ship Atom systems with 32-bit Windows. SIMD is also helpful (if you are using managed code, note that SIMD support for .NET is currently in preview; in C++ it is already available).

    The delayed 14nm process for next-gen Atom and Core systems should make Kinect 2 on low-end systems more feasible in 2015. Personally, I'd take a Core i5 as a realistic entry level for current-generation systems; even equivalent Core i3 devices don't leave much headroom.

    Thursday, July 24, 2014 12:13 AM
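The "use Y levels only" suggestion above amounts to skipping the color conversion entirely and reading the luma bytes straight out of the packed buffer. A minimal sketch, assuming the standard YUY2 byte layout (Y0 U Y1 V, so luma sits at every even byte offset); `extractLuma` is a hypothetical helper name, not an SDK API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Extract only the luma (Y) samples from a packed YUY2 buffer.
// YUY2 stores two pixels in four bytes as Y0 U Y1 V, so the Y bytes
// sit at every even offset. This is a plain copy -- no per-pixel
// color math -- which is why it is far cheaper than BGRA conversion.
std::vector<uint8_t> extractLuma(const std::vector<uint8_t>& yuy2) {
    std::vector<uint8_t> luma;
    luma.reserve(yuy2.size() / 2);
    for (size_t i = 0; i < yuy2.size(); i += 2) {
        luma.push_back(yuy2[i]); // Y0, Y1, Y0, Y1, ...
    }
    return luma;
}
```

A grayscale detector can then run on the luma plane directly, touching half the input bytes and doing none of the BT.601 arithmetic; this kind of strided copy is also an easy target for the SIMD instructions mentioned above.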