I'm not sure I picked the right forum for my question, but this guess is as good as any other. If you think this should move, please let me know.
Now to the issue:
HW Environment: Intel Core i7-4700EQ Processor
Intel QM87 Chipset, 2x 4GB, PC3-12800, DDR3L, SO-DIMM,
SSD, 2.5", >=60GB
One or two Dell monitors (1920x1200) attached.
SW Environment: Windows 10 Enterprise IoT LTSB 1511 or IoT CBB 1607 (both unpatched)
Intel HD 4600 Graphics driver, version 184.108.40.20601
Measuring SW: TechPowerUp's GPU-Z 1.12.0 GPU performance measurement tool
We developed an application that renders multiple video streams via DirectShow, including our own rendering filter; the source and decoding filters are also our own make and have worked well and performed well so far. The renderer is D3D9Ex-based and runs in FlipEx presentation mode. Each of the videos we render runs its own set of worker threads and, of course, its own renderer sink.
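For context, this is roughly how a D3D9Ex device in FlipEx presentation mode is created. A minimal sketch, assuming a plain Win32 window handle and back buffer size; our actual renderer filter of course carries more state than shown here:

```cpp
#include <d3d9.h>

// Hypothetical helper: creates a windowed D3D9Ex device using the
// flip-model (FlipEx) swap effect, as our renderer sink does.
HRESULT CreateFlipExDevice(HWND hwnd, UINT width, UINT height,
                           IDirect3DDevice9Ex **ppDevice)
{
    IDirect3D9Ex *pD3D = nullptr;
    HRESULT hr = Direct3DCreate9Ex(D3D_SDK_VERSION, &pD3D);
    if (FAILED(hr)) return hr;

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed             = TRUE;                   // FlipEx requires windowed mode
    pp.BackBufferWidth      = width;
    pp.BackBufferHeight     = height;
    pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
    pp.BackBufferCount      = 2;                      // FlipEx needs at least 2 buffers
    pp.SwapEffect           = D3DSWAPEFFECT_FLIPEX;   // flip-model presentation
    pp.hDeviceWindow        = hwnd;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;

    hr = pD3D->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                              D3DCREATE_HARDWARE_VERTEXPROCESSING |
                              D3DCREATE_MULTITHREADED,
                              &pp, nullptr, ppDevice);
    pD3D->Release();
    return hr;
}
```

Since each video sink presents independently, 20 such presenters contend for the same GPU, which is why a scheduling or composition change in the OS can shift the measured load.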
Single HD monitor attached
When running a test scenario that renders 20 SD MPEG-4 sources evenly distributed across the screen on the LTSB W10 appliance, GPU load is around 70%-75%. CPU load is also evenly distributed in good shape across all cores.
When testing the same scenario on the CBB W10 appliance, GPU load already climbs to 100% when the 7th video is added, but I can still add up to 20 videos, as planned for the test, without any significant degradation in video smoothness.
Two HD monitors attached (extending the desktop to 3840x1200 pixels)
When I add a second HD monitor to the system, GPU load on the CBB system again climbs to 100%, similar to the test scenario above. Looking at the number of smoothly playing videos, the CBB setup now shows more hiccups and frame drops than the LTSB setup.
Microsoft introduced changes in the W10 CBB 1607 version that seem either to disturb the GPU load measurement, or to degrade rendering performance on larger surfaces, or both.
Note: All power and GPU settings where performance trades off against quality or power savings are set to prefer performance. Visual effects are also adjusted for best performance.
What did Microsoft change between W10 IoT LTSB 1511 and W10 IoT CBB 1607 that influences graphics performance, and how can I eventually get hold of these change(s) to bring the CBB behavior back to normal (a.k.a. LTSB)?
P.S.: When displaying only 6 videos, the LTSB setup shows a steady low GPU load of 18%, whereas the CBB setup shows lots of spikes, with GPU load briefly peaking at 100%. I will add these pictures as soon as my account has been verified.
You can see the version history for Version 1607 (OS build 14393) and Version 1511 (OS build 10586) here:
Several KB articles are listed there for each version. Search them; maybe there is some information about GPU-related changes. Hardware acceleration settings in particular may have changed.
At some point Microsoft deprecated these DirectShow interfaces (maybe your code is affected in some way by that):
You can increase performance by using the newer Media Foundation instead of the old DirectShow.
For video playback, the Media Foundation API has many advantages over DirectShow.
Where possible, I prefer using Media Foundation rather than DirectShow.
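To give an idea of the migration effort: the simplest Media Foundation playback path is MFPlay (mfplay.h), which replaces an entire DirectShow graph with one object. A minimal sketch, assuming a Win32 window handle and a media file path of your own (note that a full renderer like yours would instead use the Media Session or Source Reader; this only illustrates the API style):

```cpp
#include <mfplay.h>
#include <mfapi.h>
// Link with mfplay.lib.

// Hypothetical helper: opens a file and starts playback in the given
// window. MFPlay builds the playback topology internally, so no manual
// filter-graph assembly is needed.
HRESULT PlayFile(HWND hwnd, PCWSTR url)
{
    IMFPMediaPlayer *pPlayer = nullptr;
    HRESULT hr = MFPCreateMediaPlayer(
        url,
        TRUE,      // start playback as soon as the file is opened
        0,         // default creation options
        nullptr,   // no event callback in this sketch
        hwnd,      // video window
        &pPlayer);
    if (SUCCEEDED(hr))
        pPlayer->Release(); // real code would keep this reference alive
    return hr;
}
```

Whether this actually recovers the LTSB-level GPU load is untested here; it mainly removes the deprecated DirectShow surface from your code.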
Disclaimer: No Warranty.