BitmapCache and RenderTargetBitmap

    Question

  • The following code does not seem to take advantage of the bitmap cache

    ------
    // uiElement already has a bitmap cache:
    // uiElement.CacheMode = new BitmapCache();

    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap(
        (int)uiElement.ActualWidth, (int)uiElement.ActualHeight,
        96, 96,
        PixelFormats.Pbgra32);

    renderTargetBitmap.Render(uiElement);    // still very slow

    uint[] arrBits = new uint[renderTargetBitmap.PixelWidth * renderTargetBitmap.PixelHeight];
    renderTargetBitmap.CopyPixels(arrBits, 4 * renderTargetBitmap.PixelWidth, 0);
    ------

    I have more or less the same performance with and without BitmapCache.

    Is there a way to take advantage of the new BitmapCache feature to get the pixel data of a UIElement?
    Thursday, November 26, 2009 9:30 AM

All replies

  • The answer here is no.  The BitmapCache API is designed to cache your content (when rendering in hardware) in video memory, meaning it stays resident on your GPU.  This saves you the cost of re-rendering that content when drawing it to the screen.  RenderTargetBitmap renders your content in software to a buffer in system memory.  In fact, RenderTargetBitmap is implemented by creating a whole new instance of the compositor and native rendering stack, so BitmapCache will have no appreciable effect at all when using RenderTargetBitmap.  There is no way to render your content in hardware and put the output in a RenderTargetBitmap in system memory.  It's expensive to pull bits back from video memory to system memory so we did not offer any way of doing what you're asking for with the new BitmapCache API, but the addition is a good suggestion for a future version.
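
    For illustration, a minimal sketch of the on-screen usage BitmapCache is designed for (the element name complexElement is a placeholder):

    ------
    // On-screen caching: the element is rasterized once into a GPU texture and
    // recomposed from that texture on subsequent frames.
    complexElement.CacheMode = new BitmapCache
    {
        RenderAtScale = 1.0,          // rasterize at 1:1; raise this for zoom scenarios
        SnapsToDevicePixels = false
    };

    // A RenderTargetBitmap, by contrast, always re-renders the element in
    // software into system memory, so the cache above does not help it:
    // renderTargetBitmap.Render(complexElement);    // unaffected by CacheMode
    ------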
    Monday, December 14, 2009 9:02 PM
  • Brendan, I'd like to second the vote for being able to easily pull down bits from the GPU (after hardware rendering).

    While you're correct that it's not the fastest operation, there are scenarios where it's needed, such as using effects for local video/image editing.

    Also, many GPUs nowadays can get data off the card at > 1 GB/s actual speed (not theoretical), and at that speed many things become practical.

    Regards,
    Lee
    Tuesday, January 19, 2010 2:22 PM
  • re: Brendan,

    Could you distinguish between 1) pulling bits back from video memory to system memory and 2) setting a rendered RenderTargetBitmap as the source of an Image?

    In WPF, drawing a huge/complex GeometryGroup is fast, but when I have some overlay animation over the object, as in Tamir Khason's "Performance appliance of RenderTargetBitmap", the redraw of the huge/complex Geometry makes the overall WPF performance laggy.

    In my case, those Geometry objects were a plain background image (graphics) with no interaction with the overlay animation.

    My first try was to freeze the RenderTargetBitmap, but it still takes relatively high CPU usage for WPF to redraw the underlying geometry. If WPF could treat the frozen geometry as a raster image buffer, the overall performance should be greatly improved.

    The next solution I tried was to convert the geometry to a DrawingVisual and convert that back to a raster image with RenderTargetBitmap. It does improve the overlay animation redraw performance (a rough sketch of this is shown below).

    However, placing the raster image back on the canvas seems to take longer during the initial rendering.
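
    A rough sketch of the workaround described above (complexGeometryGroup, canvas, and the size values are placeholders):

    ------
    // Draw the static background geometry into a DrawingVisual ...
    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        dc.DrawGeometry(Brushes.Black, null, complexGeometryGroup);   // the frozen, static geometry
    }

    // ... rasterize it once in software ...
    int width = 800, height = 600;    // size of the background area (example values)
    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);
    bitmap.Freeze();

    // ... and put the result back on the canvas as a plain Image, behind the animated overlays.
    var backgroundImage = new Image { Source = bitmap };
    canvas.Children.Insert(0, backgroundImage);
    ------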


    Lastly, what I suggest to Microsoft is to implement a new raster property/marker on the Geometry object to let WPF handle it as a raster image/graphics object, without the extra workaround effort of using RenderTargetBitmap.


    Saxon
    Friday, January 29, 2010 3:13 AM
  • The reason creating the RenderTargetBitmap and placing it into the scene is slower to draw initially is because it renders in software on the main UI thread, which means the rendering operation itself is slower and it blocks other UI operations from executing while the expensive geometry is being rendered.  The suggestion you're making is exactly what we've added in .NET 4.0 with the new CacheMode API.  If you set CacheMode = new BitmapCache() on your expensive geometry object, it will be rendered into a texture one time with hardware acceleration by the render thread.  Then when you animate objects over the top, the cached bitmap will be redrawn to the screen instead of re-rasterizing all the expensive geometry content, which should greatly improve this scenario for you.

    http://msdn.microsoft.com/en-us/library/ee230083(VS.100).aspx
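
    A rough sketch of this suggestion (the names geometryPath and overlayEllipse are illustrative; geometryPath hosts the expensive geometry, and overlayEllipse is an overlay animated on a Canvas above it):

    ------
    // Cache the expensive geometry: it is rasterized once, in hardware, by the render thread.
    geometryPath.CacheMode = new BitmapCache();

    // Animating content over the top now just recomposes the cached texture
    // each frame instead of re-rasterizing geometryPath.
    var move = new DoubleAnimation(0, 300, TimeSpan.FromSeconds(2))
    {
        AutoReverse = true,
        RepeatBehavior = RepeatBehavior.Forever
    };
    overlayEllipse.BeginAnimation(Canvas.LeftProperty, move);
    ------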

    Thursday, February 18, 2010 9:24 PM
  • Brendan, thanks for your follow-up.

    Just after my last suggestion, I noticed that the CacheMode feature is already available in .NET 4 Beta 2.
    However, with the Beta 2 BitmapCache CacheMode on, the rendered graphics look blurry.
    Do I need to enable pixel snapping or layout rounding as well?
    Or is this already improved in the .NET 4 RC?


    P.S. If the Geometry is not going to change once it has been frozen [by Geometry.Freeze()], that already ensures the object can be considered a raster (cacheable) object, so why does .NET (whether 3.5 or 4.0) still use expensive geometry re-rendering for the dirty rect (at least until the geometry is resized)?


    Saxon
    Saturday, February 27, 2010 4:38 PM
  • Hi Brendan,

    I like what you have done so far with accelerating the rendering using HW for the normal case; great progress. But I believe that many advanced use cases are still hard to enable using WPF. I would love to use WPF in TV production, but then I need fast access to the pixels for transfer to an SDI TV-out card (a Decklink DirectShow source filter in my case). Today all rendering has to happen using RenderTargetBitmap, which is hard to use for anything but static content (SW rendering only). So I was hoping to be able to use BitmapCache -> RenderTargetBitmap -> native buffer -> DirectShow in .NET 4. But if I read you correctly, this will NOT be possible?

    Can you explain a little about why you don't expose any kind of efficient SW access to surfaces? Direct3D doesn't have these restrictions, but then that is much harder to use for animating graphics. WPF has everything we need and then some, if only we could get access to those nice pixels ;)

    Would you consider adding a simple UIElement.RenderToBitmap()/BitmapCache.CopyToBitmap() that utilizes HW rendering and the bitmap cache (which can be locked)?

    Or why not just optimize this use case using the existing API of BitmapCache -> RenderTargetBitmap? It would not require any new API but would still allow a lot of advanced WPF use cases, like WPF -> DirectX/DirectShow/MediaFoundation.

    But ANY solution that would allow HW rendering and getting the pixels into a Bitmap/byte[] would be fine, pleeeease ;)

    Of course, a DirectShow source filter WPF render target would be THE solution for me, but maybe not so generic.

    BTW, the solution has to allow ARGB (note the alpha) rendering, otherwise it can't be used for keying.
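
    For reference, a sketch of what the software-only path looks like today (visualRoot and SendFrameToFilter are placeholders for the WPF content and the native sink; needs System.Runtime.InteropServices for Marshal):

    ------
    // Pbgra32 is premultiplied BGRA, so the alpha channel needed for keying is preserved.
    var rtb = new RenderTargetBitmap(1920, 1080, 96, 96, PixelFormats.Pbgra32);
    rtb.Render(visualRoot);                              // software rasterization on the UI thread

    int stride = rtb.PixelWidth * 4;
    int bufferSize = stride * rtb.PixelHeight;
    IntPtr buffer = Marshal.AllocHGlobal(bufferSize);
    try
    {
        // Copy the pixels straight into unmanaged memory for the DirectShow source filter.
        rtb.CopyPixels(new Int32Rect(0, 0, rtb.PixelWidth, rtb.PixelHeight), buffer, bufferSize, stride);
        SendFrameToFilter(buffer, bufferSize);           // hypothetical P/Invoke into the filter
    }
    finally
    {
        Marshal.FreeHGlobal(buffer);
    }
    ------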

    /BR
    /Marcus

    Saturday, February 27, 2010 8:23 PM
  • Please! I need this too. It's crazy; I'm going to have to re-write my application in another framework, as it's impossible to get the bits off the screen into an image!
    Tuesday, May 18, 2010 9:11 PM
  • Is it possible that this issue has been resolved? I thought that I would be able to render my control to a bitmap with hardware acceleration, and now it seems that I would have to rewrite my project too. Is there any way of obtaining a hardware-rendered bitmap/pixel array from the visuals?
    Friday, April 08, 2011 6:41 PM