Fatal Flaw in the Large Object Heap?

    Question

  • Hi All,

    After playing with the GC, I found the following interesting behavior. The problem domain I typically work in requires the allocation of very large arrays for image processing. We have discovered some issues when our processes are long-running: we run out of memory and cannot reclaim it, and our only recourse is to restart our processes. In an effort to discover why, I uncovered the following: the LOH (Large Object Heap) is never compacted. I think this is a fatal flaw in the GC. I understand MS's stated reason for this, but shouldn't there be some way to compact the LOH if necessary, at least a method call that forces the issue?

    Does anyone know of anything that can be done? Any workaround is better than killing the process...

    Here is a demo program that shines a very bright light on the issue. If anyone sees anything wrong with the logic, please respond.

    Here is a typical run's output:

    Pass: 1     Array Size (MB): 0910  0910
    Pass: 1 Max Array Size (MB): 0920  0000 System Out of Memory...

    Pass: 2     Array Size (MB): 0910  0910
    Pass: 2 Max Array Size (MB): 0920  0000 System Out of Memory...

    Pass: 3     Array Size (MB): 0010  0010
    Pass: 3 Max Array Size (MB): 0020  0000 System Out of Memory...

    Pass: 4     Array Size (MB): 0010  0010
    Pass: 4 Max Array Size (MB): 0020  0000 System Out of Memory...

    Pass: 5     Array Size (MB): 0010  0010
    Pass: 5 Max Array Size (MB): 0020  0000 System Out of Memory...

    Press any key to exit...

    As you can see, by the time the program is finished with the LOH, you can't even get 20 MB allocated!

    Here is the source code:

    using System;
    using System.Text;
    using System.Threading;

    namespace Burn_LOH
    {
        class Program
        {
            static void Main (string[] args)
            {
                for (int count = 1; count <= 5; ++count)
                    AllocBigMemoryBlock (count);
                Console.Write ("\nPress any key to exit...");
                while (Console.KeyAvailable == false)
                    Thread.Sleep (250);
            }

            static void AllocBigMemoryBlock (int pass)
            {
                const int MB = 1024 * 1024;
                byte[] array = null;
                long maxmem = 0;
                int arraySize = 0;

                // Start each pass with a clean heap.
                GC.Collect ();
                GC.WaitForPendingFinalizers ();
                while (true)
                {
                    try
                    {
                        arraySize += 10;                    // grow the request by 10 MB each attempt
                        array = new byte[arraySize * MB];   // large arrays land on the LOH
                        array[arraySize * MB - 1] = 100;    // touch the end so the pages are committed
                        GC.Collect ();
                        GC.WaitForPendingFinalizers ();
                        maxmem = GC.GetTotalMemory (true);
                        Console.Write ("Pass: {0}     Array Size (MB): {1:D4}  {2:D4}\r", pass, arraySize, Convert.ToInt32 (maxmem / MB));
                    }
                    catch (System.OutOfMemoryException)
                    {
                        GC.Collect ();
                        GC.WaitForPendingFinalizers ();
                        maxmem = GC.GetTotalMemory (true);
                        Console.Write ("\n");
                        Console.Write ("Pass: {0} Max Array Size (MB): {1:D4}  {2:D4} {3}\r\n\n", pass, arraySize, Convert.ToInt32 (maxmem / MB), "System Out of Memory...");
                        break;
                    }
                    finally
                    {
                        array = null;   // drop the reference so the next collection can reclaim it
                        GC.Collect ();
                        GC.WaitForPendingFinalizers ();
                    }
                }
            }
        }
    }

    Monday, October 16, 2006 5:31 PM

Answers

  • Hi Keith

    Your sample app appears to be hitting a bug in v2.0 of the .NET CLR that we discovered and fixed for the Vista release. 

    For your PictureBox/Image application, I recommend trying out the suggestions I gave above (pooling objects if possible, judiciously calling GC.Collect), and seeing if that helps.

    -Chris

    Wednesday, October 18, 2006 12:27 AM

All replies

  • Hi Keith

    For performance reasons, there is no way to force the GC to compact the LOH. 

    That being said, there are ways to mitigate the OOM problem you are seeing.

    First of all, if you aren't already, you should upgrade to v2.0, which has fixed multiple LOH and OOM issues.

    Secondly, if your application uses many short-lived objects, instead of allowing old ones to become garbage and creating new ones, consider maintaining a large object pool and reusing no-longer-needed objects.  This should eliminate the OOM issues you are seeing.
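
    As a rough sketch of what I mean (the class and names here are just an illustration, not a framework API), a fixed-size buffer pool can be quite small:

    using System;
    using System.Collections.Generic;

    // Minimal fixed-size buffer pool: rent a buffer instead of new'ing one and
    // return it when done, so the same LOH allocations are reused instead of
    // becoming garbage and fragmenting the heap.
    class BufferPool
    {
        private readonly Stack<byte[]> free = new Stack<byte[]>();
        private readonly int bufferSize;

        public BufferPool (int bufferSize)
        {
            this.bufferSize = bufferSize;
        }

        public byte[] Rent ()
        {
            lock (free)
                return free.Count > 0 ? free.Pop () : new byte[bufferSize];
        }

        public void Return (byte[] buffer)
        {
            if (buffer.Length != bufferSize)
                throw new ArgumentException ("Buffer does not belong to this pool.");
            lock (free)
                free.Push (buffer);
        }
    }

    The Rent/Return discipline is the whole trick: the pool's buffers are allocated once on the LOH and never become garbage.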

    Here's a blog entry that gives more details about the LOH:
    http://blogs.msdn.com/maoni/archive/2004/06/15/156626.aspx

     

    Hope that helps

    -Chris

    Monday, October 16, 2006 6:35 PM
  • Hi Chris,

    Thanks for your reply. 

    We are using the CLR 2.0 framework. Yes, I have heard about the "performance" reasons. Hey, but if I want to take the time to compact the LOH, why not let me? I guess that's the crux of my tag-line argument... 

    Unfortunately, the typical solutions don't work. Specifically, PictureBox controls, Images, and Bitmaps don't allow sufficient control over their memory allocations for us to use pooled objects; they all do array allocations under the covers. So the only way I can see to use pooled objects would be to create our own suite of image objects from the ground up that would use our pooled buffer arrays.... 
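
    The closest thing to a middle ground I can see is the Bitmap constructor that wraps caller-owned pixel memory; here is an untested sketch of the idea (our names, not a library API):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    // Wraps a pooled byte[] as a Bitmap so GDI+ doesn't allocate its own pixel
    // buffer. The buffer must be at least stride * height bytes and must stay
    // pinned for the lifetime of the Bitmap.
    class PooledBitmap : IDisposable
    {
        private GCHandle handle;
        public Bitmap Bitmap;

        public PooledBitmap (byte[] pooledBuffer, int width, int height)
        {
            int stride = width * 4;   // 32bpp ARGB; stride must be a multiple of 4
            handle = GCHandle.Alloc (pooledBuffer, GCHandleType.Pinned);
            Bitmap = new Bitmap (width, height, stride,
                                 PixelFormat.Format32bppArgb,
                                 handle.AddrOfPinnedObject ());
        }

        public void Dispose ()
        {
            Bitmap.Dispose ();
            handle.Free ();   // the buffer goes back to our pool, not to the GC
        }
    }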

     

    Keith

    Monday, October 16, 2006 7:29 PM
  • Hi Keith

    Have you considered judicious use of GC.Collect?  Here's a blog entry that outlines when it's appropriate (and even beneficial) to call GC.Collect manually: http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx.

    Since you've indicated that you'd be willing to take the perf hit to compact the LOH, you should be ok with the perf hit of a full heap collection at specific points in your app once the large objects have died.

    Let me know how this approach works for you.

    -Chris

    Monday, October 16, 2006 10:10 PM
  • Hi Chris,

    Did you look at the code?

    There are lots of GC.Collect() calls throughout the test program.

    Also, no amount of GC.Collect() will compact the LOH.

    It doesn't appear you can...

    Keith

     

    Monday, October 16, 2006 11:04 PM
  • Hi Keith

    I was suggesting calling GC.Collect at appropriate times in the application that you're using Bitmaps and PictureBoxes.  The only place you call GC.Collect in your sample code that would have any effect is in the finally block (since there is nothing to collect in the try and catch blocks).

    You are correct, no amount of calling GC.Collect will compact the LOH.  However, in the sample code you posted, the reason you are getting an OutOfMemoryException is not due to a fragmented LOH.

    At the point of the OOM, if we attach a debugger and load the SOS debugger extensions, we can see the heap is not fragmented (the largest free block I get with !dumpheap -stat Free is 500 bytes).

    The OOM is actually being thrown when the code attempts to allocate an array of size greater than the amount of contiguous free Virtual Memory.  For example, on my machine, according to !address, I have: Largest free region: Base 06c80000 - Size 6d940000 (1795328 KB), and I see an OOM at the point where I try to allocate an array bigger than 1760 MB.  On 32-bit, this is expected (I assume you're on the 32-bit runtime).
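
    If you want a rough managed-code approximation of that largest-contiguous-region number without attaching a debugger, something along these lines works (just a sketch, for diagnostics only):

    // Drop this into the test program: it binary-searches for the largest byte[]
    // the process can currently allocate. Every failed attempt throws
    // OutOfMemoryException, so treat it strictly as a diagnostic, never as
    // production code.
    static int LargestAllocatableMB ()
    {
        const int MB = 1024 * 1024;
        int lo = 0, hi = 2047;   // managed arrays are capped below 2 GB on 32-bit
        while (lo < hi)
        {
            int mid = (lo + hi + 1) / 2;
            try
            {
                byte[] probe = new byte[mid * MB];
                probe[probe.Length - 1] = 1;   // touch the end so pages are committed
                lo = mid;
            }
            catch (OutOfMemoryException)
            {
                hi = mid - 1;
            }
        }
        return lo;
    }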

    If you suspect your OOMs are caused by a fragmented heap, attach a debugger at the point where the OOM is thrown, load the SOS debugger extension and run the !dumpheap -stat Free command.  That will give you an idea as to the state of the heap.

    For further reading, I recommend Rico's excellent article about tracking down managed memory leaks: http://blogs.msdn.com/ricom/archive/2004/12/10/279612.aspx and the new MSDN article "Investigating Memory Issues": http://msdn.microsoft.com/msdnmag/issues/06/11/CLRInsideOut/default.aspx

     

    Hope that helps

    -Chris

     

    Tuesday, October 17, 2006 5:12 PM
  • Hi Chris, thanks for your input.

    I call GC.Collect() several times to cause the GC to update its memory statistics. As I understand it, the stats on used memory are only updated when a collection cycle runs. Therefore, I assumed I had to call GC.Collect() to get an updated answer from the GC.GetTotalMemory (true) calls.

    I understand what you are saying about the largest contiguous VM block, which on my machine would explain the first OOM at about 960 MB. How do you interpret the subsequent OOM failures? By pass 5, when I try to allocate a 20 MB array it fails, while the GC.GetTotalMemory (true) calls return less than 1 MB of allocated memory.

    It is these subsequent OOMs that cause me to believe that something is amiss with the LOH / VM memory blocks.

    BTW I have not tried to attach the SOS debugger.

    What are your thoughts?

    Keith 

    Tuesday, October 17, 2006 5:41 PM
  • Hi Keith

    GC.GetTotalMemory's return value is accurate at the point it was called, no GC.Collect is required (you may be thinking of some of the .NET Memory Performance Counters).
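
    To illustrate the difference between the two flavors of the call (a minimal sketch):

    using System;

    class MemStats
    {
        static void Main ()
        {
            // false: quick, possibly stale figure.
            // true: lets the GC collect (and finalizers run) before reporting,
            //       so no explicit GC.Collect/WaitForPendingFinalizers is needed.
            long approx  = GC.GetTotalMemory (false);
            long settled = GC.GetTotalMemory (true);
            Console.WriteLine ("approx:  {0:N0} bytes", approx);
            Console.WriteLine ("settled: {0:N0} bytes", settled);
        }
    }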

    I'm not seeing the behavior your test exhibits (my OOMs are always happening after 1GB+).  I recommend you use a debugger to determine if the VM is too fragmented to allocate a large array (!address), and SOS to determine the state of the managed heap (!dumpheap, !eeheap). 

    Please post your findings here, I'd be interested in what you discover.

    -Chris

    Tuesday, October 17, 2006 6:33 PM
  • Hi Chris,

    I will post the details of the debug session when completed.

    What OS are you testing under? From what I can tell, 64-bit OSes don't have this trouble, and from one report I have seen, Vista RC1 doesn't either. My testing has been with 32-bit XP with all of the latest SPs and updates.

    That does lead one to believe that this issue occurs when the VM manager and the LOH manager begin interacting (in a negative way) with each other on some 32-bit OSes. 

    Keith

     

    Tuesday, October 17, 2006 6:46 PM
  • Chris,

    I'm experiencing the exact same issue. If I watch the !address output at the moment the OOM is thrown (after the second iteration, which fails when allocating a 10 MB array), I notice that large regions of memory are free but have a PAGE_NOACCESS protection level.

    ...

        05620000 : 05620000 - 57a70000
                        Type     00000000
                        Protect  00000001 PAGE_NOACCESS
                        State    00010000 MEM_FREE
                        Usage    RegionUsageFree

    ....

        5d12a000 : 5d12a000 - 19266000
                        Type     00000000
                        Protect  00000001 PAGE_NOACCESS
                        State    00010000 MEM_FREE
                        Usage    RegionUsageFree

    ....
    Largest free region: Base 05620000 - Size 57a70000 (1436096 KB)

    After each iteration, the largest free region stays at the same amount (same for the SOS !vmstat output), but it looks like the protected regions remain protected until all free regions become PAGE_NOACCESS protected.

    The question is: who disabled all access to these committed pages, and why?

    OS XP SP2, framework version 2.0.50727

    Willy.

    MVP C#

     

     

    Tuesday, October 17, 2006 9:49 PM
  • OK, I see this is decommitted memory, freed by a VirtualFree(...., PAGE_NOACCESS) call, so nothing to worry about. But the question remains: why can't the CLR memory manager allocate from these larger free regions?

    Willy.

    Tuesday, October 17, 2006 10:22 PM
  • I changed Keith's original code like this:
     
    ..
          while (true)
          {
            MemoryFailPoint memFailPoint = null;
            try
            {
              arraySize += 10;
              // MemoryFailPoint (System.Runtime, new in v2 of the framework) checks
              // up front that enough memory is available, throwing
              // InsufficientMemoryException before the allocation is attempted.
              using (memFailPoint = new MemoryFailPoint(arraySize))
              {
                array = new byte[arraySize * MB];
                array[arraySize * MB - 1] = 100;
                maxmem = GC.GetTotalMemory (true);
                Console.Write ("Pass: {0} Array Size (MB): {1:D4} {2:D4}\r", pass, arraySize, Convert.ToInt32 (maxmem / MB));
              }
            }
            catch (InsufficientMemoryException e)
            {
              Console.WriteLine("\nExpected InsufficientMemoryException thrown.  Message: " + e.Message);
              maxmem = GC.GetTotalMemory (true);
              Console.Write ("Pass: {0} Max Array Size (MB): {1:D4} {2:D4} {3}\r\n\n", pass, arraySize, Convert.ToInt32 (maxmem / MB), "System Out of Memory...");
              break;
            }  
    ...
     
    So here I used the MemoryFailPoint (v2 of the framework), which correctly throws an InsufficientMemoryException. The only difference is that here the code works as expected; that is, each iteration allocates the same amount of memory before throwing.
    So it looks like (as I always thought was the case) you can't reliably recover from OOM exceptions; all you can do is throw away the AppDomain and start again, or terminate the process. The MemoryFailPoint was added to the framework precisely to prevent such issues. Am I right, Chris?
     
    Willy.
     
     


    Tuesday, October 17, 2006 11:09 PM
  • Hi Chris,

    Playing around, I find things get even stranger. If I put the program into a loop, the maximum array size will cycle up and down. At its maximum I can create an array of about 960 MB; at its minimum it's about 10 MB. If you let it continue, after about half a second of only getting 10 MB per pass, the size jumps back up to 960 MB for a few tries, then back down to 10 MB. This pattern repeats for as long as you allow the program to run (at least on a machine that behaves like mine). To see this behavior, change Main to the following code snippet:

            static void Main (string[] args)
            {
                int pass=0;
                Console.Write ("\nPress any key to stop...");
                while (Console.KeyAvailable == false)
                {
                    ++pass;
                    AllocBigMemoryBlock (pass);
                }
                Console.ReadKey ();
                Console.Write ("\nPress any key to exit...");
                while (Console.KeyAvailable == false)
                {
                    Thread.Sleep (250);
                }
            }

    Tuesday, October 17, 2006 11:10 PM
  • Hi Keith

    I was using Vista and couldn't repro your problem.  When I try on XP, I get the same OOM at 20 MB.  I will investigate and let you know what I find.


    Thanks for bringing this to our attention.

    -Chris

    Tuesday, October 17, 2006 11:54 PM
  • Hi Keith

    Your sample app appears to be hitting a bug in v2.0 of the .NET CLR that we discovered and fixed for the Vista release. 

    For your PictureBox/Image application, I recommend trying out the suggestions I gave above (pooling objects if possible, judiciously calling GC.Collect), and seeing if that helps.

    -Chris

    Wednesday, October 18, 2006 12:27 AM
  • Hi Chris,

    Thanks for following me down this rabbit hole.

    Do you know if this issue will be addressed in some future update to XP's CLR?

    BTW, did you see the cyclic behavior I described? Out of curiosity, do you know what might cause it? 

    Keith

    Wednesday, October 18, 2006 3:40 PM
  • Hi Willy,

    Thanks for your input.

    I tried the MemoryFailPoint and it did prevent the described aberrant behavior. Good call. 

    I also agree that once you hit this OOM it seems impossible to recover; restarting the process seems to be about all one can do. I don't know if app domains use the same heap or if they each get their own, but in our case I am not sure an unloadable secondary app domain would help: even if they were in separate heaps, both heaps would still be in the same process address space, and once it gets marked as "no access", it's game over...

    You also raise some good questions about the free space in the VM. Maybe it's a debugging tripwire someone put in place to catch accesses after object release? Who knows...

    Keith

     

    Wednesday, October 18, 2006 3:54 PM
  • Hi Keith

    Unfortunately at this point I can't say if or when this fix will be released for XP.

    I did see the cyclic behavior, and that is part of the bug: basically, a miscalculation of the amount of available heap space.

    -Chris

    Wednesday, October 18, 2006 4:57 PM
  • Chris/Keith

    I think I have a similar problem. I'd be very keen to know when a patch is available. Is there an MS bug number (or anything like that) that I can use to track this? Does the problem happen on Windows 2003 (32-bit)?

    BTW: Allocating the same-size object each time didn't result in the OOM problem. It sounds to me like a classic memory-fragmentation issue. My first thought is that the GC isn't freeing up all the memory, and it's more a case of no contiguous memory than being truly out of memory. No doubt this is overly simplistic, and it doesn't explain why passes 1 and 2 are quite different from passes 3-5.

    Graeme

    Thursday, November 16, 2006 7:44 AM
  • I was told that it is a bug in the XP build of CLR 2.0, fixed for Vista; I don't know about CLR 3.0, and it doesn't happen on 64-bit versions of the OS. No info on a patch for the broken versions.

    See my article on CodeProject for some workarounds...

    http://www.codeproject.com/useritems/Large_Objects____Trouble.asp

    good luck

    Keith

     

    Friday, November 17, 2006 12:20 AM
  • I'm having the exact same problem. I'm working on a server application that needs some very large arrays describing a set of unique terms (words). One of the arrays is a byte array of concatenated, UTF-8-encoded, length-prefixed strings (the content of the terms), while the other is an array of structs holding other attributes for the terms. The struct size is 40 bytes, and the average term size is around 12 chars, typically 1 byte each in UTF-8.

    We're talking about something like 10 million terms.

    The problem arose when growing the arrays: obviously, just copying the contents to a new, larger array is insufficient; a swap file must be used, because it might (understandably) not be possible to have both of the large arrays in memory at one time.

    But even with the use of a swap file I get the problems, even when plenty of free space is available: because some memory had been allocated after the first array, I couldn't get a large enough contiguous block.

    The solution used was arrays of arrays of smaller size (2^15 structs, around 1.3 megabytes each), but the price is a large performance hit, as these arrays are used extensively in some tight loops.

    A forced LOH compact would be preferable in our situation.
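
    For anyone curious, the chunking looks roughly like this (a simplified sketch, not our production code):

    // Sketch of the array-of-arrays idea: no single huge contiguous block is
    // ever needed, and freed chunks are all the same size, so they're perfectly
    // reusable. The price is an extra shift/mask on every access -- the
    // performance hit mentioned above.
    class ChunkedArray<T>
    {
        private const int ChunkBits = 15;             // 2^15 elements per chunk
        private const int ChunkSize = 1 << ChunkBits;
        private const int ChunkMask = ChunkSize - 1;
        private readonly T[][] chunks;
        public readonly long Length;

        public ChunkedArray (long length)
        {
            Length = length;
            int chunkCount = (int) ((length + ChunkSize - 1) >> ChunkBits);
            chunks = new T[chunkCount][];
            for (int i = 0; i < chunkCount; i++)
                chunks[i] = new T[ChunkSize];
        }

        public T this[long index]
        {
            get { return chunks[(int) (index >> ChunkBits)][(int) (index & ChunkMask)]; }
            set { chunks[(int) (index >> ChunkBits)][(int) (index & ChunkMask)] = value; }
        }
    }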

    Monday, December 18, 2006 3:36 PM
  • Hi,

    Did you read the CP article I wrote? http://www.codeproject.com/csharp/Large_Objects____Trouble.asp It details a workaround that lets you know when you have hit a memory wall. The LOH becomes corrupted only if you exceed its available memory. Also, have you looked into the /3GB switch in the boot.ini file, used together with the /LARGEADDRESSAWARE flag? It can give you a fair amount more memory. Or 64-bit Windows, for that matter? Finally, CLR 3.0 supposedly fixes the issue as well...

    Good luck,

    Keith

    Tuesday, December 19, 2006 1:57 AM
  •  Chris Lyon - MS wrote:

    Hi Keith

    Your sample app appears to be hitting a bug in v2.0 of the .NET CLR that we discovered and fixed for the Vista release. 

    For your PictureBox/Image application, I recommend trying out the suggestions I gave above (pooling objects if possible, judiciously calling GC.Collect), and seeing if that helps.

    -Chris

    Now, is it just me who finds this disturbing? Are there any more issues resolved in the CLR 2.0 on Vista that aren't yet resolved on Windows XP?

    Tuesday, December 19, 2006 3:06 AM
  • Yes, I read it; I figured it was your article since I recognized the code you posted here. MemoryFailPoint is probably exactly what we need for "grow" scenarios, since they are quite rare. It doesn't solve the problem itself, of course :) But it lets you exit cleanly.

    I might even be able to defragment all of the LOH arrays in the system on memory failure.

    We tried the /3GB switch once, but the system refused to start. (Not sure why? The /3GB switch only affects the virtual memory layout of processes; it shouldn't affect how much memory is actually available to the kernel?)

    Do you know if the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set by default for .NET executables?

     

    Tuesday, December 19, 2006 7:57 AM
    Hi...

    The /3GB switch splits the 4 GB process address space as 3 GB for the program / 1 GB for the OS, versus the typical 2/2 split. As long as your process doesn't need lots of I/O from the OS, using /3GB should not affect performance very much. Typically, not being able to boot with /3GB is caused by a driver conflict: some drivers can't deal with the positive-to-negative transitions that /3GB implies (think signed vs. unsigned 32-bit ints). No 32-bit EXE has the /LARGEADDRESSAWARE flag set by default; .NET EXEs will run with the flag set in their EXE, but an external Microsoft tool must be used to set it. Google for it and you will find it; you can also get more info as to why you could not start with /3GB active, as well as the true meaning of all these flags...
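
    For what it's worth, one tool that can set the flag is editbin from the Visual C++ toolset (run from a Visual Studio command prompt); whether it's the one being referred to above is my assumption:

        rem Sets the large-address-aware bit on an already-built EXE
        editbin /LARGEADDRESSAWARE MyApp.exe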

    happy hunting,

    K

    Wednesday, December 20, 2006 8:52 PM
  • Thanks, that helped a lot. I had been googling for IMAGE_FILE_LARGE_ADDRESS_AWARE and .NET but found nothing useful. Googling for largeaddressaware gave me much better results; one of them was this page: http://tonesnotes.com/blog/2005/10/avoiding_out_of_memory_excepti.html
    Wednesday, December 20, 2006 10:55 PM
  • I could not agree more, Keith, with your suggestion to allow us to compact the Large Object Heap.  Here we are 3 years later -- and still the LOH won't let you compact it.

    Why can't Microsoft provide us with a simple API in .NET called:

    GC.CompactLargeObjectHeap()

    even if this method caused the application to hang for 10 seconds or more -- it beats the heck out of crashing and requiring an application restart to clean up the memory!

    There are a great many applications that use 3rd-party libraries which make allocations on the LOH. For example, even Microsoft's own XNA fragments the heck out of the LOH every time you do "new Texture2D()" or "Texture.FromFile()", without any remedy!

    So if you are trying to create a "Game Engine" in .NET -- this is a real pain in the neck: our LOH leaks away memory without any way for us to stop it. All we can do is "slow it down", unless we decide to "make our own library" and forget about XNA. Surely this is not what Microsoft wants!

    Many months of headache would be resolved for us (and others!) if MS would simply provide us with this one method; I'll say it again:

    GC.CompactLargeObjectHeap()

    And while I'm making this suggestion, I would suggest that MS take it one step further and allow this command:

    GC.DoLargeObjectHeapCompactionWhenOutOfMemory = true;   // would be false by default

    So we would just set this to true for "Visual3D Game Engine" (our product), and then when fragmentation causes the first OOM exception, .NET would automatically compact the LOH and then retry the allocation!  Now that would be aligned with the .NET mottos "it just works" and "easy to use".  It would only cause this "performance hit" for "those who choose it", to avoid the alternative: an OOM crash forcing a restart of the application!

    Please, Microsoft -- do this for us!  The solution we are requesting would be harmless to those who don't want to use it, and would provide a refreshing salvation to those who really do want to use it. Please, please provide this in .NET 4.0+.

     

    Wednesday, October 21, 2009 5:28 AM
  • Hi, I'm trying to find out: if I were to change .NET Framework 4.0 64-bit to 2.0 64-bit, would it damage my programs or slow my laptop down? I'm using Windows 7. Any advice or info on this would be really helpful. Thanks.

    Friday, July 02, 2010 9:16 PM
  • +1 for GC.CompactLargeObjectHeap()

    We've had a massive issue with the fact that a double[1001] array gets put on the LOH! We pass a lot of double[1024] arrays around, as well as using a lot of genuinely large (tens of MB) integer arrays for images and other data. We've had to implement a buffer-pooling mechanism for all of these buffers, and it's a total PITA.

    Even after all this buffer pooling, we still get OOM exceptions after we've been running for several days -- not least because we try to avoid releasing buffers from the pools, since there's no way around the fact that doing so will probably fragment the LOH.

    PLEASE give us the option of taking the performance hit to compact the LOH (at our own request). That way we could use MemoryFailPoint to detect an OOM condition, release all our pooled buffers, and compact the LOH.

    Ours is an interactive client application; recycling the process is NOT an option. Unfortunately, we cannot (yet) switch to 64-bit either, because we need to do (performance-sensitive) interop with Delphi code (for which there is no 64-bit compiler).

    Thursday, April 28, 2011 10:04 AM

  • Many months of headache would be resolved for us (and others!) if MS would simply provide us with this one method; I'll say it again:

    GC.CompactLargeObjectHeap()

    And while I'm making this suggestion, I would suggest that MS take it one step further and allow this command:

    GC.DoLargeObjectHeapCompactionWhenOutOfMemory = true;   // would be false by default

    So we would just set this to true for "Visual3D Game Engine" (our product), and then when fragmentation causes the first OOM exception, .NET would automatically compact the LOH and then retry the allocation!  Now that would be aligned with the .NET mottos "it just works" and "easy to use".  It would only cause this "performance hit" for "those who choose it", to avoid the alternative: an OOM crash forcing a restart of the application!

    Please, Microsoft -- do this for us!  The solution we are requesting would be harmless to those who don't want to use it, and would provide a refreshing salvation to those who really do want to use it. Please, please provide this in .NET 4.0+.

    +1 for this suggestion.  +1000 actually.

    It makes absolutely no sense to us why this is not an option.  The amount of copying of blocks of data .NET already does is huge... fortunately our hardware is really good at that these days.  Running out of memory prematurely is the worst thing in the world for our users -- defeating the whole point of managed code!

    (Sorry for resurrecting an old thread... but there's good info in it, and unfortunately it is still relevant today.)
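
    Update: .NET 4.5.1 adds an opt-in, one-shot LOH compaction along exactly these lines. A minimal sketch of the usage:

    using System;
    using System.Runtime;

    class CompactLohDemo
    {
        static void Main ()
        {
            // Opt-in, one-shot LOH compaction (GCSettings lives in System.Runtime).
            // The setting resets itself to Default after the next blocking full GC.
            GCSettings.LargeObjectHeapCompactionMode =
                GCLargeObjectHeapCompactionMode.CompactOnce;
            GC.Collect ();
        }
    }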

    Friday, April 05, 2013 5:20 PM