Maximum Memory Allocation

    Question

  • I've encountered an interesting memory allocation limit in a C# project, and I'm hoping to find an explanation.  When I started seeing System.OutOfMemoryException exceptions thrown in my application, I was sure I had not allocated enough memory to be hitting the 2 GB user-space limit for 32-bit processes.  So I created a simple test application to see how much memory I could allocate, and to my surprise I started to get exceptions after only 1.4 GB (internally counted) when allocating 4,096 bytes at a time.  If I allocate 1 MB at a time, I can get up to 1.8 GB, but that is still less than the theoretical 2 GB I should be able to allocate.  My first thought was CLR overhead, but that would be 200 - 600 MB of overhead.  Even with the 4,096-byte block size, that's only around 350,000 allocations to reach the 1.4 GB limit I am seeing.  I highly doubt it takes 600 MB to store a List<> of 350,000 byte arrays.  My only other thought is that the CLR is keeping some memory reserved for various purposes, but again, 600 MB is a lot of memory to keep off-limits.  (A sketch of the test loop is at the end of this post.)

    Does anyone here know if this is CLR overhead, a limitation in the way .NET handles memory in a 32-bit environment, or some settings I'm missing somewhere?  Being limited to 1.4 - 1.8 GB implies a 10 - 30% overhead with the .NET memory manager, and that just seems awfully high!

    As a sanity check, I ran the same test with native C++ and my allocation limit was 2 GB, as expected.

    The machine I'm using has 4 GB of RAM on Vista 64, so the physical memory is available.
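
    For reference, the test loop is essentially of the following form (a simplified sketch, not the exact test code):

    Code Snippet

    using System;
    using System.Collections.Generic;

    class AllocTest
    {
        static void Main()
        {
            const int chunkSize = 4096;                     // bytes per allocation
            List<byte[]> chunks = new List<byte[]>(600000); // pre-sized so the list itself does not reallocate mid-test
            long total = 0;

            try
            {
                while (true)
                {
                    chunks.Add(new byte[chunkSize]);        // keep a reference so the GC cannot reclaim it
                    total += chunkSize;
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Allocated {0:N0} bytes before the OOM", total);
            }
        }
    }
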
    Monday, March 17, 2008 2:09 AM

All replies

  • I think you're encountering two limitations of the managed heap.

     

    Old-school memory managers implement a flat memory model where every malloc/new is placed at the lowest memory address where the requested size fits.

    - allocate 10 bytes --> address 0

    - allocate 20 bytes --> address 10

    - free the first 10 bytes --> heap fragmentation; only a 10-byte hole remains at address 0.

    This gets gradually worse as more malloc/free cycles are completed.

     

    Modern memory managers are more sophisticated.  They will, for example, allocate all small elements from a separate region.  This prevents the situation where a single small allocation, say an Int32, cuts the maximum allocation size in half.  But it does introduce an additional limit on the maximum size: the space set aside for small elements is deducted from the total heap.

     

    Second, .NET makes a class out of just about everything.  This increases the memory allocation for each object.

    For example the Int32 class needs only 4 bytes to store the value of the integer. But it also contains a bunch of function pointers like Int32::CompareTo(), Int32::ToString() etc.

    This may explain why the maximum total allocation depends on the size of each object: larger objects generally have lower overhead, at least relatively speaking.  (For instance, assuming something on the order of 8 - 16 bytes of per-object bookkeeping, that overhead is several times the payload of a boxed 4-byte value, but well under 1% of a 4,096-byte array.)

     

     

    Monday, March 17, 2008 12:48 PM
  • I understand the overhead of managed objects; it just seems excessive for this test case.

    When allocating 4,096-byte chunks, one test run is able to allocate 1,453,608,960 bytes, or 354,885 total allocations.  Taking 2 GB as an even 2,000,000,000 bytes, that leaves approximately 546,391,040 bytes of additional memory before we hit the limit.  Assuming the size of all code stored in user memory is negligible compared to that 546 MB, that works out to about 1,540 bytes of overhead per byte array allocated (plus one List<> instance), which is roughly 38% overhead.

    That's 1.5 KB for GC book-keeping and managed object overhead, per object.  Is there really that much overhead?
    Monday, March 17, 2008 3:58 PM
  •  FWie wrote:

    Second, .NET makes a class out of just about everything.  This increases the memory allocation for each object.

    For example the Int32 class needs only 4 bytes to store the value of the integer. But it also contains a bunch of function pointers like Int32::CompareTo(), Int32::ToString() etc.

    This may explain why the max total allocation depends on the size of each object. Larger objects generally have lower overhead, at least relatively.


    Structs and Classes don't contain pointers to their methods.

    The methods are, at the assembly level, completely separate functions which receive a "this" pointer via a register.


    Code Snippet

    class x {
      public int value = 0;

      public int GetValue() { return value; }

    }


    An instance of x is only 4 bytes.

    Monday, March 17, 2008 4:35 PM
  • There's still going to be a vtable for the object, and some sort of GC handle for book-keeping, right?
    Monday, March 17, 2008 5:01 PM
  • Trying to gauge .NET memory usage by repeated allocation is a pointless exercise. The CLR manipulates memory internally to such a degree that you cannot hope to get results that you can interpret. If you want to find out how much memory your application is using, run the CLR profiler (download).

     

    That said:

     

     FWie wrote:

    Int32 class needs only 4 bytes to store the value of the integer. But it also contains a bunch of function pointers like Int32::CompareTo(), Int32::ToString() etc.

     

    Int32 is a struct (value type) not a class (reference type). Value types only ever take up as much space as they need to store their data; in the case of an Int32, that means 4 bytes. However:

     

     J Hallam wrote:

    An instance of x is only 4 bytes.

    Actually, an instance of x is 12 bytes on a 32-bit system.  This is because every instance of a reference type includes a 4-byte sync block index and a 4-byte reference to the method table for the type of the instance.  The existence of the sync block has a long and ignoble history which I won't go into here; google it if you are interested.  The pointer to the method table is necessary because reference types are always allocated on the heap, and only a memory pointer is returned to the context that made the allocation.  In order for .NET's static type system to be possible, each instance must keep track of its own type.  Value types, on the other hand, are never allocated on the heap in isolation; they are either allocated on the stack, in which case the current method context can be used to determine the type, or they are allocated on the heap as part of a reference type allocation, in which case the type of the containing reference type can be used to determine the type.
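
    If you want to see that figure empirically, a rough sketch along these lines (GC.GetTotalMemory only approximates the heap size, so treat the result as an estimate) will typically report about 12 bytes per bare object on a 32-bit CLR:

    Code Snippet

    using System;

    class ObjectOverhead
    {
        static void Main()
        {
            object[] keep = new object[100000];     // allocated before the baseline so it is not counted

            long before = GC.GetTotalMemory(true);  // force a collection for a stable baseline
            for (int i = 0; i < keep.Length; i++)
            {
                keep[i] = new object();             // smallest possible reference type instance
            }
            long after = GC.GetTotalMemory(true);

            // Roughly 12 bytes each on 32-bit: sync block index + method table pointer + minimum instance data.
            Console.WriteLine("~{0} bytes per object", (after - before) / keep.Length);

            GC.KeepAlive(keep);                     // keep the objects alive until after the measurement
        }
    }
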

     

    Here is a quick reference for memory layout in .NET. There are many others available online.

    http://en.csharp-online.net/Common_Type_System%E2%80%94Memory_Layout

    Monday, March 17, 2008 7:02 PM
    Moderator
  •  CommonGenius.com wrote:

    Trying to gauge .NET memory usage by repeated allocation is a pointless exercise. The CLR manipulates memory internally to such a degree that you cannot hope to get results that you can interpret. If you want to find out how much memory your application is using, run the CLR profiler (download).


    The point is not to see how much memory I am allocating; the point is to see how much memory I can allocate for use in my program.  I'm trying to find out why my allocation limit is as low as 1.4 GB.  Due to third-party library restrictions, I am unable to just switch to the 64-bit CLR.



     CommonGenius.com wrote:

    Here is a quick reference for memory layout in .NET. There are many others available online.

    http://en.csharp-online.net/Common_Type_System%E2%80%94Memory_Layout



    Thanks for the link.

    I understand the mechanisms involved in managed allocations, but I cannot see where this extra overhead is coming from.  There should only be 8-12 bytes of overhead per object, as far as I know.

    Monday, March 17, 2008 7:21 PM
    I understand.  My point is that your method of determining how much you can allocate relies on being able to accurately determine how much you are allocating during your test, and that's just not realistic.  The only way you could do that is by allocating in one single large chunk, and I don't think that would give you any usable results.

    Monday, March 17, 2008 7:26 PM
    Moderator
  • That would be true if I were concerned with how much physical/virtual memory the process uses.  Instead, I am concerned with the amount of usable memory I am able to allocate.  For instance:

    Code Snippet

    byte[] arr = new byte[4096];



    This gives me 4,096 bytes of memory that I can use for processing, no more and no less.  How much actual memory is allocated to hold the array, object header, and whatever else is needed is some arbitrary number of bytes greater than 4,096.  My concern is with how much memory I am able to allocate for my internal processing within a C# program, and why my allocations start to fail after 1.4 GB.
    Monday, March 17, 2008 7:46 PM
  • But the two are related. How much memory you can allocate for use in your program depends both on how much physical/virtual memory exists on the system, and how much overhead is being used for the CLR and the type system. I don't see how you expect to separate the concerns.

     

    In your original application, did you perform any memory profiling when you started to get OOMs to see how much memory your application was using?

    Monday, March 17, 2008 8:10 PM
    Moderator
  • Right, I was trying to say that the method I was using to measure memory allocation is meaningful for my purposes.  I'm not just using some arbitrary method to measure how much total process memory I am using.  Since my allocations are limited to around 1.4 GB for the arrays I'm using, there must be some significant overhead somewhere.  The whole point of my question is to determine how much overhead exists within .NET that would limit the amount of memory I am able to use in my program for internal uses.  Clearly there is an intimate relationship there.

     CommonGenius.com wrote:

    In your original application, did you perform any memory profiling when you started to get OOMs to see how much memory your application was using?


    That's the crux of the whole problem.  No matter how I profiled (CLR Profiler, even Task Manager), I was significantly under 2 GB of total memory usage.
    Monday, March 17, 2008 8:33 PM
  • But total memory being used is not the only limiting factor when talking about memory allocations. Particularly for long running applications or applications which allocate large chunks of memory, memory fragmentation, especially in the large object heap, can cause you to get OOM even when you have memory available.
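
    As an illustration (a contrived sketch, not your scenario): alternate large allocations, drop every other one, and a single bigger allocation can then fail even though plenty of total memory is free, because no remaining gap is large enough to hold it contiguously.

    Code Snippet

    using System;
    using System.Collections.Generic;

    class LohFragmentation
    {
        static void Main()
        {
            // Arrays of roughly 85,000 bytes or more land on the large object heap,
            // which the GC does not compact, so freed gaps stay where they are.
            List<byte[]> keep = new List<byte[]>();
            List<byte[]> discard = new List<byte[]>();

            try
            {
                while (true)
                {
                    keep.Add(new byte[90 * 1024]);      // these stay alive and fix the layout
                    discard.Add(new byte[90 * 1024]);   // these will be released
                }
            }
            catch (OutOfMemoryException) { }

            discard.Clear();                            // drop every other block...
            GC.Collect();                               // ...leaving ~90 KB holes between the kept blocks

            try
            {
                byte[] big = new byte[10 * 1024 * 1024];
                Console.WriteLine("10 MB allocation succeeded");
                GC.KeepAlive(big);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("10 MB allocation failed even though memory is free (fragmentation)");
            }

            GC.KeepAlive(keep);
        }
    }
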

     

    Task Manager is completely useless for profiling .NET applications; don't even bother opening it.  The CLR Profiler should be accurate, but its results can be complicated and hard to interpret.  Did you try using the .NET performance counters?
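
    For example, something along these lines reads the managed heap counters for the current process (the category and counter names are the ones shown in perfmon):

    Code Snippet

    using System;
    using System.Diagnostics;

    class HeapCounters
    {
        static void Main()
        {
            // The .NET CLR Memory counters use the process name as the instance name.
            string instance = Process.GetCurrentProcess().ProcessName;

            using (PerformanceCounter heapBytes = new PerformanceCounter(
                       ".NET CLR Memory", "# Bytes in all Heaps", instance))
            using (PerformanceCounter reserved = new PerformanceCounter(
                       ".NET CLR Memory", "# Total reserved Bytes", instance))
            {
                Console.WriteLine("Bytes in all GC heaps: {0:N0}", heapBytes.NextValue());
                Console.WriteLine("Total reserved bytes:  {0:N0}", reserved.NextValue());
            }
        }
    }
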

    Monday, March 17, 2008 8:43 PM
    Moderator
  •  CommonGenius.com wrote:

    But total memory being used is not the only limiting factor when talking about memory allocations. Particularly for long running applications or applications which allocate large chunks of memory, memory fragmentation, especially in the large object heap, can cause you to get OOM even when you have memory available.


    I can see fragmentation being an issue with my original code, but my test code is designed to minimize its effects, yet I get the same results.


    I was hoping to avoid memory fragmentation as much as possible by doing sequential byte[] allocations into a List<> instance that is allocated with a sufficient initial capacity.  I'm not discounting that memory fragmentation could be the issue.  What strikes me as odd, though, is that if it is memory fragmentation, why do I get so much more of it in .NET, where the GC is supposed to actively minimize fragmentation, than I do with C++ on a native heap, given equal allocation patterns?  Is the per-object book-keeping information held in a different location in the heap than the byte[] data, causing fragmentation?


    Let's assume for a second there are 12 bytes of total overhead per .NET object, including the object header and any GC book-keeping information.  That's only about 4.3 MB of overhead for all of my allocations, which still leaves over 500 MB unaccounted for.  If memory fragmentation is the cause, then I'm losing about 25% of my available memory, and that's with allocations purposely written to minimize fragmentation.  The same allocation pattern in C++ gives me practically zero fragmentation; that's what concerns me.


    In my test program, the allocations are small and should be easy to compact.  My gut feeling is that .NET should be able to do better than 75% efficiency in this case, and I want to know why it's not.


     CommonGenius.com wrote:

    Task Manager is completely useless for profiling .NET applications; don't even bother opening it.  The CLR Profiler should be accurate, but its results can be complicated and hard to interpret.  Did you try using the .NET performance counters?



    Definitely, Task Manager doesn't tell you much; I just threw that in to show I tried every tool I could find.

    Thanks for your help/suggestions, by the way!
    Monday, March 17, 2008 9:27 PM
  • We'd appreciate it if you could provide a sample project so we can reproduce this issue and investigate it in-house.
    Tuesday, March 18, 2008 7:49 AM
  • Sure thing.  I packaged up the project sources and uploaded them here.

     

    The .suo options file is not included, so the project build target should be manually set to x86 Release before building.

     

    To run the test, you can run it from the command line without arguments, in which case it will attempt byte-array and HGlobal allocations of various sizes and print the results to stdout.  Or, you can pass arguments of the form:

     

    "AllocTest byte <array size>"  or  "AllocTest hglobal <chunk size>"

     

    The first form will attempt to allocate byte arrays of the specified size, and the second will attempt to allocate HGlobal chunks (Marshal.AllocHGlobal/Marshal.FreeHGlobal) of the specified size.

     

    The HGlobal allocations are much more consistent, and I am able to allocate significantly more memory using that approach; the problem is that this does not help when working with managed objects, as far as I know.
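
    For reference, the unmanaged path boils down to something like this (a sketch, not the exact AllocTest code; chunkSize stands in for the command-line argument):

    Code Snippet

    using System;
    using System.Collections.Generic;
    using System.Runtime.InteropServices;

    class HGlobalTest
    {
        static void Main()
        {
            int chunkSize = 4096;                       // stand-in for the command-line argument
            List<IntPtr> blocks = new List<IntPtr>();
            long total = 0;

            try
            {
                while (true)
                {
                    // AllocHGlobal draws from the native process heap, bypassing the managed GC heap.
                    blocks.Add(Marshal.AllocHGlobal(chunkSize));
                    total += chunkSize;
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("HGlobal: {0:N0} bytes allocated before failure", total);
            }
            finally
            {
                foreach (IntPtr p in blocks)
                {
                    Marshal.FreeHGlobal(p);             // unmanaged memory must be released explicitly
                }
            }
        }
    }
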

     

    Thanks for taking the time to look into this.

    Tuesday, March 18, 2008 4:51 PM
  • If you are going to be using that much RAM, you might want to investigate the System.Runtime.MemoryFailPoint class.  The MemoryFailPoint class throws an InsufficientMemoryException, which is recoverable, while an OutOfMemoryException is most of the time not recoverable.

     

    try
    {
        using (MemoryFailPoint _FailPoint = new MemoryFailPoint(1500))
        {
            // Big memory usage here.
        }
    }
    catch (InsufficientMemoryException)
    {
        // Handle error here
    }

    Wednesday, March 19, 2008 10:40 PM
  • Hi,

    Did you get a solution for this problem?  I am encountering a similar issue: my remoted server will not allocate more than 1 GB of memory.

    Thanks,
    Catalina
    Tuesday, June 10, 2008 11:46 PM
  • A similar issue doesn't mean it's the same one; please start a new thread and give us some more details about the issue you're having.

    Willy.
    C# MVP

    Wednesday, June 11, 2008 8:20 AM
  • Hello,

    Are there any explanations for ShawMishrak's problem?

    On our server, we are not able to allocate more than 1.6 GB.

    Of course, I know the situation is different, but I guess that ShawMishrak's situation is generic enough to help a lot of us understand what's involved behind the scenes, and why we can't use all the memory we would have believed was available to us.

    Thanks,

    Lionel.
    Saturday, August 16, 2008 11:43 PM