Does an object allocate more memory after method calls?

  • Question

  • Hi,

    I have made a Windows service that uses a Timer object to do some work at regular time intervals. When I follow the service in Task Manager, I can see that its memory use increases when the work method is called. However, the memory use does not drop when the method has returned.
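
    For context, the setup looks roughly like this (a simplified sketch; the interval and names are made up, not my real code):

    using System;
    using System.Threading;

    class WorkerService
    {
       Timer timer;

       public void Start()
       {
          // Invoke DoWork immediately, then every 60 seconds.
          timer = new Timer(new TimerCallback(DoWork), null, 0, 60000);
       }

       void DoWork(object state)
       {
          // ... the actual work happens here ...
       }
    }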

    Does the Timer object allocate more memory to itself after the first call to the method?

    I have tried these test methods:

    1. Memory increases by approximately 1600
    void DoWork(object o)
    {
       ArrayList arr = new ArrayList();
       for (int i = 0; i < 100000; i++)
       {
          arr.Add(i);
       }
       arr = null;
       GC.Collect();
    }

    2. Memory increases by approximately 8
    void DoWork(object o)
    {
       //nothing
    }

    The reason I ask is that my systems administrator will not install my Windows service until he can see that the memory shown in Task Manager returns to its pre-call level after each method call.

    /Solvej
    Monday, November 14, 2005 9:22 AM

Answers

  • Firstly, you should let your sys admin know that assuming pre-call memory and post-call memory should be equal is a flawed expectation, even if there are no memory leaks.  It sounds like the sys admin is used to C/C++ programmers who don't free memory and therefore leak it, but this isn't the case in .NET, and even modern C++ apps show similar behavior due to the frameworks they are built on.  The sys admin should instead be concerned about processes that take up excessive system resources like memory or CPU, or that exhibit continual growth in memory size.  Those are the indicators of memory leaks or programming issues.

    Secondly, most memory managers (including, I believe, the OS) will not necessarily release memory back to the system once it is freed.  This is because it is more expensive to allocate memory from the system than it is to simply keep track of freed memory internally.  Many memory managers therefore define a minimal working set: the amount of memory the process will hold on to once it has grown to that size.  It is normally not allocated in advance; instead the memory manager simply doesn't release memory back to the OS until the process shrinks to the minimal working set.  This reduces the amount of allocation work the app has to do and reduces CPU usage.  I know some .NET apps do this automatically (like ASP.NET), but I'm not quite sure about normal Windows apps; I wouldn't be surprised, however.  You can see this easily because memory continues to grow as needed until the threshold is reached, after which you'll see memory allocated and released as needed by the app.
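
    You can see this for yourself with a small experiment (standard .NET APIs; the exact figures will vary by machine and runtime version).  The managed heap figure drops after the collection, while the working set, which is what Task Manager shows, may stay high:

    using System;

    class ReleaseDemo
    {
       static void Main()
       {
          byte[] big = new byte[50 * 1024 * 1024];    // allocate ~50 MB
          for (int i = 0; i < big.Length; i += 4096)  // touch each page so it
             big[i] = 1;                              // enters the working set

          Console.WriteLine("Allocated: heap={0:N0} working set={1:N0}",
             GC.GetTotalMemory(false), Environment.WorkingSet);

          big = null;
          GC.Collect();                               // free the array again

          Console.WriteLine("Collected: heap={0:N0} working set={1:N0}",
             GC.GetTotalMemory(true), Environment.WorkingSet);

          // The memory manager often keeps the freed pages around for
          // future allocations rather than returning them to the OS.
       }
    }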

    Your code is probably causing more harm than good.  By forcing a GC you are promoting objects that would otherwise be collected soon into a later generation, causing them to hang around longer.  Here is a probable situation in your app:

    void Foo()
    {
       string str = "Ready to run DoWork";
       object o = new object();
       DoWork(o);
    }

    Now normally, when DoWork returns, the locals (of which you allocated 100,001) would eventually become eligible for GC.  By forcing a GC you have caused all 100,001 to be freed now, but you have also caused str and o to be promoted to a higher generation, which means that (although they are locals and won't be referenced anymore) they will probably not be released anytime soon.  When the GC runs (at least as of v1.1) it first collects Gen 0, the most recent allocations.  If that frees enough memory then it is done; otherwise it moves on to the older generations.  As a result, the higher an object's generation, the less often it is examined and the less likely it is to be freed.  Not quite what you want.
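
    You can actually watch the promotion happen with GC.GetGeneration (a standard API).  A tiny illustration of the mechanism, not your code:

    using System;

    class GenerationDemo
    {
       static void Main()
       {
          object survivor = new object();
          Console.WriteLine(GC.GetGeneration(survivor));  // 0: freshly allocated

          GC.Collect();  // survivor is still referenced, so it is promoted
          Console.WriteLine(GC.GetGeneration(survivor));  // 1

          GC.Collect();
          Console.WriteLine(GC.GetGeneration(survivor));  // 2: oldest generation
       }
    }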

    However, as I mentioned earlier, the GC won't necessarily run until memory becomes tight, so if your app is taking up a couple hundred MB the GC probably won't run, because you probably have at least 1 GB on the machine.  Therefore you'll (probably) see memory expand each time your function is called, until it reaches the memory threshold and .NET starts doing GC.  Even then you'll probably not see memory drop below a certain level.

    Finally, unless your service is continually doing work, you also have the OS memory manager working for you.  The OS will ensure that your service doesn't eat up too much memory when it is idle by swapping it out.  It is important to distinguish between the working set and the virtual memory usage of your app, because they are different; oftentimes the virtual memory of an app is really high even though the working set is small.
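
    Both figures are easy to compare from code if you want to see the difference (WorkingSet64 and VirtualMemorySize64 are standard properties of System.Diagnostics.Process):

    using System;
    using System.Diagnostics;

    class MemoryFigures
    {
       static void Main()
       {
          Process p = Process.GetCurrentProcess();

          // Physical memory currently mapped into the process (the Task Manager figure).
          Console.WriteLine("Working set:    {0:N0} bytes", p.WorkingSet64);

          // Total address space in use, usually much larger than the working set.
          Console.WriteLine("Virtual memory: {0:N0} bytes", p.VirtualMemorySize64);
       }
    }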

    In answer to your original question: a timer shouldn't allocate any memory once it has been created (and the underlying OS object, if needed, exists).  The allocation is probably happening in the delegate invocation and in your method.  Given the forced GC in your example, the delegate arguments would likewise get promoted to a higher generation.  Are you resubscribing to the event, causing the delegate list to get larger?  That would cause memory usage to go up as well.  As for your test app, it is an example of something you'd never do and therefore not a realistic test case for memory usage; try to mimic the scenario you actually have.  You should also use the facilities built into .NET to examine the memory usage of your app, and WinDbg is a good tool if you need to dig deeper.
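
    For example, the framework exposes ".NET CLR Memory" performance counters that you can watch in Perfmon or read from code (a sketch; category and counter names are as they appear in Perfmon):

    using System;
    using System.Diagnostics;

    class GcCounters
    {
       static void Main()
       {
          string instance = Process.GetCurrentProcess().ProcessName;

          using (PerformanceCounter heapBytes = new PerformanceCounter(
             ".NET CLR Memory", "# Bytes in all Heaps", instance))
          using (PerformanceCounter gen2Count = new PerformanceCounter(
             ".NET CLR Memory", "# Gen 2 Collections", instance))
          {
             Console.WriteLine("Bytes in all heaps: {0:N0}", heapBytes.NextValue());
             Console.WriteLine("Gen 2 collections:  {0}", gen2Count.NextValue());
          }
       }
    }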

    Hope this clarifies some things,
    Michael Taylor - 11/14/05
    Monday, November 14, 2005 12:31 PM

All replies

  • Thank you for your answer, Michael. This has been very helpful.

    I presented the two test methods because the real DoWork method is quite complex and involves objects that hold unmanaged resources (e.g. SqlConnection objects), which I DO remember to close and dispose and what not.
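
    The pattern I use looks roughly like this (simplified; the connection string and query are placeholders):

    using System.Data.SqlClient;

    class Db
    {
       static void RunQuery(string connectionString)
       {
          using (SqlConnection conn = new SqlConnection(connectionString))
          using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
          {
             conn.Open();
             cmd.ExecuteScalar();
          }  // both objects are disposed here, even if an exception is thrown
       }
    }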

    When I run my service (and the test services), the memory use follows a pattern like this:
    before DoWork = low memory use
    during 1st DoWork = high memory use
    after 1st DoWork = intermediate memory use
    ...
    ...
    during nth DoWork = same as during 1st DoWork
    after nth DoWork = same as after 1st DoWork

    I find this acceptable, and with your explanation I hope my sys admin will too.

    /Solvej
    Monday, November 14, 2005 1:20 PM