Hashtable insert failed. Load factor too high - .NET 2.0 bug?

    Question

  • Hi Guys

    We are having an intermittent problem with an ASP.NET 2.0 website which renders the entire site unusable until we restart IIS.

    Exception Details: System.InvalidOperationException: Hashtable insert failed. Load factor too high.

    This is a known issue with the .NET Framework 1.1, and there is a Knowledge Base article and associated hotfix (http://support.microsoft.com/kb/831730/).

    However, I can't find any reference to this being an issue with .NET 2.0, which is what we are currently running (in conjunction with Windows Server 2003).

    Has anybody else experienced this? If so, how did you resolve it?

    Thanks in advance

    Matt Williams

     

     

    Monday, August 21, 2006 12:28 AM

All replies

  • You'll need to contact Microsoft Support about this.  There is no entry for this problem in Product Feedback.
    Wednesday, August 30, 2006 8:44 PM
  • Hi there,

    We are having a similar problem. Sometimes an IIS reset fixes it, sometimes a full restart, and sometimes an IIS reset plus clearing the ASP.NET temporary files.

    Is there any more information in Product Feedback on this one?

    Monday, January 08, 2007 6:56 PM
  • FWIW (not much, I'm sure), here is the relevant code from Rotor's src\bcl\system\collections\hashtable.cs source file:

                // If you see this assert, make sure load factor & count are reasonable.
                // Then verify that our double hash function (h2, described at top of file)
                // meets the requirements described above. You should never see this assert.
                BCLDebug.Assert(false, "hash table insert failed!  Load factor too high, or our double hashing function is incorrect.");
                throw new InvalidOperationException(Environment.GetResourceString("InvalidOperation_HashInsertFailed"));

    These are the chatty comments at the top of the file:
    Monday, January 08, 2007 7:17 PM
  • Sorry, this freakin' new forums software bug is messing up my post.  Don't know yet how to fix it.
    Monday, January 08, 2007 8:48 PM
  • Okay, got it, it is deleting text between C++ comments. Here it is without the comments:

              Implementation Notes:
              The generic Dictionary was copied from Hashtable's source - any bug
              fixes here probably need to be made to the generic Dictionary as well.
       
              This Hashtable uses double hashing.  There are hashsize buckets in the
              table, and each bucket can contain 0 or 1 element.  We use a bit to mark
              whether there's been a collision when we inserted multiple elements
              (ie, an inserted item was hashed at least a second time and we probed
              this bucket, but it was already in use).  Using the collision bit, we
              can terminate lookups & removes for elements that aren't in the hash
              table more quickly.  We steal the most significant bit from the hash code
              to store the collision bit.

              Our hash function is of the following form:
       
              h(key, n) = h1(key) + n*h2(key)
       
              where n is the number of times we've hit a collided bucket and rehashed
              (on this particular lookup).  Here are our hash functions:
       
              h1(key) = GetHash(key);  // default implementation calls key.GetHashCode();
              h2(key) = 1 + (((h1(key) >> 5) + 1) % (hashsize - 1));
       
              The h1 can return any number.  h2 must return a number between 1 and
              hashsize - 1 that is relatively prime to hashsize (not a problem if
              hashsize is prime).  (Knuth's Art of Computer Programming, Vol. 3, p. 528-9)
              If this is true, then we are guaranteed to visit every bucket in exactly
              hashsize probes, since the least common multiple of hashsize and h2(key)
              will be hashsize * h2(key).  (This is the first number where adding h2 to
              h1 mod hashsize will be 0 and we will search the same bucket twice).
             
              We previously used a different h2(key, n) that was not constant.  That is a
              horrifically bad idea, unless you can prove that series will never produce
              any identical numbers that overlap when you mod them by hashsize, for all
              subranges from i to i+hashsize, for all i.  It's not worth investigating,
              since there was no clear benefit from using that hash function, and it was
              broken.
       
              For efficiency reasons, we've implemented this by storing h1 and h2 in a
              temporary, and setting a variable called seed equal to h1.  We do a probe,
              and if we collided, we simply add h2 to seed each time through the loop.
       
              A good test for h2() is to subclass Hashtable, provide your own implementation
              of GetHash() that returns a constant, then add many items to the hash table.
              Make sure Count equals the number of items you inserted.

              Note that when we remove an item from the hash table, we set the key
              equal to buckets, if there was a collision in this bucket.  Otherwise
              we'd either wipe out the collision bit, or we'd still have an item in
              the hash table.
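
    To make the h2() test described in those comments concrete, here is a minimal sketch (mine, not part of the Rotor sources): a Hashtable subclass whose GetHash always returns the same value, so every insert must be resolved by the double-hashing probe sequence. If h2 behaves as described, Count ends up equal to the number of items added.

        using System;
        using System.Collections;

        // Deliberately terrible hash: every key collides, so inserts exercise
        // the h(key, n) = h1(key) + n*h2(key) probing described above.
        class ConstantHashHashtable : Hashtable
        {
            protected override int GetHash(object key)
            {
                return 42;
            }
        }

        class Program
        {
            static void Main()
            {
                var table = new ConstantHashHashtable();
                for (int i = 0; i < 1000; i++)
                    table[i] = i;   // each insert may probe several buckets

                // If double hashing visits every bucket, no insert is lost.
                Console.WriteLine(table.Count);   // expect 1000
            }
        }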


    Monday, January 08, 2007 9:10 PM
  • Hi Matt,

    In .NET 2.0, this error is almost always caused by multiple threads modifying the same Hashtable at the same time. The fix is to take a lock before modifying the Hashtable, since Hashtable isn't thread-safe for multiple writers. Another possible solution is to use the synchronized wrapper returned by Hashtable.Synchronized; however, we recommend the former for finer control.
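
    A minimal sketch of both approaches (the class, field, and method names here are illustrative, not from the original post):

        using System.Collections;

        public static class SharedCache
        {
            // Hypothetical shared Hashtable written to by multiple request threads.
            private static readonly Hashtable _cache = new Hashtable();
            private static readonly object _cacheLock = new object();

            // Option 1: take an explicit lock around every write, because Hashtable
            // supports many concurrent readers but only a single writer.
            public static void Add(object key, object value)
            {
                lock (_cacheLock)
                {
                    _cache[key] = value;
                }
            }

            // Option 2: wrap the table once with Hashtable.Synchronized; the wrapper
            // serializes writers for you, at the cost of less control over locking.
            private static readonly Hashtable _syncCache =
                Hashtable.Synchronized(new Hashtable());

            public static void AddSynchronized(object key, object value)
            {
                _syncCache[key] = value;   // safe for multiple writer threads
            }
        }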

    So that's the fix if it's your own code that's modifying the Hashtable. Based on the info you provided, I don't think that's the case here. You mentioned that you're encountering this bug with an ASP.NET 2.0 website, so it could be caused by a Hashtable used inside the Framework itself. For example, if the call stack looks something like the following, that is a known bug that has been fixed for the latest release.

    Thanks,
    Kim

    Stack trace: at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)
    at System.Collections.Hashtable.set_Item(Object key, Object value)
    at System.ComponentModel.TypeDescriptor.CheckDefaultProvider(Type type)
    at System.ComponentModel.TypeDescriptor.NodeFor(Type type, Boolean createDelegator)
    at System.ComponentModel.TypeDescriptor.GetDescriptor(Type type, String typeName)
    at System.ComponentModel.TypeDescriptor.GetAttributes(Type componentType)
    at System.Web.UI.ThemeableAttribute.IsTypeThemeable(Type type)
    at System.Web.UI.Control.ApplySkin(Page page)
    at System.Web.UI.Control.InitRecursive(Control namingContainer)
    at System.Web.UI.Control.InitRecursive(Control namingContainer)
    at System.Web.UI.Control.InitRecursive(Control namingContainer)
    at System.Web.UI.Control.InitRecursive(Control namingContainer)
    at System.Web.UI.Control.InitRecursive(Control namingContainer)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Saturday, January 20, 2007 8:44 PM
  • Ooo, I like that explanation.  Hashtable is particularly sensitive to multi-threading abuse...
    Saturday, January 20, 2007 10:44 PM
  • Fixed in the latest release of what? .NET Framework 2.0? 3.0? A service pack?

    Just wondering because we are seeing this error with the exact same call stack as above...

    Colin

    Friday, April 27, 2007 8:36 PM
  • Youch.  A hotfix that made it into the next version and needed another hotfix that doesn't look like a fix.  Behold the joys of multithreaded programming...
    Friday, April 27, 2007 10:34 PM
  • I think hotfix 927579 is included in .NET 2.0 SP1. Make sure this service pack has been applied.
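
    If it helps to verify that, here is a rough sketch for checking which 2.0-era CLR build a box is actually running; the mapping of build revisions to service packs in the comments is my assumption to double-check, not something from this thread:

        using System;

        class ClrVersionCheck
        {
            static void Main()
            {
                // Assumption to verify: 2.0.50727.42 is .NET 2.0 RTM, and a
                // revision of 1433 or later indicates 2.0 SP1 is installed.
                Version v = Environment.Version;
                Console.WriteLine("CLR version: " + v);

                bool looksLikeSp1OrLater =
                    v.Major == 2 && v.Build == 50727 && v.Revision >= 1433;
                Console.WriteLine("Looks like 2.0 SP1 or later: " + looksLikeSp1OrLater);
            }
        }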

    Monday, May 05, 2008 9:36 PM
  • We're getting this same issue in .NET 3.5
    Friday, January 09, 2009 1:32 AM
  • I'm seeing this issue in a production .NET 3.5 environment too. We're not using Hashtables directly in our code (but that's not to say we aren't calling something in the Framework that is). Has anyone had any luck resolving this? In my scenario, the crash seems to be centered around the ViewState. Here's the call stack:

    System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add) 
       at System.Collections.Hashtable.set_Item(Object key, Object value) 
       at System.ComponentModel.ReflectTypeDescriptionProvider.ReflectGetAttributes(Type type) 
       at System.ComponentModel.ReflectTypeDescriptionProvider.ReflectedTypeData.GetAttributes() 
       at System.ComponentModel.TypeDescriptor.TypeDescriptionNode.DefaultTypeDescriptor.System.ComponentModel.ICustomTypeDescriptor.GetAttributes() 
       at System.ComponentModel.TypeDescriptor.GetAttributes(Type componentType) 
       at System.Web.UI.ViewStateModeByIdAttribute.IsEnabled(Type type) 
       at System.Web.UI.Control.SaveViewStateRecursive() 
       at System.Web.UI.Page.SaveAllState() 
       at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) 
     

    Any help/ideas would be appreciated

    Pete Mourfield - www.mourfield.com
    Thursday, January 15, 2009 6:30 PM
  • We're seeing the same issue with an ASP.NET 3.5 SP1 project ...
    Friday, January 30, 2009 8:04 AM
  • I wanted to post an update on this issue. We've been working with PSS, and there is a hotfix pending. It turns out that Hashtable.set_Item wasn't thread-safe, and under heavy load the Hashtable could get corrupted.
    Pete Mourfield - www.mourfield.com
    Friday, May 22, 2009 1:42 PM
  • I have recently modified my program to go multi-threaded, using a list view to store some on-screen detail. It seems that there is a bug somewhere that is causing this issue, but I only see it when I call Application.DoEvents. I had this error happen yesterday, and once it happened, it kept happening repeatedly.


    Has the fix been completed? If so, I'm in need of an update, as I've never seen this occur during my testing, only on the production server after about 10 hours of use.


    It is a bit annoying that I've gone to all the hard work of making my code as thread-safe as I possibly can, and something under the hood is biting back that I can do nothing to fix, as I don't know which component has fallen over. If I knew, I could at least put some SyncLock blocks around it.


    Stack trace:
    at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)
    at System.Collections.Hashtable.set_Item(Object key, Object value)
    at System.Windows.Forms.Application.ThreadContext..ctor()
    at System.Windows.Forms.Application.ThreadContext.FromCurrent()
    at System.Windows.Forms.Application.DoEvents()
    at YFOrders.Main.LogAppend(CompanyCodes Company, String Group, String Section, String LogText)
    Thursday, August 27, 2009 8:07 AM
  • I was seeing the same issue, and after talking with MS support I got a patch for Windows Server x64.

    It's KB article 968432. There is an installer available for Windows Server 2003/2008, both x86 and x64.
    http://code.msdn.microsoft.com/KB968432/Release/ProjectReleases.aspx?ReleaseId=2958

    - Amey Bordikar
    Wednesday, March 10, 2010 3:04 PM