Guidelines and rules for GetHashCode

"The code is more what you'd call guidelines than actual rules" - truer words were never spoken. It's important when writing code to understand what are vague "guidelines" that should be followed but can be broken or fudged, and what are crisp "rules" that have serious negative consequences for correctness and robustness. I often get questions about the rules and guidelines for GetHashCode, so I thought I might summarize them here.

What is GetHashCode used for?

It is by design useful for only one thing: putting an object in a hash table. Hence the name.

Why do we have this method on Object in the first place?

It makes perfect sense that every object in the type system should provide a GetType method; data's ability to describe itself is a key feature of the CLR type system. And it makes sense that every object should have a ToString, so that it is able to print out a representation of itself as a string, for debugging purposes. It seems plausible that objects should be able to compare themselves to other objects for equality. But why should it be the case that every object should be able to hash itself for insertion into a hash table? Seems like an odd thing to require every object to be able to do.

I think if we were redesigning the type system from scratch today, hashing might be done differently, perhaps with an IHashable interface. But when the CLR type system was designed there were no generic types and therefore a general-purpose hash table needed to be able to store any object.

How do hash tables and similar data structures use GetHashCode?

Consider a "set" abstract data type. Though there are many operations you might want to perform on a set, the two basic ones are insert a new item into the set, and check to see whether a given item is in the set. We would like these operations to be fast even if the set is large. Consider, for example, using a list as an implementation detail of a set:

class Set<T>
{
    private List<T> list = new List<T>();

    public void Insert(T item)
    {
        if (!Contains(item))
            list.Add(item);
    }

    public bool Contains(T item)
    {
        foreach(T member in list)
            if (member.Equals(item))
                return true;
        return false;
    }
}

(I've omitted any error checking throughout this article; we probably want to make sure that the item is not null. We probably want to implement some interfaces, and so on. I'm keeping things simple so that we concentrate on the hashing part.)

The test for containment here is linear; if there are ten thousand items in the list then we have to look at all ten thousand of them to determine that the object is not in the list. This does not scale well.

The trick is to trade a small amount of increased memory burden for a huge amount of increased speed. The idea is to make many shorter lists, called "buckets", and then be clever about quickly working out which bucket we're looking at:

class Set<T>
{
    private List<T>[] buckets = new List<T>[100];

    public void Insert(T item)
    {
        int bucket = GetBucket(item.GetHashCode());
        if (Contains(item, bucket))
            return;
        if (buckets[bucket] == null)
            buckets[bucket] = new List<T>();
        buckets[bucket].Add(item);
    }

    public bool Contains(T item)
    {
        return Contains(item, GetBucket(item.GetHashCode()));
    }

    private int GetBucket(int hashcode)
    {
        unchecked
        {
            // A hash code can be negative, and thus its remainder can be negative also.
            // Do the math in unsigned ints to be sure we stay positive.
            return (int)((uint)hashcode % (uint)buckets.Length);
        }
    }

    private bool Contains(T item, int bucket)
    {
        if (buckets[bucket] != null)
            foreach(T member in buckets[bucket])
                if (member.Equals(item))
                    return true;
        return false;
    }
}

Now if we have ten thousand items in the set then we are looking through one of a hundred buckets, each with on average a hundred items; the Contains operation just got a hundred times cheaper.

On average.

We hope.

We could be even more clever here; just as a List<T> resizes itself when it gets full, the bucket set could resize itself as well, to ensure that the average bucket length stays low. Also, for technical reasons it is often a good idea to make the bucket set length a prime number, rather than 100. There are plenty of improvements we could make to this hash table. But this quick sketch of a naive implementation of a hash table will do for now. I want to keep it simple.
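For what it's worth, a resize for the sketch above might look something like the method below. It is not part of the original example, and a real implementation would also pick a prime length and decide when to trigger the resize; the point is simply that every item must be re-bucketed, because its bucket index depends on buckets.Length.

    private void Resize()
    {
        List<T>[] oldBuckets = buckets;
        buckets = new List<T>[oldBuckets.Length * 2 + 1];
        foreach (List<T> oldBucket in oldBuckets)
        {
            if (oldBucket == null)
                continue;
            foreach (T item in oldBucket)
            {
                // Recompute the bucket; the old index is meaningless once
                // the bucket array changes length.
                int bucket = GetBucket(item.GetHashCode());
                if (buckets[bucket] == null)
                    buckets[bucket] = new List<T>();
                buckets[bucket].Add(item);
            }
        }
    }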

By starting with the position that this code should work, we can deduce what the rules and guidelines must be for GetHashCode:

Rule: equal items have equal hashes

If two objects are equal then they must have the same hash code; or, equivalently, if two objects have different hash codes then they must be unequal.

The reasoning here is straightforward. Suppose two objects were equal but had different hash codes. If you put the first object in the set then it might be put into bucket #12. If you then ask the set whether the second object is a member, it might search bucket #67, and not find it.

Note that it is NOT a rule that if two objects have the same hash code, then they must be equal. There are only four billion or so possible hash codes, but obviously there are more than four billion possible objects. There are far more than four billion ten-character strings alone. Therefore there must be at least two unequal objects that share the same hash code, by the Pigeonhole Principle. (*)
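To make the rule concrete, here is a minimal sketch of a hypothetical Point struct; its GetHashCode is derived from exactly the fields that Equals compares, so two equal points always produce the same hash code.

struct Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    public override bool Equals(object obj)
    {
        return obj is Point && ((Point)obj).x == x && ((Point)obj).y == y;
    }

    public override int GetHashCode()
    {
        unchecked
        {
            // Computed only from the fields that participate in Equals,
            // so equal points always hash identically.
            return x * 31 + y;
        }
    }
}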

Guideline: the integer returned by GetHashCode should never change

Ideally, the hash code of a mutable object should be computed from only fields which cannot mutate, and therefore the hash value of an object is the same for its entire lifetime.

However, this is only an ideal-situation guideline; the actual rule is:

Rule: the integer returned by GetHashCode must never change while the object is contained in a data structure that depends on the hash code remaining stable

It is permissible, though dangerous, to make an object whose hash code value can mutate as the fields of the object mutate. If you have such an object and you put it in a hash table then the code which mutates the object and the code which maintains the hash table are required to have some agreed-upon protocol that ensures that the object is not mutated while it is in the hash table. What that protocol looks like is up to you.

If an object's hash code can mutate while it is in the hash table then clearly the Contains method stops working. You put the object in bucket #5, you mutate it, and when you ask the set whether it contains the mutated object, it looks in bucket #74 and doesn't find it.

Remember, objects can be put into hash tables in ways that you didn't expect. A lot of the LINQ sequence operators use hash tables internally. Don't go dangerously mutating objects while enumerating a LINQ query that returns them!
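Here is a small sketch of that failure mode; the mutable Box class and the choice of HashSet&lt;T&gt; are mine, purely for illustration.

using System;
using System.Collections.Generic;

// A mutable class whose hash code follows a mutable field.
class Box
{
    public int Value;

    public override bool Equals(object obj)
    {
        Box other = obj as Box;
        return other != null && other.Value == Value;
    }

    public override int GetHashCode()
    {
        return Value;
    }
}

class Program
{
    static void Main()
    {
        var set = new HashSet<Box>();
        var box = new Box { Value = 5 };
        set.Add(box);                         // stored under hash code 5
        box.Value = 74;                       // the hash code silently changes
        Console.WriteLine(set.Contains(box)); // False: the set searches the wrong bucket
    }
}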

Rule: Consumers of GetHashCode cannot rely upon it being stable over time or across appdomains

Suppose you have a Customer object that has a bunch of fields like Name, Address, and so on. If you make two such objects with exactly the same data in two different processes, they do not have to return the same hash code. If you make such an object on Tuesday in one process, shut it down, and run the program again on Wednesday, the hash codes can be different.

This has bitten people in the past. The documentation for System.String.GetHashCode notes specifically that two identical strings can have different hash codes in different versions of the CLR, and in fact they do. Don't store string hashes in databases and expect them to be the same forever, because they won't be.
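If you genuinely need a hash that is stable across processes, machines and runtime versions, compute it yourself with an algorithm you control instead of persisting the result of GetHashCode. As one illustration, and not a recommendation of this particular algorithm, here is a sketch of FNV-1a over the UTF-8 bytes of a string.

// A deterministic string hash that does not change between runs or runtimes.
static int StableStringHash(string s)
{
    unchecked
    {
        const uint offsetBasis = 2166136261;
        const uint prime = 16777619;
        uint hash = offsetBasis;
        foreach (byte b in System.Text.Encoding.UTF8.GetBytes(s))
        {
            hash ^= b;
            hash *= prime;
        }
        return (int)hash;
    }
}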

Rule: GetHashCode must never throw an exception, and must return

Getting a hash code simply calculates an integer; there's no reason why it should ever fail. An implementation of GetHashCode should be able to handle any legal configuration of the object.

I occasionally get the response "but I want to throw NotImplementedException in my GetHashCode to ensure that the object is never put into a hash table; I don't intend for this object to ever be put into a hash table." Well, OK, but the earlier point about LINQ still applies: that means your object cannot be a result in many LINQ-to-objects queries that use hash tables internally for performance reasons.

Since it doesn't throw an exception, it has to return a value eventually. It's not legal or smart to make an implementation of GetHashCode that goes into an infinite loop.

This is particularly important when hashing objects that might be recursively defined and contain circular references. If hashing object Alpha hashes the value of property Beta, and hashing Beta turns right around and hashes Alpha, then you're going to either loop forever (if you're on an architecture that can optimize tail calls) or run out of stack and crash the process.
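One way to stay out of that trap is to hash only fields that cannot participate in a cycle. The Node type below is hypothetical; the point is that its GetHashCode deliberately ignores the reference that could lead back to the object itself.

class Node
{
    public readonly string Name; // immutable, cannot form a cycle
    public Node Next;            // may point back to this node, directly or indirectly

    public Node(string name)
    {
        Name = name;
    }

    public override bool Equals(object obj)
    {
        Node other = obj as Node;
        return other != null && other.Name == Name;
    }

    public override int GetHashCode()
    {
        // Deliberately do not hash Next; following it could recurse forever.
        return Name.GetHashCode();
    }
}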

Guideline: the implementation of GetHashCode must be extremely fast

The whole point of GetHashCode is to optimize a lookup operation; if the call to GetHashCode is slower than looking through those ten thousand items one at a time, then you haven't made a performance gain.

I classify this as a "guideline" and not a "rule" because it is so vague. How slow is too slow? That's up to you to decide.
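When the computation is genuinely expensive, one common technique is to compute the hash code lazily and cache it; that is only safe because the fields involved never change. The Person type below is a hypothetical sketch of the idea.

sealed class Person
{
    private readonly string name;
    private readonly string address;
    private int hashCode; // 0 means "not yet computed"

    public Person(string name, string address)
    {
        this.name = name;
        this.address = address;
    }

    public override bool Equals(object obj)
    {
        Person other = obj as Person;
        return other != null && other.name == name && other.address == address;
    }

    public override int GetHashCode()
    {
        if (hashCode == 0)
        {
            unchecked
            {
                int h = 17;
                h = h * 31 + name.GetHashCode();
                h = h * 31 + address.GetHashCode();
                if (h == 0)
                    h = 1; // keep 0 reserved as the "not yet computed" sentinel
                hashCode = h;
            }
        }
        return hashCode;
    }
}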

Guideline: the distribution of hash codes must be "random"

By a "random distribution" I mean that if there are commonalities in the objects being hashed, there should not be similar commonalities in the hash codes produced. Suppose for example you are hashing an object that represents the latitude and longitude of a point. A set of such locations is highly likely to be "clustered"; odds are good that your set of locations is, say, mostly houses in the same city, or mostly valves in the same oil field, or whatever. If clustered data produces clustered hash values then that might decrease the number of buckets used and cause a performance problem when the bucket gets really big.

Again, I list this as a guideline rather than a rule because it is somewhat vague, not because it is unimportant. It's very important. But since good distribution and good speed can be opposites, it's important to find a good balance between the two.

I know this from deep, personal, painful experience. Over a decade ago I wrote a string hash algorithm for a table used by the msn.com backend servers. I thought it was a reasonably randomly distributed algorithm, but I made a mistake and it was not. It turned out that all of the one hundred thousand strings that are five characters long and contain only numbers were always hashed to one of five buckets, instead of any of the six hundred or so buckets that were available. The msn.com guys were using my table to attempt to do fast lookups of tens of thousands of US postal codes, all of which are strings of five digits. Between that and a threading bug in the same code, I wrecked the performance of an important page on msn.com; this was both costly and embarrassing. Data is sometimes heavily clustered, and a good hash algorithm will take that into account.

In particular, be careful of "xor". It is very common to combine hash codes together by xoring them, but that is not necessarily a good thing. Suppose you have a data structure that contains strings for shipping address and home address. Even if the hash algorithm on the individual strings is really good, if the two strings are frequently the same then xoring their hashes together is frequently going to produce zero. "xor" can create or exacerbate distribution problems when there is redundancy in data structures.
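A common alternative, sketched below, is to combine the component hashes with multiply-and-add rather than a plain xor, so that two identical fields do not cancel each other out to zero; the constants 17 and 31 are conventional choices, not magic.

static int CombineHashCodes(int h1, int h2)
{
    unchecked
    {
        int hash = 17;
        hash = hash * 31 + h1; // order matters, so (a, b) and (b, a) hash differently
        hash = hash * 31 + h2;
        return hash;
    }
}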

Security issue: if the hashed data can be chosen by an attacker then you might have a problem on your hands

When I wrecked that page on msn.com it was an accident that the chosen data interacted poorly with my algorithm. But suppose the page was in fact collecting data from users and storing it in a hash table for server-side analysis. If the user is hostile and can deliberately craft huge amounts of data that always hashes to the same bucket then they can mount a denial-of-service attack against the server by making the server waste a lot of time looking through an unbalanced hash table. If that's the situation you are in, consult an expert. It is possible to build hostile-data-resistant implementations of GetHashCode but doing so correctly and safely is a job for an expert in that field.

Security issue: do not use GetHashCode "off label"

GetHashCode is designed to do only one thing: balance a hash table. Do not use it for anything else. In particular:

* It does not provide a unique key for an object; probability of collision is extremely high.
* It is not of cryptographic strength, so do not use it as part of a digital signature or as a password equivalent.
* It does not necessarily have the error-detection properties needed for checksums.

and so on.

Getting all this stuff right is surprisingly tricky.


(*) If you have ten pigeons kept in nine pigeonholes, then at least one pigeonhole has more than one pigeon in it.

Comments

  • Anonymous
    February 28, 2011
    Thanks for this blog post.  It's good to see an authoritative source provide guidelines for GetHashCode.  There's such a hodgepodge of misinformation out there right now.  Sadly I must report that I'm not currently in adherence with your guidelines. Thanks also for the great blog :-)

  • Anonymous
    February 28, 2011
    This is just amazing. I've always been curious (and have tried various articles) to understand the GetHashCode() logic. None of the blog articles come close to this with regards to the explanation. Thanks a lot Eric.

  • Anonymous
    February 28, 2011
    Excellent article, thanks.  Your point about the risk of hashing algorithms' susceptibility to denial of service was especially enlightening.

  • Anonymous
    February 28, 2011
    If you can't have an IHashable interface, why not a HashableObject class? I understand that multiple inheritance problems can creep up, and interfaces were introduced to solve this; but surely they can't be worse than putting it directly in Object.

  • Anonymous
    February 28, 2011
    Yeah, getting GetHashCode() right is really important... At work we ran into a problem once, where a Dictionary had linear lookup time (which ruined everything performance-wise). Of course, GetHashCode() was the culprit. It turns out it wasn't our fault: the keys of the dictionary were binary data serialized as a string, and the .NET runtime for 64-bit has that bug in String.GetHashCode() where it doesn't look past the first '\0' byte (and our keys mostly began with such bytes). What we did is, we just XOR'ed all our keys with some constant that minimized the probability of '\0' bytes occurring.

  • Anonymous
    February 28, 2011
    By the way, from the class System.Tuple: internal static int CombineHashCodes(int h1, int h2) {    return (((h1 << 5) + h1) ^ h2); } Possibly, 33 is used because it is slightly quicker than actually multiplying and is 'good enough'. I usually use something like 15 or 31 when I implement it myself; this messes up lots of bits, I think, and it's worked well for me so far. Of course, there's no substitute for type-specific hashes; if you have a boolean, only make it change one bit, and if you have an enum with 10 possible values, there's no reason to give it 8 bits of space.

  • Anonymous
    February 28, 2011
    Thanks for article, Eric! Unfortunately, the default implementations of GetHashCode for some value types violate the rules as well. For example, the decimal type (see Connect ID 314630 for details). I've also experienced such a violation with a simple struct of primitive value types. So the guideline is the following: if you want your type to play well with LINQ, take care of its GetHashCode method.

  • Anonymous
    February 28, 2011
    @paul_sh thanks. This is a bit silly: connect.microsoft.com/.../single-double-break-contract-for-equals-and-gethashcode is marked as fixed when it's still broken for NaNs. Connect will not let me re-open a bug, so I think I'll raise another.

  • Anonymous
    February 28, 2011
    raised: connect.microsoft.com/.../double-single-gethashcode-methods-do-not-obey-the-equals-contract-for-certain-nan-values

  • Anonymous
    February 28, 2011
    "I think if we were redesigning the type system from scratch today..." Please do! And get rid of null-references while you're at it. Use Option<T> or Maybe<T> instead. I hate checking for null. P.S. I'm not really serious. I just hate nulls.

  • Anonymous
    February 28, 2011
    What about the default GetHashCode of a struct? Can we use that, or should we write our own?

  • Anonymous
    March 01, 2011
    I've never seen a good reason to implement your own GetHashCode - thus my objects always have the default Object implementation (as a corollary, never override default Equals() either).  As a follow up, could you explain when you might want to do this?

  • Anonymous
    March 01, 2011
    >> when you might want to do this  My take on that would be: for structs, you should implement Equals and GetHashCode as a matter of efficiency (the default implementation supplied by ValueType uses reflection and is slow). For classes, you should implement Equals and GetHashCode when your class is a "reference value type" (i.e. the interesting part about it is solely the data that it represents, not its object identity; this should generally also imply immutability). Good examples from the Framework itself are Tuple, String, Uri and XName.

  • Anonymous
    March 01, 2011
    @BinaryCode: Wow, that was fast, thanks!

  • Anonymous
    March 01, 2011
    Correction. I was thinking of the Thread ID, not the Process ID.

  • Anonymous
    March 02, 2011
    @Pavel, so for structs it is a matter of ValueType.GetHashCode() being inefficient - ok - why isn't this fixed in the CLR? But what is the advantage of what you propose for classes that are a "reference value type" (i.e. the interesting part about it is solely the data that it represents, not its object identity; this should generally also imply immutability)? What does that add?

  • Anonymous
    March 03, 2011
    @Random / Pavel I think I understand your points better now.  For structs, as they are value types you'd expect Equals() to work when two have the same content, therefore the GetHashCode should be the same between two with equal content.  Likewise for strings, "a" and "a" should be equal (even though the object ref may not be) - Tuple, String, Uri and XName, I see.  I think then you could say that you ought to implement GetHashCode when you want the Equals() operator to be a logical Equals and not a ReferenceEquals?  is that a fair characterization?

  • Anonymous
    March 03, 2011
    Oh, and here's one more very good reason why you should always override Equals and GetHashCode on your structs in practice: connect.microsoft.com/.../valuetype-equals-is-incorrect

  • Anonymous
    May 06, 2011
    Oliver, yes, I reported that bug here: http://bit.ly/aSkUIm (and as usual, Microsoft decided to close as Won't Fix) I've written extensively about hashing here: http://bit.ly/9qNcga
