Rainbow Tables (probably) aren’t what you think — Part 2: Probability, Efficiency, and Chain Collisions

Ryan Sheasby
Apr 12, 2021 · 7 min read

You should read Part 1 before continuing with this article, if you haven’t already. Also, fair warning: this article is gonna get a good bit more complex than the last one.

In the last article we discussed the difference between lookup tables and Precomputed Hash Chains. In this article we’ll discuss the problems with basic PHC tables, and how Rainbow Tables solve them.

Probability & Efficiency

Try this experiment: roll a die repeatedly, with the goal of rolling each possibility at least once. On the first roll, the chance of a “collision” (rolling something you’ve already rolled before, therefore wasting that roll) is zero. The next roll has a 1 in 6 chance of colliding, since you might roll the same number as before. After 5 numbers have been discovered, you’re likely to need several attempts before discovering the final number, since every roll has only a 1 in 6 chance of being the one you’re missing.
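
If you’d rather not dig out a die, here’s a quick simulation of the same experiment (a minimal sketch of mine, not something from the original article):

```python
import random

def rolls_to_see_all_faces() -> int:
    # Roll until every face 1-6 has appeared, counting every roll,
    # including the "collisions" that land on an already-seen face.
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(random.randint(1, 6))
        rolls += 1
    return rolls

trials = [rolls_to_see_all_faces() for _ in range(100_000)]
print(sum(trials) / len(trials))  # averages out around 14.7 rolls
```

The average lands around 14.7 rolls to see all six faces, with most of the waste spent hunting down the last face or two.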

The reduce function we use for the hash chains has the exact same problem. Remember that the reduce function is essentially a deterministic pseudo-random password generator. The more passwords we’ve already hashed, the greater the probability of a collision when we run our reduce function, as seen in the graph below:

As you can see, if we want to crack 95% of 4 digit md5 hashes, we need about 25,000 total hash computations. If we wanted to crack 100%, we’d need about 100,000 hash computations. We can still adjust our chain length to trade longer lookup times against larger storage requirements, though, so this doesn’t completely ruin the efficacy of a PHC/Rainbow Table approach. However, there is a bigger problem that does make PHC tables almost completely useless when unmitigated…
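
As a rough sanity check of those figures, we can model the reduce function as a uniform random draw from the 10,000-password keyspace (an idealization of mine, so the numbers won’t match the graph exactly). Under that model, the expected coverage after k hash computations is 1 - (1 - 1/N)^k ≈ 1 - e^(-k/N):

```python
import math

KEYSPACE = 10_000  # all 4 digit passwords: "0000" through "9999"

def expected_coverage(hashes_computed: int) -> float:
    # Idealized model: every hash+reduce step is a uniform random draw
    # from the keyspace, so coverage after k draws is roughly 1 - e^(-k/N).
    return 1 - math.exp(-hashes_computed / KEYSPACE)

print(f"{expected_coverage(25_000):.1%}")   # ~91.8%, in the right ballpark
print(f"{expected_coverage(100_000):.1%}")  # effectively 100.0%
```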

Chain Collisions

Remember that one requirement for a hash chain is that it’s deterministic, because we need to be able to reproduce the chain from scratch. However, this creates a problem if a value in one chain collides with a value in another. Consider the following example:

Here we have two chains where one of the reduce operations has resulted in a “merge”. The reduce operation has collided with one of the values in another chain, resulting in not only that one value colliding, but all subsequent values too. This isn’t a major issue when our chains only have a length of 3, but when we’re using a chain with a length of 100,000, a single collision could waste a lot of compute power, and greatly reduce our coverage.
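
Here’s that merging behaviour as a runnable sketch, using a toy md5-based reduce function over the 4 digit keyspace (the reduce function and the chain parameters are illustrative choices of mine, not from any real table):

```python
import hashlib

KEYSPACE = 10_000  # all 4 digit passwords: "0000" through "9999"

def md5_hex(pw: str) -> str:
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_plain(h: str) -> str:
    # Toy reduce: interpret the hash as a big number, map it into the keyspace.
    return f"{int(h, 16) % KEYSPACE:04d}"

def chain(start: str, length: int) -> list[str]:
    values = [start]
    for _ in range(length - 1):
        values.append(reduce_plain(md5_hex(values[-1])))
    return values

def find_merge(chains: list[list[str]]):
    # Map each value to the first (chain, position) it appeared at; the first
    # time a value shows up in a *different* chain, we've found a merge.
    seen: dict[str, tuple[int, int]] = {}
    for ci, c in enumerate(chains):
        for pos, value in enumerate(c):
            if value in seen and seen[value][0] != ci:
                return seen[value], (ci, pos)
            seen.setdefault(value, (ci, pos))
    return None

chains = [chain(f"{i:04d}", 20) for i in range(50)]
merge = find_merge(chains)
if merge:
    (a, apos), (b, bpos) = merge
    print(f"chain {a} and chain {b} merge at {chains[a][apos]}")
    print(chains[a][apos:])  # identical from the merge point onward...
    print(chains[b][bpos:])  # ...because every following step is deterministic
```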

Even worse, we cannot efficiently detect this collision at generation time and remove one of the colliding chains, since that would require holding on to the full contents of the previous chains. And even if we could remove colliding chains, because of the probabilities we discussed earlier, it would be almost impossible to find enough non-colliding chains to get decent coverage of the search space. Ultimately, collisions are here to stay. Instead of removing them, we need to embrace them, and mitigate their impact.

Rainbow Tables

This is where the innovation of rainbow tables comes in. With a rainbow table we add a second parameter to our reduce function, as a sort of “salt”, which changes depending on our position in the chain. Just like before, the reduce function is still deterministic, it just takes one extra piece of information.
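
In code, the difference might look something like this (the toy reduce function is my own; any function that mixes the position into the output works):

```python
KEYSPACE = 10_000  # all 4 digit passwords

def reduce_plain(h: str) -> str:
    # PHC-style reduce: the same function at every position in the chain.
    return f"{int(h, 16) % KEYSPACE:04d}"

def reduce_salted(h: str, position: int) -> str:
    # Rainbow-style reduce: the chain position acts as a "salt", so each
    # column of the table effectively gets its own reduce function.
    return f"{(int(h, 16) + position) % KEYSPACE:04d}"
```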

Note: most technical literature (including Oechslin’s initial research paper) describes a different reduce function per chain position, instead of a single salted reduce function. This is just semantics, since in reality nobody would implement a rainbow table by manually defining thousands of unique reduce functions.

Here’s how a collision looks in a rainbow table:

As you can see, even though 7353 did collide, the rainbow table prevented a merge, because the two chains didn’t use the same reduce salt afterwards. Merges are still possible, but only if the collision position “lines up”, like so:

However, when the collision position lines up, the end of the chain (which is stored on disk) will be the same, meaning it’s easy to detect and remove these merged chains at generation time, as sketched in the code below.

There is one problem though: we have to change our strategy slightly for looking up a hash. We can’t simply reduce and hash continuously like with a PHC table, since our reduce function isn’t static. Instead, we need to make several “lookup chains”, each starting with a different salt.
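
Here’s that dedup step as a sketch (same toy md5 setup as before, with parameters of my own choosing). Because merged chains share an endpoint, keying the stored table on endpoints drops the duplicates automatically:

```python
import hashlib

KEYSPACE, CHAIN_LEN = 10_000, 5  # toy parameters of my own

def md5_hex(pw: str) -> str:
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_salted(h: str, position: int) -> str:
    return f"{(int(h, 16) + position) % KEYSPACE:04d}"

def chain_end(start: str) -> str:
    # Run a full chain and return only the endpoint, which is all we store.
    value = start
    for position in range(CHAIN_LEN):
        value = reduce_salted(md5_hex(value), position)
    return value

# Chains that merged at a lined-up position share an endpoint, so keying the
# table on endpoints silently keeps only one chain from each merged pair.
table = {chain_end(f"{i:04d}"): f"{i:04d}" for i in range(1_000)}
print(f"{len(table)} unique chains kept out of 1,000 generated")
```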

Once again, imagine we are looking for the password that corresponds with the hash 4a7d1ed414474e4033ac29ccb8653d9b. First we create a lookup chain that starts with R2, then look up the resulting hash, on the hypothesis that our hash is at the second-last position in a chain. If it’s not found, we try again starting with R1, on the hypothesis that our hash is at the third-last position in a chain:

Once we’ve found the chain that contains our hash, we follow the exact same process as with a PHC table, starting from the beginning to reproduce the chain and recover the password that produced our hash.
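
Putting the whole lookup together as a runnable sketch (same toy setup as above; every parameter here is an illustrative choice of mine, not from the original article):

```python
import hashlib

KEYSPACE, CHAIN_LEN = 10_000, 5  # toy parameters of my own

def md5_hex(pw: str) -> str:
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_salted(h: str, position: int) -> str:
    return f"{(int(h, 16) + position) % KEYSPACE:04d}"

def chain_end(start: str) -> str:
    value = start
    for position in range(CHAIN_LEN):
        value = reduce_salted(md5_hex(value), position)
    return value

# endpoint -> start, deduplicated for free by the dict, as discussed above
table = {chain_end(f"{i:04d}"): f"{i:04d}" for i in range(2_000)}

def lookup(target_hash: str) -> str | None:
    # Work backwards through the position hypotheses: the last position
    # needs the shortest lookup chain, the first needs the longest.
    for position in range(CHAIN_LEN - 1, -1, -1):
        value = reduce_salted(target_hash, position)
        for p in range(position + 1, CHAIN_LEN):
            value = reduce_salted(md5_hex(value), p)
        start = table.get(value)
        if start is None:
            continue  # no chain ends here; try the next hypothesis
        # Reproduce the matching chain from its start to recover the password.
        value = start
        for p in range(CHAIN_LEN):
            if md5_hex(value) == target_hash:
                return value
            value = reduce_salted(md5_hex(value), p)
        # False alarm (a collision that never merged); keep trying.
    return None

print(lookup(md5_hex("0042")))  # expected to print "0042" (unless that chain was dropped in a rare merge)
```

Note how each successive hypothesis needs a slightly longer lookup chain; that growing cost is exactly where the triangle number below comes from.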

The reason they’re called “rainbow tables” is that the original illustration assigned a different color to each reduce function, making a full chain look like a rainbow.

The addition of different reduce functions does mean that our lookups take a fair bit longer, but they’re still a lot faster than a brute-force. To be specific, the number of hashing operations required to look up a hash is the triangle number of the chain length, n(n + 1)/2, since each successive position hypothesis needs a lookup chain one step longer than the last. For example, if we have a table with a chain length of 100,000, we need to perform approximately 5 billion hashing operations per lookup. Although this is definitely a lot, it’s about 1.3 million times better than a brute-force, and can be done in about 70 milliseconds on an RTX 3090.
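
Checking those numbers quickly (assuming a 95 character printable set for the 8 character brute-force comparison, which is my guess at where the ~1.3 million figure comes from):

```python
n = 100_000                    # chain length
lookup_ops = n * (n + 1) // 2  # triangle number: 5,000,050,000, i.e. ~5 billion
brute_force = 95 ** 8          # assumption: full 95 character printable ASCII set
print(f"speedup: {brute_force / lookup_ops:,.0f}x")  # ~1.3 million
```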

So now that you know how they work, you might be wondering why rainbow tables aren’t more popular. There are a few reasons I can point to:

  • Secure password hashes are immune to rainbow tables, since they use salting
  • Insecure password hashes can be cracked at pretty reasonable speed on modern cracking rigs (you can do a full 8 character NTLM crack in less than a day on an RTX 3090, and you can crack thousands of hashes simultaneously, all without storing terabytes of data)
  • Rainbow tables are still heavily constrained by modern hardware. Most people don’t have terabytes of storage available, and you still need to do the equivalent of several brute-forces to generate the rainbow table in the first place. The best rainbow tables publicly available only go up to 8 characters for a full character set, and 10 characters for a lower-case only character set (which is useless against modern complexity rules).
  • Brute-force attacks aren’t that effective anymore. Back when most services and users had very short passwords, the ability to crack any 8 character password yielded excellent results. However, due to the popularity of password policies with complexity rules, most users now pick passwords longer than 8 characters. The most commonly cracked passwords these days look like “Password123” or “Summer@2021”, which are well beyond 8 characters. In fact, the average password length is over 9 characters, against which an 8 character brute-force will do no good.
  • Wordlist attacks with manipulation rules are far more effective at getting actual user-picked passwords. Real users don’t randomly generate passwords (with the exception of security-minded folks, who know better than to generate only 8 characters). Real users pick one or two words, then add a few extra numbers and symbols to satisfy the complexity requirements. An attack with a massive wordlist and ruleset (weakpass2a with OneRule is my go-to for NTLM, which an RTX 3090 can crunch through in a couple of hours) gets approximately 60% of real user passwords.

Well, now you know. Rainbow tables are really cool. They’re also not that useful anymore for password cracking, which makes this series of articles entirely academic. But hey, at least you won’t mix them up with lookup tables anymore. 😁

