
Double HMAC Verification

I'd like to thank everyone who came to see my talk at RSA 2011 this morning on Common Flaws and Failures of Distributed Authentication Systems.  Watch this site for the slides and full whitepaper, to be published next week.  In the meantime, since time ran short during the session, here is the broader treatment of double HMAC verification I promised the audience.

In the last year, Nate Lawson brought wide attention to a common timing side channel attack against HMACs.  Although fixed in many places, the flaw is still common, and iSEC has found that the most straightforward solution doesn't work in every context.  This blog post gives a quick overview of the bug and describes a new, and we believe better, solution to the issue: double HMAC verification.

To acquaint readers unfamiliar with the bug, HMAC is an algorithm for producing a keyed message authentication code using a hash function.  (The output of the HMAC algorithm is also referred to as an HMAC.)  HMAC takes as input a message and key, and produces a fixed-length digest output which can be sent with the message.  Anyone who knows the key can repeat the algorithm and compare their calculated HMAC with one they have received, to verify a message originated from someone with knowledge of the key and has not been tampered with.  The idea is simple enough - the problem comes when performing the comparison.
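
Before getting to the comparison, here is a minimal C# sketch of the computation side for readers who haven't used HMAC before.  The class and method names are my own illustration; only HMACSHA256 itself comes from the .NET framework.

using System.Security.Cryptography;

public static class HmacTagExample
{
    // Computes an HMAC-SHA256 tag for a message under a shared key.
    // The sender transmits the message together with this digest; the
    // receiver recomputes it with the same key and compares the two.
    public static byte[] ComputeTag(byte[] key, byte[] message)
    {
        using (HMACSHA256 hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(message);
        }
    }
}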

The standard algorithm to compare byte arrays for equality looks like:

if(byteArrayA.length != byteArrayB.length) { return false; }
for(int i = 0; i < byteArrayA.length; i++) {
  if(byteArrayA[i] != byteArrayB[i]) { return false; }
}
return true;

The problem with this algorithm is that the more bytes match between the two arrays, the more comparisons are performed, and the longer it takes to return.  The consequence is that an attacker who can measure this difference in timing can submit guesses at a valid HMAC for an arbitrary text and check the correctness of those guesses one byte at a time.  The attacker submits guesses for the first byte until a value is found that takes longer to return an error; that value is then "locked in" and guessing proceeds to the next byte.  This makes brute force trivial, even for arbitrarily long HMAC values.  The timing differences are very subtle and difficult to measure in some contexts, but successful attacks have been mounted against real systems using this technique.  For additional details and background, see Lawson's post on the problem.
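
To make the adaptive guessing concrete, here is a conceptual C# sketch of the attacker's loop.  SubmitGuessAndMeasure is a hypothetical stand-in for sending a forged tag to the target and timing its rejection; a real attack has to average many samples per candidate byte to separate the signal from network jitter.

public static class TimingAttackSketch
{
    // Hypothetical oracle (illustration only): returns how long the target
    // took to reject the forged tag.
    static double SubmitGuessAndMeasure(byte[] forgedTag)
    {
        throw new System.NotImplementedException("stand-in for the real measurement");
    }

    // Recovers a valid tag one byte at a time by always keeping the
    // candidate value whose rejection took the longest.
    public static byte[] RecoverTag(int tagLength)
    {
        byte[] forged = new byte[tagLength];
        for (int position = 0; position < forged.Length; position++)
        {
            byte bestByte = 0;
            double slowestRejection = 0;
            for (int candidate = 0; candidate <= 255; candidate++)
            {
                forged[position] = (byte)candidate;
                double elapsed = SubmitGuessAndMeasure(forged);
                if (elapsed > slowestRejection)
                {
                    // A slower rejection suggests one more leading byte matched.
                    slowestRejection = elapsed;
                    bestByte = (byte)candidate;
                }
            }
            forged[position] = bestByte;   // lock in this byte and move on
        }
        return forged;
    }
}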

Let's talk about solutions.

Lawson recommends using a constant-time comparison function to avoid this flaw when verifying HMACs.  This is a good recommendation for native code, but building a truly constant-time function is not as simple as it sounds on the .NET CLR, the JVM, or other high-level platforms (which Lawson acknowledges).  I wrote test code in Java and C#, and found that source code which reads as constant-time often doesn't run that way.  The compiler, the JIT, and runtime optimizers on these platforms do not generally recognize timing as a program semantic.
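
For reference, the constant-time comparison Lawson describes is usually written along these lines (my sketch, not code from his post): it accumulates differences with bitwise operations instead of returning early, but the caveat above applies - nothing obliges the JIT or runtime to preserve its timing behavior.

public static bool ConstantTimeEquals(byte[] a, byte[] b)
{
    // Lengths of HMAC outputs are not secret, so an early return here is fine.
    if (a.Length != b.Length) { return false; }
    int difference = 0;
    for (int i = 0; i < a.Length; i++)
    {
        // OR-accumulate the XOR of each byte pair; no early exit on a mismatch.
        difference |= a[i] ^ b[i];
    }
    return difference == 0;
}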

In light of this, at iSEC we've begun recommending a different solution to our clients: double HMAC verification.  After receiving and calculating an HMAC for a given message, simply apply an additional round of HMAC to both byte arrays before comparing them.   Instead of removing the timing side channel, this deterministically randomizes the bits of an attacker's guess in a way he or she can't predict.  Without knowledge of the actual bytes being compared, the guesses can't be adapted and the timing side channel is useless. 

We like this solution for a few reasons:

  • It is easy to implement.  Usually it requires adding only two lines of code to an existing verification routine.
  • It is still relatively fast.  HMAC is a speedy operation, and the additional round operates on an input that fits within a single block of the hash algorithm.
  • Its security derives from the fundamental cryptographic properties of HMAC, so it is not dependent on runtime behavior, optimizations, or the lack thereof.

Here's some C# code to implement double HMAC verification:

using System.Security.Cryptography;

public void validateHMACSHA256(byte[] receivedHMAC, byte[] message, byte[] key)
{
    HashAlgorithm hashAlgorithm = new HMACSHA256(key);

    // Size and algorithm choice are not secret; no weakness in failing fast here.
    if (receivedHMAC.Length != hashAlgorithm.HashSize / 8)
    {
        throw new CryptographicException("HMAC verification failure.");
    }

    byte[] calculatedHMAC = hashAlgorithm.ComputeHash(message);

    // Now we HMAC both values again before comparing, to randomize the bytes compared.
    // These two lines are all that is needed to protect many existing implementations
    // against adaptive guessing attacks that exploit the timing side channel.
    receivedHMAC   = hashAlgorithm.ComputeHash(receivedHMAC);
    calculatedHMAC = hashAlgorithm.ComputeHash(calculatedHMAC);

    for (int i = 0; i < calculatedHMAC.Length; i++)
    {
        if (receivedHMAC[i] != calculatedHMAC[i])
        {
            throw new CryptographicException("HMAC verification failure.");
        }
    }
}

I would love to see cryptographic library developers begin including wrapper functions that perform safe HMAC comparison using this algorithm, instead of leaving unsuspecting developers to write their own - a task they will almost always get wrong.

-Brad Hill
