Security with Certainty

A future algorithmic breakthrough could render insecure all of the electronic commerce protocols currently in use. However, theoretical computer scientists expect one day to remove this threat by designing protocols that have unconditional proofs of security.


Summary

Modern research in cryptography has transformed a significant portion of computer security from an art into a science. Instead of merely hoping that a clever adversary will not find a way to break the system, we can now prove the security of cryptographic protocols (encryption, digital signatures, etc.) based on precise computational conjectures. For example, we can construct algorithms for encrypting messages so that it is computationally infeasible for anyone other than the intended recipient to learn anything about the message from its encryption, under the assumption that there are no fast algorithms for factoring large integers.

However, these conjectures can turn out to be false, as was discovered in 2004 for the widely used hash function MD5, which had been conjectured to be “collision-resistant” (i.e. it should be infeasible to find two inputs that hash to the same value). If a similar discovery were made in the future and kept secret, devastating damage could be inflicted on the electronic economy and cyberinfrastructure before the vulnerability is noticed. Thus, an important research challenge is to eliminate the need for conjectures in cryptography. Achieving this goal will require major advances in theoretical computer science, including resolving the famous P vs. NP question. Along the way, researchers continue to improve our understanding of the conjectures on which cryptography is based, thereby putting computer security on firmer foundations.
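To make the notion of collision resistance concrete, here is a small Python sketch (our own illustration, not part of the original challenge statement) that brute-forces a collision in a deliberately weakened hash: SHA-256 truncated to its first three bytes. The truncation length and function names are arbitrary choices for the demonstration. For the full, untruncated function no feasible search of this kind is known, and the conjecture that none exists is exactly the kind of assumption discussed above.

```python
# Toy illustration of (the failure of) collision resistance.
# We truncate SHA-256 to its first 3 bytes so that a birthday-style search
# finds a collision quickly; for the full 32-byte output no feasible search
# is known, which is the conjecture referred to in the text.
import hashlib

def truncated_hash(data: bytes, num_bytes: int = 3) -> bytes:
    """First `num_bytes` bytes of SHA-256(data) -- a deliberately weak hash."""
    return hashlib.sha256(data).digest()[:num_bytes]

def find_collision(num_bytes: int = 3):
    """Hash messages message-0, message-1, ... until two hash to the same value."""
    seen = {}  # truncated hash -> message that produced it
    counter = 0
    while True:
        message = f"message-{counter}".encode()
        digest = truncated_hash(message, num_bytes)
        if digest in seen:
            return seen[digest], message  # a collision: two inputs, one hash value
        seen[digest] = message
        counter += 1

m1, m2 = find_collision()
print("collision:", m1, m2)
print("both hash to:", truncated_hash(m1).hex())
```

With a 3-byte output the search succeeds after only a few thousand attempts; each additional byte of output multiplies the expected work by a factor of 16, which is why the full hash is conjectured (but not proven) to be collision-resistant.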

Rationale

Most electronic transactions on the Internet are protected by cryptography, the subfield of theoretical computer science that develops protocols for communication and computation that maintain secrecy and correctness in the face of adversarial attacks. For example, an encryption algorithm aims to encode data in such a way that only the intended recipient (in possession of a “decryption key”) can recover the data, while an adversary listening in learns nothing about the data from seeing only its encoded form. Encryption is the cryptographic concept with the longest history (dating back at least to the time of Caesar), but modern cryptography now provides security for a vast array of tasks far more complex than communication (such as auctions, datamining, voting, and electronic cash).
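As a concrete illustration of such a scheme, the following Python sketch implements “textbook RSA,” a public-key encryption scheme whose security is tied to the conjectured hardness of factoring. Everything about it is deliberately simplified: the primes are tiny, there is no padding or randomization, and the helper names are ours, so it should be read as a structural sketch rather than a usable implementation.

```python
# Toy "textbook RSA": a public-key encryption scheme whose security rests on
# the conjectured hardness of factoring the modulus n.  Tiny primes and no
# padding -- NOT secure in practice, shown only to illustrate the structure.
from math import gcd, isqrt

def is_prime(n: int) -> bool:
    """Trial division; adequate for the toy sizes used here."""
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def next_prime(n: int) -> int:
    while not is_prime(n):
        n += 1
    return n

def keygen():
    p, q = next_prime(10_000), next_prime(20_000)  # toy primes; real keys use primes hundreds of digits long
    n, phi = p * q, (p - 1) * (q - 1)              # factoring n would reveal p, q and hence the secret key
    e = 65537                                      # standard public exponent
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)                            # secret exponent (modular inverse; Python 3.8+)
    return (n, e), d

def encrypt(public_key, m: int) -> int:
    n, e = public_key
    return pow(m, e, n)                            # anyone can do this with the public key alone

def decrypt(public_key, d: int, c: int) -> int:
    n, _ = public_key
    return pow(c, d, n)                            # only the holder of d can undo the encryption

public_key, d = keygen()
ciphertext = encrypt(public_key, 424242)
assert decrypt(public_key, d, ciphertext) == 424242
```

The point of the structure is that anyone can encrypt using the public pair (n, e), while decryption requires the secret exponent d; recovering d from the public key is known to be as hard as factoring n, although breaking the encryption outright has not been proven equivalent to factoring, which is precisely the kind of gap between conjecture and proof that this challenge is about.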

Historically, the security of encryption was largely a battle of wits between the designers and the attackers. A scheme might last a few years, until a clever attacker came along and found a way to break it, after which a new scheme would be designed to prevent that particular attack (but with no guarantee of security against future attacks). It was not until the development of modern theoretical computer science (TCS) in the 1970s that it finally became conceivable to escape this cycle of attacks and countermeasures. TCS demonstrated that there are computational problems that are inherently intractable — they require inordinate computational resources to solve, no matter how sophisticated or clever an algorithm is used. Thus, we could try to design encryption algorithms such that “breaking” the code could be mathematically proven to be one of these computationally intractable problems.

Cryptography was enormously successful in this effort during the 1980s and 1990s, providing not only encryption algorithms but also protocols achieving goals that had not even been imagined before (public-key encryption, privacy-preserving datamining, secure multiparty computation, zero-knowledge proofs). However, these protocols were not proven secure unconditionally; rather, it was shown that breaking them is as hard as solving some computational problem that is conjectured to be intractable (such as factoring large integers). Thus, there is still the possibility that some of these conjectures will turn out to be incorrect and an attacker will be able to break the protocols that protect our cyberinfrastructure.

Thus, a grand challenge for cryptography is to design protocols that can be proven secure without relying on conjectures. This is a lofty goal, because proving the security of cryptographic protocols (in standard communication settings) requires resolving the famous “P vs. NP” question, the foremost open problem of theoretical computer science. Providing “Security with Certainty” as described here is but one of many reasons that the P vs. NP question is so important, and it will continue to receive substantial effort from researchers in the decades ahead.

Along the way, there are numerous interim goals on which theoretical computer scientists continue to make progress, and which can increase society’s confidence in the security of the cryptography we use every day. Below are some examples.

  •  As mentioned above, resolving the P vs. NP question (specifically proving NP != P) is necessary for proving security, but it is not known to be sufficient and thus cryptographers currently need to rely on assumptions that are stronger than NP != P. An important goal is to understand whether we can build cryptographic protocols whose security relies only on the answer to the P vs. NP question.
  •  Some cryptographic primitives seem to require stronger conjectures than others. For example, “public-key cryptography,” which enables secure communication between parties who have never met before, seems to require intractable problems with a lot of structure, for which there are relatively few candidates (such as factoring large integers). But traditional “private-key cryptography,” for communication between parties who share a secret key in advance, does not require this and can be based on any “one-way function” (a function that is easy to compute but hard to invert; see the sketch after this list), for which there are many candidates. Can this gap be closed? More generally, we should try to base cryptographic protocols on assumptions that are as general as possible, so as to maximize the number of alternative instantiations available if some are broken.
  •  Some protocols can only be analyzed when certain components of the protocol are treated in an idealized way. For instance, cryptographic hash functions such as SHA-256 are often assumed to behave as a so-called “random oracle” would. These idealizations leave gaps in our understanding of how security can arise computationally. At the same time, intriguingly few attacks are known against real-world systems that are analyzed with such idealizations. Thus, it is important both to try to eliminate the need for such idealizations and to gain a better understanding of why these idealized analyses often seem to be resilient to attack.
  •  Another concern with the “structured” problems currently used in public-key cryptography is that many of them would be easy to break if scalable “quantum computers” are ever built. Thus, another goal is to find more candidates for public-key cryptography that withstand quantum attacks.
  •  There are currently two general flavors of building blocks used to construct cryptographic protocols. On one hand, there are mathematically “natural” problems, such as factoring large integers, which have been well studied by computer scientists and mathematicians for many years, giving us significant confidence in their intractability. On the other hand, there are “man-made” primitives, such as the “block ciphers” DES and AES and the “hash functions” MD5 and SHA-1, which yield extremely fast cryptography but whose security we understand far less well. Can we achieve the best of both worlds, and obtain very fast cryptography from natural computational problems?
  •  Can we get a better understanding of the relationship between the various computational problems we use to build cryptographic protocols? For example, the Factoring problem and Discrete Logarithm problem seem to have a “hidden connection” that we do not yet understand. We have no results showing that the computational difficulties of these two problems are related, yet progress on one seems to always be accompanied by similar progress on the other. Uncovering the hidden connection at work here would likely have significant consequences for both cryptography and pure mathematics.
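To illustrate the “one-way function” candidates mentioned in the list above, here is a short Python sketch (our own illustration; the parameter values and function names are arbitrary). It shows the two classic candidates that the last bullet also alludes to: multiplying two primes, whose inversion is the factoring problem, and modular exponentiation, whose inversion is the discrete logarithm problem. The forward direction is fast even for enormous inputs, while the brute-force inversions below finish only because the parameters are tiny.

```python
# Two classic candidate one-way functions: easy to compute in one direction,
# conjectured hard to invert.  The parameters are tiny so that the brute-force
# "inversions" below actually finish; with realistic sizes (hundreds of digits)
# no feasible inversion is known.

# Candidate 1: multiplication of two primes.  Computing n = p * q is easy;
# inverting it -- recovering p and q from n -- is the factoring problem.
def multiply(p: int, q: int) -> int:
    return p * q

def factor_by_trial_division(n: int):
    """Brute-force inversion: exponential in the number of digits of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# Candidate 2: modular exponentiation.  Computing y = g^x mod p is easy;
# inverting it -- recovering x from (g, y, p) -- is the discrete log problem.
def mod_exp(g: int, x: int, p: int) -> int:
    return pow(g, x, p)

def discrete_log_by_search(g: int, y: int, p: int) -> int:
    """Brute-force inversion: tries every exponent until one matches."""
    for x in range(p):
        if pow(g, x, p) == y:
            return x
    raise ValueError("no discrete log found")

n = multiply(10007, 10009)                   # easy direction
print(factor_by_trial_division(n))           # feasible only because n is tiny

y = mod_exp(5, 12345, 99991)                 # easy direction (99991 is prime)
print(discrete_log_by_search(5, y, 99991))   # feasible only because p is tiny
```

With numbers hundreds of digits long, the forward computations remain essentially instantaneous, but no known algorithm inverts either function in any feasible amount of time; that asymmetry is exactly what the conjectures discussed above assert, and the unexplained parallel progress on the two inversion problems is the “hidden connection” raised in the final bullet.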

Contributors and Credits

Boaz Barak, Cynthia Dwork, Dick Lipton, Kristin Rozier, Amit Sahai, Salil Vadhan
