Secret-sharing
To understand public-key cryptography fully, one must first understand the essentials of one of the basic tools in contemporary cryptology: secret-sharing. There is only one way to design systems whose overall reliability must be greater than that of some critical components—as is the case for aircraft, nuclear weapons, and communications systems—and that is by the appropriate use of redundancy so the system can continue to function even though some components fail. The same is true for information-based systems in which the probability of the security functions being realized must be greater than the probability that some of the participants will not cheat. Secret-sharing, which requires a combination of information held by each participant in order to decipher the key, is a means to enforce concurrence of several participants in the expectation that it is less likely that many will cheat than that one will.
The RSA cryptoalgorithm described in the next section is a two-out-of-two secret-sharing scheme in which each key individually provides no information. Other security functions, such as digital notarization or certification of origination or receipt, depend on more complex sharing of information related to a concealed secret.
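A minimal sketch in Python of such a two-out-of-two scheme, assuming a simple XOR construction (the function names are illustrative): the secret is combined with a uniformly random pad, each participant holds one share, and either share alone is statistically independent of the secret.

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; either share alone reveals nothing."""
    pad = secrets.token_bytes(len(secret))               # uniformly random share
    other = bytes(a ^ b for a, b in zip(secret, pad))    # secret XOR pad
    return pad, other

def recover_secret(share1: bytes, share2: bytes) -> bytes:
    """Only both shares together reconstruct the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split_secret(b"the session key")
assert recover_secret(s1, s2) == b"the session key"
```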
RSA encryption
The best-known public-key scheme is the Rivest–Shamir–Adleman (RSA) cryptoalgorithm. In this system a user secretly chooses a pair of prime numbers p and q so large that factoring the product n = pq is well beyond projected computing capabilities for the lifetime of the ciphers. At the beginning of the 21st century, U.S. government security standards called for the modulus to be 1,024 bits in size—i.e., p and q each were to be about 155 decimal digits in size, with n roughly a 310-digit number. However, over the following decade, as processor speeds grew and computing techniques became more sophisticated, numbers approaching this size were factored, making it likely that, sooner rather than later, 1,024-bit moduli would no longer be safe, so the U.S. government recommended shifting in 2011 to 2,048-bit moduli.
Having chosen p and q, the user selects an arbitrary integer e less than n and relatively prime to p − 1 and q − 1, that is, so that 1 is the only factor in common between e and the product (p − 1)(q − 1). This assures that there is another number d for which the product ed will leave a remainder of 1 when divided by the least common multiple of p − 1 and q − 1. With knowledge of p and q, the number d can easily be calculated using the Euclidean algorithm. If one does not know p and q, finding either e or d from the other is as hard as factoring n, which is the basis for the cryptosecurity of the RSA algorithm.
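A sketch of this key-generation procedure in Python, using deliberately tiny primes for illustration (a real modulus would be 2,048 bits or more):

```python
from math import gcd, lcm

# Deliberately tiny primes for illustration only.
p, q = 61, 53
n = p * q                       # the public modulus, n = 3233

e = 17                          # chosen with no factor in common with (p-1)(q-1)
assert gcd(e, (p - 1) * (q - 1)) == 1

m = lcm(p - 1, q - 1)           # least common multiple of p-1 and q-1
d = pow(e, -1, m)               # found with the extended Euclidean algorithm
assert e * d % m == 1           # ed leaves a remainder of 1 when divided by m
```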
We will use the labels d and e to denote the function to which a key is put, but as keys are completely interchangeable, this is only a convenience for exposition. To implement a secrecy channel using the standard two-key version of the RSA cryptosystem, user A would publish e and n in an authenticated public directory but keep d secret. Anyone wishing to send a private message to A would encode it into numbers less than n and then encrypt it using a special formula based on e and n. A can decrypt such a message based on knowing d, but the presumption—and evidence thus far—is that for almost all ciphers no one else can decrypt the message unless he can also factor n.
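The "special formula" is modular exponentiation: a message block m, encoded as a number less than n, is enciphered as c = m^e mod n and recovered as m = c^d mod n. A sketch continuing the toy key from above (this is textbook RSA with no padding, which a real system would never omit):

```python
# Toy key from the key-generation sketch above.
n, e, d = 3233, 17, 413

message = 65                    # a message block encoded as a number less than n
cipher = pow(message, e, n)     # c = m^e mod n: anyone can compute this from the directory
recovered = pow(cipher, d, n)   # m = c^d mod n: requires the secret d
assert recovered == message
```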
Similarly, to implement an authentication channel, A would publish d and n and keep e secret. In the simplest use of this channel for identity verification, B can verify that he is in communication with A by looking in the directory to find A’s decryption key d and sending him a message to be encrypted. If the cipher he gets back decrypts to his challenge message under d, he will know that it was in all probability created by someone knowing e and hence that the other communicant is probably A. Digitally signing a message is a more complex operation and requires a cryptosecure “hashing” function. This is a publicly known function that maps any message into a smaller message—called a digest—in which each bit of the digest is dependent on every bit of the message in such a way that changing even one bit in the message is apt to change, in a cryptosecure way, half of the bits in the digest. By cryptosecure is meant that it is computationally infeasible for anyone to find a message that will produce a preassigned digest and equally hard to find another message with the same digest as a known one. To sign a message—which may not even need to be kept secret—A encrypts the digest with the secret e and appends the result to the message. Anyone can then decrypt the appended cipher using the public key d to recover the digest, which he can also compute independently from the message. If the two agree, he must conclude that A originated the cipher, since only A knew e and hence could have encrypted the message.
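A sketch of the hash-and-sign procedure, with SHA-256 standing in for the cryptosecure hashing function and the toy key from above (the digest is reduced below n only because the toy modulus is so small):

```python
import hashlib

# Authentication channel: A publishes d and n but keeps e secret (toy key from above).
n, e, d = 3233, 17, 413

def digest(message: bytes) -> int:
    # SHA-256 stands in for the cryptosecure hashing function; the result is
    # reduced below n only because the toy modulus is so small.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

msg = b"attached contract, not necessarily secret"
signature = pow(digest(msg), e, n)        # A encrypts the digest with the secret e

# Anyone can verify with the public d: decrypt the signature and
# independently recompute the digest from the message.
assert pow(signature, d, n) == digest(msg)
```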
Thus far, all proposed two-key cryptosystems exact a very high price for the separation of the privacy or secrecy channel from the authentication or signature channel. The greatly increased amount of computation involved in the asymmetric encryption/decryption process significantly cuts the channel capacity (bits per second of message information communicated). As a result, the main application of two-key cryptography is in hybrid systems. In such a system a two-key algorithm is used for authentication and digital signatures or to exchange a randomly generated session key to be used with a single-key algorithm at high speed for the main communication. At the end of the session this key is discarded.
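A sketch of such a hybrid system, again with the toy key (here used in the secrecy direction, e public and d secret); the fast single-key cipher is improvised from SHA-256 in counter mode purely for illustration:

```python
import hashlib, secrets

n, e, d = 3233, 17, 413            # toy key: e is the public half, d the secret half

def fast_cipher(key: bytes, data: bytes) -> bytes:
    # Illustrative single-key stream cipher: XOR with a SHA-256 counter key stream.
    out = bytearray()
    for i, byte in enumerate(data):
        out.append(byte ^ hashlib.sha256(key + i.to_bytes(8, "big")).digest()[0])
    return bytes(out)

session_key = secrets.randbelow(n - 2) + 2     # random session key, 2 <= k < n
wrapped = pow(session_key, e, n)               # exchanged under the two-key algorithm
k = pow(wrapped, d, n).to_bytes(2, "big")      # receiver unwraps it with the secret d

ciphertext = fast_cipher(k, b"the bulk of the session traffic")
assert fast_cipher(k, ciphertext) == b"the bulk of the session traffic"
# The session key is discarded when the session ends.
```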
Block and stream ciphers
In general, cipher systems transform fixed-size pieces of plaintext into ciphertext. In older manual systems these pieces were usually single letters or characters (or occasionally, as in the Playfair cipher, digraphs), since this was as large a unit as could feasibly be encrypted and decrypted by hand. Systems that operated on trigrams or larger groups of letters were proposed and understood to be potentially more secure, but they were never implemented because of the difficulty in manual encryption and decryption. In modern single-key cryptography the units of information are often as large as 64 bits, or about 13 1/2 alphabetic characters, whereas two-key cryptography based on the RSA algorithm appears to have settled on 1,024 to 2,048 bits, or between 310 and 620 decimal digits, as the unit of encryption.
A block cipher breaks the plaintext into blocks of the same size for encryption using a common key: the block size for a Playfair cipher is two letters, and for the DES (described in the section History of cryptology: The Data Encryption Standard and the Advanced Encryption Standard) used in electronic codebook mode it is 64 bits of binary-encoded plaintext. Although a block could consist of a single symbol, normally it is larger.
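A sketch of electronic-codebook-style block encryption of two-letter blocks, with a keyed affine map standing in for a real block cipher such as DES; note that identical plaintext blocks always produce identical ciphertext blocks, which is the characteristic weakness of this mode:

```python
# Electronic codebook encryption of two-letter blocks; the keyed affine map
# is a stand-in for a real block cipher such as DES.
A, B, M = 137, 42, 26 * 26        # key; A must have no factor in common with 676

def encrypt_block(block: str) -> str:
    x = (ord(block[0]) - 65) * 26 + (ord(block[1]) - 65)   # digraph as a number
    y = (A * x + B) % M                                    # the keyed permutation
    return chr(y // 26 + 65) + chr(y % 26 + 65)

plaintext = "MEETMEATTENX"        # padded with X to a whole number of blocks
cipher = "".join(encrypt_block(plaintext[i:i + 2])
                 for i in range(0, len(plaintext), 2))
print(cipher)                     # the two "ME" blocks encrypt identically
```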
A stream cipher also breaks the plaintext into units, normally of a single character, and then encrypts the ith unit of the plaintext with the ith unit of a key stream. Vernam encryption with a onetime key is an example of such a system, as are rotor cipher machines and the DES used in the output feedback mode (in which the ciphertext from one encryption is fed back in as the plaintext for the next encryption) to generate a key stream. Stream ciphers depend on the receiver’s using precisely the same part of the key stream to decrypt the cipher that was employed to encrypt the plaintext. They thus require that the transmitter’s and receiver’s key-stream generators be synchronized: they must be synchronized initially and stay in step thereafter, or else the cipher will decrypt to a garbled form until synchrony can be reestablished. In self-synchronizing systems the key stream is computed from the ciphertext itself, so the two ends recover synchrony automatically after a few units; the price is that a single transmission error garbles several succeeding units, which is known as error propagation, an important parameter in any stream-cipher system.
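A sketch of a synchronized stream cipher whose key stream is generated in the output-feedback style described above, with SHA-256 standing in for the DES encryption; decryption succeeds only because both ends run the generator from the same starting point:

```python
import hashlib

def keystream(key: bytes, seed: bytes):
    # Output-feedback style: each output block is fed back as the next input.
    block = seed
    while True:
        block = hashlib.sha256(key + block).digest()   # stand-in for a DES encryption
        yield from block

def stream_encrypt(key: bytes, seed: bytes, data: bytes) -> bytes:
    ks = keystream(key, seed)
    return bytes(b ^ next(ks) for b in data)           # ith unit XOR ith key-stream unit

ct = stream_encrypt(b"key", b"shared-start", b"attack at dawn")
# Works only because both generators start from the same point.
assert stream_encrypt(b"key", b"shared-start", ct) == b"attack at dawn"
```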
Cryptanalysis
Cryptanalysis, as defined at the beginning of this article, is the art of deciphering or even forging communications that are secured by cryptography. History abounds with examples of the seriousness of the cryptographer’s failure and the cryptanalyst’s success. In World War II the Battle of Midway, which marked the turning point of the naval war in the Pacific, was won by the United States largely because cryptanalysis had provided Admiral Chester W. Nimitz with information about the Japanese diversionary attack on the Aleutian Islands and about the Japanese order of attack on Midway. Another famous example of cryptanalytic success was the deciphering by the British during World War I of a telegram from the German foreign minister, Arthur Zimmermann, to the German minister in Mexico City, Heinrich von Eckardt, laying out a plan to reward Mexico for entering the war as an ally of Germany. American newspapers published the text (without mentioning the British role in intercepting and decoding the telegram), and the news stories, combined with German submarine attacks on American ships, accelerated a shift in public sentiment for U.S. entry into the war on the side of the Allies. In 1982, during a debate over the Falkland Islands War, a member of Parliament, in a now-famous gaffe, revealed that the British were reading Argentine diplomatic ciphers with as much ease as Argentine code clerks.
Basic aspects
While cryptography is clearly a science with well-established analytic and synthetic principles, cryptanalysis in the past was as much an art as it was a science. The reason is that success in cryptanalyzing a cipher is as often as not a product of flashes of inspiration, gamelike intuition, and, most important, recognition by the cryptanalyst of pattern or structure, at almost the subliminal level, in the cipher. It is easy to state and demonstrate the principles on which the scientific part of cryptanalysis depends, but it is nearly impossible to convey an appreciation of the art with which the principles are applied. In present-day cryptanalysis, however, mathematics and enormous amounts of computing power are the mainstays.
Cryptanalysis of single-key cryptosystems (described in the section Cryptography: Key cryptosystems) depends on one simple fact—namely, that traces of structure or pattern in the plaintext may survive encryption and be discernible in the ciphertext. Take, for example, the following: in a monoalphabetic substitution cipher (in which each letter is simply replaced by another letter), the frequency with which letters occur in the plaintext alphabet and in the ciphertext alphabet is identical. The cryptanalyst can use this fact in two ways: first, to recognize that he is faced with a monoalphabetic substitution cipher and, second, to aid him in selecting the likeliest equivalences of letters to be tried. The table shows the number of occurrences of each letter in the text of this article, which approximates the raw frequency distribution for most technical material. The following cipher is an encryption of the first sentence of this paragraph (minus the parenthetical clause) using a monoalphabetic substitution:
Letter frequency distribution for a sample English text
letter | number of occurrences | frequency | letter | number of occurrences | frequency |
E | 8,915 | .127 | Y | 1,891 | .027 |
T | 6,828 | .097 | U | 1,684 | .024 |
I | 5,260 | .075 | M | 1,675 | .024 |
A | 5,161 | .073 | F | 1,488 | .021 |
O | 4,814 | .068 | B | 1,173 | .017 |
N | 4,774 | .067 | G | 1,113 | .016 |
S | 4,700 | .067 | W | 914 | .013 |
R | 4,517 | .064 | V | 597 | .008 |
H | 3,452 | .049 | K | 548 | .008 |
C | 3,188 | .045 | X | 330 | .005 |
L | 2,810 | .040 | Q | 132 | .002 |
D | 2,161 | .031 | Z | 65 | .001 |
P | 2,082 | .030 | J | 56 | .001 |
UFMDHQAQTMGRG BX GRAZTW PWM
UFMDHBGMGHWOG VWDWAVG BA BAW
GRODTW XQUH AQOWTM HCQH HFQUWG
BX GHFIUHIFW BF DQHHWFA RA HCW
DTQRAHWLH OQM GIFJRJW WAUFMDHRBA
QAV SW VRGUWFARSTW RA HCW
URDCWFHWLH.
W occurs 21 times in the cipher, H occurs 18, and so on. Even the rankest amateur, using the frequency data in the table, should have no difficulty in recovering the plaintext and all but four symbols of the key in this case.
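The first step can be mechanized. A sketch that counts symbol frequencies in the cipher above and lines the ranking up against the table’s ranking of English letters (the rank-for-rank guess is only a starting point, which the analyst refines by hand):

```python
from collections import Counter

ciphertext = (
    "UFMDHQAQTMGRGBXGRAZTWPWMUFMDHBGMGHWOGVWDWAVGBABAWGRODTWXQUH"
    "AQOWTMHCQHHFQUWGBXGHFIUHIFWBFDQHHWFARAHCWDTQRAHWLHOQMGIFJRJW"
    "WAUFMDHRBAQAVSWVRGUWFARSTWRAHCWURDCWFHWLH"
)
english_by_rank = "ETIAONSRHCLDPYUMFBGWVKXQZJ"   # ranking taken from the table

counts = Counter(ciphertext)
print(counts.most_common(3))                     # W leads with 21, then H with 18

cipher_by_rank = "".join(symbol for symbol, _ in counts.most_common())
guess = dict(zip(cipher_by_rank, english_by_rank))
print("".join(guess.get(c, "?") for c in ciphertext[:13]))   # a first, rough trial
```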
It is possible to conceal information about raw frequency of occurrence by providing multiple cipher symbols for each plaintext letter in proportion to the relative frequency of occurrence of the letter—i.e., twice as many symbols for E as for S, and so on. The cipher symbols representing a given plaintext letter are called homophones. If the homophones are chosen randomly and with uniform probability when used, the cipher symbols will all occur (on average) equally often in the ciphertext. The great German mathematician Carl Friedrich Gauss (1777–1855) believed that he had devised an unbreakable cipher by introducing homophones. Unfortunately for Gauss and other cryptographers, such is not the case, since there are many other persistent patterns in the plaintext that may partially or wholly survive encryption. Digraphs, for example, show a strong frequency distribution: TH occurring most often, about 20 times as frequently as HT, and so forth. With the use of tables of digraph frequencies that partially survive even homophonic substitution, it is still an easy matter to cryptanalyze a random substitution cipher, though the amount of ciphertext needed grows to a few hundred letters instead of a few tens.
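A sketch of homophonic encryption, with two-digit numbers as cipher symbols and an abridged, illustrative frequency table; choosing uniformly among a letter’s homophones flattens the single-symbol frequency distribution, but digraph statistics of the underlying plaintext survive:

```python
import random

# Abridged, illustrative table: number of homophones roughly proportional
# to each letter's relative frequency.
symbols_per_letter = {"E": 13, "T": 10, "A": 7, "S": 7, "H": 5, "M": 2}
pool = iter(range(10, 100))                 # two-digit numbers as cipher symbols
homophones = {ltr: [next(pool) for _ in range(k)]
              for ltr, k in symbols_per_letter.items()}

def encrypt(plaintext: str) -> list[int]:
    # A uniformly random choice among a letter's homophones flattens
    # the single-symbol frequency distribution of the ciphertext.
    return [random.choice(homophones[c]) for c in plaintext]

print(encrypt("THEMES"))   # E may be represented by any of its 13 symbols
```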
Types of cryptanalysis
There are three generic types of cryptanalysis, characterized by what the cryptanalyst knows: (1) ciphertext only, (2) known ciphertext/plaintext pairs, and (3) chosen plaintext or chosen ciphertext. In the discussion of the preceding paragraphs, the cryptanalyst knows only the ciphertext and general structural information about the plaintext. Often the cryptanalyst either will know some of the plaintext or will be able to guess at, and exploit, a likely element of the text, such as a letter beginning with “Dear Sir” or a computer session starting with “LOG IN.” The last category represents the most favourable situation for the cryptanalyst, in which he can cause either the transmitter to encrypt a plaintext of his choice or the receiver to decrypt a ciphertext that he chose. Of course, for single-key cryptography there is no distinction between chosen plaintext and chosen ciphertext, but in two-key cryptography it is possible for one of the encryption or decryption functions to be secure against chosen input while the other is vulnerable.
One measure of the security of a cryptosystem is its resistance to standard cryptanalysis; another is its work function—i.e., the amount of computational effort required to search the key space exhaustively. The first can be thought of as an attempt to find an overlooked back door into the system, the other as a brute-force frontal attack. Assume that the analyst has only ciphertext available and, with no loss of generality, that it is a block cipher (described in the section Cryptography: Block and stream ciphers). He could systematically begin decrypting a block of the cipher with one key after another until a block of meaningful text was output (although it would not necessarily be a block of the original plaintext). He would then try that key on the next block of cipher, very much like the technique devised by Friedrich Kasiski to extend a partially recovered key from the probable plaintext attack on a repeated-key Vigenère cipher. If the cryptanalyst has the time and resources to try every key, he will eventually find the right one. Clearly, no cryptosystem can be more secure than its work function.
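A sketch of the brute-force frontal attack against a toy cipher with a deliberately tiny key space (a 26-key Caesar shift); the loop is the same for any cipher, only the size of the key space and the test for “meaningful text” change:

```python
def decrypt(key: int, ciphertext: str) -> str:
    return "".join(chr((ord(c) - 65 - key) % 26 + 65) for c in ciphertext)

def meaningful(text: str) -> bool:
    return "THE" in text          # a crude stand-in for "a block of meaningful text"

ciphertext = "WKHZRUNIXQFWLRQ"    # "THEWORKFUNCTION" under a shift of 3
for key in range(26):             # the entire (toy) key space
    if meaningful(decrypt(key, ciphertext)):
        print(key, decrypt(key, ciphertext))   # key 3 recovers the plaintext
        break
```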
The 40-bit key cipher systems approved for use in the 1990s were eventually made insecure, as is mentioned in the section Cryptology: Cryptology in private and commercial life. There are 2⁴⁰ 40-bit keys possible, very close to 10¹², which is the work function of these systems. Most personal computers (PCs) at the end of the 20th century could execute roughly 1,000 MIPS (millions of instructions per second), or 3.6 × 10¹² instructions per hour. Testing a key might involve many instructions, but even so, a single PC at that time could search a 2⁴⁰-key space in a matter of hours. Alternatively, the key space could be partitioned and the search carried out by multiple machines, producing a solution in minutes or even seconds. Clearly, by the year 2000, 40-bit keys were not secure by any standard, a situation that brought on the shift to the 128-bit key.
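The arithmetic, spelled out (the one-instruction-per-key figure is an illustrative lower bound):

```python
keys = 2 ** 40                         # size of the 40-bit key space
per_hour = 1_000 * 10**6 * 3600        # 1,000 MIPS as instructions per hour
print(f"{keys:.2e} keys")              # about 1.10e+12, "very close to" 10^12
print(keys / per_hour, "hours at one instruction per key")   # roughly 0.3 hours
# At many instructions per key the search still finishes within hours on one PC.
```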
Because of its reliance on “hard” mathematical problems as a basis for cryptoalgorithms and because one of the keys is publicly exposed, two-key cryptography has led to a new type of cryptanalysis that is virtually indistinguishable from research in any other area of computational mathematics. Unlike the ciphertext attacks or ciphertext/plaintext pair attacks in single-key cryptosystems, this sort of cryptanalysis is aimed at breaking the cryptosystem by analysis that can be carried out based only on a knowledge of the system itself. Obviously, there is no counterpart to this kind of cryptanalytic attack in single-key systems.
Similarly, the RSA cryptoalgorithm (described in the section Cryptography: RSA encryption) is susceptible to a breakthrough in factoring techniques. In 1970 the world record in factoring was 39 digits. In 2009 the record was the 768-bit (232-digit) RSA challenge number. That achievement explains why standards in 2011 called for moving beyond the standard 1,024-bit key (310 digits) to a 2,048-bit key (620 digits) in order to be confident of security through approximately 2030. In other words, the security of two-key cryptography depends on well-defined mathematical questions in a way that single-key cryptography generally does not; conversely, it equates cryptanalysis with mathematical research in an atypical way.