17
$\begingroup$

I've been taught that one-time pads are the only perfect encryption since the only way to recover the message is by knowing the key.

For example, for a target bitstring of 100 bits, I cannot scan all bitstrings of 100 bits and XOR each with the target, hoping to recover the message. This approach will produce all messages that can be expressed with 100 bits.

However, not all bitstrings are random, e.g. 11111111111111 is less random than 01101001001101. This observation seems to contradict the idea of an unbreakable one-time pad.

We know two random bitstrings are independent of each other, as well as independent of non-random bitstrings. So, knowing a random bitstring will not allow you to shorten the description of a second bitstring, and vice versa. Thus, if we randomly generate a bitstring BS1, then XORing it with the key BS2 will produce a third bitstring BS3 that is not compressible.

Proof: If BS3 is compressible, then knowing BS1 would allow us to describe BS2 with a short description (i.e. BS3). Then, either BS1 and BS2 are not both random or are not independent. The only case where they are random, but not independent, is if one is a part of the other.

This means that if XORing BS1 with ciphertext BS4 results in a compressible BS5, then BS1 is at least part of BS2 or contains part of BS2.

So, at least in theory, it seems that one-time pads are breakable, although this approach is not computable, since we'd need to decide whether a bitstring is truly random.

This argument contradicts what I've been taught, and I'm wondering if OTPs are only said to be perfect because randomness is not computable.

$\endgroup$
2
  • $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$
    – e-sushi
    Commented Jun 14, 2017 at 19:35
  • 2
    $\begingroup$ It would probably be a good idea to read about unicity distance. "For a one time pad of unlimited size, given the unbounded entropy of the key space, we have U=infinity, which is consistent with the one-time pad being unbreakable." en.wikipedia.org/wiki/Unicity_distance $\endgroup$
    – user48899
    Commented Jun 15, 2017 at 6:53

12 Answers

67
$\begingroup$

For example, for a target bitstring of 100 bits, I cannot scan all bitstrings of 100 bits and XOR each with the target, hoping to recover the message. This approach will produce all messages that can be expressed with 100 bits.

That's not the reason why one-time pads are considered secure. The reason is that even if you try all possible keys, you get all possible plaintexts, and you have no method of choosing which one is right. The size of the plaintext/ciphertext/keystream doesn't matter.
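As a minimal illustration (a hypothetical Python sketch, not part of the original answer), here is the exhaustive key search on toy 8-bit messages; every possible plaintext shows up as a candidate exactly once:

```python
# Toy demonstration: brute-forcing an OTP merely enumerates every
# possible plaintext, so the attacker learns nothing.
import secrets

plaintext = 0b01100010              # hypothetical 8-bit message
key = secrets.randbits(8)           # uniformly random 8-bit pad
ciphertext = plaintext ^ key

# XOR the ciphertext with every possible 8-bit key.
candidates = {ciphertext ^ k for k in range(256)}

# Every 8-bit value appears as a candidate, the true plaintext among them.
assert candidates == set(range(256))
```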

However, not all bitstrings are random, e.g. 11111111111111 is less random than 01101001001101. This observation seems to contradict the idea of an unbreakable one-time pad.

11111111111111 is exactly as likely as 01101001001101 or any other value of the same size when chosen uniformly at random.

Proof: If BS3 is compressible, then knowing BS1 would allow us to describe BS2 with a short description (i.e. BS3). Then, either BS1 and BS2 are not both random or are not independent. The only case where they are random, but not independent, is if one is a part of the other.

Values are compressible only if they are drawn from a distribution where certain values are more likely than others. This is not the case for $C = P \oplus K$ if $K$ is fully random: $C$ will then be fully random as well, regardless of whether $P$ is random or not.

This means that if XORing BS1 with ciphertext BS4 results in a compressible BS5, then BS1 is at least part of BS2 or contains part of BS2.

This will never be the case, as BS5 will not be compressible: each possible value of BS5 is as likely as any other, for the reasons explained above.

$\endgroup$
2
  • $\begingroup$ What if only one of the brute-forced posited plaintexts is not gibberish? For example, English text. It seems the one-time pad is not always unbreakable by brute force? What am I missing? $\endgroup$ Commented Dec 19, 2023 at 19:06
  • $\begingroup$ A one-time pad randomizes over the entire input domain. So all messages (of a specific size) are possible. It therefore cannot be that only one is "not gibberish", unless there is only one meaningful message in the domain. That's obviously not the case for text messages, as any character will be in there. If there is only a "choice" of one message then encryption won't help you. Don't encrypt 0-bit messages ;) $\endgroup$
    – Maarten Bodewes
    Commented Dec 20, 2023 at 2:18
25
$\begingroup$

To begin with, your definition of perfect secrecy is non-standard. The standard definition is given in an excellent answer to the question "How is the OTP perfectly secure?".

Essentially, perfect secrecy means that observing the ciphertext does not affect the relative likelihoods of various plaintexts under the unknown key. So the fact that different bitstrings may have different randomness is irrelevant from the point of view of the theory.

Also, you may be conflating Kolmogorov Complexity (aka Algorithmic Randomness) with the randomness in the OTP argument. Kolmogorov complexity is indeed uncomputable.

$\endgroup$
0
14
$\begingroup$

I'll try a practical example:

I trade stocks. Instructions to my broker use a simple Caesar shift cipher, but the shift varies by values in a one-time encryption pad. Common 8-char instructions include: "buy more" "sell all" and "short it".

You intercept an instruction to my broker: "AAAAAAAA"

What is my instruction? Buy, sell, or short?
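(A hypothetical Python sketch of this, assuming a 27-symbol alphabet of a-z plus space: for any candidate instruction there is a pad of shifts mapping it to the intercept, so the intercept singles nothing out.)

```python
# For ANY guessed plaintext there exists a pad of shifts that produces
# the intercepted "aaaaaaaa", so the intercept rules nothing out.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # 27 symbols; index 0..26

def pad_for(plaintext, ciphertext):
    """Per-character shifts turning plaintext into ciphertext."""
    return [(ALPHABET.index(c) - ALPHABET.index(p)) % 27
            for p, c in zip(plaintext, ciphertext)]

intercept = "aaaaaaaa"
for guess in ("buy more", "sell all", "short it"):
    print(f"{guess!r} -> pad {pad_for(guess, intercept)}")
# Each guess yields a perfectly valid pad; none is more likely than another.
```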

$\endgroup$
3
  • 1
    $\begingroup$ With high probability, I can say that it is not 'sell all', since the key won't seem random (4 l's). Similarly for 'short it' (2 t's). $\endgroup$
    – kelalaka
    Commented Oct 1, 2018 at 16:18
  • $\begingroup$ kelalaka The point of a key is for it to be random ENOUGH to mask the original message. All 3 messages are 8 chars long [7 letters, 1 space]. The possibility that the same char [e.g. "L" in SELL ALL] could be encrypted the same way, 4 times in an 8-character key, could even CAUSE people to dismiss an encryption possibility [as you did]. It's the same as a coin-toss being 7 heads in a row - no more unlikely than any other series, and WILL happen if you continue long enough. But if you prefer, think of the message as: "ABCDEFGH". That's no more breakable than my original code. $\endgroup$ Commented Oct 3, 2018 at 5:06
  • $\begingroup$ @Alan_Campbell there were two problems: 1. the message space was limited, and 2. if you are preparing a one-time pad key, you will apply some tests, like checking that the counts of 0s and 1s roughly match, checking for long runs, etc. You don't expect to send all 1s as a key, even though it can be randomly selected from the key space. When these two are combined, I made my comment. If the message is 'ABCDEFGH' there is no problem. What are the other messages in your short message space? $\endgroup$
    – kelalaka
    Commented Oct 3, 2018 at 7:55
8
$\begingroup$

However, not all bitstrings are random, e.g. 11111111111111 is less random than 01101001001101. This observation seems to contradict the idea of an unbreakable one-time pad.

When cryptographers use the word random they use it in the sense of probability theory. What you're calling "randomness," however, is Kolmogorov complexity, "the length of the shortest possible description of the string in some fixed universal description language." What you're calling a "compressible string" is a string whose complexity is lower than the length of the description that just hardcodes the string. And when you say "random string" you're equivocating between non-compressible string and randomly selected string, two different concepts.

A bitstring selected uniformly at random from all bitstrings of length $n$ has an expected complexity proportional to $n$—meaning that most strings by far of length $n$ are non-compressible. But such a random selection is equally likely to result in 11111111111111 as 01101001001101, so you can't categorically assert that a string so selected will be complex—only that the chance of drawing a compressible string is low.

But none of that is relevant to the security of one-time pads, which has nothing to do with the complexity or compressibility of the keys, only with the uniform random choice thereof. The secrecy of one-time pads rests on the theorem that if $p$ is a random variable with any distribution over $\{0,1\}^n$, and $k$ is a uniform random variable over $\{0,1\}^n$, then $c = p \oplus k$ is a uniform random variable over $\{0,1\}^n$. It's possible that either the randomly drawn $k$ or the computed $c$ could turn out to be compressible strings, but unlikely and irrelevant.
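A quick empirical check of that theorem (a hypothetical Python sketch, with tiny $n$ so the whole distribution is visible):

```python
# c = p XOR k is uniform when k is uniform, even for a maximally
# compressible p such as all-ones.
import secrets
from collections import Counter

n = 4
p = 0b1111                                   # highly compressible plaintext
counts = Counter(p ^ secrets.randbits(n) for _ in range(160_000))

# Each of the 16 possible ciphertexts appears roughly 10,000 times.
for c in sorted(counts):
    print(f"{c:04b}: {counts[c]}")
```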

$\endgroup$
7
$\begingroup$

The reason you can't crack a one-time pad is that brute forcing will just end up generating every possible solution, and you'll be no closer to knowing which of those solutions is the right one. To give an example, say that someone encrypts an IP address using a one-time pad. You intercept the message and start brute forcing it. Most of the results you get look nothing like an IP address, so you know those can't be right. And then your cracking software comes up with "127.0.0.1"! That's a legit IP, so that must be right! Well, then the cracking software comes up with "127.0.0.2", and then "127.0.0.3", etc. It turns out that every single possible IP address is an equally likely solution.

The stuff you were saying about randomness is not related at all.

$\endgroup$
5
$\begingroup$

It depends on what you mean by "crack". If you mean "decipher", as you hint in your question's details, then it depends on your implementation of the cipher.

From a purely mathematical POV, OTPs are indecipherable. But math isn't the only thing that keeps the ciphertext safe: there's the implementation of the cipher, the implementation of the key, the management of the key, the hardware used in the implementation of the cipher, and the social aspects of the key and the plaintext.

There are many ways to crack ciphertext.

As to cipher implementation, if you used numbers that are not truly random, bits of the key could be deduced, and if one had access to the kind of plaintext originally enciphered, it might be possible to deduce more of the key. For instance, knowing that the plaintext was a text document, or an executable, can help decipher more of the ciphertext.

As to key implementation, if you reuse the key within the algorithm (for example, you use a key shorter than the plaintext, then start at the beginning of the key again), that is a death knell for the ciphertext. In a whitebox crack, where one has access to the algorithm being implemented, this can give away whether you started to reuse a key from the beginning, the end, every other character, etc., and that's more information that can be used to decipher.
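A minimal sketch of why reuse is fatal (hypothetical Python with made-up messages): XORing two ciphertexts made with the same pad cancels the key entirely, leaving the XOR of the two plaintexts.

```python
# "Two-time pad": c1 XOR c2 == p1 XOR p2, i.e. the key drops out entirely.
import secrets

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
key = secrets.token_bytes(len(p1))            # reused pad (the mistake)

c1 = bytes(a ^ b for a, b in zip(p1, key))
c2 = bytes(a ^ b for a, b in zip(p2, key))

leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
# With language statistics or a known crib, p1 XOR p2 is often enough
# to recover both plaintexts.
```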

As to key management, if the key were stored on your hard drive, it's only a matter of time before that key is found. You might store the key on a piece of paper - but where do you put that paper? And what if the thing you're encrypting is over a megabyte in size - are you going to type all that in (error-free)?

As to hardware, getting actual random bits is part of it. But storing the key, storing intermediate data during the enciphering process, and what happens to the original plaintext once it has been enciphered are all issues. If you have a text file containing the password to your Netflix account, encrypt it with an OTP, then use right-click-and-delete on the original text file, you've not secured anything.

As to the social part, having someone surreptitiously install a key logger, video camera, drug you, phish you, or otherwise trick you into giving up the content or location of the cipher key is another way to "crack" a code.

Sometimes, and I guess this applies to any decryption attempt no matter the algorithm, having access to partial plaintext, or knowing the kind of plaintext it is, is enough to get the rest of it. Such might not stand up in court if you were accused of peddling state secrets to the enemy, but there are times when that is all that's needed, especially if it can be proven. Example: the Netflix password is known to be four digits; if partial decryption leads to three of the 4 digits because you didn't use a truly random number generator, then it's trivial to brute force your way to the missing digit. The proof? Just use it and see if it works.

The length of the ciphertext is also helpful information. If you were under investigation for downloading illegal music, 1 MB enciphered files would probably be interesting. Files that were a few kilobytes - or a few gigabytes - in length would probably be ignored, unless the investigator also had access to the implementation source and determined that the file size of the ciphertext was also part of the obfuscation.

Having access to the metadata - the kind of plaintext being enciphered - can also help. If it is known that the plaintext was an executable or a text file, then, partial decryption can rule out some bits.

OTPs are mathematically proven unbreakable. But that's not the whole picture, and it is never the only consideration in cracking ciphertext. The important details (isn't the devil always in the details?) are in the implementation of the cipher, the hardware, key management, and the social aspects of it all. Each of these things is just as important as the others in ensuring the security of the ciphertext, and poor implementation of any one of them can give away the plaintext.

A chain, they say, is only as strong as its weakest link.

As to randomness... it is true that today's PCs are only marginally better than their predecessors at generating truly random numbers - which is to say they can't. But that's not entirely correct, either. It depends on the effort needed to get a random number. If you used a chip completely dedicated to throwing out a random bit, then I'd say the result isn't looking good. Computers, after all, tend to be consistent in their output. But if the algorithm relies on analog data, such as room or CPU temperature, the current nanosecond (tick) in the day, the typing speed of the operator, or the selection of the nth character an operator is asked to randomly type in, etc., that can create truly random results - but at a strong cost of time and effort to implement, and some users may not want that.

OTPs are very sensitive to the randomness of a result. It isn't that 11111111 is any more or less random than 00101101; rather, if it can be deduced that every 8th bit, for example, is always a "0", then that is helpful to someone cracking your ciphertext. Knowing patterns like this can give away the implementation of the algorithm, which in turn could give up more information about other bits that can be deduced. In this case, knowing that every 8th bit is always "0" might give away that the plaintext is text (as opposed to binary).

EDIT:

OTPs are not always the best algorithm, and they can be a double-edged sword. If you were charged with a crime - say, possessing state secrets - and the prosecutor knew you used an OTP, and the prosecutor was unscrupulous, he could make up a "key" to "decrypt" the ciphertext, and show made-up state secrets in evidence against you.

You could do the exact same thing, except produce a different key that decrypts the ciphertext into a recipe for blackened chicken wings.
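(A hypothetical Python sketch of this decoy-key construction; the messages are made up and padded to equal length:)

```python
# For a fixed ciphertext, anyone can manufacture a "key" that decrypts
# it to any plaintext of the same length.
import secrets

plain_real = b"the state secret"
plain_fake = b"chicken wings :)"              # decoy, same length

key_real = secrets.token_bytes(len(plain_real))
cipher = bytes(p ^ k for p, k in zip(plain_real, key_real))

# The decoy key is just ciphertext XOR decoy plaintext.
key_fake = bytes(c ^ p for c, p in zip(cipher, plain_fake))
assert bytes(c ^ k for c, k in zip(cipher, key_fake)) == plain_fake
```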

What you have both proven is that the evidence presented is doubtful without additional support: the prosecutor will need your actual key, as well as the ciphertext and plaintext, and will need to prove you were in possession of all three. In addition, he'll need to prove you possessed no other OTP key.

So in theory, you could leave two OTP keys completely visible in your My Documents folder: one decrypts to the state secrets, the other decrypts to the blackened chicken wings recipe. He'll have a lot of explaining to do to convince a judge and jury that a single ciphertext is BOTH the damning illegal state secrets AND a useful and tasty blackened chicken wing recipe.

With any other algorithm, the prosecutor needs only the cracked ciphertext (plaintext) - and the key is irrelevant. He doesn't even have to show that you possessed the key, or even the plaintext, since there can only be one key, which can only lead to the state secrets.

In cases like this, it is irrelevant whether or not your ciphertext is crackable, or the algorithm is crackable: even if it were, additional supporting evidence is needed. In fact, it would seem that a prosecutor would not want to mention anything about the ciphertext - even with the "actual" keys (note plural) in plain sight.

Besides the prosecutor scenario, this could be an employee/employer scenario - being in possession of health records. Or being in the middle of a nasty divorce.

It could also be the case that the ciphertext yields similar outcomes: for example, there are 10,000 OTP keys, each of which decrypts the ciphertext to a different candidate password for your Netflix account. Someone trying to steal your Netflix account will have a better time with brute force than figuring out which of the 10,000 keys you're really using to hide your Netflix password.

Any time you have a scenario where one ciphertext can produce multiple useful or intelligible plaintexts - which OTPs can do - you have a use for OTPs (or a headache... if you were the one having to decide which is real and which is the red herring).

Full disclosure: I'm no lawyer. Nor do I have any experience in the field of law. So my answer relating to law is just an opinion.

$\endgroup$
1
  • $\begingroup$ "if you used non-true random numbers" - good point. $\endgroup$
    – mentallurg
    Commented Aug 4, 2019 at 9:37
3
$\begingroup$

I somewhat disagree with the consensus argument here. As to

11111111111111 is less random than 01101001001101

yes, that actually makes some sense. You've already discussed this in the comments to David Schwartz's answer. The evident difference is in Kolmogorov complexity. He pointed out the problem that Kolmogorov complexity depends on the language, and the language is ultimately arbitrary. True in principle; however, for any practical language one might choose, 11111111111111 does in fact have a lower complexity than 01101001001101. I could go on here... but see the bottom, because:

None of this is really important for the question, since such "degenerate strings" are exponentially improbable to come out of a random algorithm. (That's in fact the reason they have lower Kolmogorov complexity: any language can only ever hope to compress data that's highly unusual.) And that means that, for realistically-sized messages, these degenerate cases will never happen in the lifetime of the universe. This includes the original plaintext, which means a brute-force attack on a one-time pad can, in the real physical world, never even get to consider the correct key.

Now you say: what if, theoretically, we had a meta-universe in which these exponentially unlikely possibilities could exhaustively be checked? Well, that doesn't help you either, and that's what's special about the one-time pad. Because the pad is as large as the message, you need to consider all possible keys. And that just leads you to the infinite monkey theorem: yes, these guys will manage to produce the original plaintext. But they will also produce the full script of Hamlet, and an only slightly changed form of the message in which the Germans expect the main attack in Normandy and not at Pas-de-Calais.


In physics, we deal all the time with states that could in principle be just like any other random state but are, in a very real sense, distinguished. It is these phenomena that prompted the concept of entropy.

An example: consider a couple million nitrogen molecules with the typical energy such molecules have at room temperature. In statistical mechanics, thermal equilibrium is the macrostate in which each possible microstate is equally likely. So that includes the state where some region looks like this:

Snapshot of nitrogen gas molecules

and the state where the same region looks like this:

Other snapshot of nitrogen gas molecules

and the state where the same region looks like this:

Snapshot of solid nitrogen

Well, that last one certainly seems to be a different beast. It's a solid state. But we all know that nitrogen is not solid at room temperature, don't we? Actually, we don't: the microscopic laws of physics don't in any way say a brick of solid air can't suddenly condense and drop on your head. However, such a state has an exponentially lower entropy than a gas state. Therefore, we can be utterly confident that this won't ever happen throughout our lifetime, or the Universe's.

$\endgroup$
0
3
$\begingroup$

Assuming that the key is chosen at random, that the key and the message are the same length, and that keys are not reused between messages, then possession of the ciphertext gives you exactly 0 information about the plaintext.

For every bit of ciphertext, there are two possibilities: either the corresponding key bit is 0 and the corresponding plaintext bit is the same as the ciphertext bit; or the key bit is 1 and the plaintext bit is the opposite of the ciphertext bit. By the assumption that the key is unknown and random, both possibilities are equally likely — and by symmetry this is true whether the ciphertext bit was 0 or 1. Therefore our knowledge of the ciphertext cannot do anything to increase our knowledge of the plaintext unless we also have knowledge of the key. The definition of perfection in cryptography is that all security lies in the key, so by this standard OTP is perfect.

This means that if XORing BS1 with ciphertext BS4 results in a compressible BS5, then BS1 is at least part of BS2 or contains part of BS2.

This statement isn't generally valid. You're implicitly doing two things here. First, you're assuming a compression algorithm (hopefully a fixed one, otherwise we will never agree on what "compressible" means). Then, you're assuming that the correct plaintext is compressible using this algorithm. Perhaps this is true, perhaps it's not. If you assume that it is, then you're assuming prior knowledge about the plaintext.

An attempt at "breaking" the OTP by decrypting the ciphertext with various keys and seeing which one gives you the best compressibility doesn't give you any additional information, because the OTP decryption is equally likely to produce any given string as input to the compressor. You might as well simply give the compressor random inputs and see which ones compress well; this will work exactly as well. And if an algorithm that doesn't require the ciphertext as input performs exactly as well as the one you proposed, you can't actually be decrypting anything, can you? You're simply selecting random plausible messages from the prior distribution implied by the compression algorithm itself.
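One can even check this empirically (a hypothetical Python sketch using zlib as the fixed compressor): trial decryptions of the ciphertext compress no better than fresh random strings, because under wrong keys they are fresh random strings.

```python
# Compressing OTP trial decryptions vs. compressing plain random bytes:
# the two experiments are statistically indistinguishable.
import secrets
import zlib

cipher = secrets.token_bytes(64)      # stand-in intercepted OTP ciphertext

def best_ratio(samples):
    return min(len(zlib.compress(s)) / len(s) for s in samples)

trials = [bytes(c ^ k for c, k in zip(cipher, secrets.token_bytes(64)))
          for _ in range(1000)]
controls = [secrets.token_bytes(64) for _ in range(1000)]

print(best_ratio(trials), best_ratio(controls))   # essentially identical
```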

$\endgroup$
3
$\begingroup$

When you have an OTP like "00000000…", it might look like you are providing some extra info about the plaintext (since the plaintext equals the ciphertext in that case), but that's actually an illusion. When the attacker sees the message "How are you?", it could have been the message "How are you?" with OTP "000000000000". But it could have been any other message of the same length, e.g., "You stink!!!". Since every OTP is generated with the same probability, the attacker can't make any further guess from the message than he could make from the message length alone.

(Maybe the attacker can guess that "You stink!!!" is more likely than "How are you?" or vice versa, but this must be a priori knowledge. She could make the same guesses knowing just the message length.)

Actually, ensuring that there aren't certain types of regularities in the OTP would make the scheme weaker (at least theoretically), not stronger, because the attacker could then exclude some possible plaintexts.

$\endgroup$
3
$\begingroup$

The concept of compressibility only applies to schemes in which some bit patterns are more probable than others. If all bit patterns are equally likely, no compression scheme can be more efficient than simply producing the input as your output.

Since we are dealing here with every possible output of the specified number of bits and each is equally likely, no compression scheme can do better than identity. Thus, no bit pattern is compressible.

You would be correct if BS3 were compressible. But given the properties of a random one-time pad, no bit pattern is compressible.

$\endgroup$
0
0
$\begingroup$

You have plenty of beautiful answers already, but let me add another angle:

The key

In many arguments about encryption algorithms, the key is regarded in a very different light from everything else. The key is where the "cut" is usually made between technicalities (mathematical cryptanalysis) and the "real world".

A light-hearted way to think about cryptography is "how much can I achieve if I truly do not know the key?". With bad algorithms, a lot can be achieved without even the slightest idea about the key (up to and including being able to figure out the original key). But still, when doing the technical/mathematical part of cryptanalysis, you assume that the key is truly secret and not known - mostly because this is the most interesting application of the art.

Getting the key is a whole different angle of attack - you can do social engineering; you can try to install key loggers; you can freeze the victim's laptop after they entered the password and hope to find it in unallocated RAM somewhere; and so on. All these techniques are perfectly valid, and probably all of them are also easier and less costly than cryptanalysis. So we separate key management from the actual encryption by assuming that we do not know the key, as otherwise everything else would be moot.

The algorithm

As an aside, the opposite is true for the algorithm. Hiding the algorithm is never wise; that's simply security by obscurity and does not increase actual security at all. So when assessing the security of an encryption scheme, we can safely assume that we know the complete algorithm. If an algorithm gets less secure because it is known, then it was insecure in the first place.

OTP

So, OTPs are special because they take this line of thought to the very extreme: there is only the key. The encryption algorithm as such is not even complex enough to deserve the name "algorithm"; it is a single trivial bit-wise operation. The OTP is designed such that, under the assumption that the key is safe, there is not even anything to attack. The algorithm is so simple that as soon as you have a plaintext together with the ciphertext, you immediately have the full key. Which is not a problem, because in a proper OTP you will use the key only once anyway. So if the attacker already knows the plaintext plus the ciphertext, they gain nothing by being able to trivially calculate the key.
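(A tiny hypothetical sketch of that last point: plaintext plus ciphertext immediately yields the pad.)

```python
# With an OTP, key recovery from a known plaintext/ciphertext pair is
# trivial, and harmless, since the pad is never reused.
plaintext = b"hello"
key = bytes([0x13, 0x37, 0xAA, 0x55, 0x0F])   # hypothetical pad
cipher = bytes(p ^ k for p, k in zip(plaintext, key))
assert bytes(p ^ c for p, c in zip(plaintext, cipher)) == key
```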

Your question

I find it exceedingly hard to follow the "proof" you have given. I guess what you are actually asking is: "what if the OTP turns out to be 111111111..., or 0000000..., or something pathological like 0001000000000000...?". Well, nothing keeps you from checking your generated OTP for some very basic properties; for example, for a large number of truly random bits you'd expect the count of 0s to be roughly the same as the count of 1s, the count of each possible pair of digits to be roughly the same, and so on. You could argue back and forth about whether it is necessary to guard against a "1 in $2^{1024}$" case (or whatever number of bits you have), but if you are paranoid enough, and why not, it is pretty easy to check many different properties of your OTP, as in the sketch below.
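A minimal sketch of such sanity checks (hypothetical Python; the 5% tolerance is an arbitrary illustrative threshold, not a recommendation):

```python
# Reject pathological pads such as all-ones before use.
from collections import Counter

def looks_plausible(bits: str, tolerance: float = 0.05) -> bool:
    n = len(bits)
    # 1) zeros and ones should be roughly balanced
    if abs(bits.count("1") / n - 0.5) > tolerance:
        return False
    # 2) the four bit-pairs should be roughly equally frequent
    pairs = Counter(bits[i:i + 2] for i in range(0, n - 1, 2))
    return all(abs(c / (n // 2) - 0.25) <= tolerance for c in pairs.values())

print(looks_plausible("1" * 1024))   # False: degenerate pad
```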

$\endgroup$
3
  • $\begingroup$ "The encryption algorithm as such is not even complex enough to deserve the name of "algorithm", it is a single trivial bit-wise operation." - The complexity of the algorithms expression does not necessarily reflect the conceptual depth of the algorithm. $g^x\mod P$ is not a complicated expression either; It's the realization of what you can use it for and why that is sophisticated. $\endgroup$
    – Ella Rose
    Commented Jun 13, 2017 at 23:51
  • $\begingroup$ That was kind of tongue-in-cheek, @EllaRose. If you think it was taking it too far, let me know. And I believe bitwise XOR is quite a bit removed from exponentiation mod prime... ;) $\endgroup$
    – AnoE
    Commented Jun 14, 2017 at 0:01
  • $\begingroup$ OTP is a concept rather than an algorithm. Even bitwise XOR is an algorithm. You can do OTP with modular addition as well. So I don't agree with that part of the answer. Checking if something is an OTP by analyzing the ciphertext is not possible; any function that creates a (pseudo) random sequence could have been used, and those are certainly not always OTPs. $\endgroup$
    – Maarten Bodewes
    Commented Dec 9, 2018 at 12:56
-3
$\begingroup$

There are two levels to this answer: generic and specific. The specific problem addresses the decryption of a single encrypted message, the generic the decryption of all messages produced by a particular encryption system.

The starting point is the presumption that the message actually contains data. That argues for some kind of pattern to it, commonly a language of some sort, which then offers a handle for decryption. In fact, this is what we do anyway in reading plaintext: we interpret what is written into semantic meaning our brains can assimilate.

What single-use pads do is stop the development of first-level cracks into generic cracks of the entire system, as there should be no common ground between the pads. However, this is not entirely guaranteed in practice if the pad is generated as a pseudo-random sequence, since some kind of seed is then needed.

The most important thing about encryption is not that it preserves the data content eternally, it's that it keeps it hermetic for long enough for its relevance to degrade, so that if, for instance, a military force were to plan a surprise attack, the attack would happen before the code is broken.

I'll leave it to others to discuss the maths; however, the fact that there is a key means that in theory any message is breakable under a brute-force attack. Sometimes double encryption actually creates exploitable lacunae, and sometimes just plain sloppy thinking exposes the key. I personally cracked a major system through simple inspection once, working with one of the system managers, so the exploit was rapidly corrected; that was done on a kudos motivation. Then again, I have a genius IQ, was schooled by one of the founding fathers, and actually made systems manager at Dollis Hill, after the code-breakers had left.

The inverse of the problem is that the encrypted message must also be decryptable in short order: it's pointless if a time-critical message takes longer to decrypt than the period it is important for. Equally, the decryption equipment itself must also be secure in all locations; the loss of several cypher machines gave Bletchley Park huge pointers towards a generic crack of Enigma. It's a subset of the approach of the security gurus, whose ideal machine is disconnected from any power supply, encased in lead, covered in a concrete sarcophagus twenty feet thick, and dropped into the crater of your nearest volcano (the Frodo solution!). It may be perfectly secure, but it's also perfectly unusable. And in the trade-off lies the weakness. A single-use pad is used twice, to code and to decode; lose either copy and the game is in play. The Man Who Never Was was a case in point, where the code sold the gambit.

$\endgroup$
0
