24
\$\begingroup\$

Computer programmers often recite the mantra that x86 instructions are totally opaque: Intel tells us they are doing something, but there is no hope that anyone can verify what's happening, so if the NSA tells them to backdoor their RNGs, then we can't really do anything about it.

Well, I believe that computer programmers can't do anything about this problem. But how would an electrical engineer attack it? Are there techniques an electrical engineer could use to verify that a circuit actually performs the operations described in its spec, and no other operations?

\$\endgroup\$
15
  • 5
    \$\begingroup\$ You'd have to do something like X-ray the die and analyze everything to see what it's actually doing. Basically reverse engineer the chip and account for the function of every circuit. Totally impractical. \$\endgroup\$
    – DKNguyen
    Commented May 22, 2019 at 14:12
  • 7
    \$\begingroup\$ No electrical circuit performs to an exact spec because of noise and the slight possibility that one day there will be a glitch that is "big enough". \$\endgroup\$
    – Andy aka
    Commented May 22, 2019 at 14:13
  • 5
    \$\begingroup\$ Fun info: This is vaguely related to Laplace's demon. \$\endgroup\$ Commented May 22, 2019 at 15:33
  • 7
    \$\begingroup\$ It's going to be easier to steal internal documents from Intel's content database than it would be to reverse engineer even a single modern, complex Intel CPU. \$\endgroup\$
    – forest
    Commented May 23, 2019 at 0:44
  • 15
    \$\begingroup\$ @Harper your attitude is unconstructive, and your assertion that a backdoor can't be concealed in hardware is not true. \$\endgroup\$
    – pjc50
    Commented May 23, 2019 at 15:46

6 Answers

14
\$\begingroup\$

The best paper I have read on the subject is "Stealthy Dopant-Level Hardware Trojans" (Becker et al.) from 2014.

Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon,) our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against “golden chips.” We demonstrate the effectiveness of our approach by inserting Trojans into two designs — a digital post-processing derived from Intel’s cryptographically secure RNG design used in the Ivy Bridge processors and a side-channel resistant SBox implementation — and by exploring their detectability and their effects on security.

The paper describes how the change is made, how it's extremely hard to detect from inspecting the silicon, techniques for hiding it from the production test, and how it can be made to either reduce the security of a hardware crypto RNG or to leak key information through a power-rail side-channel of an AES implementation.

Side-channels are an emerging field of interest. Intel have been plagued by problems relating to speculative execution leaking information from memory that wasn't even being used by the program. Could that have been a deliberate design flaw? It's almost impossible to tell.

\$\endgroup\$
1
  • \$\begingroup\$ Wouldn't a side channel require some sort of transmitter to send the information to the NSA? Otherwise I surely would notice someone measuring the power rail current on my laptop while I'm working on it. \$\endgroup\$ Commented May 29, 2019 at 7:47
24
\$\begingroup\$

Are there techniques an electrical engineer could use to verify that a circuit actually performs the operations described in its spec, and no other operations?

In theory, yes, I think this is possible. However, for a complex CPU it would take a lot of time and money. Also, if you do not fully know and understand the design, you will be unable to judge whether any activity is "legit" or not.

A CPU is "just" a complex digital circuit consisting of many logic cells.

It is possible to reverse engineer the chip and reconstruct the design by observing the metal connections. There can be many of these connection layers, often eight or more.

You will need experts in the field to recognize the logic cells and then maybe some software can figure out how they're all connected so you can reconstruct the netlist.

Once you have the netlist you "know" the design. That doesn't mean you now also know how it works!

It could be that a certain function activates two sections of the design where you would expect one to be enough, so you suspect some suspicious activity is going on, when in fact the design is simply using a clever trick you are unaware of to speed up operations.

Without knowing and understanding the design, any conclusion you draw might still be wrong. Only the engineers who designed the CPU have all the design information and stand the best chance of being able to figure out or guess what actually goes on or should go on in a CPU.

\$\endgroup\$
10
  • 77
    \$\begingroup\$ Only the engineers which designed the CPU know everything that goes on - I happen to be an engineer working in this industry and let me assess this statement as being a very optimistic one :) \$\endgroup\$
    – Eugene Sh.
    Commented May 22, 2019 at 14:27
  • 18
    \$\begingroup\$ No, the CPU designers would not know everything that goes on - design at that level is dependent on synthesis tools, and those could inject behavior beyond that in the HDL design. To take a non-nefarious example, a lot of FPGA tools will let you compile in a logic analyzer. \$\endgroup\$ Commented May 22, 2019 at 16:02
  • 9
    \$\begingroup\$ Reverse engineering a chip with "billions of transistors" would present a challenge. spectrum.ieee.org/semiconductors/processors/… \$\endgroup\$
    – Voltage Spike
    Commented May 22, 2019 at 17:26
  • 4
    \$\begingroup\$ @Wilson Because complex circuits (including CPUs) will contain many proprietary (and secret, trademarked / patented even) designs which are not made available to the general public because the companies which own those designs want to benefit (earn money) from them. The 6502 is an old design, it does not have any valuable design information anymore so yeah, that's fully open and available to everyone. \$\endgroup\$ Commented May 23, 2019 at 7:57
  • 3
    \$\begingroup\$ @Bimpelrekkie: If they're patented, they're by definition not secret. That's the point of a patent. You trade a secret for a temporary monopoly. \$\endgroup\$
    – MSalters
    Commented May 23, 2019 at 12:00
9
\$\begingroup\$

Well, I believe that computer programmers can't do anything about this problem. But how would an electric engineer attack it?

There are no good ways to find backdoors. One way to find a hardware backdoor is to test combinations of instructions, or undocumented instructions. Here's a good talk by someone who actually does this and audits x86 hardware. This can be done without opening up the chip. One problem with Intel (I'm not sure about other chips) is that it actually has a processor with Linux running on it, so there is also software running on some processors, and you supposedly don't have access to that.

Are there techniques an electrical engineer could use to verify that a circuit actually performs the operations described in its spec, and no other operations?

There are ways to use the hardware itself to test its own functionality. Since x86 has an undocumented portion of its instruction set, it would be unusual to introduce backdoors into normal instructions, because that would introduce the possibility of bugs (imagine a backdoor in an add or mult instruction), so the first place to look would be the undocumented instructions.

If you did need to test the functionality of regular instructions, you could measure the time it takes to execute them and the amount of power they consume, and look for differences from what you'd expect.

\$\endgroup\$
15
  • 3
    \$\begingroup\$ I would disagree; it's not impossible that someone would do this, but it's unlikely. Let's say you backdoored a regular instruction like add, so that executing some additional instruction opened a backdoor. Then a customer develops a program that contains exactly that combination; they look into it, find the backdoor, everyone gets mad and you get sued. Much safer to put a backdoor in the undocumented instructions (or the Linux computer built into CPUs) \$\endgroup\$
    – Voltage Spike
    Commented May 22, 2019 at 15:38
  • 4
    \$\begingroup\$ IME runs Minix which is not Linux and is much smaller and simpler. Linux was inspired by the existence of Minix and originally used its filesystem and was announced on its newsgroup, but they were quite different then and are extremely so now. \$\endgroup\$ Commented May 22, 2019 at 16:07
  • 5
    \$\begingroup\$ @user14717 - the nasty possibility would be a trigger sequence in a jailed native executable, something like native client. But there's no reason it has to be code and not data. \$\endgroup\$ Commented May 22, 2019 at 16:10
  • 5
    \$\begingroup\$ @laptop2d Bugs where CPUs don't do what the theoretical documentation of the instruction set say happen all the time; nobody gets sued, usually: Read the errata section in the intel 7th gen Core i7 family doc update, for example. Using an undocumented instruction would immediately sound the alarm of any malware researcher. Using an unusual combination of rhythmic ADDs with the right inter-register MOVs is less likely to trigger any alarm. \$\endgroup\$ Commented May 22, 2019 at 16:46
  • 6
    \$\begingroup\$ @laptop2d I was stunned by the "embedded Linux within the CPU" statement, so I did a bit of research; I guess you're talking about the Intel ME engine. Well, it doesn't run on the CPU itself, but on the north bridge chipset. There seems to have been a lot of misinformation about that, see itsfoss.com/fact-intel-minix-case \$\endgroup\$
    – dim
    Commented May 23, 2019 at 8:26
6
\$\begingroup\$

The only way would be to strip the chip down layer by layer, record every transistor with an electron microscope, feed all of that into some kind of simulation program, and then watch it run.

This is essentially the black-box problem, in which you try to reconstruct the internals by measuring inputs and outputs. Once the complexity of the internals, or the number of I/Os, gets beyond the trivial, there is a combinatorial explosion and the number of possible internal states becomes astronomical; this is where numbers like a googol get thrown about.

\$\endgroup\$
11
  • 2
    \$\begingroup\$ ...and it is easier to steal the design using social engineering :) \$\endgroup\$
    – Eugene Sh.
    Commented May 22, 2019 at 14:20
  • 8
    \$\begingroup\$ No. The glaring mistake here is that simulation would not be sufficient. Even if you were given an accurate simulation model, you still would not be able to find carefully hidden behavior, because you have no idea how to trigger it. \$\endgroup\$ Commented May 22, 2019 at 15:58
  • 4
    \$\begingroup\$ @ChrisStratton I wouldn't call that mistake glaring. It's a reasonable assumption that the design was based on doing simplifications that are physically usual, e.g. that you don't put two metallization traces so close together that they couple inductively sufficiently to change the state of a MOSFET gate. That is only a mistake if a) your simplifications don't match the physical model of what the designer used or b) the designer is intentionally hiding something by intentionally breaking the requirements for these simplifications in non-obvious ways. \$\endgroup\$ Commented May 22, 2019 at 16:39
  • 7
    \$\begingroup\$ @ChrisStratton ah, sorry, ok, I think now I'm getting your point. You say that even the digital/behavioural clocked models of a CPU are complex enough to hide cases where the programmer's understanding / assumptions simply do not apply. That's true. One could have documented the effects leading to SPECTRE in excruciating detail, and most people would have never thought of caching to having data- or program flow-relevant side effects. Indeed! \$\endgroup\$ Commented May 22, 2019 at 16:54
  • 3
    \$\begingroup\$ Thanks :) Your argument brings the whole topic of formal verification of the correctness of ISAs back into view ("does this ISA actually guarantee that a compliant CPU does not grant RING 0 privileges to unprivileged code?") and of formal verification of HDL/RTL against such ISA specifications (I like this RISC-V CPU Core verification project especially.) \$\endgroup\$ Commented May 22, 2019 at 17:00
5
\$\begingroup\$

Proving that the CPU isn't doing something sneaky is extraordinarily hard. The classic example is a voting machine. If it has a single bit in it that takes a copy of your vote and later sneaks it out to some dictator, it could be life or death for you in some places. And proving there isn't a single bit like that in among the billions is rather hard.

You might think about isolating the chip physically, so that you can see there are no improper wire connections to it; putting another chip, or several chips in series (from different sources), in its network connection to guarantee it only connects to the right place; power-cycling it after it has delivered your vote; and hoping that there are no nonvolatile bits in there, or sneaky wireless connections. But would you trust your life to it?

\$\endgroup\$
5
\$\begingroup\$

Transmitting any data to the NSA would require network access, so it would be quite easy to spot such a backdoor by running an OS with network services disabled and checking the network interfaces for traffic. For an open-source OS it's even possible to run with full network support and spot rogue connections by their destination IPs, which will not match any address the OS could legitimately access.

A backdoor based on an RNG with no data transmission would have very limited usefulness. Unless the CPU's RNG is the only entropy source, the chances that such a backdoor provides any advantage to the attacker while not being obvious are practically zero. Unless you insist that Russell's teapot is out there despite having no good reason to exist, you should apply the same argument to hardware RNG backdoors.
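
The "only entropy source" caveat is the crux, and it is why OS kernels mix the hardware RNG into a pool with other sources instead of trusting it directly. The simplest illustration of the principle (function name invented here): if either input buffer is uniformly random and independent of the other, their XOR is uniformly random, so a backdoored hardware RNG cannot bias the mixed output on its own.

```c
// Hedged sketch: XOR-mixing two candidate entropy sources.
// If at least one of `a` or `b` is uniform and independent of the
// other, `out` is uniform, so one compromised source does not
// compromise the result. (Real kernels mix through a hash or cipher,
// not a bare XOR, but the independence argument is the same.)
#include <stddef.h>

void mix_entropy(const unsigned char *a, const unsigned char *b,
                 unsigned char *out, size_t len) {
    for (size_t i = 0; i < len; i++)
        out[i] = a[i] ^ b[i];
}
```

This is the reason Linux treats RDRAND as one contribution to its pool rather than as the pool itself.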

\$\endgroup\$
8
  • 5
    \$\begingroup\$ So you assume that the adversary has the time, money, and skill to create and hide a hardware trojan horse, but the first thing they do is telnet www.nsa.gov? This seems like a very naive point of view. \$\endgroup\$ Commented May 23, 2019 at 16:23
  • 1
    \$\begingroup\$ If the NSA had hidden a vulnerability, then yes they would be hoping that people used rdrand or rdseed as Intel suggested: as the only entropy source for a PRNG seed. Linux (the kernel) chose not to do that for /dev/random, but glibc / libstdc++'s current std::random_device does use just rdrand if it's available at runtime instead of opening /dev/random. Step into standard library call with godbolt \$\endgroup\$ Commented May 23, 2019 at 20:44
  • \$\begingroup\$ @ElliotAlderson What's your point of view then? How can someone steal valuable data without ever transmitting it somewhere? \$\endgroup\$ Commented May 29, 2019 at 7:39
  • \$\begingroup\$ @PeterCordes std::random_device is not a cryptographically strong RNG. C++ standard allows you to implement it with a PRNG, effectively returning the same sequence every time, so it's quite obvious nobody should use it for encryption. \$\endgroup\$ Commented May 29, 2019 at 7:41
  • \$\begingroup\$ Oh right, I forgot there's no guarantee that it's any good, xD. It is good on many implementations, but MinGW is the standout exception to the design intent that it gives you as good quality random numbers as the platform is capable of, defeating the main purpose of the library. (Which as you say is not crypto, but seeding PRNGs for other purposes). (Why do I get the same sequence for every run with std::random_device with mingw gcc4.8.1?). That would be acceptable on a platform without any entropy (minimal embedded device), but not on x86 Windows! \$\endgroup\$ Commented May 29, 2019 at 7:45
