Book Review: Red Team Blues

Book reviews are not a thing I usually do.

So when I received an out-of-the-blue email from Cory Doctorow last week asking if I would review his latest book, Red Team Blues, it took a minute to overcome my initial skepticism. While I’m a fan of Cory’s work, this is a narrow/nerdy blog about cryptography, not a place where we spend much time on literature. Moreover, my only previous attempt to review a popular cryptography novel — a quick sketch of Dan Brown’s abysmal Digital Fortress — did not go very well for anyone.

But Cory isn’t Dan Brown. And Red Team Blues is definitely not Digital Fortress.

This became obvious in the middle of the first chapter, when a character began explaining the operation of a trusted execution environment and its various digital signing keys. While it’s always fun to read about gangsters and exploding cars, there’s something particularly nice about a book whose plot hangs around a piece of technology that most people don’t even think about. (And if that isn’t your thing, there are exploding cars and gangsters.)

This still leaves the question of how a cryptography blog reviews a work of fiction, even one centered on cryptography. The answer is pretty simple: I’m not going to talk much about the story. If you want that, there are other reviews out there. While I did enjoy the book immensely and I’m hopeful Cory will write more books in this line (with hopefully more cryptography), I’ll mainly focus on the plausibility of the core technical setup.

But even to do that, I have to provide a few basic details about the story. (Note: minor spoilers below, but really only two chapters’ worth.)

The protagonist of Red Team Blues is 67-year-old Martin Hench, an expert forensic accountant with decades of experience tracing and recovering funds for some of the most powerful people in Silicon Valley. Martin is on the brink of retirement, lives in a bus named “the Unsalted Hash” and loves bourbon nearly as much as he despises cryptocurrency. This latter position is presumably a difficult one for someone in Martin’s line of work, and sure enough his conviction is quickly put to the test.

Before long Martin is hired by his old friend Danny Lazer — sort of a cross between Phil Zimmermann, David Chaum and (maybe) Max Levchin — who begs him to take one last career-defining job: namely, to save his friend’s life by saving his newest project: a cryptocurrency called TrustlessCoin.

TrustlessCoin is a private cryptocurrency: not terribly different from real ones like Monero or Zcash. (As a founding scientist of a private cryptocurrency, let me say that none of the things in this novel have ever happened to me, and I’m slightly disappointed in that.)

Unlike standard cryptocurrencies, TrustlessCoin contains one unusual and slightly horrifying technological twist. Where standard cryptocurrencies rely on consensus algorithms to construct a public ledger (and zero-knowledge proofs for privacy), TrustlessCoin bases its integrity on the security of mobile Trusted Execution Environments (TEEs). This means that its node software runs inside of systems like Intel’s SGX, ARM’s TrustZone, or Apple’s Secure Enclave Processor.

Now, this idea isn’t entirely unprecedented. Indeed, some real systems like MobileCoin, Secret Network and Intel’s PoET take a fairly similar approach — although admittedly, these rely mainly on server-based TEEs rather than mobile ones. It is, however, an idea that makes me want to scream like a child who just found a severed human finger in his bowl of cornflakes.

You see, TEEs allow you to run software (more) securely inside of your own device, which is a good and respectable thing to do. But distributed systems often require more: they must ensure that everyone else in the network is also running the software in a similarly-trustworthy environment. If some people aren’t doing so — that is, if they’re running the software on a computer they can tamper with and control — then that can potentially harm the security of the entire network.

TEE designers have been aware of this idea for a long time, and for years have been trying to address this using secure remote attestation. Attestation systems provision each processor with a digital signing key (in turn certified by the manufacturer’s root signing key) that allows the processor to produce attestations. These signed messages “prove” to remote parties that you’re actually running the software inside a valid TEE, rather than on some insecure VMWare image or a Raspberry Pi. Provided these systems all work perfectly, everyone in the system can communicate with everyone else and know that they are running the software on secure hardware as well.
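To make that trust chain concrete, here is a minimal sketch of attestation verification in Python (using PyNaCl’s Ed25519 signatures). This is a toy model of my own, not SGX’s or TrustZone’s actual protocol, and every name in it is made up:

```python
# Toy model of TEE remote attestation -- not any vendor's real protocol.
# Requires: pip install pynacl
import hashlib
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# 1. The manufacturer holds a root signing key; its public half ships with verifiers.
manufacturer_root = SigningKey.generate()

# 2. Each processor is provisioned with its own attestation key,
#    certified (signed) by the manufacturer's root key.
device_key = SigningKey.generate()
device_cert = manufacturer_root.sign(bytes(device_key.verify_key))

# 3. The enclave "attests" by signing a measurement (hash) of the software it runs.
measurement = hashlib.sha256(b"trustlesscoin-node-v1.0").digest()
attestation = device_key.sign(measurement)

def verify_attestation(root_vk: VerifyKey, cert, att, expected: bytes) -> bool:
    """Check the chain: root certifies the device key, device key signs the measurement."""
    try:
        device_vk = VerifyKey(root_vk.verify(cert))   # recover the certified device key
        return device_vk.verify(att) == expected      # recover and check the measurement
    except BadSignatureError:
        return False

# A remote node only needs the manufacturer's public root key to check the whole chain.
print(verify_attestation(manufacturer_root.verify_key, device_cert,
                         attestation, measurement))   # True
# Of course, if even one device signing key (or the root key!) leaks, an attacker
# can forge this proof for software running anywhere at all.
```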

The problems crop up when that assumption breaks down. If even a single person can emulate the software inside a TEE on their own (non-trusted device or VM) then all of your beautiful assumptions may go out the window. Indeed, something very similar to this recently happened to Secret Network: clever academic researchers found a way to extract a master decryption key from (one) processor, and were then able to use that key to destroy privacy guarantees across the whole network. (Some mitigations have since been deployed.)

It goes without saying that Red Team Blues is not about side-channel attacks on processors. The problem in this novel is vastly worse: Danny Lazer has gone and bribed someone to steal the secret root signing keys for every major mobile secure enclave processor: and, of course, they’ve all been stolen. Hench’s problem is to figure out whether it’s even possible to get them back. And that’s only the beginning of the story.

As its name implies, Red Team Blues is a novel about the difference between offense and defense: about how much more difficult it is to secure a system than it is to attack one. This metaphor applies to just about every aspect of life, from our assumptions about computer security to the way we live our lives and build our societies.

But setting all these heavy thoughts aside, mostly Red Team Blues is a quick, fun read. You can get the eBook without DRM, or listen to an audiobook version narrated by Wil Wheaton (although I didn’t listen to it, because I couldn’t put the book down).

Why the FBI can’t get your browsing history from Apple iCloud (and other scary stories)

It’s not every day that I wake up thinking about how people back up their web browsers. Mostly this is because I don’t feel the need to back up any aspect of my browsing. Some people lovingly maintain huge libraries of bookmarks and use fancy online services to organize them. I pay for one of those because I aspire to be that kind of person, but I’ve never been organized enough to use it.

In fact, the only thing I want from my browser is for my history to please go away, preferably as quickly as possible. My browser is a part of my brain, and backing my thoughts up to a cloud provider is the most invasive thing I can imagine. Plus, I’m constantly imagining how I’ll explain specific searches to the FBI.

All of these thoughts are apropos of a Twitter thread I saw last night from Justin, the Engineering Director on Chrome Security & Privacy at Google, which explains why “browser sync” features (across several platforms) can’t provide end-to-end encryption by default.

This thread sent me down a rabbit hole that ended in a series of highly-scientific Twitter polls and frantic scouring of various providers’ documentation. Because while on the one hand Justin’s statement is mostly true, it’s also a bit wrong. Specifically, I learned that Apple really seems to have solved this problem. More interestingly, the specific way that Apple has addressed this problem highlights some strange assumptions that make this whole area unnecessarily messy.

This munging of expectations also helps to explain why “browser sync” features and the related security tradeoffs seem so alien and horrible to me, while other folks think these are an absolute necessity for survival.

Let’s start with the basics.

What is cloud-based browser “sync”, and how secure is it?

Most web browsers (and operating systems with a built-in browser) incorporate some means of “synchronizing” browsing history and bookmarks. By starting with this terminology we’ve already put ourselves on the back foot, since “synchronize” munges together three slightly different concepts:

  1. Synchronizing content across devices. Where, for example, you have a phone, a laptop and a tablet all active and in occasional use, and you want your data to propagate from one to the others.
  2. Backing up your content. Wherein you lose all your device(s) and need to recover this data onto a fresh clean device.
  3. Logging into random computers. If you switch computers regularly (for example, back when we worked in offices) then you might want to be able to quickly download your data from the cloud.

(Note that the third case is kind of weird. It might be a subcase of #1 if you have another device that’s active and can send you the data. It might be a subcase of #2. I hate this one and am sending it to live on a farm upstate.)

You might ask why I call these concepts “very different” when they all seem quite similar. The answer is that I’m thinking about a very specific question: namely, how hard is it to end-to-end encrypt this data so that the cloud provider can’t read it? The answer is different between (at least) the first two cases.

If what we really want to do is synchronize your data across many active devices, then the crypto problem is relatively easy. The devices generate public keys and register them with your cloud provider, and then each one simply encrypts relevant content to the others. Apple has (I believe) begun to implement this across their device ecosystem.
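As a rough sketch of that pattern (using PyNaCl sealed boxes; the device registry and the “cloud” below are stand-ins of my own invention, not Apple’s actual protocol):

```python
# Toy model of cross-device sync with end-to-end encryption: each device holds a
# keypair, and senders encrypt every record to the other registered devices, so
# the cloud only ever relays ciphertext. Requires: pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Each active device generates a keypair and registers only the public half.
phone, laptop, tablet = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()
registry = {"phone": phone.public_key, "laptop": laptop.public_key, "tablet": tablet.public_key}

def sync_from(sender: str, record: bytes) -> dict:
    """Encrypt a record to every *other* registered device."""
    return {name: SealedBox(pk).encrypt(record)
            for name, pk in registry.items() if name != sender}

# The phone pushes a new history entry; the provider stores only ciphertexts.
cloud_queue = sync_from("phone", b"https://example.com/visited-at-2am")

# The laptop pulls its copy and decrypts it locally with its own private key.
print(SealedBox(laptop).decrypt(cloud_queue["laptop"]))
```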

If what we want is cloud backup, however, then the problem is much more challenging. Since the base assumption is that the device(s) might get lost, we can’t store decryption keys there. We could encrypt the data under the user’s device passcode or something, but most users choose terrible passcodes that are trivially subject to dictionary attacks. Services like Apple iCloud and Google (Android) have begun to deploy trusted hardware in their data centers to mitigate this: these “Hardware Security Modules” (HSMs) store encryption keys for each user, and only allow a limited number of password guesses before they wipe the keys forever. This keeps providers and hackers out of your stuff. Yay!
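A minimal sketch of the idea, with the HSM reduced to a Python class and every parameter invented for illustration:

```python
# Toy model of HSM-protected backup keys: the backup itself is encrypted under a
# random key, and the "HSM" only releases that key for the right passcode --
# with a hard limit on wrong guesses.
import hashlib, hmac, os

def stretch(passcode: str, salt: bytes) -> bytes:
    # Passcodes are weak, so stretch them; the real defense is the guess limit.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

class ToyHSM:
    """Stands in for the real hardware module in the provider's data center."""
    def __init__(self, passcode: str, backup_key: bytes, max_guesses: int = 10):
        self.salt = os.urandom(16)
        self.verifier = stretch(passcode, self.salt)
        self.backup_key = backup_key
        self.guesses_left = max_guesses

    def release_key(self, guess: str) -> bytes:
        if self.guesses_left <= 0:
            raise PermissionError("key wiped after too many wrong guesses")
        if hmac.compare_digest(stretch(guess, self.salt), self.verifier):
            return self.backup_key
        self.guesses_left -= 1
        if self.guesses_left == 0:
            self.backup_key = None   # wiped forever
        raise ValueError(f"wrong passcode ({self.guesses_left} guesses left)")

# The device encrypts its backup under backup_key and escrows the key in the HSM.
hsm = ToyHSM(passcode="1234", backup_key=os.urandom(32))
key = hsm.release_key("1234")      # a new device with the right passcode recovers it
# ...while an attacker running a dictionary attack gets at most ten tries.
```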

Except: not yay! Because, as Justin points out (and here I’m paraphrasing in my own words) users are the absolute worst. Not only do they choose lousy passcodes, but they constantly forget them. And when they forget their passcode and can’t get their backups, do they blame themselves? Of course not! They blame Justin. Or rather, they complain loudly to their cloud backup providers.

While this might sound like an extreme characterization, remember: when you have a billion users, the extreme ones will show up quite a bit.

The consequence of this, argues Justin, is that most cloud backup services don’t use end-to-end encryption by default for browser synchronization, and hence your bookmarks (and, in this case, your browsing history) will be stored at your provider in plaintext. Justin’s point is that this decision flows from the typical user’s expectations and is not something providers have much discretion about.

And if that means your browsing history happens to get data-mined, well: the spice must flow.

Except none of this is quite true, thanks to Apple!

The interesting thing about this explanation is that it’s not quite true. I was inclined to believe it, until I went spelunking through the Apple iCloud security docs and found that Apple does things slightly differently.

(Note that I don’t mean to blame Justin for not knowing this. The problem here is that Apple absolutely sucks at communicating their security features to an audience that isn’t obsessed with reading their technical documentation. My students and I happen to be obsessive, and sometimes it pays dividends.)

What I learned from my exploration (and here I pray the documentation is accurate) is that Apple actually does seem to provide end-to-end encryption for browser data. Or more specifically: they provide end-to-end encryption for browser history data starting as of iOS 13.

More concretely, Apple claims that this data is protected “with a passcode”, and that “nobody else but you can read this data.” Presumably this means Apple is using their iCloud Keychain HSMs to store the necessary keys, in a way that Apple itself can’t access.

What’s interesting about the Apple decision is that it appears to explicitly separate browsing history and bookmarks, rather than lumping them into a single take-it-or-leave-it package. Apple doesn’t claim to provide any end-to-end encryption guarantees whatsoever for bookmarks: presumably someone who resets your iCloud account password can get those. But your browsing history is protected in a way that even Apple won’t be able to access, in case the FBI show up with a subpoena.

That seems like a big deal and I’m surprised that it’s gotten so little attention.

Why should browser history be lumped together with bookmarks?

This question gets at the heart of why I think browser synchronization is such an alien concept. From my perspective, browsing history is an incredibly sensitive and personal thing that I don’t want anywhere. Bookmarks, if I actually used them, would be the sort of thing I’d want to preserve.

I can see the case for keeping history on my local devices. It makes autocomplete faster, and it’s nice to find that page I browsed yesterday. I can see the case for (securely) synchronizing history across my active devices. But backing it up to the cloud in case my devices all get stolen? Come on. This is like the difference between backing up my photo library, and attaching a GoPro to my head while I’m using the bathroom.

(And Google’s “sync” service only stores 90 days of history, so it isn’t even a long-term backup.)

One cynical answer to this question is: these two very different forms of data are lumped together because one of them — browser history — is extremely valuable for advertising companies. The other one is valuable to consumers. So lumping them together gets consumers to hand over the sweet, sweet data in exchange for something they want. This might sound critical, but on the other hand, we’re just describing the financial incentive that we know drives most of today’s Internet.

A less cynical answer is that consumers really want to preserve their browsing history. When I asked on Twitter, a bunch of tech folks noted that they use their browsing history as an ad-hoc bookmarking system. This all seemed to make some sense, and so maybe there’s just something I don’t get about browser history.

However, the important thing to keep in mind here is that just because you do this doesn’t mean it should drive a few billion people’s security posture. The implication of prioritizing the availability of browser history backups (as a default) is that vast numbers of people will essentially have their entire history uploaded to the cloud, where it can be accessed by hackers, police and surveillance agencies.

Apple seems to have made a different calculation: not that history isn’t valuable, but that it isn’t a good idea to hold the detailed browser history of a billion human beings in a place where any two-bit police agency or hacker can access it. I have a very hard time faulting them in that.

And if that means a few users get upset, that seems like a good tradeoff to me.

Why Antisec matters

A couple of weeks ago the FBI announced the arrest of five members of the hacking group LulzSec. We now know that these arrests were facilitated by ‘Anonymous’ leader* “Sabu”, who, according to court documents, was arrested and ‘turned’ in June of 2011. He spent the next few months working with the FBI to collect evidence against other members of the group.

This revelation is pretty shocking, if only because Anonymous and Lulz were so productive while under FBI leadership. Their most notable accomplishment during this period was the compromise of the intelligence analysis firm Stratfor — culminating in that firm’s (rather embarrassing) email getting strewn across the Internet.

This caps off a fascinating couple of years for our field, and gives us a nice opportunity to take stock. I’m neither a hacker nor a policeman, so I’m not going to spend much time on the why or the how. Instead, the question that interests me is: what impact have Lulz and Anonymous had on security as an industry?

Computer security as a bad joke

To understand where I’m coming from, it helps to give a little personal background. When I first told my mentor that I was planning to go back to grad school for security, he was aghast. This was a terrible idea, he told me. The reality, in his opinion, was that security was nothing like Cryptonomicon. It wasn’t a developed field. We were years away from serious, meaningful attacks, let alone real technologies that could deal with them.

This seemed totally wrong to me. After all, wasn’t the security industry doing a bazillion dollars of sales every year? Of course people took it seriously. So I politely disregarded his advice and marched off to grad school — full of piss and vinegar and idealism. All of which lasted until approximately one hour after I arrived on the floor of the RSA trade show. Here I learned that (a) my mentor was a lot smarter than I realized, and (b) idealism doesn’t get you far in this industry.

Do you remember the first time you met a famous person, and found out they were nothing like the character you admired? That was RSA for me. Here I learned that all of the things I was studying in grad school, our industry was studying too. And from that knowledge they were producing a concoction that was almost, but not quite, entirely unlike security.

Don’t get me wrong, it was a rollicking good time. Vast sums of money changed hands. Boxes were purchased, installed, even occasionally used. Mostly these devices were full of hot air and failed promises, but nobody really cared, because after all: security was kind of a joke anyway. Unless you were a top financial services company or (maybe) the DoD, you only really spent money on it because someone was forcing you to (usually for compliance reasons). And when management is making you spend money, buying glossy products is a very effective way to convince them that you’re doing a good job.

Ok, ok, you think I’m exaggerating. Fair enough. So let me prove it to you. Allow me to illustrate my point with a single, successful product, one which I encountered early on in my career. The product that comes to mind is the Whale Communications “e-Gap“, which addressed a pressing issue in systems security, namely: the need to put an “air gap” between your sensitive computers and the dangerous Internet.

Now, this used to be done (inexpensively) by simply removing the network cable. Whale’s contribution was to point out a major flaw in the old approach: once you ‘gap’ a computer, it no longer has access to the Internet!

Hence the e-Gap, which consisted of a memory unit and several electronic switches. These switches were configured such that the memory could be connected only to the Internet or to your LAN, but never to both at the same time (seriously, it gives me shivers). When data arrived at one network port, the device would load up with application data, then flip ‘safely’ to the other network to disgorge its payload. Isolation achieved! Air. Gap.

(A few pedants — damn them — will try to tell you that the e-Gap is a very expensive version of an Ethernet cable. Whale had a ready answer to this, full of convincing hokum about TCP headers and bad network stacks. But really, this was all beside the point: it created a freaking air gap around your network! This apparently convinced Microsoft, who later acquired Whale for five times the GDP of Ecuador.)
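The pedants do have a point. Strip out the hokum and the device behaves like a store-and-forward buffer, as in this toy sketch (entirely my own caricature, nothing to do with Whale’s actual implementation):

```python
# A toy "air gap": one memory buffer, switched to one network at a time.
# Functionally it's a store-and-forward relay -- data still flows end to end,
# which is rather the point.
class EGap:
    def __init__(self):
        self.memory = None
        self.connected_to = "internet"   # never both sides at once!

    def _flip(self):
        self.connected_to = "lan" if self.connected_to == "internet" else "internet"

    def relay(self, payload: bytes) -> bytes:
        assert self.connected_to == "internet"
        self.memory = payload     # load up from the scary Internet side
        self._flip()              # "safely" switch over to the LAN side
        delivered = self.memory   # disgorge the payload onto the LAN
        self.memory = None
        self._flip()
        return delivered

print(EGap().relay(b"GET /payroll HTTP/1.1"))   # arrives just fine. Air. Gap.
```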

Now I don’t mean to sound too harsh. Not all security was a joke. There were plenty of solid companies doing good work, and many, many dedicated security pros who kept it from all falling apart.

But there are only so many people who actually know about security, and as human beings these people are hard to market. To soak up all that cybersecurity dough you needed a product, and to sell that product you needed marketing and sales. And with nobody actually testing vendors’ claims, we eventually wound up with the same situation you get in any computing market: people buying garbage because the booth babes were pretty.**

Lulz, Anonymous and Antisec

I don’t remember when I first heard the term ‘Antisec’, but I do remember what went through my mind at the time: either this is a practical joke, or we’d better harden our servers.

Originally Antisec referred to the ‘Antisec manifesto‘, a document that basically declared war on the computer security industry. The term was too good to be so limited, so LulzSec/Anonymous quickly snarfed it up to refer to their hacking operation (or maybe just part of it, who knows). Wherever the term came from, it basically had one meaning: let’s go f*** stuff up on the Internet.

Since (per my explanation above) network security was pretty much a joke at this point, this didn’t look like too much of a stretch.

And so a few isolated griefing incidents gradually evolved into serious hacking. It’s hard to say where it really got rolling, but to my eyes the first serious casualty of the era was HBGary Federal, who — to be completely honest — were kind of asking for it. (Ok, I don’t mean that. Nobody deserves to be hacked, but certainly if you’re shopping around a plan to ‘target’ journalists and civilians you’d better have some damned good security.)

In case you’re not familiar with the rest of the story, you can get a taste of it here and here. In most cases Lulz/Anonymous simply DDoSed or defaced websites, but in other cases they went after email, user accounts, passwords, credit cards, the whole enchilada. Most of these ‘operations’ left such a mess that it’s hard to say for sure which actually belonged to Anonymous, which were criminal hacks, and which (the most common case) were a little of each.

The bad

So with the background out of the way, let’s get down to the real question of this post. What has all of this hacking meant for the security industry?

Well, obviously, one big problem is that it’s making us (security folks) look like a bunch of morons. I mean, we’ve spent the last N years developing secure products and trying to convince people that if they just followed our advice they’d be safe. Yet when it comes down to it, a bunch of guys on the Internet are walking right through it.

This is because for the most part, networks are built on software, and software is crap. You can’t fix software problems by buying boxes, any more than, say, buying cookies will fix your health and diet issues. The real challenge for industry is getting security into the software development process itself — or, even better, acknowledging that we never will, and finding a better way to do things. But this is expensive, painful, and boring. More to the point, it means you can’t outsource your software development to the lowest bidder anymore.

Security folks mostly don’t even try to address this. It’s just too hard. When I ask my software security friends why their field is so terrible (usually because they’re giving me crap about crypto), they basically look at me like I’m from Mars. The classic answer comes from my friend Charlie Miller, who has a pretty firm view of what is, and isn’t, his responsibility:

I’m not a software developer, I just break software! If they did it right, I’d be out of a job.

So this is a problem. But beyond bad software, there’s just a lot of rampant unseriousness in the security industry. The best (recent) example comes from RSA, who apparently forgot that their SecurID product was actually important, and decided to make the master secret database accessible from a single compromised Windows workstation. The result of this ineptitude was a series of no-joking-around breaches of US Defense Contractors.

While this has nothing to do with Anonymous, it goes some of the way to explaining why they’ve had such an easy time these past two years.

The good

Fortunately there’s something of a silver lining to this dark cloud. And that is: for once, people finally seem to be taking security seriously. Sort of. Not enough of them, and maybe not in the ways that matter (i.e., building better consumer products). But at least institutionally there seems to be a push away from the absolute stupid.

There’s also been (to my eyes) a renewed interest in data-at-rest encryption, a business that’s never really taken off despite its obvious advantages. This doesn’t mean that people are buying good encryption products (encrypted hard drives come to mind), but at least there’s movement.

To some extent this is because there’s finally something to be scared of. Executives can massage data theft incidents, and payment processors can treat breaches as a cost of doing business, but there’s one thing that no manager will ever stop worrying about. And that is: having their confidential email uploaded to a convenient, searchable web platform for the whole world to see.

The ugly 

The last point is that Antisec has finally drawn some real attention to the elephant in the room, namely, the fact that corporations are very bad at preventing targeted breaches. And that’s important because targeted breaches are happening all the time. Corporations mostly don’t know it, or worse, prefer not to admit it.

The ‘service’ that Antisec has provided to the world is simply their willingness to brag. This gives us a few high-profile incidents that aren’t in stealth mode. Take them seriously, since my guess is that for every one of these, there are ten other incidents that we never hear about.***

In Summary

Let me be utterly clear about one thing: none of what I’ve written above should be taken as an endorsement of Lulz, Anonymous, or the illegal defacement of websites. Among many other activities, Anonymous is accused of griefing the public forums of the Epilepsy Foundation of America in an attempt to cause seizures in its readers. Stay classy, guys.

What I am trying to point out is that something changed a couple of years ago when these groups started operating. It’s made a difference. And it will continue to make a difference, provided that firms don’t become complacent again.

So in retrospect, was my mentor right about the field of information security? I’d say the jury’s still out. Things are moving fast, and they’re certainly interesting enough. I guess we’ll just have to wait and see where it all goes. In the meantime I can content myself with the fact that I didn’t take his alternative advice — to go study Machine Learning. After all, what in the world was I ever going to do with that?

Notes:

* Yes, there are no leaders. Blah blah blah.

** I apologize here for being totally rude and politically incorrect. I wish it wasn’t true.

*** Of course this is entirely speculation. Caveat Emptor.

How not to redact a document: NHTSA and Toyota edition

Allow me to apologize in advance for today’s offtopic post, which has nothing to do with crypto. Consider it a reflection on large organizations’ ability to manage and protect sensitive data without cryptography. Report card: not so good.

Some backstory. You probably remember that last year sometime Toyota Motors had a small amount of trouble with their automobiles. A few of them, it was said, seemed prone to sudden bouts of acceleration. Recalls were issued, malfunctioning throttle cables were held aloft, the CEO even apologized. That’s the part most people have heard about.

What you probably didn’t hear too much about (except maybe in passing) was that NASA and NHTSA spent nearly a year poring through the Toyota engine control module code to figure out if software could be at fault. Their report, issued in February of this year, basically said the software was ok. Or maybe it didn’t. It’s not really clear what it said, because major portions — particularly of the software evaluation section — were redacted.

Now, like every major redaction of the digital age, these redactions were done thoughtfully, by carefully chopping out the sensitive portions using sophisticated digital redaction software, designed to ensure that the original meaning could never leak through.

Just kidding!

Seriously, as is par for the course in these things, NHTSA just drew big black squares over the parts they wanted to erase.

And this is where we get to the sophistication of organizations when it comes to managing secure data. You see, NHTSA released these reports online in February 2011, as a response to a FOIA request. They were up there for anybody to see, until about April — when somebody noticed that Google was magically unredacting these documents. Whoops. Time to put up some better documents!

Naturally NHTSA also remembered to contact Archive.org and ask that the old reports be pulled off of the Wayback Machine. No, really, I’m just kidding about that, too.

Of course, they’re all cached there for all to see, in their full un-redactable glory. All it takes is a copy and paste. Take, for example, one of the blacked-out portions, where the redacted text decodes to:

The duty % is converted into three boolean flags, a flag describing the sign of the duty, a flag if the absolute value of the duty is greater than or equal to 88%, and a flag if the absolute value of the duty is less than 1%.  The 64 combinations of these flags and their previous values are divided into ten cases. Of the ten cases, five will open the throttle, two of the five will make the throttle more open than currently but not wide open, two will provide 100% duty instantaneously, and one will perpetually open the throttle. Any duty command from the PID controller greater than or equal to 88% will perpetually open the throttle and lead to WOT [wide open throttle]. This also means that any duty greater than 88% will be interpreted by the hardware as a 100% duty command.

Yikes!
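If you’re wondering where the 64 combinations come from, here is a rough reconstruction of the flag logic from the quoted text alone. The names are mine, and the actual ten-case table remains redacted:

```python
# Three boolean flags per sample, plus their previous values: 2**6 = 64 combinations.
def duty_flags(duty_pct: float) -> tuple:
    return (
        duty_pct < 0,             # sign of the duty
        abs(duty_pct) >= 88.0,    # "high" duty: the hardware treats this as 100%
        abs(duty_pct) < 1.0,      # "near-zero" duty
    )

prev = duty_flags(45.0)
curr = duty_flags(91.5)
state = prev + curr               # one of the 64 (previous, current) combinations

# Per the report, these 64 combinations fall into ten cases; five open the throttle,
# and one opens it perpetually (wide-open throttle). The grouping itself is not public.
print(state)
```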

So what’s the computer security lesson from this? Once data’s out on the wire, it’s gone for good. People need to be more careful with these kinds of things. On the bright side, this was just information, possibly even information that might be useful to the public. It’s not like it was sensitive source code, which I’ve also seen find its way onto Google.