40

I understand that many open-source projects ask that vulnerabilities not be reported on their public bug tracker but instead privately, to the project's security team, so that the bug is not disclosed before a fix is available. That makes perfect sense.

However, since the code repository of many open-source projects is public, won't fixing the bug in the source code immediately disclose it?

What measures (if any) are taken by open-source projects (e.g. the Linux kernel) to ensure that fixes for security vulnerabilities can be deployed to the end user (e.g. a Samsung Android phone) before the vulnerability is disclosed?

  • 2
    How do you deploy open-source code without revealing it? I'm not sure what you want is possible the way you are thinking about it. Fixing something that's public discloses what was fixed...
    – schroeder
    Commented Nov 11, 2020 at 9:25
    @schroeder: Yeah, that's what I thought too, but just today I read that a Chrome vulnerability has been fixed while the details of the bug have not been released yet, which seems weird to me: Chromium is open source, so the details should be easy to obtain by looking at the commit history. Since I'm not involved in any big open-source projects myself, I thought that maybe I am missing something...
    – Heinzi
    Commented Nov 11, 2020 at 9:41
  • 9
    There's a big difference between posting human-readable details and explanations, and updating code. I just think that you are interpreting their words too literally.
    – schroeder
    Commented Nov 11, 2020 at 9:55
  • 1
    @schroeder: "How do you deploy open-source code without revealing it?" – The overwhelming majority of users of open-source software never come into contact with the source. I am a programmer and geek, and yet the Linux kernel on my phone magically appears over the air from Samsung, my browser update gets pushed by Google, and I have never even looked at the Darwin source code, let alone compiled it. It would be perfectly possible for Google to update my Chrome without disclosing the vulnerability.
    – Jörg W Mittag
    Commented Nov 11, 2020 at 20:39
  • 3
    @JörgWMittag Wat? How does your personal disinterest in reviewing source code stop Mal Malicious, evil hacker extraordinaire, from watching changes to source code?
    – 8bittree
    Commented Nov 13, 2020 at 20:41

4 Answers

57

They don't. By releasing code, they automatically "disclose" the issue to those who can reverse engineer the patch. But they can delay explaining or providing the details for easy consumption.

If they delay releasing the code, they force users to use known-vulnerable code.

If they release the code and do not announce it as a security fix, then users might not patch and end up running known-vulnerable code.

So, they fix the code, release it, announce a security fix so that people assign the appropriate urgency, but they can delay explaining all the details to make it a little harder for attackers to figure out how to exploit the vulnerability.

Is that effective?

To some degree, "security by obscurity" has a place in a strategy in order to buy some time. Since it costs nothing, and it can have some positive effect, it seems like an easy call to make.
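
To make the point about reverse-engineering the patch concrete, here is a purely hypothetical, minimal example (Python, not taken from any real project's history): the whole "silent" security fix is one added check, and anyone diffing the two revisions can read the vulnerability straight off it.

    import os

    BASE_DIR = "/srv/app/uploads"

    def read_upload(filename: str) -> bytes:
        """Return the contents of an uploaded file by name (hypothetical handler)."""
        full_path = os.path.realpath(os.path.join(BASE_DIR, filename))
        # The entire security fix is the next two lines. Even if the commit message
        # only says "harden file handling", the diff tells an attacker that every
        # unpatched deployment accepts "../" sequences, i.e. path traversal.
        if not full_path.startswith(BASE_DIR + os.sep):
            raise ValueError("path escapes the upload directory")
        with open(full_path, "rb") as f:
            return f.read()

Withholding the write-up only delays the copy-paste exploit, not the insight: to anyone who can read code, the patch is the write-up.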

  • 38
    An additional note that might be worth making is that this applies to closed source software as well. A source code patch is, by its nature, easier to reverse engineer than a binary patch, but only to the extent that it gives more obscurity, not that it gives a qualitatively different level of security. Depending on the technologies involved, it might be trivial for an expert to spot e.g. a buffer overflow being patched in a compiled executable.
    – IMSoP
    Commented Nov 11, 2020 at 18:01
  • 4
    I wouldn't call that "trivial". It still takes effort to analyze how to execute the proper path to get to the vulnerable code and what inputs to use that exploit the vulnerability, sometimes significant effort.
    – SplashHit
    Commented Nov 11, 2020 at 20:10
  • 4
    "Since it costs nothing" – the cost is a reduced amount of peer review. If you post a commit without explanation, fewer people can read the code and check whether it actually fixes all instances of the problem and does not introduce new problems. And introducing new bugs/vulnerabilities through a quick fix that only a handful of people reviewed has happened in the past.
    – Falco
    Commented Nov 12, 2020 at 9:49
20

The same way they prevent disclosing the report: by not disclosing it.

Since you mentioned the Linux kernel specifically: only a vanishingly small number of users build their kernels directly from the master branch of Linus Torvalds's Git repository. The vast majority of users simply use whatever kernel their distribution's automatic updater installs.

In turn, the vast majority of distributions don't build their kernels directly from the master branch of Linus Torvalds's Git repository either. They use some official release version as the base, backport some new features and fixes from newer kernels, integrate some third-party patches that are not part of Linus's repository, integrate some distribution-specific patches, etc.

Since they integrate patches that are not part of Linus's repository anyway, it makes no difference to them to integrate just one more patch for the most recent vulnerability.

So, basically what happens is that the release of the patch is coordinated such that at the time the vulnerability is fixed in Linus's Git repository and publicly announced, patched kernel images are already pushed out to users from the distribution's update servers.

Note that generally, the Linux developers prefer to publish fixes as quickly as possible, but if you check out the chapter on Security Bugs in the Linux Kernel Documentation, you will find the paragraph on Coordination, which I think explains the specific process for the Linux Kernel pretty well, and is also representative of how some other large projects handle the issue:

Fixes for sensitive bugs […] may need to be coordinated with the private [linux-distros] mailing list so that distribution vendors are well prepared to issue a fixed kernel upon public disclosure of the upstream fix. Distros will need some time to test the proposed patch and will generally request at least a few days of embargo […]. When appropriate, the security team can assist with this coordination, or the reporter can include linux-distros from the start.

So, the trick is:

  • Coordinate with each other to make sure that all distribution vendors are ready to push out updates. This would, for example, also include something like Google coordinating with the downstream handset vendors, etc.
  • Release the fix and the information that it is a serious security vulnerability, but do not necessarily release the nature of the vulnerability. (A serious attacker will be able to figure it out from the patch, but you buy yourself a little bit of time.)

How much of this is done, and what the timeframes are, depends on the nature of the vulnerability. A remotely-exploitable privilege escalation will be treated differently than a DoS that can only be exploited by someone who is already logged into the machine locally.

In the best possible case, the experience of the end user will be that by the time the vulnerability becomes public, their computer will already have greeted them with a message informing them that the system has been rebooted overnight for the installation of a critical security update, and that's it.
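
If you want to check that this has actually happened on your own machine, a rough heuristic is to compare the running kernel release against the fixed release named in your distribution's advisory. A minimal sketch (Python; the FIXED_RELEASE value is a made-up placeholder, and the authoritative check is always your package manager plus the advisory itself, since distributions backport fixes without bumping the upstream version):

    import platform
    import re

    # Placeholder: the fixed kernel release named in a hypothetical distro advisory.
    FIXED_RELEASE = "5.15.0-91-generic"

    def release_key(release):
        # Reduce a release string to its numeric components for a rough comparison.
        return [int(part) for part in re.findall(r"\d+", release)]

    running = platform.release()
    print("running: ", running)
    print("advisory:", FIXED_RELEASE)

    # Heuristic only -- real package-version comparison rules (dpkg, rpm) are stricter.
    if release_key(running) >= release_key(FIXED_RELEASE):
        print("The running kernel is at or above the release named in the advisory.")
    else:
        print("The running kernel is older than the advisory's release; the update may still be pending.")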

  • 2
    "the release of the patch is coordinated such that [...] patched kernel images are already pushed out to users from the distribution's update servers." I think you're saying that all well-known Linux vendors coordinate with each other to ensure security binary releases come out before the code is released. Can you attest to this personally, or do you have a source for this?
    – jpaugh
    Commented Nov 11, 2020 at 21:12
  • 5
    @jpaugh That's the whole point of the oss-security distros mailing list. That is a private mailing list with representatives of all major Linux distros, BSD flavours, etc., used to coordinate disclosure and patch releases. Occasionally somebody messes up and discloses stuff too early (for example, by running a testing build on publicly visible infrastructure, or by pushing a private branch with a patch to a public repository), but most of the time it works fairly well.
    – TooTea
    Commented Nov 12, 2020 at 9:31
17

Coincidentally I have a tab open about CVE-2020-17640 in the Eclipse Vert.x project where the product maintainers are discussing this exact issue!

Julien Viet 2020-09-28 13:07:31 EDT

So I just need to provide the details of the CVE to get one?

If that is so, I don't get how that can remain confidential until we publish a fix.

Wayne Beaton 2020-09-28 23:03:45 EDT

Provide me with the details here and I'll assign a CVE, then you can wrap up your commit and push. I can delay pushing to the central authority [Mitre / NVD] for a day or so, but no longer.


Your question is

How do open-source projects prevent disclosing a bug while fixing it?

I think the answer is: with difficulty.

Exactly as you say, you want to keep the details private until you have a patch ready, but there are a number of competing interests that make it difficult to keep information from going public.

  1. The bug will often be reported via a public bug tracker. You need a way to pull that off the public tracker, or mark it private.
  2. You need to give the details to Mitre in order to get a CVE number assigned. While I've never submitted a CVE myself, I assume Mitre will work with all parties involved to delay publication until an appropriate time.
  3. In the commit fixing the issue, you want to reference the CVE number, which is problematic if your project's source is hosted on a public Git repository, for example (see the sketch after this list).
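
On that last point: when CVE ids do end up in commit messages, mapping a CVE to its fixing commit takes one command against any public clone, which is exactly why some projects keep the ids out of the history until after disclosure. A tiny sketch (just wrapping git; run it from inside a clone of the project):

    import subprocess

    # List commits whose messages mention a CVE identifier. Anyone with a clone of a
    # public repository -- attacker or auditor alike -- can do this, which is why
    # referencing the CVE in the fixing commit effectively publishes the mapping.
    result = subprocess.run(
        ["git", "log", "--oneline", "--grep=CVE-20"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout or "no commits mention a CVE id")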

Personal anecdote:

I've noticed that a lot of projects keep CVE numbers out of commit messages, which is super frustrating for me when I'm trying to decide if a given CVE is a "Take the server down until we can patch", or a "We're good to wait until next cycle". CVE-2020-17640 is one of those that's rated CVSS 9.8, but there's literally no info available to help me determine if this will be exploitable in the deployment I'm investigating.
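
When the repository gives you nothing, the public CVE record is often the only machine-readable starting point for that triage. A rough sketch of pulling whatever has been published for a given id, assuming the NVD JSON API 2.0 and its current response layout (both of which may change; the endpoint is also rate-limited, and an optional API key raises the limit):

    import json
    import urllib.request

    CVE_ID = "CVE-2020-17640"
    URL = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

    with urllib.request.urlopen(URL, timeout=30) as resp:
        data = json.load(resp)

    for item in data.get("vulnerabilities", []):
        cve = item.get("cve", {})
        print(cve.get("id"))
        for desc in cve.get("descriptions", []):
            if desc.get("lang") == "en":
                print("description:", desc.get("value"))
        # CVSS data sits under different keys depending on the scoring version used.
        metrics = cve.get("metrics", {})
        for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
            for metric in metrics.get(key, []):
                cvss = metric.get("cvssData", {})
                print(key, cvss.get("baseScore"), cvss.get("vectorString"))
        for ref in cve.get("references", []):
            print("reference:", ref.get("url"))

Even when the description is terse, the CVSS vector string at least tells you whether the issue is network-reachable or needs local access, which is usually enough to decide between "patch now" and "next cycle".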

  • 2
    As I understand it: you can request a CVE ID by giving enough details that the organization (not necessarily MITRE, but for simplicity let's assume it is) can decide whether the issue deserves an ID. When that's done, the ID is marked as "reserved" but not yet published. A CVE must reference public information, so you have to publish that information separately and then inform MITRE, which will publish the CVE. So it's under your own control, not MITRE's, when the CVE becomes public. That doesn't really match the linked issue, though, so there may be some details I'm missing.
    – Voo
    Commented Nov 12, 2020 at 10:44
  • Presentation from MITRE about the process that seems to mostly agree with my understanding.
    – Voo
    Commented Nov 12, 2020 at 10:48
-1

One way would be to write the patch in a less-than-obvious way and to use misleading comments and commit messages, possibly mixing the patch in with several other (functionality/stability) patches, documenting a functionality/stability change as the vulnerability fix and documenting the actual fix as something mundane. Cleaning the situation up later would of course be advisable.

  • 2
    Security through obscurity?
    Commented Nov 13, 2020 at 18:38
    To quote another answer: "To some degree, 'security by obscurity' has a place in a strategy in order to buy some time". And that is exactly what I was expanding on. Tripping up an attacker will of course not stop them, but it will cost them some momentum.
    Commented Nov 14, 2020 at 19:59
