
Looking at the count of CVE reports by product, I'm tempted to use it as an indicator of which programs are the most secure, and choose the ones I install accordingly.

But I wonder if these numbers are misleading. For example, the Linux kernel is second in the list and Windows 10 is not even mentioned. I suppose it's in part because of the open source nature of Linux, which makes finding and fixing the flaws easier and faster, increasing the number of CVEs.

Another thing that I find interesting is that, while Chrome has more vulnerabilities listed in 2016 than Firefox, there are a lot more code execution flaws in Firefox, while a big part of Chrome's flaws are DoS attacks, which are way less severe.

Can we say that one piece of software is "more secure" than another, based on the number of CVEs each has?

  • At the very least you have to consider the age of the product. The Linux kernel is 25 years old, hence researchers have had a lot of time to find (and fix) bugs, while Windows 10 is not even 2 years old. The number of open CVE bugs is probably a much more interesting metric (yet still a bad one). Also: Windows 8.1 has 111 CVEs rated 9 or higher, the Linux kernel only 60, so severity must be accounted for.
    – Bakuriu
    Commented Jan 3, 2017 at 17:46
  • A good indicator could be some sort of metric like this: "Per severity level, percentage of CVEs resolved/closed within 30 or 60 days of opening". But then again, this could cause vendors to purposefully skew the CVE data and report their own CVEs only when a solution has been found so that the CVE can be closed soon after.
    – MonkeyZeus
    Commented Jan 3, 2017 at 20:32
  • @Bakuriu: You consider Windows 10 to be less than 2 years old yet you consider Linux to be 25 years old?! I really want some of whatever you're having!
    – user541686
    Commented Jan 4, 2017 at 3:02
  • @Mehrdad Yes, but that is related to the fact that the OP wants to determine the security of a product by the number of CVEs. The number of discovered CVEs is proportional to the time since the software's release. In the CVE list, Linux is 25 years old, Windows 10 only 2 years old. If you consider "Windows" (which does not exist as a product in that list) you have to add up all of its history in order to compare it with older software.
    – Bakuriu
    Commented Jan 4, 2017 at 9:52
  • @Mehrdad, Linux is a core component of many distributions. You could compare Windows 10 to Ubuntu 16.10 for a fairer comparison, but the Linux kernel itself is continuously released; it does not care when distributions switch to a newer kernel. If a project plans on using the Linux kernel, it can certainly consider it a still-active, 25-year-old project.
    – sleblanc
    Commented Feb 11, 2017 at 17:36

4 Answers


Can we say that one piece of software is "more secure" than another, based on the number of CVEs each has?

No. CVE entries are not a good source to rank products by their "overall security".

The main idea behind the CVE system is to create unique identifiers for software vulnerabilities. It's not designed to be a complete and verified database of all known vulnerabilities in any product. That is, a vendor or researcher could simply decide to not request a CVE number for a given flaw. Further, entries sometimes combine related bugs under a single ID or don't disclose the exact impact, making a simple "bug count" a rather meaningless security criterion. Also, for a ranking you'd have to find sensible metrics to compare different severities. (How many DoS bugs equal a remote code execution...?)
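As a rough illustration of why a flat count obscures severity, here is a minimal sketch that buckets entries by CVSS base score instead of just counting them. It assumes the public NVD 2.0 REST API and its JSON field layout; the endpoint, parameters, and bucket thresholds are my assumptions for illustration, not anything prescribed by CVE itself.

```python
import requests
from collections import Counter

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"  # assumed NVD 2.0 endpoint

def severity_buckets(keyword: str) -> Counter:
    """Bucket CVEs matching a keyword by coarse CVSS base-score ranges."""
    # keywordSearch is only a crude product filter; pagination is omitted and
    # unauthenticated requests to NVD are rate-limited, so treat this as a sketch.
    resp = requests.get(NVD_URL,
                        params={"keywordSearch": keyword, "resultsPerPage": 2000},
                        timeout=30)
    resp.raise_for_status()
    buckets = Counter()
    for item in resp.json().get("vulnerabilities", []):
        metrics = item.get("cve", {}).get("metrics", {})
        records = metrics.get("cvssMetricV31") or metrics.get("cvssMetricV2") or []
        score = records[0].get("cvssData", {}).get("baseScore") if records else None
        if score is None:
            buckets["unrated"] += 1
        elif score >= 9.0:
            buckets["critical (9.0+)"] += 1
        elif score >= 7.0:
            buckets["high (7.0-8.9)"] += 1
        else:
            buckets["medium/low (<7.0)"] += 1
    return buckets
```

Two products with the same total can produce very different histograms, which is exactly the information a single number throws away.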

That said, CVEs do surely give you an idea of what kind of vulnerabilities have been found in a product, and they're a good starting point for research. But the amount strongly depends on the age of the software and how much attention it receives through security auditing. You can't really tell whether a lot of CVE assignments means that the software is poorly written or whether it actually means that it's particularly secure because evidently a lot of vulnerabilities are getting fixed. I personally tend to find it suspicious if an older product has a very short record of patched vulnerabilities, because that could indicate it hasn't been audited thoroughly.
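To make the age effect concrete, here is a tiny sketch (my own illustration, assuming ISO-formatted "published" timestamps from whatever CVE feed you use) that divides the count by the number of years the record spans:

```python
from datetime import datetime

def cves_per_year(published_dates: list[str]) -> float:
    """Naive normalization: CVE count divided by the years the record spans."""
    if not published_dates:
        return 0.0
    # NVD-style timestamps look like "2016-01-08T19:59:01.063" (no timezone offset).
    dates = sorted(datetime.fromisoformat(d) for d in published_dates)
    span_years = max((dates[-1] - dates[0]).days / 365.25, 1.0)  # clamp to at least one year
    return len(dates) / span_years
```

It's still a weak metric, as the comments above point out, but it at least stops a 25-year-old project from being compared one-to-one against a 2-year-old one.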

So you should think of CVE as a dictionary rather than a database: it simply assigns handles to vulnerabilities so that you can reference them more easily. Don't use it as a tool to compare security.

Here are some better indicators for a secure product:

  • The software is used and developed actively.
  • The vendor encourages people to search for vulnerabilities (and maybe even offers bounties).
  • New security bugs are processed and patched quickly.
  • Well put. Also add that some companies are not even a part of MITRE's updated scope starting in 2016 - that is, they might have had CVEs in the past, but CVEs will no longer be assigned to those products.
    – thel3l
    Commented Jan 3, 2017 at 15:56
  • "Vendors could simply decide to not request a CVE number for a given flaw" - huh? So only vendors can request CVE numbers for their own products? I don't have time to look for official documentation right now, but I'm pretty sure I've seen researchers get CVEs for vulnerabilities they found in products.
    – Voo
    Commented Jan 4, 2017 at 10:20
  • @Voo You’re right, but I bet there are vendors who do not publicly report issues they (don’t) fix in their products. Researcher-found issues may still get CVEs, but vendor-found ones may not. That’s simply another issue which skews the "number of CVEs" metrics.
    Commented Jan 4, 2017 at 12:35
  • @Voo I used vendors as an example. Of course individuals can request CVEs, too.
    – Arminius
    Commented Jan 4, 2017 at 13:54

As a voluntary system, CVE is as open to abuse as the software products it tracks, and it is often highly subjective. This is somewhat compounded by the scoring mechanism used to track severity, which is typically CVSSv2.

In an ideal world, when a vulnerability is discovered in a product the developers will register a CVE, later publishing it along with whatever fix they produced for their product.

However, as others have pointed out, sometimes CVEs just aren't created, or developers will take a bunch of vulnerabilities, cut one release that fixes them all, and create one accompanying CVE.

If I'm interested in using a certain software product and have reservations about its security, I'm more likely to take a look at how they handle security issues, receive reports, etc., than to look specifically at their CVE database.

One exception to the above is software products that use other components. In these cases, taking a look at how long it took a product to catch up with the CVEs that were issued against its constituent products/services can be enlightening, as sketched below. Commercial software that re-packages open-source components can often lag many months behind on security fixes. This is probably what I find the CVE database most useful for.
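A minimal sketch of that lag measurement, assuming you collect the upstream publication dates (e.g. from NVD) and the downstream fix dates (from the vendor's release notes) yourself; the dictionary shapes and placeholder IDs are purely illustrative:

```python
from datetime import date

def patch_lag_days(upstream_published: dict[str, date],
                   product_fixed: dict[str, date]) -> dict[str, int]:
    """Days between a CVE going public upstream and the downstream product shipping a fix."""
    return {
        cve_id: (product_fixed[cve_id] - published).days
        for cve_id, published in upstream_published.items()
        if cve_id in product_fixed
    }

# Example shape of the inputs (IDs and dates are placeholders you would fill in yourself):
# upstream_published = {"CVE-XXXX-NNNN": date(2016, 3, 1)}
# product_fixed      = {"CVE-XXXX-NNNN": date(2016, 9, 15)}
```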


CVEs only represent bugs in applications that people are actively trying to exploit. While open source may or may not be subject to a higher or lower volume of CVEs, it can reasonably be assumed that this does not imply any application is more or less prone to exploitation. Consider that many exploits/vulnerabilities are sold as 0-days before ever being considered for a CVE entry. Also consider that many people in the open source community tend more towards responsible disclosure than towards profit. In any case, I would measure the vulnerability of two applications based on a couple of metrics (a rough sketch of the first two follows the list):

  1. How severe were the impacts of past exploits?

  2. Volume of CVEs: as you implied, more CVEs probably implies that the application is audited more frequently by security researchers.

  3. Open source: I personally consider open source 'generally' more secure. You can view the source code for yourself and look at the coding style; maybe you can find a vulnerability yourself.
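Here is the rough sketch mentioned above for metrics 1 and 2, assuming you have a list of CVSS base scores for each product's published CVEs; keeping "worst impact" and "volume" as separate signals avoids collapsing them into one misleading number:

```python
def impact_profile(base_scores: list[float]) -> dict[str, float]:
    """Keep 'worst past impact' and 'volume' as separate signals instead of one number."""
    if not base_scores:
        return {"worst": 0.0, "count": 0, "share_high_or_critical": 0.0}
    high = sum(1 for s in base_scores if s >= 7.0)   # 7.0+ is the usual 'high' cutoff
    return {
        "worst": max(base_scores),
        "count": len(base_scores),
        "share_high_or_critical": round(high / len(base_scores), 2),
    }
```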


Maybe slightly, but the relationship between counts and quality is complex, if meaningful at all. The useful questions to ask regarding CVE counts and what they imply about a piece of software's security/quality are:

  • Does it have any CVEs at all? If not, that probably means nobody cares enough to look for security bugs in it (i.e. low relevance, not high quality).

  • Does it have repeated CVEs for the same types of bugs year after year? If so, that means the developers/maintainers are probably doing at most the minimum needed to fix a specific bug rather than fixing their codebase as a whole and fixing the processes that led to the bug.

  • Are the CVEs found comparable to CVEs for other similar programs in the same years, and are they even reasonable for the years they're found? For example, for most languages used for web applications, the existence of any SQL-injection bugs indicates that completely wrong/backwards APIs/frameworks are being used (i.e. old ones without prepared statements). For C programs, though, this is a lot harder to judge; CVE summaries are unlikely to tell you whether a bug was subtle (and thus "reasonable" to appear) or came from doing something idiotic and contrary to modern practices.

And so on. Most of these questions depend on more than just a simple count (though some could possibly be modeled as "counts in various classes"), so I'm skeptical that just counts have much value.
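As a sketch of the "counts in various classes" idea, one could group a product's CVE entries by weakness class (CWE) and year; the field names below assume the NVD 2.0 JSON layout and are my assumption rather than something this answer specifies:

```python
from collections import defaultdict

def cwe_by_year(vulnerabilities: list[dict]) -> dict[str, dict[int, int]]:
    """Count CVEs per CWE class per year; a class recurring every year suggests a process problem."""
    table: dict[str, dict[int, int]] = defaultdict(lambda: defaultdict(int))
    for item in vulnerabilities:                      # entries from the feed's "vulnerabilities" array
        cve = item.get("cve", {})
        year = int(cve.get("published", "1999-01-01")[:4])
        for weakness in cve.get("weaknesses", []):
            for desc in weakness.get("description", []):
                table[desc.get("value", "CWE-unknown")][year] += 1
    return {cwe: dict(years) for cwe, years in table.items()}
```

A class such as CWE-89 (SQL injection) showing up year after year is a much stronger signal than the overall total will ever be.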

