159

My computing teacher told us that closed source software is more secure than open source software, because with open source "anyone can modify it and put stuff in." This is why they do not want to use open source alternatives for learning to program, such as FreePascal (we currently use Embarcadero Delphi, which is slow and buggy).

I think this is completely wrong. For example, Linux seems to be considerably more resilient to exploits than Windows, although that could be down to popularity/market share.

What studies have been performed that show whether open source or closed source software is better in terms of security?

26
  • 109
    When will people learn that security through obscurity on its own is a deeply flawed concept...
    – Ardesco
    Commented May 20, 2011 at 10:45
  • 60
    I think the very fact that they are teaching in Pascal shows just how knowledgeable and current the teacher is, to say nothing of the luddite attitude towards open source.
    – Dov
    Commented May 20, 2011 at 14:05
  • 29
    Open source is often more secure because anyone can change it. It means anyone can discover and fix bugs. So your teacher is completely wrong. Commented May 20, 2011 at 14:36
  • 31
    Ask your teacher what he/she has to say about Apache vs. IIS: Apache has the greater market share, yet more successful exploits are carried out against IIS (Microsoft's). If your teacher doesn't know about that, he/she has very little knowledge and is just parroting the textbook.
    – Lincity
    Commented May 20, 2011 at 14:51
  • 53
    "Anybody can put stuff in it". Ahhh, so that's why every other day I log into my OS, the welcome message gets changed to "dicks lol" or "n00bz".
    – Lagerbaer
    Commented May 20, 2011 at 16:52

9 Answers

140

"Secure design, source code auditing, quality developers, design process, and other factors, all play into the security of a project, and none of these are directly related to a project being open or closed source."

Source: Open Source Versus Closed Source Security

26
  • 40
    It is true that these things are not directly related to the license of a project. However, the tendency for open-source (OSS) projects is that as they get more popular, the source code is reviewed a great deal more. The review ceiling for closed projects is naturally determined by the number of developers that can be hired. There is no such ceiling for an open source project, so an OSS project may be reviewed for bugs more than any closed project could be. Whether this happens depends on its popularity.
    – Adrian
    Commented May 20, 2011 at 13:20
  • 11
    Also not all users of an open source project are necessarily going to be developers, and even of those who are, only a small fraction of them may actually read its source code. So it's not just about popularity. Commented May 20, 2011 at 15:41
  • 4
    @Adrian - In OSS that's called Linus' Law: en.wikipedia.org/wiki/Linus%27_Law
    – Kit Sunde
    Commented May 20, 2011 at 15:52
  • 16
    I disagree with this answer. In cryptography, a "black box" security system can only be considered insecure. A cryptography system is only considered "safe" if the algorithm it uses is well understood and proven to be unbreakable (for all intents and purposes, with modern or near-future tech). With OSS, it's possible to determine whether the system uses a secure algorithm, and whether a particular implementation of an algorithm is flawed. Being OSS or closed source doesn't change the actual current code, but it ensures that you know what you're getting, if you're able to do the analysis.
    – RMorrisey
    Commented May 20, 2011 at 21:52
  • 9
    @RMorrisey: If the NSA induces (through whatever means, hopefully ethical ones such as hiring the person) the world's 100 foremost cryptography experts to analyze their system, it's as secure as is feasible, despite being hidden. Opening it up for public scrutiny will have negligible additional impact. Your comment assumes that the most brilliant cryptanalysts in the world contribute (solely) to open source projects, which is suspect.
    – Ben Voigt
    Commented May 20, 2011 at 22:49
69

Software being open source doesn't mean anyone can change it (often anyone can fork it, but that will be new, derived software) - only designated people have write access to the repository. For example, if I want to submit a change to Tortoise SVN, I have to mail my change to a dedicated mailing list; developers will then see it, review it, and commit it to the codebase.1,2


Still, anyone can read the sources. That's not a big deal either. Look at contemporary cryptography: algorithms are public, researched, and tested by numerous people. How can they be used to protect data? They use small pieces of secret data (encryption keys) to parameterize the algorithm. Everyone knows the algorithm, but only the people who need to know the secret keys do, and the algorithms are successfully used for data protection.
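
As a small illustration (my sketch, not part of the original answer): with a keyed construction such as HMAC-SHA256, the algorithm is completely public and heavily analyzed, yet its output is useless to anyone who does not hold the key. The key value below is a hypothetical placeholder.

#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);   # core module; the algorithm is public

my $secret_key = 'replace-with-a-long-random-key';          # the only secret
my $message    = 'data whose integrity we want to protect';

# Anyone can read this code and the HMAC-SHA256 spec; without the key,
# they still cannot forge a valid tag for a tampered message.
my $tag = hmac_sha256_hex($message, $secret_key);
print "authentication tag: $tag\n";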


That said, software being open source and software being secure (or reliable) are completely independent - comparing them is comparing apples and oranges. Yes, open source software can be buggy. So can closed source software. What matters is how the development process is organized, not whether you disclose the sources.


References:

1

Submit patches (submit enough and you can become a committer!)

2 (Slightly modified)

Technically, a committer is someone who has write access to the SVN repository. A committer can submit his or her own patches or patches from others.

10
  • 5
    Skeptics requires references for all answers. Answers without references are only speculation, not fact. See the FAQs. Thanks!
    – Kevin Peno
    Commented May 20, 2011 at 18:49
  • 1
    So sharptooth should write an article, then get a friend to cite that article.
    – Phil
    Commented May 21, 2011 at 3:40
  • 11
    @Kevin Peno: You can verify these as easily as observing that there are 24 hours in a day: "Software being open source doesn't mean anyone can change it only dedicated people have access to the repository", "if I want to submit a change to Tortoise SVN I have to mail my change to a dedicated mail list and then developers will see it, review it, and commit it to the codebase", "Still, anyone can read the sources", "contemporary cryptography ... Algorithms are public and researched and tested by numerous people", "They use small portions of .... encryption keys ... to parameterize the algorithm"
    – Lie Ryan
    Commented May 21, 2011 at 9:20
  • 2
    @Kevin Peno: do you even know the difference between facts and opinions? Go to Tortoise SVN repository and see if you can gain write access without going through their code review system. This proves fact 1) and 2). "anyone can read the sources" is the definition of open source. "AES" are public and researched and tested by numerous people. "AES" uses encryption key to parameterize the algorithm.
    – Lie Ryan
    Commented May 21, 2011 at 9:52
  • 3
    @Kevin Peno. You don't need to cite sources for well-known facts. @Lie Ryan "anyone can read the sources" is not the definition of Open Source. The definition is given here: programmers.stackexchange.com/questions/21907 (the subtly different definition of Free Software is also given there).
    – TRiG
    Commented May 22, 2011 at 0:59
50

I'm not going to answer this question myself. The United States Department of Defense has done it much better than I could.

Q: Doesn't hiding source code automatically make software more secure?

No. Indeed, vulnerability databases such as CVE make it clear that merely hiding source code does not counter attacks:

  • Dynamic attacks (e.g., generating input patterns to probe for vulnerabilities and then sending that data to the program to execute) don’t need source or binary. Observing the output from inputs is often sufficient for attack.

  • Static attacks (e.g., analyzing the code instead of its execution) can use pattern-matches against binaries - source code is not needed for them either.

  • Even if source code is necessary (e.g., for source code analyzers), adequate source code can often be regenerated by disassemblers and decompilers sufficiently to search for vulnerabilities. Such source code may not be adequate to cost-effectively maintain the software, but attackers need not maintain software.

  • Even when the original source is necessary for in-depth analysis, making source code available to the public significantly aids defenders and not just attackers. Continuous and broad peer-review, enabled by publicly available source code, improves software reliability and security through the identification and elimination of defects that might otherwise go unrecognized by the core development team. Conversely, where source code is hidden from the public, attackers can attack the software anyway as described above. In addition, an attacker can often acquire the original source code from suppliers anyway (either because the supplier voluntarily provides it, or via attacks against the supplier); in such cases, if only the attacker has the source code, the attacker ends up with another advantage.

Hiding source code does inhibit the ability of third parties to respond to vulnerabilities (because changing software is more difficult without the source code), but this is obviously not a security advantage. In general, “Security by Obscurity” is widely denigrated.

This does not mean that the DoD will reject using proprietary COTS products. There are valid business reasons, unrelated to security, that may lead a commercial company selling proprietary software to choose to hide source code (e.g., to reduce the risk of copyright infringement or the revelation of trade secrets). What it does mean, however, is that the DoD will not reject consideration of a COTS product merely because it is OSS. Some OSS is very secure, while others are not; some proprietary software is very secure, while others are not. Each product must be examined on its own merits.

Edit to add: There's an answer to the malicious code insertion question, too:

Q: Is there a risk of malicious code becoming embedded into OSS?

The use of any commercially-available software, be it proprietary or OSS, creates the risk of executing malicious code embedded in the software. Even if a commercial program did not originally have vulnerabilities, both proprietary and OSS program binaries can be modified (e.g., with a "hex editor" or virus) so that it includes malicious code. It may be illegal to modify proprietary software, but that will normally not slow an attacker. Thankfully, there are ways to reduce the risk of executing malicious code when using commercial software (both proprietary and OSS). It is impossible to completely eliminate all risks; instead, focus on reducing risks to acceptable levels.

The use of software with a proprietary license provides absolutely no guarantee that the software is free of malicious code. Indeed, many people have released proprietary code that is malicious. What's more, proprietary software release practices make it more difficult to be confident that the software does not include malicious code. Such software does not normally undergo widespread public review, indeed, the source code is typically not provided to the public and there are often license clauses that attempt to inhibit review further (e.g., forbidding reverse engineering and/or forbidding the public disclosure of analysis results). Thus, to reduce the risk of executing malicious code, potential users should consider the reputation of the supplier and the experience of other users, prefer software with a large number of users, and ensure that they get the "real" software and not an imitator. Where it is important, examining the security posture of the supplier (e.g., their processes that reduce risk) and scanning/testing/evaluating the software may also be wise.

Similarly, OSS (as well as proprietary software) may indeed have malicious code embedded in it. However, such malicious code cannot be directly inserted by "just anyone" into a well-established OSS project. As noted above, OSS projects have a "trusted repository" that only certain developers (the "trusted developers") can directly modify. In addition, since the source code is publicly released, anyone can review it, including for the possibility of malicious code. The public release also makes it easy to have copies of versions in many places, and to compare those versions, making it easy for many people to review changes. Many perceive this openness as an advantage for OSS, since OSS better meets Saltzer & Schroeder's "Open design principle" ("the protection mechanism must not depend on attacker ignorance"). This is not merely theoretical; in 2003 the Linux kernel development process resisted an attack. Similarly, SourceForge/Apache (in 2001) and Debian (in 2003) countered external attacks.

As with proprietary software, to reduce the risk of executing malicious code, potential users should consider the reputation of the supplier (the OSS project) and the experience of other users, prefer software with a large number of users, and ensure that they get the "real" software and not an imitator (e.g., from the main project site or a trusted distributor). Where it is important, examining the security posture of the supplier (the OSS project) and scanning/testing/evaluating the software may also be wise. The example of Borland's InterBase/Firebird is instructive. For at least 7 years, Borland's Interbase (a proprietary database program) had embedded in it a "back door"; the username "politically", password "correct", would immediately give the requestor complete control over the database, a fact unknown to its users. Whether or not this was intentional, it certainly had the same form as a malicious back door. When the program was released as OSS, within 5 months this vulnerability was found and fixed. This shows that proprietary software can include functionality that could be described as malicious, yet remain unfixed - and that at least in some cases OSS is reviewed and fixed.

Note that merely being developed for the government is no guarantee that there is no malicious embedded code. Such developers need not be cleared, for example. Requiring that all developers be cleared first can reduce certain risks (at substantial costs), where necessary, but even then there is no guarantee.

Note that most commercial software is not intended to be used where the impact of any error of any kind is extremely high (e.g., a large number of lives are likely to be immediately lost if even the slightest software error occurs). Software that meets very high reliability/security requirements, aka "high assurance" software, must be specially designed to meet such requirements. Most commercial software (including OSS) is not designed for such purposes.
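
To make the "static attacks" and InterBase points above concrete (this illustration is mine, not the DoD's): even a trivial script can dump the printable strings embedded in a shipped binary, which is roughly how hard-coded credentials and similar secrets get found without any source code. The filename is hypothetical.

#!/usr/bin/perl
# Crude "strings"-style scan: print every run of 6+ printable ASCII
# characters found in a binary. No source code required.
use strict;
use warnings;

my $binary = 'some-proprietary-program.exe';   # hypothetical target
open(my $fh, '<:raw', $binary) or die "open $binary: $!";
my $bytes = do { local $/; <$fh> };            # slurp the whole file
close($fh);

print "$1\n" while $bytes =~ /([\x20-\x7e]{6,})/g;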

3
  • of course there are things that, when exposed, create a security risk. Think of encryption algorithms. If an enemy knows what encryption algorithm you're using (and which specific implementation, with its flaws), intercepting and decrypting your data becomes potentially that much easier. Of course that's not software security as such but data and data transport security, but it's part of the entire landscape you need to address.
    – jwenting
    Commented Jun 20, 2011 at 7:52
  • 10
    @jwenting Except there are no modern, accepted crypto standards that do not take Kerckhoffs' Principle into account. They all assume the system is known. One could argue for NTLM password hashes, but that's closed source. There are algorithms (such as AES) which have perfect forward secrecy, which means that even if you know the system (algorithm) and even if you know the private key, once the initial session key is generated and passed you're still back at brute force decryption.
    – Bacon Bits
    Commented Jun 21, 2011 at 17:44
  • 1
    @jwenting I'm pretty sure at this point in time cryptography has evolved well beyond the point where a homebrew "but unknown" encryption algorithm is even remotely more secure than, say, ECDSA. Commented Mar 3 at 9:22
38

Back in 2002, Payne conducted a study comparing three similar Unix-like operating systems, one of which was closed-source (Solaris) and two of which were open-source (Debian and OpenBSD) across a number of security metrics. He concludes:

The results show that, of the three systems, OpenBSD had the most number of security features (18) with Debian second (15) and Solaris third (11). Of these features, OpenBSD's features rated highest scoring 7.03 out of 10 while Debian's scored 6.42 and Solaris’ scored 5.92. A similar pattern was observed for the vulnerabilities with OpenBSD having the fewest (5).
...
Based on these results it would appear that open source systems tend to be more secure, however, ... in scoring 10.2, OpenBSD was the only system of the three to receive a positive score and, a comparison with the magnitudes of the other two scores suggests this is a relatively high score also. Therefore, the significant differences between Debian and OpenBSD's score support the argument that making a program ‘open source’ does not, by itself, automatically improve the security of the program (Levy, 2000), (Viega, 2000). What, therefore, accounts for the dramatically better security exhibited by the OpenBSD system over the other two? The author believes that the answer to this question lies in the fact that, while the source code for the Debian system is available for anyone who cares to examine it, the OpenBSD source code is regularly and purposefully examined with the explicit intention of finding and fixing security holes (Payne, 1999), (Payne, 2000). Thus it is this auditing work, rather than simply the general availability of source code, that is responsible for OpenBSD's low number of security problems.

Edit: To summarize, Payne explains his results by claiming that it is the culture of security itself that promotes actual security. While that is likely true, I think it is also important to note that, with all else being equal, the general public can't independently audit that which is not open.

That study is a bit dated and of limited breadth, though.

I tried looking for a more comprehensive study, but I couldn't really find anything substantive (there are many "opinion pieces" giving arguments as to why open source is better, but not much data). Therefore, I took a quick look at the National Vulnerability Database, which collects, rates, and posts software vulnerabilities. It has a database dating back to the 1980s. I quickly hacked together this Perl script to parse the database:

#!/usr/bin/perl -w
use Cwd 'abs_path';
use File::Basename;
use XML::Parser;
# Severity scores for vulnerabilities classified as closed source, open
# source, or both; plus a count of entries the regexes couldn't classify.
my @csseverity;
my @osseverity;
my @bothseverity;
my $numNeither = 0;

sub mean {
    return 0 if (@_ <= 0);
    my $result = 0;
    $result += $_ foreach (@_);
    return $result / @_;
}

sub stddev {
    my $mean = mean(@_);
    my @elem_squared;
    push(@elem_squared, $_ ** 2) foreach (@_);
    return sqrt(mean(@elem_squared) - ($mean ** 2));
}
sub handle_start {
    if($_[1] eq "entry") {
        $item = {};
        undef($next) if(defined($next));
        for(my $i=2; $i<@_; $i++) {
            if(!defined($key)) {
                $key = $_[$i];
            } else {
                $item->{$key} = $_[$i];
                undef($key);
            }
        }
    } elsif(defined($item)) {
        $next = $_[1];
    }
}
sub handle_end {
    if($_[1] eq "entry") {
        if(!exists($item->{'reject'}) || $item->{'reject'} != 1) {
            my $score = $item->{'CVSS_score'};
            my $d = $item->{"descript"};
            my $isOS = 0;
            my $isCS = 0;
            $isOS = 1 if($d =~ m/(^|\W)(linux|nfs|openssl|(net|open|free)?bsd|netscape|red hat|lynx|apache|mozilla|perl|x windowing|xlock|php|w(u|f)-?ftpd|sendmail|ghostscript|gnu|slackware|postfix|vim|bind|kde|mysql|squirrelmail|ssh-agent|formmail|sshd|suse|hsftp|xfree86|Mutt|mpg321|cups|tightvnc|pam|bugzilla|mediawiki|tor|piwiki|ruby|chromium|open source)(\W|$)/i);
            $isCS = 1 if($d =~ m/(^|\W)(windows|tooltalk|solaris|sun|microsoft|apple|macintosh|sybergen|mac\s*os|mcafee|irix|iis|sgi|internet explorer|ntmail|sco|cisco(secure)?|aix|samba|sunos|novell|dell|netware|outlook|hp(-?ux)?|iplanet|flash|aol instant|aim|digital|compaq|tru64|wingate|activex|ichat|remote access service|qnx|mantis|veritas|chrome|3com|vax|vms|alcatel|xeneo|msql|unixware|symantec|oracle|realone|real\s*networks|realserver|realmedia|ibm|websphere|coldfusion|dg\/ux|synaesthesia|helix|check point|proofpoint|martinicreations|webfort|vmware)(\W|$)/i);
            if($isOS && $isCS) {
                push(@bothseverity, $score);
            } elsif($isOS) {
                push(@osseverity, $score);
            } elsif($isCS) {
                push(@csseverity, $score);
            } else {
                $numNeither++;
                #print $d . "\n";
            }
        }
        undef($item);
    }
}
sub handle_char {
    $item->{$next} = $_[1] if(defined($item) && defined($next));
    undef($next) if(defined($next));
}
my($scriptfile, $scriptdir) = fileparse(abs_path($0));
sub process_year {
    # Download the year's NVD feed into the script's directory (if not
    # already present) and run it through the XML parser.
    my $filename = 'nvdcve-' . $_[0] . '.xml';
    system("cd $scriptdir ; wget http://nvd.nist.gov/download/" . $filename) unless(-e $scriptdir . $filename);
    $p = new XML::Parser(Handlers => {Start => \&handle_start,
                                      End   => \&handle_end,
                                      Char  => \&handle_char});
    $p->parsefile($scriptdir . $filename);   # parse from where wget saved it
}
my($sec,$min,$hour,$mday,$mon,$currentyear,$wday,$yday,$isdst) = localtime(time);
$currentyear += 1900;
for(my $year=2002; $year<=$currentyear; $year++) {
    &process_year($year);
}
print "Total vulnerabilities: " . (@osseverity + @csseverity + @bothseverity + $numNeither) . "\n";
print "\t  # Open Source (OS): " . @osseverity . "\n";
print "\t# Closed Source (CS): " . @csseverity . "\n";
print "\t              # Both: " . @bothseverity . "\n";
print "\t      # Unclassified: " . $numNeither . "\n";
print "OS Severity: " . &mean(@osseverity) . "\t" . &stddev(@osseverity) . "\n";
print "CS Severity: " . &mean(@csseverity) . "\t" . &stddev(@csseverity) . "\n";
print "Both Severity: " . &mean(@bothseverity) . "\t" . &stddev(@bothseverity) . "\n";

Feel free to modify the code, if you'd like. Here are the results:

The full database has 46102 vulnerabilities. My script was able to classify 15748 of them as specifically related to open source software, 11430 were related to closed source software, 782 were applicable to both closed source and open source software, and 18142 were unclassified (I didn't have time to optimize my classifier very much; feel free to improve it). Among the vulnerabilities that were classified, the open source ones had an average severity of 6.24 with a standard deviation of 1.74 (a higher severity is worse). The closed source vulnerabilities had an average severity of 6.65 (stddev = 2.21). The vulnerabilities that were classified as both had an average severity of 6.47 (stddev = 2.13). This may not be a completely fair comparison, though, since open source software has become much more popular in recent years. If I restrict the results to the years 2003 to the present, we get:

  • Total vulnerabilities: 39445
  • # Open Source (OS): 14595
  • # Closed Source (CS): 9293
  • # Both: 675
  • # Unclassified: 14882
  • Avg. OS Severity: 6.25 (stddev 1.70)
  • Avg. CS Severity: 6.79 (stddev 2.24)
  • Both Severity: 6.52 (stddev 2.15)

I haven't had time to do any statistical analysis on these results; however, it does look like, on average, the vulnerabilities affecting open source software have a slightly lower severity rating than vulnerabilities affecting closed source software.

When I get some more time, I'll try and generate a graph of the running average of severity over time.
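
As a rough sanity check (my addition, not part of the original analysis), a Welch-style z-test on the 2003+ summary numbers above suggests the gap between the two mean severities is much larger than sampling noise alone would produce, assuming the scores behave roughly like independent samples (a strong assumption for CVE data):

#!/usr/bin/perl
# Welch-style z-test on the summary statistics quoted above.
use strict;
use warnings;

my ($mean_os, $sd_os, $n_os) = (6.25, 1.70, 14595);   # open source
my ($mean_cs, $sd_cs, $n_cs) = (6.79, 2.24,  9293);   # closed source

my $se = sqrt($sd_os**2 / $n_os + $sd_cs**2 / $n_cs);
my $z  = ($mean_os - $mean_cs) / $se;
printf "z = %.1f (|z| > 1.96 would be significant at the 5%% level)\n", $z;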

11
  • 25
    Oh no! I can't run that script now. You've made it open-source! :-)
    – Oddthinking
    Commented May 20, 2011 at 16:39
  • 9
    One thing to note about the number of open source vulnerabilities is that security-minded users will notify developers, and more bugs/security holes get recorded.
    – user2514
    Commented May 20, 2011 at 17:25
  • 3
    Your summary of Payne's work is exactly the opposite of his conclusion which you cited. To paraphrase Payne using your verbiage: "That open-source software is freely available allows anyone who is interested to conduct a security audit of the code cannot account for OpenBSD's relatively high security rating, because Debian also is freely available for auditing. The most plausible explanation is that the OpenBSD culture is security-focused and actually performs such audits. Available for auditing != audited!"
    – Ben Voigt
    Commented May 20, 2011 at 19:18
  • @Ben: you're correct; I was trying to emphasize the fact that such a culture is really only possible in an open source environment.
    – ESultanik
    Commented May 20, 2011 at 19:39
  • 5
    @ESultanik: That doesn't follow. I offer the NSA as an example of a very closed organization that also has a security-centric culture.
    – Ben Voigt
    Commented May 20, 2011 at 19:55
33

My computing teacher told us that closed source software is more secure than open source software, because with open source "anyone can modify it and put stuff in."

Your teacher is flat wrong. The correct statement is:

anyone can fork it, and put stuff in their fork.

Open source means that anyone can read the source code corresponding to the distributed binary. Usually it also means that anyone can read from the master repository where development occurs, in order to test new unreleased changes. FreePascal follows this general pattern: "As an alternative to the daily zip files of the SVN sources, the SVN repository has been made accessible for everyone, with read-only access."

It does not require that the general public can write to the master repository; in fact, write access limited to trusted project members is the general rule. In some cases, the repository accepts patches from anyone, but they are quarantined in separate branches until a trusted member merges the change into the master (trunk) codebase. It appears that FreePascal follows this latter model: you need only a free account to upload patches, but they won't be integrated into the mainline without review.

Ask your teacher to back up their words with actions -- you have FreePascal installed on your computer, so if he thinks that's "insecure", ask him to "modify it and put in" an insulting message that appears the next time you run it. It won't happen; there's a huge chasm between a modified copy in his home directory and the version you download and compile on your computer.


Your final sentence, asking for studies performing a statistical comparison of open source vs. closed source, shows that you've adopted one of your teacher's bad practices: the fallacy of applying the law of averages to an individual.

I submit that the utility to you of drawing software from a category which is more secure on average is essentially nil. You should be interested instead in programs which are individually and specifically highly secure, no matter what characteristics they share with insecure software.

23
  • 4
    Skeptics requires references for all answers. Answers without references are only speculation, not fact. See the FAQs. Thanks!
    – Kevin Peno
    Commented May 20, 2011 at 18:48
  • 6
    @Kevin: Your reference doesn't substantiate your claim. The FAQ you linked doesn't even contain the string "refer". Also, before demanding references from someone (e.g. sharptooth above), you might want to first verify that he isn't a recognized expert in the field, whose word is as good as any work you could reference. Furthermore, an argument is only as valid as its premises, showing that the assumed premise is disputed is a valid (under formal logic) means of invalidating an argument.
    – Ben Voigt
    Commented May 20, 2011 at 19:14
  • 8
    @Kevin: No, I would prefer that you demonstrate good answering style by writing some answers. Let the real contributors to the site police it. Oh, and skepticism is also about knowing basic principles of logic such as identification of assumptions.
    – Ben Voigt
    Commented May 20, 2011 at 19:31
  • 2
    @Ben, here you go. Just because I don't have a good way to answer the questions posed on this site doesn't mean that I am not a contributor. Additionally, as a user, I want to see reliable answers so that my own skepticism can be satisfied.
    – Kevin Peno
    Commented May 20, 2011 at 19:36
  • 2
    -1 Your second part doesn't answer the question. The question is whether an OSS model would produce more secure software than a closed model; naturally, if we pose such a question, we would have to control for differences in population. Pointing out that different programs have different needs doesn't answer the question, it merely points out a supposed fallacy about something which should be controlled for in any research that has been done (of which you've cited none).
    – Kit Sunde
    Commented May 21, 2011 at 4:32
15

I think John provides the best answer when he says that many other factors can influence security. However, it is worthwhile to see how openness can affect security.

The earliest work in this direction was in 1883 by Auguste Kerckhoffs and is called Kerckhoffs's Principle. He argued that for any system to be secure:

A Cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

An important interpretation, from Art of Information Security:

Kerckhoffs’ Principle does not require that we publish or disclose how things work. It does require that the security of the system must not be negatively impacted by such a disclosure.

Most closed-source systems do not actually violate Kerckhoffs' principle, so open-source cannot be said to be inferior or superior to closed-source by this measure.
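
As a back-of-the-envelope illustration (my addition): when the security rests only on the key, the defender's job reduces to picking a key space too large to search, regardless of who knows the algorithm. The guess rate below is an arbitrary assumption.

#!/usr/bin/perl
# Rough brute-force cost for a 128-bit key at an assumed guess rate.
use strict;
use warnings;

my $keyspace = 2 ** 128;           # possible 128-bit keys
my $rate     = 1e12;               # assumed guesses per second
my $seconds  = $keyspace / $rate;
my $years    = $seconds / (365.25 * 24 * 3600);
printf "about %.1e years to search the full key space\n", $years;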

Two models are often used with regard to software security: security through obscurity vs. security through disclosure/openness. The arguments for and against them are rehashed on Wikipedia.

Statistically, Linux suffers from a much lower rate of infection than Windows. This is usually attributed to the open-source model, but alternative explanations (like lower market share) are also proposed. Firefox also claims to have a lower number of open security exploits than Internet Explorer.

However, it should be noted that "more eyes, fewer bugs" only works for popular open-source software, and may not hold for less popular or custom software.

1
  • 1
    "More eyes less bugs [sic]" (aka "Linus' Law") cannot apply to subtle security bugs - that would need to be amended to "Given enough trained and motivated eyeballs, most bugs are shallow".
    – AviD
    Commented Jun 19, 2011 at 8:41
9

A cursory examination of the controversies in the weekly kernel sections on Linux Weekly News[1] shows just how hard it often is for extremely experienced developers with great reputations to get their code into reputable projects. If you're downloading from a project or distribution that has standards and enforces them on public mailing lists, you can make a more informed decision about the reliability and trustworthiness of the code than if you're buying proprietary software from companies with unknown processes whose development practices you cannot scrutinize. If you're downloading from Will's World of Warez, you're in trouble, regardless of the development model.

[1]: http://lwn.net/ Linux Weekly News. Weekly editions other than the latest are free to non-subscribers.

8

As I've noted in comments, a complete, reasoned analysis is presented at Open-source vs closed-source systems.

However, for the sake of argument, I will present a single example as evidence: the first real rootkit - and apparently the most widespread - was in a very popular open source package.

From Rootkit History (Wikipedia):

Ken Thompson of Bell Labs, one of the creators of Unix, subverted the C compiler in a Unix distribution and discussed the exploit in the lecture he gave upon receiving the Turing award in 1983. The modified compiler would detect attempts to compile the Unix "login" command and generate altered code that would accept not only the user's correct password, but an additional password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler, and would insert the same exploits into the new compiler. A review of the source code for the "login" command or the updated compiler would not reveal any malicious code.

Reference: http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

In summary, Ken's own words from that paper:

The moral is obvious. You can't trust code that you did not totally create yourself. No amount of source-level verification or scrutiny will protect you from using untrusted code.

Open source would not help you here.
In fact, insisting on the inherent security of open source is irrelevant.

5
  • Interesting story; but logically there might be other reasons (other threats) why open or closed source software might be safer.
    – ChrisW
    Commented Jun 20, 2011 at 1:13
  • @ChrisW, as explained in the question I linked to on ITSec, there are only anecdotal reasons (at best); really, it comes down to religious belief. Analytically and realistically, open or closed has no effect on the level of security.
    – AviD
    Commented Jun 20, 2011 at 6:39
  • The security.stackexchange question provides more arguments but no references: more reasons/claims why open or closed may be more or less secure, but no studies showing whether they, in fact, are.
    – ChrisW
    Commented Jun 20, 2011 at 13:55
  • @Chris, as I said, the answers there provide analysis, not spouting this or that type of gameable study. Furthermore, any such "study" would be subjective by nature. In this case, I believe sound analytical rationale to be more beneficial and reliable, for numerous reasons. Did you find any flaw in the analysis? Note that there are no arbitrary claims there, either - high school level logic, together with facts that are well known in the security industry, suffice. This even matches the requirements for this site too...
    – AviD
    Commented Jun 20, 2011 at 14:47
  • 1
    You can also extend this to the CPU level, really. If your CPUs are rootkitted, you are screwed :-/
    – Sklivvz
    Commented Aug 17, 2011 at 16:25
7

Another point not already covered, but going in the same direction as most answers:

Even without the source, in many environments you can place a series of jumps at the beginning of an executable binary that divert execution to a place where you have inserted your own little piece of compiled code, and then resume normal operation of the program.

From Wikipedia:

The binary is then modified using the debugger or a hex editor in a manner that replaces a prior branching opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or skip over it. Almost all common software cracks are a variation of this type.

Of course, as this is what many viruses and cracked versions of commercial software do, it may be detected as suspicious by antivirus utilities or blocked because of checksum verifications by the code itself, the loader/linker, the OS, etc.
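
A minimal sketch of the technique the quote describes (my addition; the file name, offset, and opcodes are hypothetical, and on a real target you would first locate the branch with a debugger or disassembler):

#!/usr/bin/perl
# Replace a conditional jump with its complement (JE, 0x74 -> JNE, 0x75
# on x86) at a known file offset, so the key branch is inverted.
use strict;
use warnings;

my ($file, $offset) = ('target.exe', 0x1234);   # hypothetical values

open(my $fh, '+<:raw', $file) or die "open $file: $!";
seek($fh, $offset, 0)         or die "seek: $!";
read($fh, my $opcode, 1) == 1 or die "read: $!";
die "unexpected opcode, aborting\n" unless $opcode eq "\x74";   # expect JE
seek($fh, $offset, 0)         or die "seek: $!";
print {$fh} "\x75";                                             # write JNE
close($fh);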

2
  • Please provide sources to support your answer.
    – Sklivvz
    Commented Aug 11, 2011 at 16:02
  • @Sklivvz: sorry not to have thought it required external sources, but this is common knowledge for any assembly language programmer.
    – ogerard
    Commented Aug 17, 2011 at 15:53
