110

So, I am designing a door authentication system (I can't really go into more detail) for our school, so that only authenticated persons can go through a certain internal door. The school holds that its inner workings should be kept secret, so that no one can reverse engineer it. I maintain that this would be security through obscurity, which is bad. I have designed the system so that knowing how it works wouldn't help you get in; only having the key would, in accordance with Kerckhoffs's principle. They still maintain that fewer people, not more, should know how it works.

Is it a good idea to go with a closed design? If not, exactly why?

P.S. They only told me it was a secret after I had designed most of it and told a bunch of people, if that makes a difference (I had no idea they wanted it to be a secret).
P.P.S. Although the door is not open to the outside, the room it protects contains more valuable stuff than the rest of the school, so I would expect adversaries to be able to reverse engineer it anyway if we go with a closed design instead of an open one; I'm just not sure how to communicate that.
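
To make "only the key helps" concrete, here is a minimal generic sketch (deliberately not our actual design) of a Kerckhoffs-style challenge-response check; the names and key sizes are just for illustration:

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # provisioned into each authorized badge/token

def new_challenge() -> bytes:
    """Door controller issues a fresh random challenge for every attempt."""
    return os.urandom(16)

def badge_response(key: bytes, challenge: bytes) -> bytes:
    """The badge answers with HMAC(key, challenge): no key, no valid answer."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def door_accepts(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time verification; the design can be fully public because
    only SECRET_KEY is secret, and a fresh challenge per attempt defeats replay."""
    return hmac.compare_digest(badge_response(key, challenge), response)

challenge = new_challenge()
assert door_accepts(SECRET_KEY, challenge, badge_response(SECRET_KEY, challenge))
```

An attacker who reads all of this code but doesn't hold SECRET_KEY still can't open the door, which is exactly the property Kerckhoffs's principle asks for.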

19
  • 135
    Let me guess... Your school is located in Scotland in an old castle. The headmaster is a quirky old bearded type. The room has a mirror... Looks like some unemployed writer has already spilled the beans about the security system. Commented Feb 2, 2016 at 21:00
  • 6
The reason for opening up the inner workings of a system is usually to get free peer review. In a school context, I highly doubt much security feedback would come in; more likely, someone tells someone, who tells the someone who gets caught in that room.
    – Sebb
    Commented Feb 2, 2016 at 21:58
  • 38
"They hold that its inner workings should be kept a secret, so that no one can reverse engineer it." The reason stuff gets reverse engineered? Because it's unknown...
    – WernerCD
    Commented Feb 2, 2016 at 21:59
  • 22
    I would question why the school is designing their own door authentication system when there are already a wide variety of commercial products out there that do the same thing (that the school is probably already using to secure its doors). If you have something too secret for just a card-access or a PIN pad to protect, add a biometrics device to the "secure" room and tightly control who is enrolled to use that device.
    – Johnny
    Commented Feb 3, 2016 at 5:09
  • 60
    Google "layered security". Anyone serious about security will implement layered security, of which secrecy/obscurity is not only a legitimate layer, it is an important layer (though it should not be the CRITICAL layer). It's why really good admins configure their servers to not leak what software version they use. It increases the workload of the attacker since he needs to probe your design first before he can take a crack at it.
    – slebetman
    Commented Feb 3, 2016 at 9:53

8 Answers

245

Obscurity isn't a bad security measure to have in place. Relying upon obscurity as your sole or most substantial security measure is more or less a cardinal sin.

Kerckhoffs's Principle is perhaps the most oft-cited argument against implementing security through obscurity. However, if the system is already properly secure according to that principle, invoking it as an argument against adding obscurity is misguided.

The principle, paraphrased, says "assume the enemy knows everything about the design of your security systems". This does not in any way imply "tell the enemy everything about your system, because they know anyway".

If the system is already well-designed according to Kerckhoffs's Principle, the worst that can be said about applying security through obscurity to it is that it adds little to no value. At the same time, for such a system, there is equally little to no harm in protecting the confidentiality of its design.
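
As a rough, back-of-the-envelope illustration of why such a system loses nothing by disclosure (the attacker budget here is hypothetical and deliberately generous):

```python
# An attacker who knows the complete design still has to guess a 256-bit key.
GUESSES_PER_SECOND = 10**12  # assumed attacker budget, deliberately generous

seconds = 2**256 / GUESSES_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years of brute force")  # roughly 3.7e57 years
```

Design secrecy, by contrast, buys at most a one-time research cost for the attacker, and is gone forever the first time it leaks.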

21
  • 114
    "Security through obscurity isn't bad." Thank you for saying what some people will not admit. Commented Feb 2, 2016 at 21:55
  • 7
    "little to no harm in protecting the confidentiality of its design" is not really true. as mentioned by Tom Leek education is important at a school, also by releasing a design people might point out flaws.
    – Sam
    Commented Feb 2, 2016 at 22:03
  • 42
    I would rephrase the first paragraph as "Obscurity isn't bad. Relying upon obscurity for security, as if it alone is enough, is more or less a cardinal sin." I think this would be a more plain wording of what you mean.
    – jpmc26
    Commented Feb 2, 2016 at 22:03
  • 21
    @jpmc26 "Obscurity isn't security, it's just a thing" Commented Feb 2, 2016 at 22:29
  • 4
    Any time you require a password to access a resource, you are relying on security through obscurity.
    – alexw
    Commented Feb 4, 2016 at 23:15
102

Keeping the design secret does not make the door insecure per se; however, believing that it adds to security is a dangerous delusion.

If revealing the details of the authentication system would allow breaking it, then that system is pure junk and should be discarded. Therefore, if you did your job properly, then revealing the details should be harmless. If you did not do your job properly, then fire yourself, and go hire someone competent.

In all generality, publishing system details promotes external review, which drives the break-then-fix cycle that ultimately improves security. In a school context, not publishing system details harms security, because schools are full of students, and students are known to be nosy anarchists who will be especially attracted to reverse engineering anything that is kept secret from them. It is well known that in a student computer room, the best way to keep security incidents low is to give the root/Administrator password to a couple of the students -- when a student wants to dabble in computer security, giving him full access removes all incentive to break things, AND turns him into a free police auxiliary who monitors the other students.

Also, detailing the inner workings of a security system could be a highly pedagogical endeavour. I heard that in some schools they actually practice pedagogy, at least occasionally. Your school might want to give it a shot.

9
  • 5
    "If you did not do your job properly, then fire yourself." And it's better to figure that out before implementing it by revealing the design, correct? (I'm really confident in the security design, but still a good precaution, nonetheless.) Commented Feb 2, 2016 at 20:46
  • 22
    @PyRulez: the operational notion is "review". You want some extra analysis by other people. An open publication can help a lot in getting free reviews.
    – Tom Leek
    Commented Feb 2, 2016 at 20:47
  • 11
    Do you have a source for the "give the root/Administrator password to a couple of the students"? I'd like to read more about that. Commented Feb 3, 2016 at 8:03
  • 9
    @Nelson how is that any different from NOT giving them the root password?
    – Aron
    Commented Feb 3, 2016 at 9:02
  • 7
    +1 for "students are known to be nosy anarchists"
    – Sidney
    Commented Feb 3, 2016 at 15:46
29

You have already received several excellent answers, but @TomLeek's and @Iszi's (both excellent, btw) seem to be in direct contradiction.

They both make excellent points: on the one hand, keeping the design secret will not make the system secure, whereas reviewing it publicly will enable you to (possibly) find certain vulnerabilities you had not considered; on the other hand, it doesn't really hurt to keep the design secret, as long as that is not a key factor in the design's security.

Both sides are absolutely correct - sometimes.

I think it would be fair to say that both sides in the general argument would agree that keeping the design secret does not directly increase security at all.
In the worst case, it merely hides security weaknesses (which may or may not be a good thing, depending on who you consider it to be most hidden from).
In the best case (where there are no trivial vulnerabilities that would be exposed by publishing the design), it still does not increase security - but it does minimize the attack surface.

Minimizing attack surface (even absent any known vulnerability) is definitely a good thing; however, this needs to be weighed and traded off against the benefits of publishing (namely, review by additional sets of eyes) and against the downside of keeping it secret - e.g. the temptation to rely on it as a security control (the ever-popular security by obscurity), as a form of security theater.

It is also worth noting that, as @Tikiman alluded to, merely publishing the design is not enough to ensure it is reviewed - especially by those who are capable of finding the vulnerabilities and also inclined to disclose them to you. In many cases, a published design would only be reviewed by malicious individuals with illicit intent, so you would not achieve the expected benefit. Moreover, one often does not even know whether their design falls into the best case or the worst case described above.

So, bottom line - as in so many things in security, the straight answer is still: It Depends.

There is a definite trade-off here to be considered - if this were a complex cryptosystem, the answer would be clear; if this were an implementation-heavy, typical enterprise system, a different answer would be clear.

My leaning in this case is as @Tom said, but for the secondary reasons mentioned - partly the anarchic user base, and mostly the pedagogical goal.

Note that these are actually not really security considerations - at least not directly.

(Oh and as to @Tikiman's point - the pedagogy involved here means that you can actually ensure the design is reviewed, at the least by the entire class ;-) )

2
  • While you're at it, don't forget to expound on Tikiman163's answer. Commented Feb 2, 2016 at 22:31
  • @PyRulez I added a comment or two, such as they are... Though my intent wasn't a point-by-point rebuttal of other answers, but to contrast both sides of the obscurity/opensource debate...
    – AviD
    Commented Feb 2, 2016 at 22:41
14

This article by Daniel Miessler is great!

It states that

Security by Obscurity is bad, but obscurity when added as a layer on top of other controls can be absolutely legitimate.

With that concept in mind, a much better question would be:

Is adding obscurity the best use of my resources given the controls I have in place, or would I be better off adding a different (non-obscurity-based) control?

We can also use the analogy of camouflage: obscurity as another layer of security.

A powerful example of this is camouflage. Consider an armored tank such as the M-1. The tank is equipped with some of the most advanced armor ever used, and has been shown repeatedly to be effective in actual real-world battle.

So, given this highly effective armor, would the danger to the tank somehow increase if it were to be painted the same color as its surroundings? Or how about in the future when we can make the tank completely invisible? Did we reduce the effectiveness of the armor? No, we didn’t. Making something harder to see does not make it easier to attack if or when it is discovered. This is a fallacy that simply must end.

When the goal is to reduce the number of successful attacks, starting with solid, tested security and adding obscurity as a layer does yield an overall benefit to the security posture. Camouflage accomplishes this on the battlefield, and PK/SPA accomplish this when protecting hardened services.

Emphasis mine.

Iszi's comment is great as well; he suggests changing the word adding to enforcing. In summary, it looks like this:

Summary:

Security by Obscurity is bad, but security enforced with obscurity as a layer on top of other controls can be absolutely legitimate. Assuming you are safe on the battlefield just because your tank is painted the same color as its surroundings is plain nonsense. But making your tank's armor great and then enforcing camouflage paint as another layer of protection is great!
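
As a software counterpart to the camouflage example, here is a minimal port-knocking-style sketch (the ports, window, and class names are hypothetical). The knock sequence only hides the service; real authentication still has to happen behind it:

```python
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]  # the obscurity layer (hypothetical ports)
KNOCK_WINDOW = 10.0                  # seconds to complete the whole sequence

def knock(host: str) -> None:
    """Client side: touch each knock port in order, then connect normally."""
    for port in KNOCK_SEQUENCE:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(b"", (host, port))

class KnockTracker:
    """Server side: only expose the real service to IPs that knocked correctly.
    The service behind the knock must still authenticate for real; the knock
    is camouflage, not the lock."""

    def __init__(self) -> None:
        self._progress = {}  # ip -> (next expected index, time of first knock)

    def register_knock(self, ip: str, port: int) -> None:
        idx, started = self._progress.get(ip, (0, time.monotonic()))
        if time.monotonic() - started > KNOCK_WINDOW or port != KNOCK_SEQUENCE[idx]:
            self._progress.pop(ip, None)  # wrong or late knock: start over
            return
        self._progress[ip] = (idx + 1, started)

    def is_open_for(self, ip: str) -> bool:
        idx, _ = self._progress.get(ip, (0, 0.0))
        return idx == len(KNOCK_SEQUENCE)
```

If the knock sequence leaks, security degrades only to whatever the real authentication behind it provides, never below it -- which is exactly the article's point.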

11
  • 6
    Doesn't this analogy break down if you account for the reviews you can have by opening the design? Commented Feb 3, 2016 at 14:01
  • 3
Maybe in the question of "Is adding obscurity" you should change "adding" to "enforcing". If a single person designs a system solely in their head, the design is naturally obscure without any additional effort. Even after the designer has documented the system in detail, the design remains reasonably obscure with practically zero additional effort so long as it remains solely within the designer's possession. Putting the design in a shared-access repository, however, threatens (but does not automatically break) the obscurity of the system. That's when effort is needed to enforce it.
    – Iszi
    Commented Feb 3, 2016 at 15:19
  • 1
    @JoãoPortela The risk to any design, in its current implementation, can only be increased by publishing details of that design. Tank armor is really a great analogy, albeit using camouflage as the obscurity layer in the analogy might not be best. Consider a nation already at war. Their tanks are deployed at the front lines in active combat. With open-source design, the enemy has practically equal information as you do when it comes to finding flaws in the tanks' construction or the armor's chemistry. And it's very possible that the enemy could dedicate more resources to finding them.
    – Iszi
    Commented Feb 3, 2016 at 15:28
  • 1
    Obscurity when it is the only protection is better than no protection at all. Believing that obscurity in this case keeps you safe may be fatal though.
    – gnasher729
    Commented Feb 3, 2016 at 16:19
  • 1
Adding camouflage to the M-1 is done as an extra "field" layer of security that would never be considered "part of" the actual tank's security specs during testing, engineering, design, or even "battle readiness" (I suppose that might be off: it wouldn't be considered battle-ready without camo, but the camo also wouldn't be seen as boosting the tank's expected battlefield performance). But most information-based security that uses some final obscurity layer does affect the interpretation of its "true" security. People see the "camo" and think "now THAT's secure!"
    – Anthony
    Commented Feb 8, 2016 at 12:23
8

While not really answering your question, this might serve as an argument towards your school.

I would consider someone getting access to an authorized key/identity the real risk. People are sloppy, use bad passwords, and write secrets down all the time. A teacher at my school, ages ago, once left the keys to the entire school in a student bathroom.

If I wanted to get into that room I wouldn't even bother trying to find a security hole in the software; I'd steal the key, tamper with the physical lock, or use some other external method.

Or, as a friend of mine said when the principal asked him how he would go about hacking the school system in order to destroy data: "I'd use a baseball bat."

0
3

I have a long explanation that may seem to wander, so I'll give a shortened answer first, then justify it. Short answer: this is security through obscurity, but that likely isn't a problem here, because of the small number of people who come into contact with the system. So it probably isn't worth having an argument over.

You are correct in your assertion that keeping the system design a secret is security through obscurity (STO). The primary reason STO is a bad idea is that a system whose inner workings are not initially known can, in all cases, be reverse engineered through careful observation and the proper application of social engineering. If you are the only person who understands how a system works, you are the only person who can verify its integrity. Therefore, if there is a potentially bypassable flaw in your design and someone else reverse engineers the design and discovers it, they can exploit it more easily than if you had not kept your designs secret. They are also more likely to be able to keep their discovery and illicit use a secret.

This is because making your design public knowledge means more people will examine it, and the more people who examine a design, the more likely someone is to discover an existing flaw and tell you about it. A design flaw may not even be in the general concept but in the specific implementation, such as the opportunity for a buffer overflow in the implementation of an otherwise secure algorithm. The whole point of making cryptographic primitives public is that everyone can review the algorithms; after a large number of individuals have done so, you can be reasonably assured that the design is secure. The difficulty is that, because you're making a design for a school, only a very small number of people are likely to view your designs, and very few of them are likely to understand them. The fewer people who view your designs, the more likely it is that no one who discovers a flaw will report it.

Unless you have access to a large community of security professionals willing to review your design, letting them have their way may be roughly equal in terms of actual security.
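
To make the "flaw in the implementation, not the concept" failure mode concrete, here is a hypothetical sketch using a timing leak rather than the buffer overflow mentioned above (the idea is the same): an HMAC check that is algorithmically sound but compares digests with ==, which short-circuits and leaks timing information. This is exactly the kind of detail a fresh reviewer is more likely to catch than the original designer:

```python
import hashlib
import hmac

def verify_response_bad(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Sound algorithm, flawed implementation: == returns as soon as one
    byte differs, so response times leak how much of the digest matched."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return expected == response

def verify_response_good(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Same algorithm, safe implementation: constant-time comparison."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```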

4
  • 1
    I think this-- the number of parties-- is the critical point. An attacker needs to know (a) the password or (b) the exploitable design flaw. If there are a million Acme keypads out there, then it will be easier to find (b), if it exists, because the attacker can just buy and take apart ten Acme keypads. In that case, Acme corp. are better off publishing the design, so they don't get complacent, and have a better chance of learning about any flaw right away. But if there's only one Acme keypad in existence, it's feasible, and worthwhile, to keep its design flaws as secret as its password.
    – bobtato
    Commented Feb 3, 2016 at 14:35
  • 2
    This answer is very flawed in that it assumes publicizing the design will increase the number of people who are reviewing it, and who are both non-hostile and competent. It certainly makes the system more accessible to review by many people. But you cannot know how many people actually will bother to review it. Or how many of those people will even be knowledgeable, skillful, and thorough enough in their review to find any flaws. Or what the intentions will be of the people who do find flaws.
    – Iszi
    Commented Feb 3, 2016 at 15:49
Iszi, I very specifically addressed the fact that his design is not likely to be reviewed even if he does publish it. If you're going to refute someone, please don't do it by agreeing with their assessment. At the very least, fully read what you're commenting on before commenting; reading even the first paragraph, where I stated that STO isn't bad in this case, would have shown you're not pointing out a flaw.
    – Tikiman163
    Commented Feb 4, 2016 at 16:25
  • @Tikiman163 You do state that towards the end but the beginning also strongly implies the opposite so the answer should be improved. In its current state, I am not convinced it adds anything to the discussion.
    – Relaxed
    Commented Feb 5, 2016 at 22:53
0

One obvious goal would be that the door authentication system cannot be hacked by the students.

If we assume that the authentication system is somewhat, but not extremely, difficult to hack, and that there are students with some, but not advanced, hacking skills, then it is quite possible that the added difficulty of making the details of the system unavailable is just enough to put it out of reach of that group (the students).

On the other hand, once the system is so secure that cracking it is much more difficult than finding or reverse engineering the workings of the system, keeping the details secret is not very useful anymore.

0

If I were responsible for the door, I would have the same requirement about confidentiality. If the world (and your work) were perfect, obscurity would indeed be useless.

But just think about what actually happens in the real world. I assume you did your job as best you could, using state-of-the-art algorithms. But the devil hides in the details, and an implementation detail can weaken security. It happened to OpenSSL not that long ago, even though the algorithms themselves were secure.
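
(The incident alluded to is presumably Heartbleed, where the code trusted a peer-supplied length field. A minimal sketch of that class of bug, simulated in Python since Python itself won't over-read memory:)

```python
# Simulated process memory: the legitimate payload sits next to secrets.
MEMORY = b"ping-payload" + b"...SECRET_KEY_MATERIAL..."
PAYLOAD_LEN = len(b"ping-payload")

def heartbeat_bad(claimed_len: int) -> bytes:
    """Bug: echoes claimed_len bytes without checking it against the
    actual payload length, leaking whatever sits beyond the payload."""
    return MEMORY[:claimed_len]

def heartbeat_good(claimed_len: int) -> bytes:
    """Fix: never echo more than the payload actually received."""
    return MEMORY[:min(claimed_len, PAYLOAD_LEN)]

print(heartbeat_bad(40))   # leaks ...SECRET_KEY_MATERIAL...
print(heartbeat_good(40))  # returns only b"ping-payload"
```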

What you want when disclosing the details of your security system is for peers to review your implementation and warn you about possible flaws before they are discovered and exploited by attackers. If you know security experts and can have them review your work, I think that would be seen as good practice even at your school - IMHO it should be, at least. But if you just publish it, your first readers will more likely be the very people the door security was built against! And you certainly do not want them to be the first to review your work for possible flaws.

TL/DR: If you build a general security system that could have a large audience and is not immediately used in production, publish it as broadly as you can to get good reviews. But if you build a dedicated implementation that will immediately be used for real security, only disclose details to trusted experts, to avoid helping attackers find possible flaws.
