
The ANPR camera system's internal management dashboard could be accessed by simply entering its IP address into a web browser. No login details or authentication of any sort was needed to view and search the live system.

Number-Plate Cam Site Had No Password, Spills 8.6 Million Logs of UK Road Journeys

I keep seeing this kind of nonsense every other day. Millions of records exposed here, millions of private pieces of data exposed there... Always without any "hacking" taking place. They just have no passwords. Just keep it all open to the world.

How is this possible?

The thing is, I actually did the same thing myself, BUT I was a single loser, early 20s, at home, in a deep psychosis, running a database which I believed had been secured (I misinterpreted the very misleading and weird documentation)... with no other person involved and a massive "ego" in that I believed myself to be a computer expert.

These leaks you hear about, on the other hand, come from huge corporations or governments, which obviously must have hired some kind of expert/professional to implement these things, yet they still do what I did at home for my personal project?

How is it possible? I truly do not get it. Unless it's done on purpose, over and over again. Do they not have any kind of "investigation" into these things? Do they never read the news and learn from others' mistakes? Do they truly not care about their friends and family and their own records being leaked to the world? How can they not care?

How do you get some kind of security certification if you set up a database with NO AUTHENTICATION WHATSOEVER?

  • This might be too broad to answer, or the answer is too generic: "human stupidity". The younger you was not the only incompetent moron in the world. While there are many companies doing things right, there will always be low-hanging fruit for that reason. Also, if it's someone else's responsibility to secure things, instead of everyone's responsibility, it may end up being no one's job. Commented Apr 29, 2020 at 10:05

3 Answers


Two very common situations:

  • no one thought about it
  • it was not the original design, but scope creep or "temporary measures" meant that the final product grew beyond what had been planned for review, so it was never on anyone's radar

As a single developer, you hold all the moving parts in your head and you touch every part of the system. You can reason about how a change to one part affects all the others.

In a big team with multiple stakeholders and a shifting product plan and roadmap, it is very easy for blind spots to appear.

So, the problem is not that they didn't hire an expert. The problem is that they hired quite a lot of experts, and they were all focusing on their one small part and not the whole.

Should there be someone, or some process, with enough of an overview to review the product before it goes to production? Sure, but then someone needs to think about the possibility of needing such a process. And when deadlines are looming, final checks are the first things to be cut. You can "always clean it up after it's released, using professional services" ...


There are several aspects that lead to insecure software, data leaks, etc. The primary requirement for any business system is always that it has to work, since otherwise it would be useless. Security comes only second, and security requirements are not supposed to get in the way of actual use too much.

Add to this insecure defaults, and the complexity of the software stack compared to the experience of the developers and users. Additionally, it is much harder and more time-consuming to test whether something is secure than whether it merely works. Just look at the Boeing 737 MAX disaster: it is easy to show that the plane can fly, but hard to make sure that it can fly in unusual conditions and can recover from problems.
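To make the gap between "it works" and "it is secure" concrete, here is a minimal, purely hypothetical sketch (the Flask app, routes, header name, and token are invented for illustration, not taken from the article). Both endpoints behave identically for the developer testing on their own machine, which is exactly why the missing check is so easy not to notice.

```python
# Hypothetical sketch: an "internal" dashboard that works fine in a demo
# yet is readable by anyone who can reach the machine's IP address.
from flask import Flask, request, abort

app = Flask(__name__)

# Stand-in data for the illustration.
PLATE_LOG = [{"plate": "AB12 CDE", "time": "2020-04-28T09:14:00"}]

@app.route("/dashboard")
def dashboard():
    # No authentication check at all: the failure mode described in the article.
    return {"records": PLATE_LOG}

@app.route("/dashboard-secured")
def dashboard_secured():
    # Minimal token check, illustrative only; a real system would use a proper
    # auth framework, TLS, and would not expose the service publicly at all.
    if request.headers.get("X-Api-Token") != "expected-secret":
        abort(401)
    return {"records": PLATE_LOG}

if __name__ == "__main__":
    # Binding to 0.0.0.0 exposes the app on every network interface, which is
    # the small step that quietly turns "internal tool" into "public website".
    app.run(host="0.0.0.0", port=8080)
```

A functional test against either endpoint passes; only a deliberate negative test ("does it refuse me when I am *not* logged in?") catches the difference, and that is the test nobody schedules time for.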


Often the reasons are pretty complex, that is, it's a combination of several factors that are difficult to analyze from the outside. In this case, the article reports:

Lawyers for ANPR dashboard maker Neology told The Register the Sheffield system was put together by American megacorp 3M in September 2014. Around the same time, the business unit building the system was sold to Neology, with the lawyers insisting "our client has not been responsible for the management of the system" since then.

So it sounds like there might have been a change of ownership, a change of team, and a change of responsibilities, and the management might have become pretty chaotic. For all we know, the system might have been protected in the past and then suddenly become accessible without any restrictions because of a simple migration to another (misconfigured) server. All it takes for that to happen is the boss telling a junior employee "please migrate the app to this server", and then nobody bothering to check whether everything still works correctly. Why? Because maybe nobody even knows how the system was supposed to work in the first place; maybe only the migration is included in the contract, and anything else would end up being unpaid extra work; maybe the job has been outsourced and the actual client's needs are not even known... and so, for several reasons, security becomes nobody's responsibility.
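To give an idea of how small the missing check actually is, here is a hypothetical post-migration smoke test. The URL is a placeholder and the expected behaviour is an assumption made for the example; the only point is that one short script, run once after the migration, would have flagged the problem.

```python
# Hypothetical post-migration check: fail if the dashboard hands out data
# to an unauthenticated request instead of rejecting it or asking for a login.
import sys
import requests

DASHBOARD_URL = "http://192.0.2.10/dashboard"  # placeholder address for illustration

def requires_auth(url: str) -> bool:
    resp = requests.get(url, timeout=5, allow_redirects=False)
    # Acceptable outcomes: an explicit "unauthorised"/"forbidden", or a
    # redirect to a login page. Anything else means the data is world-readable.
    if resp.status_code in (401, 403):
        return True
    if resp.status_code in (301, 302):
        return "login" in resp.headers.get("Location", "").lower()
    return False

if __name__ == "__main__":
    if requires_auth(DASHBOARD_URL):
        print("OK: dashboard rejects unauthenticated access")
    else:
        print("FAIL: dashboard is readable without logging in")
        sys.exit(1)
```

But a check like this only exists if someone decides it is their job to write and run it, which brings us back to the question of responsibility.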

In general, here's a list of aspects that might lead to security blunders:

  • Security costs a lot. Good security requires expertise, time, and money. A lot of the modern economy is built on "cheap and fast" processes, so security becomes a problem.
  • Information security is often not considered as important as other kinds of security. Lots of people still tend to think "I don't care about my privacy because I have nothing to hide", or "why would hackers care about this data?", or "who cares if I get hacked, if it happens I'll just change the passwords".
  • Some workers are forced to become jacks of all trades (and masters of none). In small teams or small businesses, you can't have an expert in every field. So the IT guy who fixes the printer also ends up being the guy who is expected to keep the company's website secure.
  • Responsibilities are not clearly defined beforehand. Is the kid who migrated the app responsible for the security hole that unexpectedly came into existence after the migration? Or is it the project manager? Is there even a project manager? Who is going to check the security of the app? Was the migration task outsourced? Was the boss convinced that a security check was implicitly included in the job, even though nothing like that was actually included in the contract when they outsourced the task? Was there even a written contract for that task?

