28

Unless someone has my private ssh key, how is leaving an aws instance open to 0.0.0.0 but only on port 22 via ssh insecure?

[Screenshot of the AWS console warning recommending that security group rules allow access from known IP addresses only]

The ssh key would be distributed to a small set of people. I prefer to not need to indicate their source IP addresses in advance.

I did find another similar question, SSH brute force entry in aws ec2 instance, which says:

If you disabled password based login via SSH, then it is very hard to brute force an SSH login using a private key […]

Maybe this covers it? Just want to double check since in the security world you do not get a second chance.

6
  • 4
    "Unless someone has my private ssh key, how is leaving an aws instance open to 0.0.0.0 but only on port 22 via ssh insecure?" It is not, unless your SSH server has bugs that can be exploited (not unheard of, remember the OpenSSH hacks of 2003 or so) or is badly configured. It's interesting to leave it on port 22 though, you will get immediate brute-forcing attempts from notorious IP addresses. The AWS text is standard lawyer-proofing. Commented Jun 26, 2020 at 9:37
  • 15
    I feel going from "We recommend ... to only allow access from known IP adresses" (amazon) to "trivially insecure" (question title) is a bit of a leap there ;)
    – marcelm
    Commented Jun 26, 2020 at 10:08
  • 36
    "The ssh key would be distributed"? Erm, no. Each user/computer combination should have their own key. It should really be generated by each user, and their public key set up on their account on the server.
    – jcaron
    Commented Jun 26, 2020 at 13:32
  • 3
    @jcaron 's comment above should get more attention. Don't distribute "the" (singular) key the AWS instance generates. Have each user have their own public key added to the host. Not strictly addressing the topic of the question, but important.
    – JesseM
    Commented Jun 26, 2020 at 19:17
  • Not an answer to your question, but you might be interested in knowing that you can connect through SSH using Systems Manager without opening port 22 to public IPs and without using an SSH key Commented Jun 26, 2020 at 21:05

12 Answers

36

The answer depends on your risk appetite. Restricting access to the SSH port to only known IP addresses reduces the attack surface significantly. Whatever issue might arise (private key leaks, 0-day in SSH, etc.), it can only be exploited by an attacker coming from those specific IP addresses. Otherwise the attacker can access the port from anywhere, which is especially bad in case of an unpatched SSH vulnerability with an exploit available in the wild.

It is up to you to decide how important the system and its data are to you. If it is not that critical, the convenience of an SSH port open to the world might be appropriate. Otherwise, I would recommend limiting access, just in case. Severe 0-days in SSH do not pop up on a daily basis, but you never know when the next one will.

7
  • Thanks - I am doing a small prototype with only random test data. This may be usable. Commented Jun 25, 2020 at 21:22
  • 13
    @javadba just repeating what demento said so well. There is no such thing as "secure" or "insecure". There is only ever "secure enough for me". The security needs for an anonymous cute-cat-picture voting site are not the same as for the website where you launch nuclear strikes (which obviously shouldn't even be a website!!!). Amazon is giving best practices, but not all security measures are worth the cost for everyone. Commented Jun 25, 2020 at 21:47
  • @ConorMancone yes fair enough. I appreciate the feedback to ensure at least this is not a grab-and-go situation. Commented Jun 25, 2020 at 21:50
  • +1 for 0-day in SSH and SSH vulnerability with an exploit available in the wild.
    – mti2935
    Commented Jun 26, 2020 at 18:56
  • 1
    @Alexis the warning included in the OP tells you how to do this: by updating your security group rules. You can get more information on how IAM works in AWS from the official AWS documentation at Identity and access management for Amazon VPC Commented Jun 28, 2020 at 8:24
31

The ssh key would be distributed to a small set of people.

No, don't do that. Never share private keys. Have your folks generate key pairs on their own and collect their public keys. Take reasonable measures to ensure the pubkeys actually come from the right people.
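For instance, each user can generate their own key pair and hand over only the public part. A sketch (the file name and comment are just examples, and `-N ""` skips the passphrase purely for demonstration; in real use protect the key with a passphrase or an agent):

```shell
# Run locally by each user; the private key never leaves their machine.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/aws_prototype -N "" -C "alice@laptop"

# Only the PUBLIC half is sent to the server admin:
cat ~/.ssh/aws_prototype.pub

# The admin then appends each collected public key to that user's
# ~/.ssh/authorized_keys on the instance (file mode 600, owned by the user).
```

On an EC2 instance, the default account already has an `authorized_keys` file seeded with the launch key; extra public keys are simply appended to it.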

Or, if you don't mind the hassle, you can try a unified authentication scheme such as an SSH CA, where you sign a certificate for each user. Certificates and public keys can both be distributed safely (a certificate is useless without its private key).
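A minimal sketch of the SSH CA idea (all file names and the principal "alice" are hypothetical, and the `/tmp` paths are for illustration only; a real CA key would be kept offline):

```shell
# 1. Create the CA key pair (guard the private half carefully).
ssh-keygen -t ed25519 -f /tmp/ssh_ca -N "" -C "example-ssh-ca"

# 2. A user generates their own key pair, as usual.
ssh-keygen -t ed25519 -f /tmp/alice_key -N "" -C "alice"

# 3. Sign the user's PUBLIC key with the CA, producing a certificate
#    valid for 52 weeks for the principal "alice".
ssh-keygen -s /tmp/ssh_ca -I alice@example -n alice -V +52w /tmp/alice_key.pub

# 4. Inspect the resulting certificate (/tmp/alice_key-cert.pub).
ssh-keygen -L -f /tmp/alice_key-cert.pub
```

On the server side, a single `TrustedUserCAKeys` line in `sshd_config` pointing at the CA's public key then accepts any certificate the CA signed, so you never touch `authorized_keys` per user.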

LDAP is even better, but I wouldn't bother with it for small-scale servers. It's just too complex to set up and maintain.


Opening an SSH port to the internet isn't insecure per se. It depends on how it authenticates. SSH scanning happens every minute on the internet. Try leaving it on for just a day and check /var/log/auth.log for invalid usernames.
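If you want to see who is knocking, a quick tally of failed attempts per source IP looks like this. It is shown here against a fabricated three-line sample; on a real Debian/Ubuntu host you would point it at `/var/log/auth.log`, on RHEL-style systems at `/var/log/secure`:

```shell
# Fabricated sample log lines standing in for /var/log/auth.log:
cat > /tmp/auth_sample.log <<'EOF'
Jun 26 01:00:01 host sshd[111]: Failed password for invalid user admin from 203.0.113.5 port 51000 ssh2
Jun 26 01:00:02 host sshd[112]: Failed password for root from 203.0.113.5 port 51001 ssh2
Jun 26 01:00:03 host sshd[113]: Failed password for invalid user test from 198.51.100.7 port 51002 ssh2
EOF

# Count failed logins per source IP, busiest first.
grep 'Failed password' /tmp/auth_sample.log \
  | grep -oE 'from [0-9.]+' \
  | awk '{print $2}' | sort | uniq -c | sort -rn
```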

I would say as long as you're using public key authentication and keeping the private part secure, no one can brute-force into your server in a practical amount of time, given that common SSH implementations like OpenSSH don't have 0-days popping up frequently. Sharing a private key is not secure, nor is it convenient. The key may get leaked during transmission, probably at some point you're not even aware of. That's what's dangerous.

8
  • 45
    Why a certificate? Have people send you their public keys, add those public keys to .ssh/authorized_keys. Take reasonable care that the public key is, indeed, from the person it claims to come from. That's how it was designed in the first place. Commented Jun 26, 2020 at 8:31
  • 2
    Good catch that the OP shouldn't share private keys, but I disagree with most else you said. Guntram's suggestion is the much better solution, and brute force isn't the only concern. The risk of 0-days is a bigger one. However, all of this depends on the OPs risk appetite. Commented Jun 26, 2020 at 12:15
  • I've never heard of "SSH CA" before (but am familiar with X509 CAs as well as SSH), is this what you're referring to? access.redhat.com/documentation/en-us/red_hat_enterprise_linux/… Commented Jun 26, 2020 at 19:58
  • 1
    @CaptainMan Yes that's it. Summarizing as ssh-keygen -s.
    – iBug
    Commented Jun 27, 2020 at 4:48
  • 3
    Some details on the ssh ca concept: engineering.fb.com/security/scalable-and-secure-access-with-ssh Commented Jun 27, 2020 at 15:05
6

Answer: no, not trivially insecure, but still not ideal.

I manage multiple AWS instances, and while most of them have Security Groups limiting SSH inbound access, there is a business need for one of them to listen on port 22 for all connections.

As such this host gets hit by thousands of script-kid (skiddy) connections every day. This is indicated at login by MOTD messages like

Last login: Fri Jun 19 23:17:36 UTC 2020 on pts/2
Last failed login: Sat Jun 27 01:00:44 UTC 2020 from 120.70.103.239 on ssh:notty
There were 21655 failed login attempts since the last successful login.
host1234 ~ # date
Sat Jun 27 01:12:18 UTC 2020

So that's roughly 2,500 a day or a hundred an hour. Certainly most of them will simply be automated probes, but what happens if a zero-day vulnerability is found and exploited?
By limiting your exposure you reduce the risk.

Solutions include one/some/all:

  • Use AWS security groups to only permit connections from specific IPs on the internet
  • Use a VPN solution and require that SSH be done over the VPN. The VPN can listen to all sources, have certs and 2FA, and generally add more layers. OpenVPN works well, or there are multiple AWS offerings to do the same task.
  • Move SSH to another port - it's not any added security, but this does cut down on the number of ssh connection attempts and therefore the noise. Anyone worth their salt will scan all ports anyway, not just the default.
  • If you HAVE to listen for SSH promiscuously, explore a solution like fail2ban which adds sources to /etc/hosts.deny if they fail more than X times in Y minutes, and can remove them again after a day or so.
  • Explore IPv6 - like changing the listening port, IPv6 increases the time taken to scan, so skiddies have more space to search. v6 scanning still happens though.
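For the fail2ban option above, a minimal jail might look like this (the thresholds are illustrative; `hostsdeny` is the stock fail2ban ban action that writes to `/etc/hosts.deny` instead of the default firewall rule):

```
# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled   = true
port      = ssh
maxretry  = 5          # X failures...
findtime  = 10m        # ...within Y minutes...
bantime   = 1d         # ...bans the source for a day
banaction = hostsdeny  # write to /etc/hosts.deny rather than the firewall
```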

For me, the devices sshing-in are hardware, so they have a valid user certificate and they always auth successfully. We wrote a script that scans /var/log/secure and looks for "user not found" or similar, and immediately adds those sources to the hosts.deny file permanently.
We've considered extending this to block whole subnets based on lookups, but that hasn't been needed yet.
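The idea behind that script can be sketched like this (the file paths and the sample log are fabricated for illustration; the real version reads `/var/log/secure` and appends to `/etc/hosts.deny`):

```shell
LOG=/tmp/secure_sample.log   # stand-in for /var/log/secure
DENY=/tmp/hosts.deny         # stand-in for /etc/hosts.deny

# Fabricated log excerpt: two unknown-user probes, one legitimate login.
cat > "$LOG" <<'EOF'
Jun 27 00:59:01 host sshd[200]: Invalid user oracle from 203.0.113.9 port 40000
Jun 27 00:59:05 host sshd[201]: Invalid user admin from 198.51.100.3 port 40001
Jun 27 00:59:09 host sshd[202]: Accepted publickey for deploy from 192.0.2.10 port 40002
EOF

# Every source that tried an unknown username gets a permanent deny entry,
# in the "daemon: client" format that /etc/hosts.deny expects.
grep 'Invalid user' "$LOG" \
  | grep -oE 'from [0-9.]+' | awk '{print "sshd: " $2}' \
  | sort -u >> "$DENY"

cat "$DENY"
```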

We currently block:

host1235 ~ # grep -ci all /etc/hosts.*
/etc/hosts.allow:79
/etc/hosts.deny:24292

I'm not going to share a list of bad source IPs, because some locations consider IP addresses to be Personally Identifiable Information (or PII)

Note that our Office IPs are in hosts.allow which trump the hosts.deny file, so if someone fails a login from an office, then it won't lock out human users.

Do ask for clarifications - I know I've handwaved a lot of details.

3
  • 2
    I can vouch for fail2ban. Even hosting a Raspberry Pi on my home network saw brute force attacks! Commented Jun 30, 2020 at 15:56
  • 1
    On top of the existing layers you've described, is it possible to additionally implement some kind of 2FA solution that would be required when sign-ins are attempted from previously unseen IPs (or IPs not in some whitelist)? My issue is that I'd like connectivity from a mobile hotspot, which tends to change IP very often.
    – runr
    Commented Aug 25, 2020 at 13:28
  • 1
    Another thing, about changing the ports from 22: from what I understand, they can still be discovered by a thorough nmap scan - if that's the case, is it possible to spoof them? E.g., spawn some "honeypot" ssh ports that lead nowhere (or to some empty docker container?), essentially hiding the correct port that leads to the machine?
    – runr
    Commented Aug 25, 2020 at 13:35
4

You may want to consider using AWS Session Manager. When my company was using AWS it seemed like it was not a super widely known tool. Essentially it simply lets you log into the EC2 instance from the browser (or command line) via the AWS console. You use IAM policies to authorize instead of SSH keys.

At the risk of sounding like an advertisement, I'll go ahead and quote the relevant portion of the documentation.

  • No open inbound ports and no need to manage bastion hosts or SSH keys

    Leaving inbound SSH ports and remote PowerShell ports open on your instances greatly increases the risk of entities running unauthorized or malicious commands on the instances. Session Manager helps you improve your security posture by letting you close these inbound ports, freeing you from managing SSH keys and certificates, bastion hosts, and jump boxes.

So if you do suspect leaving port 22 open will be a problem (and I think Demento's answer covers well whether you should or not) then this is an approach you can use to keep it closed while still allowing SSH access (from a certain point of view at least).


†: There is a third party tool to use session manager from the command line here.

2
  • It doesn't really answer the OP's question about SSH security, but this is indeed the right solution. SSM can easily be used from the command line too.
    – MLu
    Commented Jun 27, 2020 at 7:26
  • @MLu I've added the tool to the answer. If you (or anyone else) knows of a way to do it with first-party tools I'll edit that in as well. :) Commented Jun 30, 2020 at 15:58
2

No. It is not trivially insecure to have an openssh server open to receive connections from anywhere.

openssh has a really good security record, and it is unlikely that a new "killer exploit" will surface anytime soon.

Note, however, that your instance will receive lots of brute-forcing attempts from all over the world. openssh's good record doesn't cover a weak password!

  • Disable password authentication at ssh server level. Thus requiring ssh keys for login.
  • Do not share a private key! Everyone needing access to the server should get their own key (generated locally, never sent anywhere) added to the server. This improves traceability, lets you revoke access for a single individual, means a key that is even suspected of being compromised can easily be replaced, and removes the problem of distributing private keys.
  • You may consider moving the server to a different port. It isn't a security measure in itself, but it will give you cleaner logs.
  • You can further restrict access if you know where connections will not come from. Maybe you cannot whitelist the exact IP addresses that will be used, but you may know which country they will come from. Or that nobody from a given country has any business connecting there.
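The first bullet comes down to a few lines in `/etc/ssh/sshd_config` (standard OpenSSH options; the commented-out port is just an example):

```
# /etc/ssh/sshd_config (relevant excerpt)
PasswordAuthentication no
KbdInteractiveAuthentication no   # ChallengeResponseAuthentication on older OpenSSH
PubkeyAuthentication yes
PermitRootLogin prohibit-password
#Port 2222                        # optional: quieter logs, no real security gain
```

Reload sshd afterwards (e.g. `systemctl reload sshd`), and keep your current session open until you have verified you can still log in with a key.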

The warning by AWS is a good one, and it's good to restrict incoming sources if you can, but not doing that is not insecure. Note that AWS doesn't know if you require ssh keys, or if your credentials are root/1234. Sadly, this warning reflects the high number of instances that end up compromised due to trivially silly credentials.

2

A. Key-based SSH is very secure and widely trusted. There is of course always the possibility of a vulnerability (e.g. something like Heartbleed), and limiting by IP increases security. But I would hazard a guess that you're more likely to get compromised in other ways (say, getting phished for the AWS console).

B. Consider creating multiple SSH keys rather than sharing one, to limit the damage if any single key is compromised. (Though I understand this may be inconvenient, as AWS allows only one SSH key when initially launching the instance.)

1

openssh has a pretty good security reputation. When I look through my Debian security alerts archive (not an exhaustive search; there may have been issues in libraries used by openssh that I did not spot), I see about one alert per year, but most of them seem to be relatively minor issues: some username enumeration issues, some issues in the client, a privilege escalation issue for users who are already authenticated on a server with non-default configuration, and some bypasses for environment variable restrictions.

However, one flaw does stand out. Back in 2008 there was a really nasty flaw in Debian's OpenSSL that meant keys generated with openssl or openssh on vulnerable Debian systems could be brute-forced. Furthermore, DSA keys that had merely been used on a vulnerable system were potentially compromised if the attacker had captured traffic involving the vulnerable key (either by sniffing or, in the case of host keys, by connecting to the server). When such a vulnerability becomes public, you may have very limited time to react before the botnets start using it.

So best-practice is to minimise the amount of stuff you expose directly to the Internet, so that when a really nasty bug comes along you can mitigate it quickly. Having to do an emergency update on a handful of systems is much better than having to do it on every system at once.

Of course there is a cost to doing that: only being able to access a server from systems on your VPN, or having to bounce through multiple servers, can become a major PITA. Ultimately you have to decide what balance is right for you.

0

No, it is not 'trivially insecure', but then, AWS never said that it was. Instead, it recommends doing something else, because doing that something else is compliant with standard best practices. You can avoid those best practices if you think you know better, but given that your OP discusses the idea of sharing a private key between multiple users, I would very strongly suggest just complying with every security notification AWS sends your way. At worst, you waste a bit of time 'over-engineering' things. At best, you avoid serious consequences.

0

Unless someone has my private ssh key, how is leaving an aws instance open to 0.0.0.0 but only on port 22 via ssh insecure?

It's not "insecure"; you only increase your risk of being breached if there is an unknown or unpatched vulnerability in SSH.

Since you are using AWS you could use IAM to allow the team members the ability to add the remote IP addresses they are coming from themselves.

0

Note that this warning has some additional layers to it. First of all you might not want to assign a public IP address to the machines in your internal subnet at all.

Beyond (only) IP filtering in security groups and VPC ACLs, making the network unreachable without a jump host, Session Manager, or VPN is an additional measure.

This also helps against accidentally reconfiguring the security groups (and silently exposing additional network services). It also helps against kernel-level IP exploits or DoS risks. It is all part of defense in depth and layered approaches, guided by the principle of not allowing any access that can be avoided - even if you can't find an immediate threat.

Before public clouds, people put way too much trust in perimeter protection; the same can be said for software-defined architectures in the cloud, where micro-segmentation really should be the norm.

0

Short answer:

  • Change ssh default port
  • Remove ssh default banner
  • Don't share private keys
  • Restrict allowed IPs
  • Add a daily cronjob to install security updates
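For the last point, on Debian/Ubuntu the usual tool is the `unattended-upgrades` package rather than a hand-rolled job, but a minimal cron sketch could look like this (the file name and schedule are illustrative):

```
# /etc/cron.d/security-updates (illustrative)
# m  h  dom mon dow  user  command
30   4  *   *   *    root  apt-get update -q && unattended-upgrade
```

`unattended-upgrade` (singular) is the binary shipped by the package; enabling the package's own scheduled runs is the more idiomatic route.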
1
  • I'd add to disable root login and password logins. Commented Jun 30, 2020 at 16:00
0

Assuming you configure it correctly, not only is it not "significantly insecure"; it's not vulnerable at all, short of worldwide-catastrophe-level vulns that are not expected to exist. Seriously, the AWS management infrastructure is a weaker link than OpenSSH.

Now, there are lot of possible ways to misconfigure it, including the one you mentioned in your question, distribution of a shared private key. Absolutely don't do that. You should never handle someone else's private key; they should give you their public keys. Also, no authentication options other than pubkey should be enabled - no passwords, no GSSAPI, no PAM, etc.

