> Disallowing root login is also frequently recommended. I believe this has limited merit in our current landscape since 95% of the time, the user you log in with has sudo privileges. Then it adds no extra security. But you should really judge this for your own situation.
Disallowing explicit `root` login makes it harder for attackers to guess the usernames which have sudo access, thus I'd say it decreases the attack surface area. Yes, sudo gives them the same level of access as root but the path to get there from an attacker's perspective is not the same.
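For reference, turning it off is a single server-side directive; a minimal sketch, assuming stock OpenSSH:

```
# /etc/ssh/sshd_config
# The stock default is "prohibit-password", which still permits
# key-based root logins; "no" disables root logins entirely.
PermitRootLogin no
```

Reload sshd afterwards (`systemctl reload sshd`; the exact service name varies by distro).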
I typically frame this as "accounts are for accountability."
Reducing accountability isn't typically a goal for organizations, so it's strictly better to have 20 accounts with sudo than everyone using a single shared account (shared accountability), UNLESS the accountability is managed some other way (I think this is something Gravitational Teleport tries to sell on this forum often).
You can use smart cards as plain SSH keypairs and sshd will of course log the fingerprint of the key used to authenticate. That's pretty foolproof accountability.
I have in fact done this (heavy user of smart cards and author of middleware), BUT what sshd logs (a fingerprint of the public key) takes a bit of work to match against an authorized_keys-format file (basically a stripped-down PKCS#1 format with a header).
I actually use a fork of OpenSSH called PKIXSSH which supports X.509 certificates in sshd, and this is far more reliable.
If all users have a unique key for root access, and only keyed authentication is possible, then you can match the key fingerprint in the logs to a physical user.
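A rough sketch of that matching with stock OpenSSH tooling (log path and usernames are illustrative; recent ssh-keygen versions print one fingerprint per line of the file):

```
# Find the fingerprint sshd logged for the accepted session
# (/var/log/auth.log on Debian-likes, /var/log/secure on RHEL-likes)
grep "Accepted publickey" /var/log/auth.log

# List the fingerprint of each key authorized for root, then match
ssh-keygen -lf /root/.ssh/authorized_keys
```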
What are some examples of these "several distros" that set it by default? Of the major ones I'm willing to use in production (Debian, Ubuntu, RHEL and derivs, SuSE, hesitantly Arch), none offer NOPASSWD sudo to users marked as administrative in each distro's normal way. (i.e. various group memberships)
> Disallowing explicit `root` login makes it harder for attackers to guess the usernames which have sudo access
Or rather, it forces attackers to enumerate other usernames, because the default root account isn't available to brute force. In general, when password authentication is in use (which it should never be for SSH), disabling the default user account acts as a simple way of mitigating brute force attacks. If best practices are followed and passwords are not used to authenticate to SSH servers, then this additional measure might be moot. Even so, I can't think of a reason not to disable root SSH login anyway, even if password authentication is disabled. A layered defense assumes the possibility that other defenses might fail. No one wants to be the person who wrote off an additional defense measure because they didn't think it was possible to exploit, only to have a new CVE come out to prove them wrong.
I haven't seen anyone changing the default user logins for their instances in AWS, though - everyone uses ubuntu, ec2-user, or whatever is the default of their distribution.
Basing things on source IP is really inexact and easily muddied, though. For instance, the source IP is a workstation or a laptop—now we have to go through DHCP logs to figure out who had that IP address at the time the incident occurred. Or if we've properly implemented source limitations through a bastion host, all we'll see on the end server is the source IP of the bastion host, so we'll need to go to the bastion to figure out who was logged in at that time and what they were doing—hopefully there was only one engineer logged in at the time! And that assumes they didn't just SSH-jump through, in which case we just have the transitory SSH logs...which leaves us with the DHCP problem above unless they used their own userID to connect to the bastion itself. If it's a publicly available AWS host (which...oy gevalt), was that login from "dhcp-host-X.Y.Z.Q-suspiciouscablehosting.regionalisp.net" a hack, or an engineer trying to fix something by logging in from his home Internet connection?
Ultimately, it's just much easier to force individual logins and require sudo. Even just an engineer logging in and doing 'sudo su -' is significantly more traceable than everyone logging in directly as root. Even better if you can force the individual logins and sudo sessions to use multi-factor auth—then you can keep root non-multi-factored and get in with just a password on the console when your MFA solution has gone pear-shaped. :)
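To make the traceability point concrete, here's roughly what the audit trail looks like on a Debian-style box (entries are illustrative; formats vary slightly between distros):

```
# /var/log/auth.log
# sshd ties the session to a named user...
sshd[1234]: Accepted publickey for alice from 192.0.2.10 port 50712 ssh2: ED25519 SHA256:...
# ...and sudo records who became root, even via `sudo su -`
sudo: alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/usr/bin/su -
```

With direct root logins you'd only get the first line, with "root" in place of anything identifying.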
`sudo su -` is always a nuisance. As an individual developer I prefer to ssh as root directly. Even more, I use mosh and connect to hosts with something like `mosh root@host -- screen -xRR -D`.
Enterprisey server management tools issue short-lived keys that they push through backchannels to hosts. Hosts do not have users aside from system or application users.
If you have the sort of system that has automated key issuance and low interactivity on hosts, I agree that makes sense. But then, that's exactly the 'ssh as yourself and then sudo to root' model, just abstracted in a different way, isn't it? In both cases you're authenticating to some intermediary as yourself first and then being given access to the local root account—it's just that in my system the 'intermediary' is a local unique user account, whereas in yours it would be the individual authenticating to the temporary-ssh-key-issuance system like Vault. As long as that system does a unique authentication, the two are more-or-less identical, although you'd have to be careful to ensure you can untangle the auth logs to the intermediary system and associate them with local root sessions.
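For reference, the OpenSSH-native building block for that model is a user CA issuing short-lived certificates; a minimal sketch (CA path, key ID, and lifetime are all illustrative):

```
# On the trusted intermediary: sign alice's key for one hour,
# valid only for the "root" principal
ssh-keygen -s /etc/ssh/user_ca -I alice@example.com -n root -V +1h id_ed25519.pub

# On each host, in /etc/ssh/sshd_config:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
```

Helpfully, sshd logs the certificate's key ID on login, which is exactly the hook you need to untangle those auth logs.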
> Disallowing explicit `root` login makes it harder for attackers to guess the usernames (...)
Technically correct I guess, but aren't password logins considered bad practice anyway? So if you have passphrase-protected, key-based authentication only, is it really relevant whether you allow it for root or for a user with sudo permissions?
These howtos involving cryptography should generally be ignored unless you actually understand the issues fairly well. The default configuration gets a lot of scrutiny. The stuff from "Big Bob's Super Secure Configuration" howtos mostly comes from other articles of the same sort. The ideas in these things take on a life and truth of their own after they circulate around a few times.
I disagree; you can always go straight to one of the industry-recognized hardening benchmarks instead. The SSH hardening guidance in the posted article and any number of hardening guidelines are all pretty similar.
The defaults keep the mailing lists from filling up with troubleshooting questions, but anyone with some command line skills can change one parameter at a time and test. If you’re exposing ssh to the internet you should absolutely not be ignoring hardening guides.
I belong to the camp that believes in using the defaults when it comes to ciphers. I am not an expert in cryptography, nor do I like copypasting stuff I don't fully understand. The openssh guys know this stuff better than I do and I think that's fine.
I was of that conviction for a long time too. But the default install optimizes for a different thing: compatibility. Or at least emphasizes it more than I would.
For example, I never use RSA keys. So those can go. Fewer ciphers => less attack surface.
But I do agree that I'm sure the defaults picked are sensible.
Exactly, the default RSA for the keygen is what a lot of users accept without realizing the implications. Well, lots of HowTos out there suggest "enter, enter, enter.." to get your key.
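For anyone following such a howto: picking a different key type is one flag away, e.g.:

```
# Generate an Ed25519 keypair instead of accepting the RSA default;
# -a bumps the KDF rounds protecting the passphrase at rest
ssh-keygen -t ed25519 -a 64
```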
What's the rationale for keeping RSA as a default these days?
> What's the rationale for keeping RSA as a default these days?
I think they recently changed this, but for the longest time RSA keys were the only kind AWS supported for EC2 keypairs.
For staff that weren't deploying EC2 instances, EC keys were fine, but for the ops people setting them up where the EC2 keypair was the initial access key, they needed RSA keys.
I'm not an expert, but I think RSA is discouraged these days because its strength depends on the key size, which regularly becomes outdated as computers get faster and more parallel. Ten years ago a 512-bit key was considered secure, but these days I think 4096 bits is the recommended minimum length for a keypair that's considered secure. Because of this, it requires cycling through keys every now and again, which can be tedious and even painful if you build PKI on it. The latter happened with a project I worked on, where we've had to cycle our users' keys for an application a few times now, I think jumping from 512 to 2048 and recently to 4096. This is even more tedious in a zero-knowledge system where the keys can only be unlocked with user authentication but deadlines for updating exist...
I'm not positive, but I don't think elliptic curves have the same issue, or at least their key lengths have longer predicted life spans.
As far as I have been able to figure, 2048 bit RSA is good for the ages. It would take some fundamental breakthrough like quantum computing to break. No conceivable incremental improvement in current computing technology will come close to touching it.
Estimates show that state-level factorisation will be readily possible by 2030, and academic-scale factorisation five to ten years after that (this is just using classical computers).
State level factorization is probably somewhere in the 1024 bit RSA range with a Manhattan Project level of effort. The extra difficulty when going to 2048 bits is around a billion (1E9). So that would mean that the estimate assumes that we are going to be able to increase our computing capability by a factor of a billion in ten years. That seems very unlikely to me.
Moore's law has the number of transistors doubling every 2 years (not necessarily faster transistors, just more transistors). So over 10 years we get 2^5=32. That falls well short of a billion, and it is generally accepted that current technology is going to run into fundamental physical limits fairly soon.
Ignoring physical limits, if Moore's law holds it works out to 60 years for state level attacks on 2048 bit RSA.
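For the curious, the billion-ish factor falls out of the heuristic running time of the general number field sieve; a back-of-the-envelope sketch (constants and o(1) terms ignored):

```latex
L(N) \approx \exp\!\left((64/9)^{1/3}\,(\ln N)^{1/3}\,(\ln\ln N)^{2/3}\right)

\frac{L(2^{2048})}{L(2^{1024})} \approx \frac{e^{81}}{e^{60}} = e^{21} \approx 10^{9}
```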
> State level factorization is probably somewhere in the 1024 bit RSA range with a Manhattan Project level of effort. The extra difficulty when going to 2048 bits is around a billion (1E9). So that would mean that the estimate assumes that we are going to be able to increase our computing capability by a factor of a billion in ten years. That seems very unlikely to me.
I mean, Moore's law is slowing, but that doesn't stop them from just, you know, expanding their computer footprint. To be precise, the NSA (or is it the NRO?) is preparing a warehouse-sized supercomputer, and it is conceivable that other countries are keeping pace.
Plus, after the "let's rely on Moore's law" era, chip design has seen another boost of investment, and it's paying off. IPC, despite clocks hovering around 5 GHz, is increasing, and specialised chips and immersion and/or sub-zero cooling can boost this further. It's rather exciting after the relative stagnation of the last decade.
No they are not. It is just some sort of internet legend going around right now. If I had to play the odds I would consider RSA more secure than curves simply because RSA has been solid for much longer than the current curves have existed.
There's a cherry on top there though: shorter RSA keys are easily breakable now, so most recommendation now focuses on extending from the defaults. It doesn't help that the practical default for SSH was 1024-bit until 2019-ish (OpenBSD folks: yes, OpenSSH did move to 2048-bit RSA way earlier, but OpenSSH builds on other OSes don't move that fast).
For me: great smartcard support. Smartcards with EC are still rare. I think the latest yubikeys can do it with OpenPGP but it takes some fiddling and I also use physical cards.
And really, RSA-2048 is more than sufficient to keep all but the most funded hackers out. And if those target me, they'll get in anyway.
If you are using DH kex (RSA), then you probably should regenerate the moduli file instead of just taking the distro's version and filtering it as outlined in your link.
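For reference, regenerating moduli with ssh-keygen looks like this (older `-G`/`-T` flags shown; OpenSSH 8.1+ prefers the equivalent `ssh-keygen -M generate` / `-M screen`):

```
# Generate candidate primes (fast-ish)...
ssh-keygen -G moduli-4096.candidates -b 4096
# ...then screen them for safe primes (can take many hours)
ssh-keygen -T moduli-4096 -f moduli-4096.candidates
# Install the result as /etc/ssh/moduli and restart sshd
```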
Port 22: I always change it to something else and bots leave my servers alone. I wonder why they don't try all the ports. I understand it's 16 bits but they should do it only for the addresses that don't answer on port 22.
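For anyone wanting to do the same, it's one directive on the server, plus optionally a client-side alias so you don't have to remember the port:

```
# /etc/ssh/sshd_config (server); the port number is arbitrary
Port 2222

# ~/.ssh/config (client)
Host myserver
    HostName server.example.com
    Port 2222
```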
The real search space is 48 bits (32-bit IP + 16-bit port): 281,474,976,710,656 combinations. Searching at 1 million destinations per second, it'd take ~9 years. So the options are either to build something in the billion-destinations-per-second range (completely feasible, just more work to not get blocked) or to limit the search space. Throwing out unadvertised IP space, and then throwing out around 99.998% of the remaining destinations by only probing port 22, lets you scan the entire internet extremely quickly with the smallest of scanning setups.
But without blocking China it's just non-stop, to the point of consuming a noticeable amount of hard drive space for logs and making it annoying to read the logs when looking for other things.
Back in the EFNet days everyone with a lick of sense autobanned all of Eastern Europe, Asia, and South America. It was literally nothing but script kiddies.
I have been asking colleagues this informally for decades now, but I will ask it again: if the majority of security best practices are identical (i.e. "disable these settings asap"), why are the default settings the way they are? And what would it take to change them to be secure by default?
The defaults assume your appetite for security risk is much greater than your appetite for interop failure.
Imagine two products that could easily exist, with almost the same features:
A by default works just fine, but there's a risk an adversary spends $10M to attack you. You can tweak the config and replace all the OmniCorp CheapDevices in your estate to get rid of that attack, which is best practice.
B by default requires that you replace all the OmniCorp CheapDevices in your estate or it won't work. You can tweak the config, and risk an adversary spending $10M to attack you, so that the OmniCorp CheapDevices still work.
Which one do you buy? You buy A of course. Nobody is going to sign off on the budget to replace all the OmniCorp CheapDevices. What's this nonsense about a $10M attack anyway?
At some point the risk is judged to be too high, and people flip the defaults for those risks. For example, OpenSSH decided the risk of SHA-1 authentication is now too high, as the Web PKI had years before, despite SSH being a much less plausible target than the Web PKI for various reasons.
Which ones aren't secure by default? When you install a new OS like Debian, those defaults will be correctly set (and depending on your organization, you may change some settings around to fit your needs).
However, if you customized your config in, e.g., Debian 6 and upgraded to 11, you may want to revisit those settings and change them.
Not really SSH, but Kerberos on many distros allows extremely weak ciphers by default. And when I say weak, I mean these should have been disabled a decade ago. On Ubuntu 20.04 with the default setup, keytabs using DES are allowed...
Not sure about that, but from my understanding, with Kerberos the encryption level doesn't matter as much, because cracking DES from such a small number of bytes still shouldn't be feasible.
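If you do want to tighten it regardless, the knobs live in krb5.conf; a hedged sketch for MIT Kerberos (verify against your KDC and clients before deploying):

```
# /etc/krb5.conf
[libdefaults]
    # refuse single-DES and other legacy enctypes outright
    allow_weak_crypto = false
    permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```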
I don't think the defaults are bad. I think the defaults are generic. And I don't have a generic machine. Nobody does. Old SSH clients don't need access to my server. So I think it is a good idea to remove old ciphers.
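A sketch of what that trimming can look like in sshd_config (the exact set you keep should be checked against the clients you actually have):

```
# /etc/ssh/sshd_config: illustrative modern-only selection
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
HostKeyAlgorithms ssh-ed25519
```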
The defaults support the largest number of client SSH implementations. If you're doing a de novo SSH implementation, you can almost certainly just support the newest clients, and thus get a better config.
I'm pretty sure some of those are already secure by default (PermitEmptyPasswords). Some others don't have a sane general setting (AllowUsers) or depend completely on your local environment (KerberosAuthentication). Some are compatibility issues (PasswordAuthentication, cipher selection). And finally, some are ... more controversial than defaults should be (Port, PermitRootLogin).
The software authors can’t change the default settings, since many users have config files where a setting isn’t specified, and these users would, when they upgrade, get an unasked-for change, which might break their workflow. The software authors can change the defaults in major version releases, but even that is frowned upon by those who would be affected by the change. Therefore, this usually does not happen until a real security issue is caused by the old state of things.
The downstream package maintainers for various operating systems or distributions have some more leeway in changing the defaults, but here, also, they have to bow to the impact which a change might have on real-world users. Indeed, since these package maintainers are closer to the actual affected end users, it has been known to happen that upstream authors have changed a default value to be more secure, but the real-world impact has been so large that the package maintainers have reverted this change in the packaged versions of the software, essentially making the software more insecure in the name of compatibility. So, package maintainers are more flexible, but are also more beholden to the wishes of users who might be adversely affected by any changes.
The problem is that a lot of the "disable asap" steps require a step before them so that the machine is still accessible afterwards (such as adding a non-root user before disabling root login, or adding an authorized key before disabling password login).
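The usual ordering, sketched with Debian-style commands (group and service names vary by distro):

```
# 1. Create a non-root user and grant it sudo ("wheel" on RHEL-likes)
adduser alice
usermod -aG sudo alice

# 2. From your workstation, install a key for that user
ssh-copy-id alice@server

# 3. Only now set, in /etc/ssh/sshd_config:
#      PermitRootLogin no
#      PasswordAuthentication no
#    ...then reload sshd, keeping your current session open as a fallback.
```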
Tailscale and Teleport are similar, but operate at different levels of the network stack. Tailscale governs access and routing at L3 in the OSI model. See Hashicorp's Boundary or VPNs for alternatives. As a generalization, Teleport works at L7 -- doing auth and routing at the application protocol (ssh, psql, k8s) level.
There are ups and downs to both: L3 is relatively technology agnostic (e.g. you don't need different support for connecting to a database vs ssh). L7 auth & routing gives greater protocol introspection, but means more work to support different use cases.
Depending on your scale and use case, the right answer may be both: do 2FA for both network access (are you allowed to send packets to the ip:port?) and application access (are the packets you send allowed to sign in to the database as an intern or an admin?). The most important part is to get a hardware token and SSO on the path to access.
Disclosure: I work for Teleport. I also think Tailscale is awesome and run it for my home lab.
We use ZSSH based on OpenZiti so that the SSH client itself has zero trust, private connectivity embedded in the SSH client (i.e., clientless) - https://ziti.dev/blog/zitifying-ssh/
Doesn't mention passphrase-protecting the keys, or anything like TOTP, which isn't all that hard to set up (albeit a bit more complex for a simple guide). Nor any reactive address blocking.
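For what it's worth, passphrase-protecting a key is a single command, and it works retroactively on existing keys:

```
# Add or change the passphrase on an existing private key
ssh-keygen -p -f ~/.ssh/id_ed25519
```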
Also for some of the algorithms, AFAIK you may need some configuration changes to the ssh client which you are using to connect. Both client and server need to use the same algorithm.
The algorithms used will be negotiated. So, unless your SSH client is unwilling to use any of the acceptable algorithms, it will just work.
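You can check what your client build supports before touching the server config:

```
ssh -Q cipher   # supported ciphers
ssh -Q kex      # supported key exchange algorithms
ssh -Q mac      # supported MACs
ssh -Q key      # supported key / certificate types
```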
For the server's proof of its identity, one gap in older SSH versions is that the client doesn't learn other host keys. So if your client is content with Archaic-host-key, even though the server has been telling anybody new about Shiny-modern-host-key, when the server finally removes Archaic-host-key the client can't verify this server. In modern OpenSSH UpdateHostKeys controls this in clients and defaults to learning new host keys in the most obvious cases.
Openssh allows "weak" ciphers as long as they are still "strong enough" for some values of weak and strong enough, typically where there is no known or imminent way to attack it even if it isn't as strong as other options.
Switching to only ciphers currently known to be strong puts you a level above that where even if a new weakness was found (publicly or not) it may not be enough to make it breakable.
Whether most people really need to worry about any of that is another story. I'd say if your job isn't to research how to secure SSH better, don't go changing ciphers just because you read it in a blog post. If it is your job, consider this one data point as you dive into understanding the nerd knobs.