Bad faillock defaults
Author Message
wanne32
n00b


Joined: 11 Nov 2023
Posts: 69

PostPosted: Thu May 16, 2024 12:53 am    Post subject: Bad faillock defaults

I created a new account and was more or less instantaneously locked out.
It turns out this is enough to easily DoS a default Gentoo host with almost no computing power or bandwidth:
Code:
while sleep 10
do sshpass -p $RANDOM ssh user@host
done

I would say setting local_users_only, or removing faillock.so from the defaults entirely, would be a much better approach.
Yes, it makes bad passwords easier to attack. But if you use bad passwords, you should take precautions yourself, since that is not advisable anyway.
But making any default installation susceptible to such an easy DoS seems much worse to me.
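Roughly what I have in mind, as a sketch of /etc/security/faillock.conf (the option names are from the faillock.conf man page; the values are just examples, not a recommendation):
Code:
# /etc/security/faillock.conf
# only track failures for accounts that exist in /etc/passwd
local_users_only
# require more consecutive failures before locking
deny = 10
# unlock automatically after 10 minutes
unlock_time = 600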

What do you think?


Last edited by wanne32 on Thu May 16, 2024 11:49 am; edited 1 time in total
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22578

PostPosted: Thu May 16, 2024 1:18 am

When you say it allows a denial-of-service, what exactly is being denied? Is it only that faillock locked the failed account, as designed? Is the remote peer also able to force the Gentoo system to use a large amount of CPU time or available memory, with only minimal expenditure from the remote peer? If you configure sshd to disallow password authentication, is this still a problem?

Why did your account get locked immediately on creation? Are you allowing Internet bots to probe the system, and they triggered faillock?

I think users who pick bad passwords will tend to leave all other security settings at default too. Therefore, if a strict faillock is the only thing protecting those people today, changing faillock not to apply to remote ssh password attempts will leave them with no protection. I prefer that the system default-enable security features that do not break standard user workflows, and that the system require that a power user choose to disable that protection when it is not useful. This protects the users who do not understand their system, while still allowing those who know how to enable alternate security measures to switch over. For example, a user who has the awareness and ability to enable fail2ban can delegate security to that, and not need the protection of faillock. However, since fail2ban is not enabled by default, users who do not choose to activate it will not have it.
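To make the fail2ban example concrete: "enabling it" is roughly one file plus starting the service. This is only a sketch with made-up numbers, assuming the sshd jail that ships with fail2ban:
Code:
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
A user who does that (and adds the service to the default runlevel) has delegated the ssh side to fail2ban; a user who does neither still benefits from faillock.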
wanne32
n00b


Joined: 11 Nov 2023
Posts: 69

PostPosted: Thu May 16, 2024 8:43 am

Quote:
Is it only that faillock locked the failed account, as designed?
Which is another word for making the system unavailable.
Quote:
Are you allowing Internet bots to probe the system, and they triggered faillock?
It is a host that is connected to the internet.
Quote:
This protects the users who do not understand their system, while still allowing those who know how to enable alternate security measures to switch over. For example, a user who has the awareness and ability to enable fail2ban can delegate security to that, and not need the protection of faillock.
The problem is: if you have no other "protection" (fail2ban is just another tool for introducing DoS (and maybe other) vulnerabilities to systems that wouldn't be vulnerable otherwise. Also it has become much less of an issue since they switched away from reading plain-text logs.) it just renders your system useless. If you have them in place anyway, you do not need faillock.
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3404

PostPosted: Thu May 16, 2024 10:47 am

I've never run into this problem on my machines, and from your description alone I can't really tell what you're actually complaining about.
Does it allow a remote attacker to disable a local login for the account? That would be the only thing that could be considered a bad default. Otherwise the tool is working as intended.

Quote:
The problem is: If you have no other "protection" (fail2ban is just an other tool for introducing DoS

If you have no protection and can't deal with the consequences, that is a you problem :lol:
Disposing of tools that are getting in your way is a perfectly valid solution, as long as you use your brain so you don't shoot yourself in the foot.
wanne32
n00b


Joined: 11 Nov 2023
Posts: 69

PostPosted: Thu May 16, 2024 11:40 am

Quote:
Does it allow a remote attacker to disable a local login for the account?
Yes. All an attacker needs is your username and to try to log in three times a minute – which is miles away from overloading your system. It even prevents you from logging in locally.
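You can watch it happen from a root shell with the faillock tool (the username is a placeholder):
Code:
# list the recorded failures for the account
faillock --user someuser
# clear them so the account can log in again
faillock --user someuser --reset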
Quote:
If you have no protection and can't deal with the consequences, that is a you problem
Like I said: it wouldn't be a problem if faillock weren't activated. If I create a user with a password, I expect to be able to log in with that password. I expect attackers to do bad stuff, but Gentoo is a source I trust. And if someone I trust does something I didn't expect and it hurts me, I do not consider that only my problem. This is why we talk about what the expectations should look like.

I usually disable password and challenge-response authentication for ssh on multiuser systems. But then I communicate that to my users very loudly, and it is a very obvious thing: you can test it, and SSH won't announce password auth. With this it is very different: you test it in a protected environment and everything seems fine. Then you put it in production and nothing works any more.
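For reference, disabling it is just a couple of sshd_config keywords (on older OpenSSH releases the second one is still spelled ChallengeResponseAuthentication):
Code:
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
# check the config with "sshd -t" before restarting sshd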
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3404

PostPosted: Thu May 16, 2024 12:45 pm

I see. Yes, it makes sense now. Gentoo provides a drop-in config file for sshd that enables PAM support, and I suppose that could result in the behavior you described.
I have it disabled on my systems, so faillock only rate-limits local logins and is never triggered by ssh login attempts. I protect SSH with keys and a firewall instead. Still, calling it a DoS vulnerability is... arguable.
Fortunately, this is easy to override with either another drop-in config, or by disabling includes completely in /etc/ssh/sshd_config (I went that way, since I don't really want installed packages to change my sshd settings).
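As a sketch of the drop-in route (the filename is made up; it only matters that it sorts before the Gentoo-provided snippet, because sshd keeps the first value it sees for each keyword):
Code:
# /etc/ssh/sshd_config.d/00-local.conf
# don't run ssh logins through PAM (and therefore not through faillock)
UsePAM no
Keep in mind that UsePAM no also skips PAM account and session processing for ssh logins, so make sure nothing else on the box relies on that.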
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22578

PostPosted: Thu May 16, 2024 2:12 pm

Coming back to one earlier statement first:
wanne32 wrote:
I would say setting local_users_only, or removing faillock.so from the defaults entirely, would be a much better approach.
As I read the documentation, local_users_only will not help with your complaint. It says:
Code:
       local_users_only
           Only track failed user authentications attempts for local users in
           /etc/passwd and ignore centralized (AD, IdM, LDAP, etc.) users.
This reads to me like it only matters if you use a centralized authentication system. For the common Gentoo setup that users are locally managed, this flag will not impact the behavior about which you are complaining.
wanne32 wrote:
Hu wrote:
Is it only that faillock locked the failed account, as designed?
Which is another word for making the system unavailable.
No, it made one specific account unavailable. The rest of the system remains up and usable.
wanne32 wrote:
Hu wrote:
Are you allowing Internet bots to probe the system, and they triggered faillock?
It is a host that is connected to the internet.
You put it directly on the Internet, allowing unknown machines to connect to it with no speed limiter and with no firewall to protect it? That is just asking for trouble, whether or not faillock is enabled. Personally, I would prefer a system that locks down in that case over one where it can be under attack with no obvious notification to me.
wanne32 wrote:
Hu wrote:
This protects the users who do not understand their system, while still allowing those who know how to enable alternate security measures to switch over. For example, a user who has the awareness and ability to enable fail2ban can delegate security to that, and not need the protection of faillock.
The problem is: if you have no other "protection" (...) it just renders your system useless. If you have them in place anyway, you do not need faillock.
If you have no other defenses, then yes, specific accounts with names that Internet bots guess and try to authenticate to will get locked. I then come back to my earlier position: this default is good for users who do not care to think about securing their system, and can be reverted by users who have taken other measures (such as very high quality passwords or disabling password authentication entirely).
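For completeness, "reverting" means editing the pam_faillock lines in the PAM auth stack. The pattern below is simplified from pam_faillock(8) and is not the literal Gentoo file; commenting out the pam_faillock entries disables the lockout, while adjusting deny/unlock_time in /etc/security/faillock.conf is the softer option.
Code:
auth     required                  pam_faillock.so preauth
auth     [success=1 default=bad]   pam_unix.so
auth     [default=die]             pam_faillock.so authfail
auth     sufficient                pam_faillock.so authsucc
account  required                  pam_faillock.so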
wanne32 wrote:
(fail2ban is just another tool for introducing DoS (and maybe other) vulnerabilities to systems that wouldn't be vulnerable otherwise.
How does fail2ban introduce a denial-of-service to legitimate users? When configured properly, it should only ban clients that fail repeatedly. Legitimate users should not be failing like that.
wanne32 wrote:
Also it has become much less of an issue since they switched away from reading plain-text logs.)
Could you explain this statement? The only way I can parse it is that some logging system is not usable as an input to fail2ban, and thus fail2ban does not work with that system. However, if you don't want fail2ban to work, you could disable it instead of denying it the input it needs to work properly.
wanne32 wrote:
Then you put it in production and nothing works any more.
That seems like your test environment is not a good replica of production. The test environment ought to be under constant attack too, otherwise you miss testing services that react poorly to attacks.

This last statement of yours is the first indication I see that this is anything other than a personal system on a home network. Are you using this setup in an environment where you expect legitimate users to be connecting from the Internet?
wanne32
n00b


Joined: 11 Nov 2023
Posts: 69

PostPosted: Thu May 16, 2024 6:22 pm

Hu wrote:
How does fail2ban introduce a denial-of-service to legitimate users?
fail2ban's runtime is quadratic in the number of failed attempts, since it scans all previous failed attempts. Back when there was no MySQL backend, you could easily overwhelm a few fail2ban hosts with an equal number of attackers using far fewer resources.
This has improved greatly with MySQL. While the theory stays the same, bandwidth is usually the limiting factor long before the host is rendered unusable by CPU load (or, more exactly, memory bandwidth), especially because the database is cleared out regularly – so you cannot stretch your attack over a long period of time.
Still, it encourages extensive logging. The DNS of our ISP went down when our government ordered rt.com to be blocked – it was simply logging all the "malicious" requests, resulting in an overwhelmed logging backend (journald? disk?) that made it impossible to answer any valid request in time.
And although I do not know of any attacks in that direction, this opens up a whole new attack surface named MySQL, with all its injection possibilities.
It reminds me a little of the iptables SYN-flood protection, which uses orders of magnitude more CPU time than the kernel does (with SYN cookies enabled), making the attack worse. And again nobody cares, because most CPUs can handle all the bandwidth they are connected to – even with iptables. But it is still a horrible idea. At least you can say "I have done something against it!"
Hu wrote:
When configured properly, it should only ban clients that fail repeatedly.
It usually bans IP addresses. With IPv4, for most European mobile providers that is an entire region with hundreds of thousands of users behind one address; with IPv6 you usually get a big enough chunk of IP addresses for millions of tries without being banned. So it is no protection against attackers who choose their internet connection wisely, but an easy way for bad actors who want to lock out their boyfriend/girlfriend etc. – just use the same ISP and hope the victim is using one of the many dumb Android devices that have IPv6 disabled on cellular networks by default. The operator will blame them for choosing a "bad" ISP.
Hu wrote:
The test environment ought to be under constant attack too, otherwise you miss testing services that react poorly to attacks.
Yes. I know everybody will now say differently. But no – nobody can simulate a full environment with all its network traffic. Yes, you do some basic performance tests. Maybe you run Nessus and co. over it and let john or haveibeenpwned check the passwords. But the everyday background chatter of stupid auth attempts with stupid passwords? It is proven well enough that OpenSSH can handle that. You do not assume that there is a third-party module that gets killed by wrong password attempts.

Hu wrote:
Are you using this setup in an environment where you expect legitimate users to be connecting from the Internet?
Yes. Or more exactly, a "private" network with usually a few tens of thousands of active clients. Background chatter is normal... I'm pretty sure I won't chase after everyone who is trying to log in with @gentoo:sex or similar.
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3404

PostPosted: Thu May 16, 2024 7:28 pm

Quote:
fail2ban's runtime is quadratic in the number of failed attempts, since it scans all previous failed attempts

1) That's new to me; I think f2b was effectively tailing log files when I was using it.
2) Already banned IPs never connect again, so they stop generating new log entries.
3) logrotate

That said, I went with iptables' hashlimit on new connections to rate-limit connection attempts per IP (regardless of result). Learning the Gentoo-specific part of f2b is more effort than enabling session multiplexing with a persistent master in the ssh client (system-wide), and most things would work even without it.
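Roughly like this, if anyone wants to replicate it (the numbers are examples; tune them to how often your own clients reconnect):
Code:
# accept a handful of new ssh connections per minute per source IP, drop the rest
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-name ssh --hashlimit-mode srcip \
  --hashlimit-upto 4/minute --hashlimit-burst 8 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j DROP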