Limit Fail2Ban memory usage?

I have a new install of Virtualmin on CentOS 7 with 2GB of RAM, and Fail2Ban is using more than 1.25GB. Is this normal? Or is there a way to limit its RAM usage?

I found the ‘ulimit -s 256’ option but adding it to F2B’s config prevents the service from starting.
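For what it's worth, on a systemd distro like CentOS 7 the same limit can also be applied as a service drop-in instead of a ulimit line in the config. This is only a sketch of the equivalent (the drop-in path is just the usual convention), and as the replies below point out, capping fail2ban this way mostly makes it fail rather than shrink:

# /etc/systemd/system/fail2ban.service.d/override.conf (hypothetical drop-in)
[Service]
# systemd equivalent of 'ulimit -s 256' (256 KiB stack limit)
LimitSTACK=256K

# then reload and restart:
systemctl daemon-reload
systemctl restart fail2ban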

Likely normal. On my CentOS 7 VM with 8 GB of RAM, Virtualmin reports 3.7 GB in use for fail2ban, but the resident working set is usually much smaller; you need command-line tools to find it.

Command: /usr/bin/python -s /usr/bin/fail2ban-server -xf start
ID: 1485    Parent process: /usr/lib/systemd/systemd --swi .
Owner: root    CPU: 1.7 %
Size: 3.76 GiB    Run time: 06:14:22
Nice level, IO scheduling class, IO priority: (blank)
Process group ID: 1485    TTY: None
Real user: root    Real group: root    Group: root
Started: Feb25 (1582606800)
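To see what it is actually holding resident, something like this from a shell works (pgrep -f is just one way to find the PID; top or htop show the same RES column):

# resident set size (RSS) vs virtual size (VSZ), in KiB, for the fail2ban server
ps -o pid,rss,vsz,cmd -p $(pgrep -f fail2ban-server)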

Thanks, this helped.

However, I still have an issue with f2b using up all of the system memory and eventually crashing Webmin. I've disabled it for now, but that's not ideal.

You really should have 8 GB of RAM minimum for Virtualmin to run with the best setup for security and performance.

Using ulimit doesn’t make sense. It just makes memory allocations fail for fail2ban so it’ll exit. If you don’t want it to run, just turn it off. :wink:

Actual resident size is the more useful information, and you're not seeing it there. (We need to fix that process page: it shows virtual memory usage, and since fail2ban has an astronomical virtual size but a more manageable, though still pretty big, resident size, it gives a wholly inadequate picture.) Anyway, there is no way that I know of to shrink fail2ban, and there is also no way to limit it. But it's not as bad as you think (still big, though).

If memory is at a premium, you might consider disabling it and switching to something smaller like sshguard. I use it on my small devices these days. It does a lot less, but is tiny and is written in C, so it’s much less demanding of other resources, as well.
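On CentOS 7 the swap would look roughly like this (a sketch only; sshguard is assumed to come from EPEL there, and it still needs its firewall backend configured before it actually blocks anything, so check its documentation first):

# remove fail2ban and install sshguard (CentOS 7, assuming EPEL is already enabled)
systemctl stop fail2ban
systemctl disable fail2ban
yum remove fail2ban
yum install sshguard
systemctl enable sshguard
systemctl start sshguard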


So further testing suggests that somewhere between 5-6GB of RAM is required to run Virtualmin with Fail2Ban and 25 or so virtual servers. Below that threshold my server was staying up, but the Virtualmin/Webmin interface would crash due to lack of RAM. I upgraded to 8GB on my VPS and all has been running perfectly.

It does look as if sshguard does much the same thing but with fewer resources, so I'll try that on my backup VPS with 4GB of RAM and see how things go. Is it simply a case of removing f2b with yum and installing sshguard?

I cannot agree with you @substandard that Virtualmin needs 5-6 GB of RAM to run with Fail2Ban. I am getting adequate performance from servers which have less RAM.

I have an Amazon Lightsail VPS with 1 GB RAM hosting 46 virtual servers and 236 users who send / receive 10 GB of email a day. After I tweaked Virtualmin I was able to run BIND and SpamAssassin (along with Fail2ban but without MySQL) quite comfortably in production. See attached screenshot vps02 of the Virtualmin Dashboard.

On another VPS I have 4 GB of RAM and host 100 virtual servers consisting of a mix of WordPress, TikiWiki CMS, Microweber and SuiteCRM with great TTFB for users in India by tweaking Virtualmin, Apache and PHP-FPM. See https://calport.com/article67-Fast-web-servers-dare-to-compare-TTFB-and-FCP and the attached screenshot vps01 of the Virtualmin Dashboard.

Here is what memory usage looks like on vps02. Fail2ban footprint is under control.

That’s not normal. 2GB is about the minimum for a web-only system (no AV/spam filtering, but small database apps are OK). Something pathological is happening, though I don’t know what. My instances of fail2ban, even in the biggest case, are only a few hundred MB in RSS. Maybe check fail2ban logs for errors or some indication of why it’s growing so large?

Are you also running ClamAV? In my experience that’s always the biggest single process. I think it’s up to 800MB or so RSS these days. Fail2ban on most of my systems hovers around 35-40MB RSS and 1GB Virt. I have a bunch of processes that are bigger than fail2ban (MySQL, Apache, php-cgi, etc.). So, if you want to kill something that’ll notably shrink real memory usage, clamd is the one I’d start with (and it’s of marginal utility, anyway, unless you have non-technical mail users who might not be up to date on their OS).
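If you do decide to drop clamd, it's roughly this on CentOS 7 (a sketch; the exact unit name depends on how ClamAV was packaged, and you'd also want to turn off virus scanning in Virtualmin so mail delivery doesn't keep trying to call a scanner that isn't running):

# stop and disable the ClamAV daemon; the unit may be clamd@scan, clamd.scan or similar
systemctl stop clamd@scan
systemctl disable clamd@scan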

Also, increase swap. Crashing is bad. If it’s crashing because memory is running out, give it more memory and stop the crashing. (But, also sort out why something, maybe fail2ban, is hogging your memory.)
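If the VPS has no swap at all, adding a swap file only takes a minute; a sketch for a 2 GB file (adjust size and path to taste):

# create and enable a 2 GB swap file
fallocate -l 2G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make it permanent across reboots
echo '/swapfile none swap defaults 0 0' >> /etc/fstab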

My poor understanding of Linux memory architecture is definitely a factor here. Here's the difference between the Fail2Ban memory usage reported in Webmin vs. htop.

Looking at the RES column, Fail2Ban is actually using ~600MB, which is fine.

However, when my server is configured with 4GB RAM or less, the Webmin interface crashes after 12-15 hours; moving to 8GB immediately solved this. It didn't occur to me to set up swap space, which would have made more sense (and saved me some monthly overhead).

Interesting, I found one of my servers also has a ~600MB fail2ban (while most are in the 30-40MB range). Weird. I’m gonna look into that, because it feels wrong. I don’t know what makes it grow (maybe activity?).

I found this: https://github.com/fail2ban/fail2ban/issues/2045#issuecomment-364564285

Which doesn’t seem like it should apply to the version of fail2ban we all have (0.9.7 on CentOS 7), but when I purged the database as suggested, fail2ban usage dropped down to the expected ~30 MB RSS after a restart.
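The simplest form of that purge, if you don't mind throwing away the ban history, is roughly this (a sketch, not necessarily the exact steps from that issue; fail2ban recreates the database on start):

# stop fail2ban, move the persistent database aside so it is recreated empty, restart
systemctl stop fail2ban
mv /var/lib/fail2ban/fail2ban.sqlite3 /var/lib/fail2ban/fail2ban.sqlite3.bak
systemctl start fail2ban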

I think maybe you (and me, on this one server) are seeing a bug.

Oh, but 0.10.5 is available in EPEL now. Can you check what version of fail2ban you have? (rpm -q fail2ban if on CentOS, or dpkg -s fail2ban on Debian/Ubuntu)

rpm -q fail2ban-server
fail2ban-server-0.10.5-2.el7.noarch

I also did the DB purge and restarted Fail2Ban; it now looks like this:

I had a similar issue on my CentOS 7 system.
A cron job that purges the fail2ban database daily fixed the symptom.
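Roughly what such a cron job can look like (a sketch; the table and column names are from fail2ban 0.10.x, so verify them against your own database, and the VACUUM may fail with "database is locked" if fail2ban happens to be writing at that moment):

#!/bin/sh
# hypothetical /etc/cron.daily/fail2ban-db-purge
# delete bans older than 7 days from fail2ban's persistent database and reclaim the space
sqlite3 /var/lib/fail2ban/fail2ban.sqlite3 \
  "DELETE FROM bans WHERE timeofban < strftime('%s','now','-7 days'); VACUUM;"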

Hi, that is strange… I've been running f2b for years and my stats look like this:

root@host:~# free -h
              total        used        free      shared  buff/cache   available
Mem:          3.8Gi       568Mi       848Mi       101Mi       2.4Gi       2.9Gi
Swap:         4.6Gi       340Mi       4.2Gi
root@host:~# 

Also, it does block a lot of sshd and apache attacks, as well as sasl and others, including bad bots. I haven't rebooted the system for about 16 months now…

EDIT: the size of my f2b db is around 2 gigs… it's located at /var/lib/fail2ban/fail2ban.sqlite3, at least on Debian, so that should not be a problem. Also, you might check this solution here: https://serverfault.com/a/1002350
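Related to that: fail2ban has a dbpurgeage setting that controls how long old bans are kept in that sqlite file, so the database does not have to grow without bound. A sketch of an override (accepted value formats vary between fail2ban versions; 0.10+ takes suffixes like 1d):

# /etc/fail2ban/fail2ban.local -- overrides fail2ban.conf
[Definition]
# keep only one day of ban history in the persistent database
dbpurgeage = 1d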