o5gmmob8 l33t

Joined: 17 Oct 2003 Posts: 618
Posted: Sun Jun 01, 2025 6:05 pm Post subject: nftables - firehol blocks |
I would like to incorporate the ipsets from firehol:
github.com/firehol/blocklist-ipsets.git
The problem I'm having is that the volume of addresses is huge, and I'm not sure whether nftables can handle that size. If I use all of the files in the main directory ending in .ipset, there are roughly 750k records.
I tried to load these into a set with:
Code: | xargs -a <FILENAME> -d '\n' -n 5000 |
sed -e 's/ /, /g' |
xargs -I {} nft add element ip global <TABLE_NAME> { {} } |
But I aborted that after 15 minutes, as it seemed unreasonably long to load.
1. Is there a more efficient way to load the set?
2. Once it is loaded, will I have any lingering performance issues?
I ask #2 because, with the partial set loaded, I noticed that subsequent calls to resolve names to IPs (to update my other sets) were slow and consumed quite a bit of CPU time.
My set declaration was:
Code: | set ip_blocks_et {
    type ipv4_addr
    flags interval
    auto-merge
}
set ip_blocks_firehol {
    type ipv4_addr
    flags interval
    auto-merge
} |
ET is for emerging threats and the volume there is considerably smaller.
Ideally, I would keep these sets updated every day, but my notes from a while back say to update them every 4 hours. I'm not sure that's still accurate, but if upstream updates every 4 hours or less, I'd like to refresh to match, with the process completing in under 30 seconds ideally. It would be a background job, but I'd still like to be able to debug it in a reasonable amount of time.
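For concreteness, the preprocessing I have in mind is roughly the following sketch. The paths and the demo data are placeholders (a real clone of blocklist-ipsets has hundreds of files), and it assumes the .ipset files carry one address or CIDR per line with '#' comment lines:

```shell
# Merge every .ipset file from a local clone of blocklist-ipsets into a
# single deduplicated, comment-free list. REPO_DIR and the demo file are
# placeholders standing in for the real repository contents.
REPO_DIR=./blocklist-ipsets
mkdir -p "$REPO_DIR"
printf '# sample header\n192.0.2.1\n198.51.100.0/24\n' > "$REPO_DIR/demo.ipset"

cat "$REPO_DIR"/*.ipset |
    grep -v '^#' |                  # drop comment/header lines
    sort -u > ./firehol_all.txt     # deduplicate across lists

wc -l < ./firehol_all.txt           # ~750k lines with the real repo
```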
user Apprentice

Joined: 08 Feb 2004 Posts: 239
o5gmmob8 l33t

Joined: 17 Oct 2003 Posts: 618
Posted: Tue Jun 03, 2025 12:00 am Post subject: |
That seems to just provide translation, which I'm already doing with my script. The problem is that it is slow due to the volume of 750k records. I'm processing 5k records at a time, but even if 5k at a time is reasonable, once the set gets large the system seems to get bogged down.
I have 24 GB of RAM, and only 6.5 GB shows as in use.
user Apprentice

Joined: 08 Feb 2004 Posts: 239
Posted: Tue Jun 03, 2025 5:46 am Post subject: |
There's no need to process each entry with its own nft add element call.
Create a text file with your 750k entries in nftables set syntax and load that file at nft startup with the "include" directive.
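Roughly like so (a sketch only; the file names and demo data are placeholders, and it assumes one IPv4 address or CIDR per line in your input):

```shell
# Build an nftables include file that declares the set with all of its
# elements inline; loading it is then a single parse, not 750k commands.
ADDR_FILE=./addresses.txt
printf '192.0.2.1\n198.51.100.0/24\n' > "$ADDR_FILE"   # demo input

{
    printf 'table ip global {\n'
    printf '\tset ip_blocks_firehol {\n'
    printf '\t\ttype ipv4_addr\n'
    printf '\t\tflags interval\n'
    printf '\t\telements = { '
    paste -sd, "$ADDR_FILE"        # join all lines with commas
    printf ' }\n\t}\n}\n'
} > ./firehol.nft

# In the main ruleset (e.g. /etc/nftables.conf):
#     include "./firehol.nft"
# then load everything with one invocation: nft -f /etc/nftables.conf
```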
RumpletonBongworth Tux's lil' helper


Joined: 17 Jun 2024 Posts: 108
Posted: Fri Jun 13, 2025 6:14 pm Post subject: Re: nftables - firehol blocks |
o5gmmob8 wrote: | I tried to load these into a set with:
Code: | xargs -a <FILENAME> -d '\n' -n 5000 |
sed -e 's/ /, /g' |
xargs -I {} nft add element ip global <TABLE_NAME> { {} } | |
You should synthesise a command stream containing a single "add element" command. Assuming that your input file contains a series of one-or-more newline-terminated IPv4 addresses, it need be no more complicated than the following.
Code: | { printf 'add element ip global %s { ' "SET_NAME"; tr '\n' ,; printf ' }'; } < "FILENAME" | nft -f - |
Such could easily be made to be a shell function or utility in its own right. For example:
Code: | add_to_set() { local IFS; { printf 'add element %s { ' "$*"; tr '\n' ,; printf ' }'; } | nft -f -; }
add_to_set ip global SET_NAME < FILENAME |
Note that if you wish for the elements being added to wholly replace the elements that were already present, you need only produce a "flush set" command that precedes the "add element" command. The important thing is to ensure that nft is executed exactly once for the resulting command stream. The use of the "auto-merge" flag will incur a performance cost in the course of re-populating a set, so I would suggest reconsidering (and perhaps forgoing) the use of the flag.
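For instance, such a replacement could be sketched thus (SET_NAME and FILENAME remain placeholders, as above; the final pipeline merely prints the generated command stream for inspection):

```shell
# Emit "flush set" and "add element" as one command stream; piping the
# whole stream through a single nft -f - applies it as one transaction,
# so the set is never left empty mid-update.
build_replace() {
    local IFS
    printf 'flush set %s\n' "$*"
    printf 'add element %s { ' "$*"
    tr '\n' ,
    printf ' }\n'
}

# Usage (as root):
#   build_replace ip global SET_NAME < FILENAME | nft -f -

# Demonstration: print the stream that would be fed to nft.
printf '192.0.2.1\n198.51.100.0/24\n' | build_replace ip global SET_NAME
```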
As for your second question (regarding performance), there have been a tremendous number of bugs and glitches that have affected the implementation of nftables sets over the years. As of today, the situation has improved markedly. Generally, things should work quite well as long as you are running a recent kernel and userspace. That being said, should you discover any problems, please do report them to the Netfilter project.
EDIT: Changed the TABLE_NAME placeholder to SET_NAME. The former made little sense, as presented. |