TCP: out of memory -- consider tuning tcp_mem

Having problems getting connected to the internet or running a server? Wondering about securing your box? Ask here.

Post by Vieri » Mon Sep 18, 2017 3:59 pm

Hi,

I'm getting lots of these in /var/log/messages:

Code:

kernel: TCP: out of memory -- consider tuning tcp_mem
Here are my values:

Code:

# sysctl net.ipv4.tcp_mem
net.ipv4.tcp_mem = 384027       512036  768054
# sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096        87380   6291456
# sysctl net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 4096        16384   4194304
# sysctl net.core.rmem_max
net.core.rmem_max = 212992
# sysctl net.core.wmem_max
net.core.wmem_max = 212992

Code:

# uname -a
Linux inf-fw2 4.9.34-gentoo #1 SMP Mon Jul 10 11:05:23 CEST 2017 x86_64 AMD FX(tm)-8320 Eight-Core Processor AuthenticAMD GNU/Linux

Code:

# top
top - 17:51:33 up 19 days, 10:18,  2 users,  load average: 1.38, 1.49, 1.42
Tasks: 344 total,   1 running, 343 sleeping,   0 stopped,   0 zombie
%Cpu0  :  2.2 us,  0.5 sy,  0.0 ni, 93.0 id,  0.0 wa,  0.0 hi,  4.3 si,  0.0 st
%Cpu1  :  0.5 us,  0.0 sy,  0.0 ni, 97.9 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu2  :  1.1 us,  0.0 sy,  0.5 ni, 95.2 id,  0.0 wa,  0.0 hi,  3.2 si,  0.0 st
%Cpu3  :  1.1 us,  0.5 sy,  0.0 ni, 96.3 id,  0.0 wa,  0.0 hi,  2.1 si,  0.0 st
%Cpu4  :  2.1 us,  0.0 sy,  0.0 ni, 96.3 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu5  :  0.5 us,  0.0 sy,  0.0 ni, 98.9 id,  0.0 wa,  0.0 hi,  0.5 si,  0.0 st
%Cpu6  :  0.5 us,  1.1 sy,  0.0 ni, 96.8 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu7  :  1.6 us,  0.0 sy,  0.0 ni, 90.9 id,  0.0 wa,  0.0 hi,  7.5 si,  0.0 st
KiB Mem : 32865056 total,   820664 free, 20358972 used, 11685420 buff/cache
KiB Swap: 37036988 total, 34924984 free,  2112004 used. 12014564 avail Mem
Obviously, I'm having connection issues now.

What should I do?

A quick search suggests NOT touching tcp_mem, but increasing the other values instead.

Code:

sysctl -w net.core.rmem_max=8738000
sysctl -w net.core.wmem_max=6553600
# quotes are required for the multi-value keys:
sysctl -w net.ipv4.tcp_rmem="8192 873800 8738000"
sysctl -w net.ipv4.tcp_wmem="4096 655360 6553600"
Can anyone please advise?
Why aren't the kernel defaults enough?
In any case, how should I calculate my optimum values given my RAM?
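For reference, the defaults can be reproduced approximately: in 4.x kernels, tcp_init() in net/ipv4/tcp.c derives tcp_mem from the number of free buffer pages at boot (pressure at roughly 1/16 of memory, min at 3/4 of pressure, max at twice min). A sketch using the RAM total from the top output below; total RAM stands in for nr_free_buffer_pages(), so it overshoots the real defaults slightly:

```shell
# Approximate reconstruction of the kernel's tcp_mem defaults (tcp_init(),
# net/ipv4/tcp.c, 4.x kernels). Values are counts of 4 KiB pages.
ram_kib=32865056                # "KiB Mem : 32865056 total" from top
pages=$(( ram_kib / 4 ))        # 4 KiB pages
pressure=$(( pages / 16 ))      # start reclaiming at ~1/16 of memory
min=$(( pressure / 4 * 3 ))     # below this, no pressure at all
max=$(( min * 2 ))              # hard limit: "out of memory" beyond this
echo "tcp_mem ~ $min $pressure $max"
# approximation; the box's actual defaults are 384027 512036 768054
```

The small gap between the approximation and the real values is the memory already reserved at boot before the calculation runs.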

Thanks,

Vieri

[EDIT: adding more info]

Code:

# cat /proc/net/sockstat
sockets: used 13121
TCP: inuse 10010 orphan 11 tw 246 alloc 12597 mem 772909
UDP: inuse 92 mem 59
UDPLITE: inuse 0
RAW: inuse 7
FRAG: inuse 0 memory 0
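A quick cross-check of the two outputs above (my reading, assuming the sockstat "mem" field counts pages, like tcp_mem):

```shell
# Compare current TCP page usage against the tcp_mem hard limit.
# Both numbers are copied verbatim from the outputs above.
tcp_pages=772909   # "mem" field of the TCP line in /proc/net/sockstat
tcp_max=768054     # third value of net.ipv4.tcp_mem
if [ "$tcp_pages" -gt "$tcp_max" ]; then
  echo "over the hard limit by $(( tcp_pages - tcp_max )) pages"
fi
# -> over the hard limit by 4855 pages
```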

Code:

# cat /proc/net/sockstat6
TCP6: inuse 282
UDP6: inuse 40
UDPLITE6: inuse 0
RAW6: inuse 5
FRAG6: inuse 0 memory 0

Code:

#  sysctl -a |grep tcp
fs.nfs.nfs_callback_tcpport = 0
fs.nfs.nlm_tcpport = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 1000
net.ipv4.tcp_congestion_control = cubic
sysctl: reading key "net.ipv6.conf.all.stable_secret"
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_key = 6707aeac-2dd079df-0dee3da3-befd1107
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_limit_output_bytes = 262144
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 131072
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_tw_buckets = 131072
net.ipv4.tcp_mem = 384027       512036  768054
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_notsent_lowat = -1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096        87380   6291456
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_dupack = 0
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_workaround_signed_windows = 0
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.enp10s0.stable_secret"
sysctl: reading key "net.ipv6.conf.enp5s0.stable_secret"
sysctl: reading key "net.ipv6.conf.enp6s0.stable_secret"
sysctl: reading key "net.ipv6.conf.enp7s0f0.stable_secret"
sysctl: reading key "net.ipv6.conf.enp7s0f1.stable_secret"
sysctl: reading key "net.ipv6.conf.enp7s0f2.stable_secret"
sysctl: reading key "net.ipv6.conf.enp7s0f3.stable_secret"
sysctl: reading key "net.ipv6.conf.enp8s5.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300

Post by ct85711 » Mon Sep 18, 2017 4:32 pm

Generally, you shouldn't ever need to change any of these settings.

One of the first things I noticed in the info you posted is this:

Code:

KiB Mem : 32865056 total,   820664 free, 20358972 used, 11685420 buff/cache
KiB Swap: 37036988 total, 34924984 free,  2112004 used. 12014564 avail Mem
Specifically, the amount of free memory. Given that you have 32 GB of RAM and 20 GB of it is used, it looks like something is eating a lot more memory than it should.
Otherwise, a little research indicates that tcp_mem is set by the kernel based on the amount of system memory available. One thing to note: the values for tcp_mem are NOT in kB, but in pages.
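Since they are in pages (4 KiB on x86_64), the posted limits translate to bytes like this (a quick sketch using the values from the first post):

```shell
# Convert tcp_mem values (counts of 4 KiB pages) into MiB.
# The three numbers are min/pressure/max from the first post.
for pages in 384027 512036 768054; do
  echo "$pages pages = $(( pages * 4096 / 1024 / 1024 )) MiB"
done
# -> the hard limit 768054 works out to 3000 MiB, i.e. ~3 GB of
#    kernel memory for TCP buffers before "out of memory" fires
```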

These two articles explain more about when this error is shown and its causes:
http://blog.tsunanet.net/2011/03/out-of ... emory.html
http://www.linux-admins.net/2013/01/tro ... emory.html
So, two conditions can trigger this "Out of socket memory" error:

1. There are "too many" orphan sockets (most common).
2. The socket already has the minimum amount of memory and we can't give it more because TCP is already using more than its limit.
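For condition (1), the orphan count is right there in the sockstat output quoted above; a sketch of pulling it out (field layout as in /proc/net/sockstat):

```shell
# Extract the orphan count from the sockstat line quoted earlier. At 11
# orphans against tcp_max_orphans = 131072, trigger (1) is ruled out here,
# which points at trigger (2): TCP is simply over its tcp_mem budget.
line='TCP: inuse 10010 orphan 11 tw 246 alloc 12597 mem 772909'
orphans=$(echo "$line" | awk '{ for (i = 1; i < NF; i++) if ($i == "orphan") print $(i + 1) }')
echo "orphans=$orphans"   # -> orphans=11
```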

Post by Ant P. » Mon Sep 18, 2017 6:19 pm

Vieri wrote:

Code:

#  sysctl -a |grep tcp
...
net.ipv4.tcp_congestion_control = cubic
That's a pretty terrible choice in this century and almost certainly isn't helping with the RAM issue. Configure your kernel to use TCP_CONG_BBR+NET_SCH_FQ if it's a server/desktop, or TCP_CONG_CDG+NET_SCH_FQ_CODEL if it's routing packets.

Post by Vieri » Mon Sep 18, 2017 9:12 pm

I suspect the culprit may be Squid, or a related process (c-icap, clamd, helpers, etc.).

In fact, after stopping and restarting Squid I no longer get the "TCP out of memory" message, and connections start working again.

However, RAM usage grows steadily and I'm bound to get into trouble again.

Code:

# top

top - 22:42:23 up 19 days, 15:08,  2 users,  load average: 1.30, 1.48, 1.50
Tasks: 342 total,   1 running, 341 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1.5 us,  1.5 sy,  0.0 ni, 95.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  1.5 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.0 hi,  3.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.0 hi,  3.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 95.5 id,  0.0 wa,  0.0 hi,  4.5 si,  0.0 st
KiB Mem : 32865056 total,  3679916 free, 17272208 used, 11912932 buff/cache
KiB Swap: 37036988 total, 35700916 free,  1336072 used. 15100248 avail Mem 

Code:

# ps aux --sort -rss | head -n 23
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     31780  0.0 10.9 4547476 3603176 ?     Sl   Sep15   1:43 /usr/libexec/c-icap
root     12222  0.0  5.0 2122132 1656692 ?     Sl   Sep17   0:50 /usr/libexec/c-icap
root     12679  0.0  4.8 2121056 1596344 ?     Sl   Sep17   0:51 /usr/libexec/c-icap
root     16684  0.0  4.6 2121056 1538440 ?     Sl   Sep17   0:50 /usr/libexec/c-icap
root     17266  0.0  4.6 2121056 1524016 ?     Sl   Sep17   0:49 /usr/libexec/c-icap
root     15430  0.0  4.4 2121056 1448768 ?     Sl   Sep17   0:47 /usr/libexec/c-icap
root     12620  0.0  3.2 1531232 1056944 ?     Sl   11:25   0:28 /usr/libexec/c-icap
root      9046  0.0  2.7 1465696 909808 ?      Sl   11:11   0:27 /usr/libexec/c-icap
root      9544  0.0  2.5 1400160 827648 ?      Sl   11:13   0:23 /usr/libexec/c-icap
postfix  12038  2.4  2.3 1417900 776256 ?      SNsl Sep08 368:17 /usr/sbin/clamd
root     17410  0.0  1.6 810336 544184 ?       Sl   13:50   0:14 /usr/libexec/c-icap
mysql     8000  0.7  1.1 4645052 371500 ?      Ssl  Aug30 205:44 /usr/sbin/mysqld --defaults-file=/etc/mysql/my.cnf
squid    17158  0.2  0.7 495692 256996 ?       S    Sep08  32:20 (squid-1) -YC -f /etc/squid/squid.owa2.conf -n squidowa2
squid    21552  1.2  0.6 291140 209040 ?       S    19:19   2:32 (squid-1) -YC -f /etc/squid/squid.conf -n squid
postgres 10502  0.0  0.4 202192 138576 ?       Ss   Aug30   0:26 postgres: checkpointer process   
postgres 10503  0.0  0.4 201816 132872 ?       Ss   Aug30   0:09 postgres: writer process   
named     7902  0.7  0.3 833384 110208 ?       Ssl  Aug30 198:47 /usr/sbin/named -u named
postfix  26837  0.5  0.2 202172 84888 ?        S    22:34   0:02 /usr/bin/perl /usr/bin/mimedefang.pl -server
suricata 32489  0.7  0.2 2124304 70180 ?       Ssl  Sep15  40:58 /usr/bin/suricata --pidfile /var/run/suricata/suricata.pid -D -c /etc/suricata/suricata-HMAN.yaml -vvvv -q 0 -q 1 -q 2 -q 3 -q 4 -q 5 --set logging.outputs.1.file.filename=/var/log/suricata/suricata.log --user=suricata --group=suricata -l /var/log/suricata
squid    17036  0.0  0.1 158840 41340 ?        S    Sep08   1:30 (squid-1) -YC -f /etc/squid/squid.https.conf -n squidhttps
postfix  28574  0.1  0.1 114496 39772 ?        S    22:38   0:00 /usr/bin/perl /usr/bin/mimedefang.pl -server
squid    17002  0.0  0.0 132336 30384 ?        S    Sep08   1:17 (squid-1) -YC -f /etc/squid/squid.http.conf -n squidhttp
I have 5 squid instances.
After stopping one of them, here's the output of top.

Code:

# top

top - 22:50:05 up 19 days, 15:16,  2 users,  load average: 1.01, 1.48, 1.52
Tasks: 299 total,   1 running, 298 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 96.7 id,  0.0 wa,  0.0 hi,  3.3 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  1.3 si,  0.0 st
%Cpu2  :  0.3 us,  0.0 sy,  0.0 ni, 97.7 id,  0.0 wa,  0.0 hi,  2.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  2.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu6  :  0.0 us,  0.3 sy,  0.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 93.0 id,  0.0 wa,  0.0 hi,  7.0 si,  0.0 st
KiB Mem : 32865056 total,  4082436 free, 16870636 used, 11911984 buff/cache
KiB Swap: 37036988 total, 35700952 free,  1336036 used. 15504180 avail Mem 
I'm still seeing lots of used RAM.

After stopping all squid instances, there's still plenty of RAM in use.

Finally, after I stop c-icap, I get this reading:

Code:

# top

top - 22:55:15 up 19 days, 15:21,  2 users,  load average: 0.91, 1.06, 1.32
Tasks: 276 total,   1 running, 275 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  1.9 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 97.2 id,  0.0 wa,  0.0 hi,  2.8 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni, 99.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu3  :  0.0 us,  0.9 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni, 98.1 id,  0.0 wa,  0.0 hi,  1.9 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni, 99.1 id,  0.0 wa,  0.0 hi,  0.9 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni, 97.2 id,  0.0 wa,  0.0 hi,  2.8 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 92.5 id,  0.0 wa,  0.0 hi,  7.5 si,  0.0 st
KiB Mem : 32865056 total, 19145456 free,  1809168 used, 11910432 buff/cache
KiB Swap: 37036988 total, 36221980 free,   815008 used. 30568180 avail Mem 
Starting c-icap again, along with all 5 Squid instances, yields this:

Code:

# top

top - 22:59:20 up 19 days, 15:25,  2 users,  load average: 1.25, 1.06, 1.24
Tasks: 292 total,   1 running, 291 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni, 97.7 id,  0.0 wa,  0.0 hi,  2.3 si,  0.0 st
%Cpu2  :  0.3 us,  0.0 sy,  0.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
%Cpu4  :  0.3 us,  0.0 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  1.3 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni, 97.3 id,  0.0 wa,  0.0 hi,  2.7 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni, 93.4 id,  0.0 wa,  0.0 hi,  6.6 si,  0.0 st
KiB Mem : 32865056 total, 19103744 free,  1843460 used, 11917852 buff/cache
KiB Swap: 37036988 total, 36221980 free,   815008 used. 30527568 avail Mem
So, it seems c-icap and/or squidclamav (that's what I'm using for content scanning) are responsible for this.

Nothing apparently relevant in the c-icap log though...

I'm not sure what I can try next except contact the c-icap devs.

BTW, "cubic" is the default in gentoo-sources. I had trouble with hardened-sources, and I don't know what the default is there.
It's hard to choose between "server" and "router" because my system is a firewall/router/IPS, but also a Squid, MySQL, PostgreSQL, Apache/PHP, and Postfix server.
I know I should separate these services, at least into VMs or Docker containers, but I haven't gotten to that yet.

Is it "safer" to use TCP_CONG_BBR+NET_SCH_FQ or TCP_CONG_CDG+NET_SCH_FQ_CODEL?

Post by Ant P. » Tue Sep 19, 2017 12:07 am

Vieri wrote:Is it "safer" to use TCP_CONG_BBR+NET_SCH_FQ or TCP_CONG_CDG+NET_SCH_FQ_CODEL?
The latter; CoDel is designed to reduce the amount of buffering in the network stack. BBR may make things a tiny bit faster but it's also more complex.

Post by Vieri » Wed Sep 20, 2017 10:52 am

It seems that I have a memory leak in c-icap, squidclamav, or libarchive.
I'm currently testing the c-icap option MaxRequestsPerChild to see if I can contain the leak until I find a real fix.
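For anyone landing here later: MaxRequestsPerChild is a stock c-icap.conf directive that recycles each worker process after a fixed number of requests, which caps how far a per-process leak can grow. A sketch (the value is an arbitrary placeholder, not a recommendation):

```
# /etc/c-icap/c-icap.conf
# Recycle each child after N requests so a leaking child cannot grow unbounded.
# 500 is a placeholder; tune it against your request rate and leak speed.
MaxRequestsPerChild 500
```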

Even though it's almost another topic, thanks for letting me know about CoDel and BBR. I think I'm going to give both a try.

The default Gentoo sources do not include BBR and CDG.
I compiled them as modules on a running system because I wanted to test them without needing to reboot.
If all goes well I'll recompile them built-in.

Code:

TCP_CONG_BBR
NET_SCH_FQ
TCP_CONG_CDG
NET_SCH_FQ_CODEL
After compilation:

Code:

# egrep 'TCP_CONG_BBR|NET_SCH_FQ|TCP_CONG_CDG' /usr/src/linux/.config
CONFIG_TCP_CONG_CDG=m
CONFIG_TCP_CONG_BBR=m
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_FQ=m

Code:

## Are these necessary?
# modprobe tcp_bbr
# modprobe tcp_cdg
# modprobe sch_fq_codel
# modprobe sch_fq

Code:

# lsmod | egrep 'tcp_|sch_'
sch_fq                  6552  0
sch_fq_codel            7656  0
tcp_cdg                 3463  0
tcp_bbr                 4992  0
tcp_htcp                2781  0

Code:

# sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = cubic reno htcp bbr cdg
These are the defaults before the change:

Code:

# sysctl net.core.default_qdisc
net.core.default_qdisc = pfifo_fast
# sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = cubic
Testing transfer rates through firewall/router (rsync from a LAN host to a host in another physical subnet through router):

Code:

# du -h /tmp/test/bigfile/big
554M    /tmp/test/bigfile/big
# du -h /tmp/test/smallfiles/filesdir
56M     /tmp/test/smallfiles/filesdir
# time rsync -a /tmp/test/bigfile/big root@host1:/tmp/test/
real    0m52.753s
user    0m9.820s
sys     0m3.020s
# time rsync -a /tmp/test/smallfiles/filesdir root@host1:/tmp/test/
real    0m7.540s
user    0m1.070s
sys     0m0.240s
Switch to BBR (not using -w for now):

Code:

# sysctl net.core.default_qdisc=fq
# sysctl net.ipv4.tcp_congestion_control=bbr
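Note that settings changed this way don't survive a reboot; if the experiment works out, they can be persisted in the usual place:

```
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) -- applied at boot
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Run sysctl -p afterwards to load the file without rebooting.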
Testing transfer rates through firewall/router again:

Code:

# time rsync -a /tmp/test/bigfile/big root@host1:/tmp/test/
real    0m52.623s
user    0m9.930s
sys     0m2.850s
# time rsync -a /tmp/test/smallfiles/filesdir root@host1:/tmp/test/
real    0m7.490s
user    0m0.930s
sys     0m0.310s
I guess I can switch to CDG like this:

Code:

# sysctl net.core.default_qdisc=fq_codel
# sysctl net.ipv4.tcp_congestion_control=cdg
However, why am I not seeing any improvements?
Do I need to apply the changes some other way?