Is the NUMA kernel option useful for a single-CPU PC?

Kernel not recognizing your hardware? Problems with power management or PCMCIA? What hardware is compatible with Gentoo? See here. (Only for kernels supported by Gentoo.)
16 posts • Page 1 of 1

papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Is the NUMA kernel option useful for a single-CPU PC?

Post by papu » Tue Aug 29, 2023 10:37 pm

I have always had the NUMA option enabled in my kernel, but is it necessary on a single-CPU system?
I have disabled it as a test and my Gentoo seems to work like before, so I don't know...
In case it has to be enabled, what would be the recommended kernel NUMA defaults?

thanks a lot

:lol:
....

NUMA refers to a memory architecture that allows processors in a multi-socket system to access different memory regions with varying latencies. In contrast, the traditional Symmetric Multiprocessing (SMP) approach gives all processors uniform memory access.

Consequently, with the NUMA architecture, processors can access local memory, which is closer and therefore faster than memory located further away (remote memory). This distinction becomes particularly important in systems with multiple sockets, where varying memory access times can significantly impact application performance.

In a server environment, understanding whether a server supports NUMA is essential: it lets us ensure our applications effectively utilize the available resources, optimizing for efficiency and responsiveness.
....
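The local/remote distinction above shows up directly in the "node distances" table that numactl prints: 10 means local, anything larger is a relative remote-access cost. A quick sketch that parses a canned two-node distance table (sample data, not taken from a real machine) and labels each pair:

```shell
# Parse a canned "numactl --hardware" distance table (sample data, not a
# real machine): distance 10 = local node, anything larger = remote node.
sample='node   0   1
  0:  10  21
  1:  21  10'

result=$(printf '%s\n' "$sample" | awk '
NR == 1 { for (i = 2; i <= NF; i++) col[i-1] = $i; next }
{
    row = $1; sub(/:$/, "", row)
    for (i = 2; i <= NF; i++)
        printf "node %s -> node %s: distance %s (%s)\n", row, col[i-1], $i, ($i == 10 ? "local" : "remote")
}')
printf '%s\n' "$result"
```

On a single-node desktop the table collapses to a single `0: 10` entry, i.e. everything is local.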
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
pietinger
Moderator
Posts: 6617
Joined: Tue Oct 17, 2006 5:11 pm
Location: Bavaria

Re: Is the NUMA kernel option useful for a single-CPU PC?

Post by pietinger » Tue Aug 29, 2023 10:47 pm

papu wrote:I have always had the NUMA option enabled in my kernel, but is it necessary on a single-CPU system?
I have disabled it as a test and my Gentoo seems to work like before, so I don't know...
Usually it is not absolutely necessary if you don't have a NUMA system ... but ... it is recommended for some CPUs. See <Help> =>
For 64-bit this is recommended if the system is Intel Core i7
(or later), AMD Opteron, or EM64T NUMA.
(I have an i7 and an i9 and it is enabled)


P.S.: I took the default:

Code: Select all

(6)   Maximum NUMA Nodes (as a power of 2)
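That default is an exponent, not a node count: the option (CONFIG_NODES_SHIFT in the resulting .config, to the best of my knowledge) caps the kernel at 2^value nodes. A quick sanity check of the arithmetic:

```shell
# "Maximum NUMA Nodes (as a power of 2)" is an exponent: the kernel
# supports up to 2^value nodes, so the default of 6 allows 64 nodes.
shift_val=6
max_nodes=$((1 << shift_val))
echo "NODES_SHIFT=$shift_val -> up to $max_nodes NUMA nodes"
```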
logrusx
Advocate
Posts: 3529
Joined: Thu Feb 22, 2018 2:29 pm

Post by logrusx » Wed Aug 30, 2023 4:32 am

Emerge numactl and run:

Code: Select all

numactl --hardware
If it indicates more than one NUMA node, then you need this option enabled in the kernel.

In essence, some CPUs can group cores to work with dedicated memory channels, but with desktop CPUs that's mostly not the case. Some Threadrippers use NUMA, but some don't. I don't remember which.

Best Regards,
Georgi
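That rule can be scripted: read the "available: N nodes" line and decide. A sketch fed with a canned single-node sample (substitute real `numactl --hardware` output on your own box):

```shell
# Decide whether CONFIG_NUMA buys anything, from the "available:" line of
# numactl --hardware output (canned single-node sample used here).
sample='available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 31245 MB'

nodes=$(printf '%s\n' "$sample" | awk '/^available:/ { print $2 }')
if [ "$nodes" -gt 1 ]; then
    verdict="$nodes NUMA nodes: keep CONFIG_NUMA=y"
else
    verdict="single node: CONFIG_NUMA not needed"
fi
echo "$verdict"
```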
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Wed Aug 30, 2023 8:17 am

Hmm, Ryzen is not mentioned in <Help>; I have a Ryzen 7700.

Then I understand that it's not necessary on my system, according to the information below, isn't it? :P

a kernel with NUMA enabled:

Code: Select all

$ egrep -i NUMA /boot/config-$(uname -r)
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_NUMA=y
# CONFIG_AMD_NUMA is not set
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_ACPI_NUMA=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

$ numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 31245 MB
node 0 free: 26720 MB
node distances:
node   0
  0:  10

$ dmesg | grep -i numa
[    0.000738] No NUMA configuration found

$ lscpu | grep -i numa
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-15

$ cat /sys/devices/system/node/node*/numastat
numa_hit 2079321
numa_miss 0
numa_foreign 0
interleave_hit 1900
local_node 2079321
other_node 0

$ cat /sys/devices/system/node/online
0


the same kernel with NUMA disabled:

Code: Select all

$ egrep -i NUMA /boot/config-$(uname -r)
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
# CONFIG_NUMA is not set

# find /proc -name numa_maps
find: ‘/proc/3700/task/3700/net’: Invalid argument
find: ‘/proc/3700/net’: Invalid argument

$ numactl --hardware
No NUMA available on this system

$ dmesg | grep -i numa
[    0.223392] pci_bus 0000:00: on NUMA node 0

$ lscpu | grep -i numa

$ cat /sys/devices/system/node/node*/numastat
cat: '/sys/devices/system/node/node*/numastat': No such file or directory

$ cat /sys/devices/system/node/online 
cat: /sys/devices/system/node/online: No such file or directory
Last edited by papu on Wed Aug 30, 2023 7:54 pm, edited 1 time in total.
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
logrusx
Advocate
Posts: 3529
Joined: Thu Feb 22, 2018 2:29 pm

Post by logrusx » Wed Aug 30, 2023 8:57 am

NUMA is only useful when there are multiple memory channels connected to different groups of cores. It's more common with multi-socket mainboards, for obvious reasons. The NU (Non-Uniform) part of NUMA (MA being Memory Access) comes from the fact that different groups of CPUs access different parts of memory faster; if they need to access other parts of memory (I think the term is foreign memory), it's slower. Having one NUMA node should be the same as uniform memory access, so I don't know whether, even if the CPU supports it, having NUMA enabled with a single node brings any benefit.

There's such a thing as soft NUMA, where certain cores are dedicated to certain tasks; I think the tasks benefit from the particular cores' caches being more useful in such scenarios and thus execute faster.

If there's somebody who can explain this better, I would appreciate it.

Best Regards
Georgi
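The foreign/local split described here is what the kernel's per-node numastat counters track. A sketch computing the local-allocation ratio from a canned numastat sample (field names as in /sys/devices/system/node/node*/numastat; numa_miss counts allocations that fell back to another node):

```shell
# Fraction of page allocations served by the preferred node, from a canned
# numastat sample (on a real box: cat /sys/devices/system/node/node0/numastat).
sample='numa_hit 2079321
numa_miss 0
numa_foreign 0
interleave_hit 1900
local_node 2079321
other_node 0'

ratio=$(printf '%s\n' "$sample" | awk '
$1 == "numa_hit"  { hit = $2 }
$1 == "numa_miss" { miss = $2 }
END { printf "local allocation ratio: %.2f%%\n", 100 * hit / (hit + miss) }')
echo "$ratio"
```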
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

Post by Goverp » Wed Aug 30, 2023 10:49 am

AFAIR when I enabled NUMA and looked at dmesg, there was only one node, so I disabled it again. That was true on my old Phenom board and remained true for my Zen 3 /ASUS 570X motherboard. It might just possibly benefit if I had 4 memory cards, which might be seen as two nodes, but with only 2 cards, IIUC there can only be one node.
Greybeard
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Wed Aug 30, 2023 7:50 pm

Goverp wrote:AFAIR when I enabled NUMA and looked at dmesg, there was only one node, so I disabled it again. That was true on my old Phenom board and remained true for my Zen 3 /ASUS 570X motherboard. It might just possibly benefit if I had 4 memory cards, which might be seen as two nodes, but with only 2 cards, IIUC there can only be one node.
Do you mean 4x DIMM modules instead of 2x?
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
Spanik
Veteran
Posts: 1170
Joined: Fri Dec 12, 2003 9:10 pm
Location: Belgium

Post by Spanik » Wed Aug 30, 2023 8:07 pm

logrusx wrote:Emerge numactl and run:

Code: Select all

numactl --hardware
If it indicates more than one NUMA node, then you need this option enabled in the kernel.

In essence, some CPUs can group cores to work with dedicated memory channels, but with desktop CPUs that's mostly not the case. Some Threadrippers use NUMA, but some don't. I don't remember which.

Best Regards,
Georgi
Interesting. I never thought a "single package" CPU could have more than one memory interface. But apparently it can; I ran it and my Epyc 7401P does have 4 nodes. That probably also explains why you need to have at least 4 RAM modules in this motherboard.
Expert in non-working solutions
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Thu Aug 31, 2023 8:58 am

Spanik wrote:
logrusx wrote:Emerge numactl and run:

Code: Select all

numactl --hardware
If it indicates more than one NUMA node, then you need this option enabled in the kernel.

In essence, some CPUs can group cores to work with dedicated memory channels, but with desktop CPUs that's mostly not the case. Some Threadrippers use NUMA, but some don't. I don't remember which.

Best Regards,
Georgi
Interesting. I never thought a "single package" CPU could have more than one memory interface. But apparently it can; I ran it and my Epyc 7401P does have 4 nodes. That probably also explains why you need to have at least 4 RAM modules in this motherboard.
Could you paste your output of:
lscpu | grep -i numa
numactl --hardware
egrep -i NUMA /boot/config-$(uname -r)

thanks
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

Post by Goverp » Thu Aug 31, 2023 10:24 am

papu wrote:
Goverp wrote:AFAIR when I enabled NUMA and looked at dmesg, there was only one node, so I disabled it again. That was true on my old Phenom board and remained true for my Zen 3 /ASUS 570X motherboard. It might just possibly benefit if I had 4 memory cards, which might be seen as two nodes, but with only 2 cards, IIUC there can only be one node.
Do you mean 4x DIMM modules instead of 2x?
Yes. At least, on my motherboard, you must install DIMM modules in pairs.
Greybeard
Spanik
Veteran
Posts: 1170
Joined: Fri Dec 12, 2003 9:10 pm
Location: Belgium

Post by Spanik » Thu Aug 31, 2023 5:43 pm

papu wrote:
Spanik wrote:
logrusx wrote:Emerge numactl and run:

Code: Select all

numactl --hardware
If it indicates more than one NUMA node, then you need this option enabled in the kernel.

In essence, some CPUs can group cores to work with dedicated memory channels, but with desktop CPUs that's mostly not the case. Some Threadrippers use NUMA, but some don't. I don't remember which.

Best Regards,
Georgi
Interesting. I never thought a "single package" CPU could have more than one memory interface. But apparently it can; I ran it and my Epyc 7401P does have 4 nodes. That probably also explains why you need to have at least 4 RAM modules in this motherboard.
Could you paste your output of:
lscpu | grep -i numa
numactl --hardware
egrep -i NUMA /boot/config-$(uname -r)

thanks
Sure, it isn't some secret but I didn't want to spoil the thread.

Code: Select all

ikke@daw ~ $ lscpu | grep -i numa
NUMA node(s):                    4
NUMA node0 CPU(s):               0-5,24-29
NUMA node1 CPU(s):               6-11,30-35
NUMA node2 CPU(s):               12-17,36-41
NUMA node3 CPU(s):               18-23,42-47
ikke@daw ~ $ numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 24 25 26 27 28 29
node 0 size: 15958 MB
node 0 free: 15006 MB
node 1 cpus: 6 7 8 9 10 11 30 31 32 33 34 35
node 1 size: 16090 MB
node 1 free: 15011 MB
node 2 cpus: 12 13 14 15 16 17 36 37 38 39 40 41
node 2 size: 16124 MB
node 2 free: 15457 MB
node 3 cpus: 18 19 20 21 22 23 42 43 44 45 46 47
node 3 size: 16121 MB
node 3 free: 15429 MB
node distances:
node   0   1   2   3 
  0:  10  16  16  16 
  1:  16  10  16  16 
  2:  16  16  10  16 
  3:  16  16  16  10 
daw ~ # egrep -i NUMA /boot/config-$(uname -r)
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_NUMA=y
# CONFIG_AMD_NUMA is not set
CONFIG_X86_64_ACPI_NUMA=y
# CONFIG_NUMA_EMU is not set
CONFIG_ACPI_NUMA=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y 
At first I forgot I had to mount /boot.
Expert in non-working solutions
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Thu Aug 31, 2023 11:28 pm

Spanik wrote: Sure, it isn't some secret but I didn't want to spoil the thread.
Phew, 48 cores :o

4 nodes; you seem to need NUMA :)

thanks!
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

Post by Goverp » Fri Sep 01, 2023 4:13 pm

FWIW I just enabled NUMA on my 570X motherboard, and got

Code: Select all

numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 32011 MB
node 0 free: 27441 MB
node distances:
node   0 
  0:  10 
so that'll be a no, then.
Only one bank of memory - no idea if a second would change it.
Greybeard
Spanik
Veteran
Posts: 1170
Joined: Fri Dec 12, 2003 9:10 pm
Location: Belgium

Post by Spanik » Fri Sep 01, 2023 6:18 pm

papu wrote:
Spanik wrote: Sure, it isn't some secret but I didn't want to spoil the thread.
Phew, 48 cores :o

4 nodes; you seem to need NUMA :)

thanks!
Yes indeed. And the worst thing is that I only have NUMA enabled out of habit. I previously had a dual-socket Opteron board, so I really needed NUMA. I still enabled it because it was marked that this would not have penalties if not needed. It never occurred to me that a single-package CPU would need NUMA. This thread got me thinking about a few things. Now I digress, but maybe others might have interesting comments.

From this I now understand that this CPU internally has 4 chiplets, each with their own memory controller, and between those 4 chiplets some "glue chip". In fact this replaces 4 separate CPUs and a north bridge, so it is in reality a multi-CPU in a single CPU package. Very different from what I thought it to be.

Probably this also has implications for the PCIe lanes. I bought this particular setup (Epyc 7401P + H11SSL-i) because I wanted:
- a high core count CPU
- lots of memory slots
- independent PCIe lanes between slots
(- 2 Gb Ethernet interfaces)

But from what I learned here, these PCIe lanes are probably just as well "locked" to certain chiplets. So if you want some peculiar setup (like a high audio channel count or a multi-video-card setup), then selecting in which slot you put your interfaces so they all connect to the same chiplet, to avoid having them on different memory controllers, might be a good idea. Or maybe just the other way round if some are only input and the others output?

These single-package, multi-core CPUs are more multi-CPU than they pretend to be. Or not. But you just cannot know without a lot more datasheet reading. Before these, it was simple to tell the difference between a multi-CPU and a single-CPU system: just count the CPU chips on the motherboard. But now it has become something you need to think about, as it isn't so visual anymore.

So I find this thread very interesting. If maybe a bit too late for me. If I had known this before, I might have settled for a CPU/motherboard with a bit fewer separated CPUs and more clock speed.

Sorry if I take this out of context.
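The PCIe-locality guess above can actually be checked: on NUMA kernels, each PCI device exposes a numa_node attribute in sysfs (-1 meaning no affinity). A self-contained sketch using a fake sysfs tree with made-up device addresses and node numbers, so it runs anywhere; on a real machine, point root at /sys/bus/pci/devices:

```shell
# Which NUMA node is each PCI device attached to?  Uses a fake sysfs tree
# so the sketch is self-contained; on real hardware set root=/sys/bus/pci/devices.
root=$(mktemp -d)
mkdir -p "$root/0000:01:00.0" "$root/0000:41:00.0"
echo 0 > "$root/0000:01:00.0/numa_node"   # e.g. a GPU hanging off node 0
echo 2 > "$root/0000:41:00.0/numa_node"   # e.g. a NIC hanging off node 2

report=$(for dev in "$root"/*; do
    printf '%s -> node %s\n' "${dev##*/}" "$(cat "$dev/numa_node")"
done)
printf '%s\n' "$report"
rm -rf "$root"
```

On a single-node desktop every device simply reports -1 or 0, so there is nothing to optimize.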
Expert in non-working solutions
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Sat Sep 02, 2023 4:58 pm

Goverp wrote:
papu wrote:
Goverp wrote:AFAIR when I enabled NUMA and looked at dmesg, there was only one node, so I disabled it again. That was true on my old Phenom board and remained true for my Zen 3 /ASUS 570X motherboard. It might just possibly benefit if I had 4 memory cards, which might be seen as two nodes, but with only 2 cards, IIUC there can only be one node.
do you mean 4x dimm modules instead 2x?
Yes. At least, on my motherboard, you must install dimm modules in pairs.
Hmm, are you thinking that a 1-CPU / 1-node system would have to use 1 channel (2x16, 2x32) to get the best performance, rather than 2 channels (4x8, 4x16)?

So why do motherboards for a single CPU have at least two RAM channels?

:wink:
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
papu
l33t
Posts: 772
Joined: Fri Jan 25, 2008 3:04 pm
Location: Sota algun pi o alzina...

Post by papu » Sat Sep 02, 2023 5:10 pm

Spanik wrote:
These single-package, multi-core CPUs are more multi-CPU than they pretend to be. Or not. But you just cannot know without a lot more datasheet reading. Before these, it was simple to tell the difference between a multi-CPU and a single-CPU system: just count the CPU chips on the motherboard. But now it has become something you need to think about, as it isn't so visual anymore.

So I find this thread very interesting. If maybe a bit too late for me. If I had known this before, I might have settled for a CPU/motherboard with a bit fewer separated CPUs and more clock speed.

Sorry if I take this out of context.
Hmm, I think Threadrippers also are 4 CPUs on a single die.
What CPU would you buy, nowadays?
~amd64 && systemd && plasma --cpu 7700 --ram 2x32GB --gpu RX 6600
© 2001–2026 Gentoo Foundation, Inc.
