Gentoo Forums
Linux can't handle low memory
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Wed May 28, 2014 12:43 am    Post subject: Linux can't handle low memory

I like GNU/Linux. Been using it for years. Would never go back to Windows, and probably will never switch to Macs.

But one thing that bothers me a lot is that, in my experience, Linux is really REALLY horrible at gracefully dealing with low (full) memory situations. Over the years, I have tried systems with and without swap; the experience is the same. When memory usage approaches memory capacity, the system gets moderately thrashy, and when memory gets really full, the system becomes so swappy/thrashy that it is unusable. Sometimes you can move your mouse and see it jump at a rate of about one frame every 10 seconds. But usually, the entire system is locked up: the GUI doesn't respond, I can't SSH in, can't switch out of X to a console, nothing. Nothing but a constantly flickering disk activity light, anyway. The only escape is a hard reset of the box.

Why is this, and what can be done about it? It's as if the Linux OOM killer is not doing its job.

Ideally what I'd like is to be able to specify a priority list of certain applications which I am permitting to be primary targets for the OOM killer (and others to be avoided by the killer, if possible).

Any tips or suggestions?
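For what it's worth, kernels of this era do expose a per-process knob for roughly this wish: /proc/&lt;pid&gt;/oom_score_adj biases which processes the OOM killer picks first. A minimal sketch (the value 500 is illustrative):

```shell
# Make a process a preferred OOM-killer target (range -1000..1000;
# higher = killed sooner). Raising your own score needs no root:
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
# Protecting a process (negative values, -1000 = never kill)
# requires root:
#   echo -1000 > /proc/<pid>/oom_score_adj
```

Wiring this into a startup wrapper for the apps you consider expendable would approximate the "priority list" described above.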
John R. Graham
Administrator

Joined: 08 Mar 2005
Posts: 7836
Location: Somewhere over Atlanta, Georgia

PostPosted: Wed May 28, 2014 1:20 pm

Would you mind a little basic diagnosis first? Could you post the output of
Code:
swapon
and
Code:
cat /proc/meminfo
please?

For what it's worth, your experience doesn't match mine. My Linux installations appear to reasonably gracefully handle virtual memory allocations up to about 1.5x physical memory and don't really show degraded responsiveness except under highly specialized situations.

- John
_________________
This space intentionally left blank.
haarp
Guru

Joined: 31 Oct 2007
Posts: 372

PostPosted: Wed May 28, 2014 2:18 pm

It seems memory management has been really broken for a few years now.

When my system runs out of RAM, it simply freezes. It won't invoke oom_killer, it won't use the generously provided swap, no, it freezes and that's that.

In the end I just threw more RAM at the problem and called it a day. Not the best solution, but it works for now.
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Wed May 28, 2014 4:04 pm

swapon has no discernible output, but this system has no swap at all. In the past, with systems that had swap, it didn't seem to matter: low-mem situations would just fill or nearly fill swap, and then the same thrashy, degraded performance occurred until total system lockup.

meminfo:

Code:

# cat /proc/meminfo
MemTotal:        8138812 kB
MemFree:          317600 kB
Buffers:          703320 kB
Cached:          2716804 kB
SwapCached:            0 kB
Active:          4669200 kB
Inactive:        1737124 kB
Active(anon):    2988292 kB
Inactive(anon):   118068 kB
Active(file):    1680908 kB
Inactive(file):  1619056 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:               144 kB
Writeback:             0 kB
AnonPages:       2986272 kB
Mapped:           645884 kB
Shmem:            120160 kB
Slab:             761996 kB
SReclaimable:     673932 kB
SUnreclaim:        88064 kB
KernelStack:        5504 kB
PageTables:        57580 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     4069404 kB
Committed_AS:    9241976 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      309064 kB
VmallocChunk:   34359421816 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     2594304 kB
DirectMap2M:     5752832 kB


Re: throwing more money/RAM at the problem: I've had increasingly more powerful systems over time, and it doesn't matter, I seem to fill memory up anyway. :) I'm hoping there's a technical (software) solution to this problem.
i92guboj
Moderator

Joined: 30 Nov 2004
Posts: 10028
Location: Córdoba (Spain)

PostPosted: Wed May 28, 2014 4:39 pm

That doesn't match my experience.

I never use top-notch hardware, so I would know...

As an example, the machine I am writing from is a p4 laptop that's maddening me because of the noisy fans as I write. I use this machine for development (cross-OS), and I often need to fire up a virtualbox machine to test the resulting binaries (or compile them natively) in the target systems. When I forget to close one VM before launching the next one, my best friend, the OOM Killer, comes and gives me the due salutation. Which its next victim will be is always a mystery, but, being the bigger beasts, the VMs are almost always the chosen ones.

This laptop has its RAM maxed out at 2529 MB even if I put two 2 GB sticks in it (don't ask me why; someone at HP decided that'd be something cool).

If the machine really hard-locks it's usually for some other reason. It might be related to the RAM filling up, but it's not caused by it. Maybe some buggy fs or graphics driver which doesn't like the fact that there's no free RAM. Or some bad stick of RAM which produces a kernel panic.

That's my experience anyway. I can't be sure what your problem is. But I am sure Mr. OOM is still alive and kicking because I see it quite often ;) :roll:
_________________
Gentoo Handbook | My website
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 32098
Location: 56N 3W

PostPosted: Wed May 28, 2014 6:48 pm

Pistos,

Not having swap does not stop the kernel swapping. It just robs it of one of the possible ways to do swapping.

Swap space is only ever used for dynamically allocated RAM.
Data or program code that has a permanent home on disk can be dropped from RAM at any time and reloaded as required.
In the case of data, there may need to be a write if changed data has not yet been committed but the kernel will manage that.

Not having swap locks all dynamically allocated RAM in place, forcing the kernel to remove parts of the code of one program you would like to use from RAM in order to load a piece of another program you would like to use too. All the while, ALL dynamically allocated memory for all programs is kept in RAM.

If you have swap and it's needed but not used, you have a setup problem.

RAM and swap filling up sounds like a program with a 'memory leak'. It dynamically allocates some RAM but never frees it. Next time the code segment is executed, the same thing happens ....
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Wed May 28, 2014 7:23 pm

Thanks for the replies, guys, they're educational for me.

I wish to heck the OOM killer would work as you described, i92guboj. For me, though, it basically almost never does. (We should see lines in /var/log/messages when it kicks in, shouldn't we? Is there anything I can grep for in there?)
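For the record, the OOM killer logs through the kernel ring buffer, so something along these lines should show whether it ever fired (the log path varies with your syslog setup; the sample line in the comment shows the usual format):

```shell
# A typical OOM-killer log line looks like:
#   Out of memory: Kill process 1234 (firefox) score 567 or sacrifice child
grep -iE 'out of memory|oom-killer|killed process' /var/log/messages 2>/dev/null \
    || echo "no OOM kills logged (or no read access)"
# Or query the ring buffer directly:
dmesg | grep -iE 'oom-killer|killed process'
```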

Quote:
RAM and swap filling up sounds like a program with a 'memory leak'.

I rarely run things that actually balloon out of control with any kind of speed. Whenever I get my system lockup problems, it's because I've started up a few too many things over time, and there is one final program that breaks the camel's back. I've learned to deal with this by keeping an eye on memory usage manually, and closing things manually as I start reaching memory limits. But I would expect this to be something my OS does for me automatically, or at least warns me about automatically. I do run KSysGuard, but I guess it's time I embedded it in my taskbar so it's always visible. It's challenging, though, because usually it happens when I am hovering at, say, 6.5 out of 8 GB RAM usage, and then I start up a greedy app, like video processing or an emerge, that tries to consume 2 GB or so, and the system grinds to a halt before I can manually kill windows or processes.

Is there anything I can do or try? Add a swapfile? Tweak swappiness? Tweak OOM killer settings, or other kernel/system settings?
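As a sketch of the knobs in question: swappiness is readable without privileges, and a swap file can be added without repartitioning (the swap-file commands need root, and the 4 GB size is illustrative, not a recommendation):

```shell
# How eagerly the kernel swaps (0-100, default 60; higher = swap sooner):
cat /proc/sys/vm/swappiness

# As root, a swap file can be added on the fly (sketch):
#   dd if=/dev/zero of=/swapfile bs=1M count=4096
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
#   sysctl vm.swappiness=10   # and tweak swappiness if desired
```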

Quote:
If the machine really hard-locks it's usually for some other reason. It might be related to the RAM filling up, but it's not caused by it. Maybe some buggy fs or graphics driver which doesn't like the fact that there's no free RAM. Or some bad stick of RAM which produces a kernel panic.

The thing is, it's usually not a sudden lockup. The system will show signs of slowing and thrashing for about a minute, and then whatever was filling up the memory will fill it the rest of the way, and then the lockup happens. But for a minute or so, I am able to slowly move the mouse around at a framerate of one per 2 seconds. Maybe the computer is mocking me, like torturing the victim with pain before actually killing it, heh.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 32098
Location: 56N 3W

PostPosted: Wed May 28, 2014 9:17 pm

Pistos,

While the kernel can still allocate memory to apps (and that includes refusing an allocation), the OOM killer won't do anything.
It's up to apps to deal with the return codes the kernel provides.

The OOM killer is only invoked when the kernel needs RAM (for itself) and insufficient RAM can be allocated.
The choice then is to kill a user task or kernel panic.

Swapping user code or data is just as slow as swapping anonymous memory but having and using swap cuts down on the amount of swapping of user code or data.
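The kernel's overcommit accounting can be inspected directly; a read-only sketch of where the relevant limits live:

```shell
# Overcommit policy: 0 = heuristic (default), 1 = always grant, 2 = strict:
cat /proc/sys/vm/overcommit_memory
# Under strict accounting (mode 2), allocations fail once Committed_AS
# would exceed CommitLimit; compare the two counters:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```

In the default heuristic mode the kernel happily promises more memory than it has, which is exactly the situation where the OOM killer, rather than a failed malloc, ends up resolving the shortage.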
i92guboj
Moderator

Joined: 30 Nov 2004
Posts: 10028
Location: Córdoba (Spain)

PostPosted: Thu May 29, 2014 7:13 am

Even if the problem is a misbehaved app that leaks like mad, this shouldn't happen.

There must be something unusual that we are not considering here. The first thing would be to make sure that it's a definitive lockup, and not something else.

For example, and assuming your kernel is configured correctly, does the SysRq stuff work at all? If so, you can try to invoke the OOM killer using it; you can also try to get a core dump using SysRq. If the system recovers after invoking the OOM killer, then it's not a hard lock, but something bringing the machine to its knees, probably due to I/O (that could be a faulty HD, a faulty mobo, faulty RAM... even a faulty power supply, if you ask me). Also, disable any cpufreq-related stuff while testing.
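To check SysRq availability before the next lockup (the privileged lines are shown commented; `echo f` really does force one OOM-killer pass, so use it with care):

```shell
# Is the magic SysRq key enabled? 1 = all functions, 0 = disabled,
# other values are a bitmask of allowed functions:
cat /proc/sys/kernel/sysrq
# As root, enable it at runtime:
#   echo 1 > /proc/sys/kernel/sysrq
# Alt+SysRq+F, or the following as root, invokes the OOM killer once:
#   echo f > /proc/sysrq-trigger
```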

If it's an I/O problem, one culprit could be that old rule of thumb of having 2x RAM of swap space, which, on slow drives (that's most of them) and with today's RAM sizes, will have you waiting hours for the swap space to fill before the OOM killer gets into the scene.

If the machine is completely frozen (you can't sync/reboot using SysRq), then it's time to check more things. I'd start with the most basic stuff: the temperature, also for the HD and GPU, if your machine has sensors for all that. If they're normal, please try to reproduce this in the console, to rule out graphics driver issues.

I assume you are using an unpatched vanilla kernel and no 3rd-party binary blobs (nvidia, ati, or whatever); if that's not the case, you know what to do next...

What I mean is that there must be a reason for this, either a hardware or driver issue. No userland program can crash the system like this unless your kernel permits it, and it usually does not. This kind of nasty bug is usually introduced by either a 3rd-party blob, an experimental driver or a hardware issue.

:)
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Thu May 29, 2014 1:56 pm

i92guboj: Great advice, thanks. That's a lot of things to explore, but I'll try to take it step by step. I'll get back to you on results. But, it's never a "quiet" hard lock, it's always a "more and more things contending for disk I/O until they all logjam in the bottleneck" kind of thing. And I have my doubts that it is specific to this machine, because I have seen the same behaviour from probably 3 different boxen over the years. I've been through ATI, nvidia and Intel for graphics -- doesn't matter. All give the same problem. So maybe it's just the way I have been (mis)configuring Gentoo again and again. And, to reiterate, this machine in particular has no swap partition or file.
i92guboj
Moderator

Joined: 30 Nov 2004
Posts: 10028
Location: Córdoba (Spain)

PostPosted: Thu May 29, 2014 2:04 pm

Another thing that comes to my mind: boot off a live disk and run rkhunter and chkrootkit over your system.

Just in case.

Also, mind that even if no process is using all your CPU, there are other ways to take down a working system. For example, read up on fork bombs. In such a case, having a powerful CPU only makes it worse.
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Thu May 29, 2014 2:40 pm

Quote:
What I mean is that there must be a reason for this, either a hardware or driver issue. No userland program can crash the system like this unless your kernel permits it, and it usually does not. This kind of nasty bug is usually introduced by either a 3rd-party blob, an experimental driver or a hardware issue.

To be totally honest, from what I can tell, I doubt it's ever any one single process or application being naughty. What it looks like is just multiple individually well-behaved, non-malicious processes contending for RAM, or disk, or both, until the system can't handle it any more.
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Thu May 29, 2014 3:03 pm

I'm also surprised none of the readers of this thread have ever experienced what I have. Surely I'm not the only Gentooist to open up several memory-hungry apps at once, like: emerge, video processing, GIMP, MySQL/PostgreSQL, some web daemons (Rails, Django, what-have-you), Apache, many browser tabs... this all seems like very common, normal computer usage. Under these conditions, you guys just see windows disappearing nicely, and your system remaining stable? Because in my decade plus of using Linux, I have never, ever seen that happen.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 32098
Location: 56N 3W

PostPosted: Thu May 29, 2014 4:53 pm

Pistos,

That's just what I see.

I recall an experimental Xorg that leaked RAM... it would hog all of swap and RAM in about 10 min, then the OOM killer kicked in.

Committing all of RAM +2G to a series of VMs made things very slow both on the host and the VMs but it did not run out of RAM.
Bits of the VMs got swapped out.
Note to self - Over committing RAM like that is a really bad idea.
steveL
Advocate

Joined: 13 Sep 2006
Posts: 2700
Location: The Peanut Gallery

PostPosted: Thu May 29, 2014 5:01 pm

Pistos wrote:
I'm also surprised none of the readers of this thread have ever experienced what I have. Surely I'm not the only Gentooist to open up several memory-hungry apps at once, like: emerge, video processing, GIMP, MySQL/PostgreSQL, some web daemons (Rails, Django, what-have-you), Apache, many browser tabs... this all seems like very common, normal computer usage. Under these conditions, you guys just see windows disappearing nicely, and your system remaining stable? Because in my decade plus of using Linux, I have never, ever seen that happen.

Yes I do: I occasionally have a web-browser (konqueror instance) terminated, but far more likely is for the system to slow down when doing a massive build, eg of firefox, or libre-office. It might take a while as it swaps out to disk, but the system unwinds itself gracefully and eventually everything is dealt with and the build continues at normal pace. I find it quite impressive, even if I know it's fairly simple at a code level, in the overall scheme of things. It's lessened now that I've turned down the debug settings for certain packages.

4G of RAM, swap on two drives (2G and 2.4G). Used to be 2G.

Yes, I'm on an old machine, but then I remember running full KDE on RH5/6, on a K6-2/450 MHz with 64 MB RAM. And that was considered more than enough for Windows (98) at the time.
So I really don't buy that we need 8G+ to run a desktop: too much bloat. It's certainly not the case that a desktop does anything significantly more, in HCI and Computing terms, than it ever did, nor do I want it to: I'd like it to support my work, not require work.
Kidov
n00b

Joined: 20 Jul 2006
Posts: 55
Location: Finland

PostPosted: Sat May 31, 2014 3:51 pm

I ran some tests with my setup (Xeon E3-1230 V2 @ 3.30GHz and 16GB of RAM).

I had two virtual machines running, a few different web browsers, gimp and lots of other stuff. On the first attempt, KDE became very unresponsive; the mouse was moving very slowly. I tried to switch to the console, but that killed KDE completely. The second attempt was more successful: I had almost all memory in use and then started to emerge mplayer2. KDE became a bit slow for a few seconds and then everything was back to normal; the second virtual machine had been terminated. I suppose that in my setup KDE and (or) nvidia drivers are causing most of my lockups.
steveL
Advocate

Joined: 13 Sep 2006
Posts: 2700
Location: The Peanut Gallery

PostPosted: Sat May 31, 2014 4:34 pm

Hmm I suppose I should mention I run KDE without semantic-craptop (and am still on 4.10 atm.) Installing a production DB server on my desktop or laptop simply isn't something I want to do, and in fact I had real slowdown problems (and flat out bugs) when I did have it.

The hardest thing was to lose KMail after 15 years of using nothing else; it took me a few months even to get the motivation together to sort out email. Mutt is, however, beautifully lightweight, and reminds me of pine; I didn't even need notmuch indexing in the end, the qdbm backend is fast enough for me. Plus I finally got to learn procmail, which had always been something I wanted to know about. What I especially love is how quick it is, in yakuake; multiple startup is not an issue, and nor is shutting down with them open.

That approach isn't for everyone, since you have to keep up with changes in the tree (eg I need to patch Konversation not to pull in qt-mysql), but it sure made KDE-4.x worth using again for me (I was ready to ditch it altogether.) I wrote up converting from KMail to mutt, and you can also get rid of *kit as well, which again leads to fewer headaches ime.

Now KDE-4.10 feels like 3.5 used to: very slick, quick and smooth. Plus kate is a whole lot better than it used to be. ;)
Chiitoo
l33t

Joined: 28 Feb 2010
Posts: 877
Location: Here and Away Again

PostPosted: Sun Jun 01, 2014 6:29 pm    Post subject: ><)))°€

Hmmm.

Story time!


Some time ago, someone brought me a tower (Packard Bell) with around 2 GiB of RAM, an Intel Core 2 and a GeForce 7900 GS running Windows Vista, which did not boot. They wanted me to recover some pictures from it, so I cloned the drive away and looked into the Vista installation.

My investigation led me to believe that the 24 or so bad sectors on the drive had unfortunately eaten up the MFT(s), with no recovery possible. I got the files out with the combination of ddrescue, testdisk, and photorec (thanks girls/guys!). It's somewhat surprising to me how few open/free recovery software options there still are, but I guess there's a good reason for it somewhere.
The files were without proper names since the MFT and its mirror were gone, and although Restorer 2000 (a very old version I still have laying about) actually managed to find them with proper names, it couldn't restore them (I probably failed to find the correct partition, I maybe guess).


The recovered images and videos were a bit under 100 GiB in total, and around 68000 of them were jpg-files. I installed a couple of variants of Linux Mint to the probably failing drive, and shoved the files in for the owner(s) to browse through them (and save whatever they want to keep elsewhere).

Opening up the directory containing the 68k or so images with the manager of files on Cinnamon (Nemo, I guess) didn't prove to be too grand an idea. The result was much what was described here: an unresponsive system with the drive apparently being read a lot (and thumbnails written, probably). The mouse cursor would move ever so slightly now and then, but I couldn't get it to switch to the console (I'm pretty sure SysRq wasn't doing its magic either).

I let it sit there for a while, but it didn't seem like it was going to unfurl itself any time soon, and I certainly wouldn't wait for it to go through all the sixty-eight thousand images.

I did that a couple of times as a test, until it broke the directory completely (I/O error when trying to read it... it was NTFS, and while a command-prompt of a Windows 7 installation-disk would show files there, its chkdsk would seemingly freeze up while doing something).


Gentoo, however, has never really frozen up on me, unless I was having hardware issues (which obviously can't be ruled out in the above). There have been times that I've had (a heavily stripped) KDE stutter, but not to that extent.

Something close to that has happened when running a make on a kernel with the -j option (that is, plain -j without a number... oopsies!), but it would not take too long until processes were killed and the system was responsive again.

Another example could be building something like firefox and thunderbird simultaneously. Memory on this machine is around 8 GiB, and swap only 0.5 GiB, which is rarely used (hence its size; and well, it was the handbook example back when this thing was installed!).

Nowadays my builds happen within a 6144M tmpfs, so the latter example will probably fail before freezing up anything. ^^;


Edit:

I completely forgot a chapter from the story. The installation was using nouveau and other such defaults currently provided by Mint for the Cinnamon desktop release.

I later “had” to switch to using nvidia-drivers, for kernel modesetting would apparently always set an unusable mode for its temporary display (an LG TV). I tried quite a few methods of forcing the mode (Grub(2) configs, xorg.conf, as well as the video and drm_kms_helper.edid_firmware parameters on the kernel command line), and it did seem like they were used, but all the TV would display was 'no signal'.

If I plugged it into my main display, a Benq thingy, it would work fine. I could actually unplug it from the Benq, and plug it in on the TV again, and it would work, but that was quite tiresome, and nomodeset isn't really an option due to the low performance. Not to mention the fact that whenever KMS adjusted the mode, like for example when launching a game, it would go all no signal on me again...

It's an interesting bug, at least to me, and I'll have to put it on the long list of things to do (reproduce it with Gentoo, and if possible, then create a topic about it somewhere around here). As if that list was not long enough already.

On my main installation, I've been using nvidia-drivers since I do quite a bit of gaming, but I am planning on trying nouveau some time soon, as I know (and have tested on other machines) that it has come quite a long way from what it was only few years ago. Or so I maybe guess at least.
_________________
Kind Regards,
~ The Noob Unlimited ~

Sore wa sore, kore wa kore.
Pistos
Tux's lil' helper

Joined: 29 Jul 2003
Posts: 133
Location: Canada

PostPosted: Mon Jun 02, 2014 3:56 am

I'm glad it's working for everyone except me. On the one hand, I'm jealous. But on the other hand, it gives me hope that I can maybe tweak one or two things, and get a normal GNU/Linux system like everyone else.
kernelOfTruth
Watchman

Joined: 20 Dec 2005
Posts: 5720
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Mon Jun 02, 2014 3:47 pm

@Pistos:

give zram and zswap a try :)


append to kernel boot:

zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=40


/etc/local.d/zram.start :

Code:
#!/bin/sh

modprobe zram num_devices=2

echo lz4 > /sys/block/zram0/comp_algorithm
echo 4 > /sys/block/zram0/max_comp_streams

echo lz4 > /sys/block/zram1/comp_algorithm
echo 4 > /sys/block/zram1/max_comp_streams

SIZE=8192

echo $(($SIZE*1024*1024)) > /sys/block/zram0/disksize
echo $(($SIZE*1024*1024)) > /sys/block/zram1/disksize

mkswap /dev/zram0
swapon -d /dev/zram0 -p 10

mkfs.jfs -L portage_tmp /dev/zram1
mount -o noatime,nodiratime,nointegrity,discard,async /dev/zram1 /var/tmp
chmod a+w /var/tmp



So one 8 GB zram device is for /var/tmp, and the other is for swap (4 GB would probably also work ;) ). Note the discard support.



Before upgrading to a newer rig I also had 8 GB of RAM, ZFS (mirror mode), dozens of tabs open in firefox and chromium, playing flash, videos, music streaming, etc. etc., and had no issues.


The most preferable kernel (since it has some neat performance-related zram updates) would be 3.15*.


Otherwise, if you're curious and brave, I could offer my patched-up 3.14.5 kernel for testing. This offers a glimpse of what will be in the next kernel releases (backups are a prerequisite); included are:

- BFS
- BFQ
- latest Btrfs code
- fadvise updates
- madvise support [ http://marc.info/?l=linux-kernel&m=139683911004690&w=2 ]
- mm: per-thread vma caching
- mm: compaction related updates
- fat fallocate support
- mm: thrash detection-based file cache sizing (this one's probably very important in that regard) [ http://lwn.net/Articles/552327/ ]
- zram (the mentioned zram and zswap performance & compression updates, added discard support)
- zswap (performance-related updates)
- mm: vmscan related updates
- 3.15-based updates (RCU, x86 kaslr, locking, timers/nohz)
- some workqueue updates
- and some more
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.3.0-r2
2.6.37.2_plus_v1: BFS, CFS,THP,compaction, zcache or TOI
Hardcore Linux user since 2004 :D
sundialsvc4
Guru

Joined: 10 Nov 2005
Posts: 436

PostPosted: Mon Jun 02, 2014 4:46 pm

As far as I am dimly aware, the OOM-killer logic is only concerned with a dire shortage of physical RAM, and only to the extent of allowing the kernel to continue to run. It doesn't deal with massive overcommits of virtual memory. And, really, I don't know of anything that does.

The only way that I know of to deal with this sort of problem is to avoid it, e.g. with the "ulimit" command, which is not terribly effective.
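A minimal sketch of that ulimit approach; 'some-greedy-app' is a placeholder name, and 2097152 KiB = 2 GiB:

```shell
# Cap virtual memory in a subshell so a greedy job fails its allocations
# (and presumably exits) instead of thrashing the whole box:
(
  ulimit -v 2097152
  ulimit -v              # prints the cap now in effect: 2097152
  # exec some-greedy-app
)
```

The caveat above stands: the limit is per process, so a dozen well-behaved processes can still jointly exhaust RAM without any one of them hitting its cap.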

My overall experience, with big iron and with this stuff, is that if you allow a system to attempt to over-commit itself, it probably will, and then it will roll over, puke all over your nice carpet, and then die.
steveL
Advocate

Joined: 13 Sep 2006
Posts: 2700
Location: The Peanut Gallery

PostPosted: Mon Jun 02, 2014 6:18 pm

I find it a bit concerning that people have said swap does not work (ie is not even used.) My experience, as stated above, has been that it swaps out to disk, which slows everything down (in itself quite useful imo), and eventually unwinds it all.

I don't think I've ever run out of swap; it's always been something I've used at double RAM, though I'm on same-as-RAM now. Hmm guess I must've when firefox took so long (ie before the debug turn-down.)

I don't use tmpfs for anything ofc, including /tmp, quite deliberately. I've never seen the attraction of that for /tmp, since afaic the buffer cache does that anyhow if a file is ephemeral, and if it's not (or it's a download, which some apps will quite reasonably use /tmp for) then I don't want it using my RAM; I'd rather it went out to disk (which are spinning platters, not SSD.)
Chiitoo
l33t

Joined: 28 Feb 2010
Posts: 877
Location: Here and Away Again

PostPosted: Mon Jun 02, 2014 8:33 pm

Pistos wrote:
I'm glad it's working for everyone except me. On the one hand, I'm jealous. But on the other hand, it gives me hope that I can maybe tweak one or two things, and get a normal GNU/Linux system like everyone else.

I'm sure it's possible. There are quite a few users around who are not afraid of looking at kernel configuration files and are pretty good at spotting things that might not necessarily be ideal.

That's not to say it is a kernel configuration thing.

I haven't actually used gentoo-sources for quite some time, so I'm not entirely sure what those are like now. I've been using ck-sources since 3.8 or so if I don't remember terribly wrong.

Now that I think back more, I might have had the slow-downs happen quite a bit at one point where I had only 2 GiBs of RAM. I had 4 during the start of my journey with Gentoo, but I discovered half of it was bad (segmentation faults during compilations made it quite apparent). So it was that I cruised along with only 2 GiBs for a time, and compiling anything slightly big would indeed be troublesome. That's when I temporarily whipped up more swap to alleviate that.

steveL wrote:
I've never seen the attraction of that for /tmp, since afaic the buffer cache does that anyhow if a file is ephemeral, and if it's not (or it's a download, which some apps will quite reasonably use /tmp for) then I don't want it using my RAM; I'd rather it went out to disk (which are spinning platters, not SSD.)

I decided to try it out some time ago, since I don't actually use a lot of my RAM normally otherwise (I guess I felt bad for the 8). I can't say I've noticed much of a difference, though I didn't really do any tests other than timing some builds that might not have had any benefit from it in the first place.


It's not often that I run things like virtual machines, or anything else that would actually make use of RAM that much, so that's probably the main reason for me not having this issue...
kernelOfTruth
Watchman

Joined: 20 Dec 2005
Posts: 5720
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Mon Jun 02, 2014 11:09 pm

I ran a virtualbox (Windows XP, 3 GB of RAM) on top of ZFS, so RAM was pretty full and things started to slow to a crawl; with the above-mentioned zram and zswap it worked fine :)

So swap might be needed, but not in the form of a physical swap PARTITION :wink:
eccerr0r
Advocate

Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Mon Jun 16, 2014 8:46 pm

The only time I get close to running out of RAM is either when running VMs or when linking firefox on a <1 GB computer. What's using the RAM largely determines what can and can't be swapped. If you're running software that has a memory leak, that leaked memory is a prime candidate for swap, and you should have swap to cover for this. Since it's leaked memory, it's never touched again, and won't ever get swapped back into RAM until the program ends or gets killed and the OS cleans up the pages. Swap is good.

However, if the software constantly accesses every page it has allocated, then it ends up being a bad candidate for swap and will confuse the swap scheduler. Plus, there's nothing that can really be done except pretty much manually killing or exiting the application: as it's not asking for more pages and has everything it needs, it's all fine and dandy, and the OOM killer will have a hard time selecting it. This is the situation where UI performance will suffer. Hence the curiosity: what is being run that's causing this issue?

I don't know if perhaps the best way to handle OOM is to kill the biggest RAM user first, but that would make it easy to DoS other people's programs in a multiuser situation. Hence it's not used, at least by default.
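The kernel's actual per-process "badness" scores can be inspected directly; a sketch that lists the current top candidates (the OOM killer picks the process with the highest oom_score):

```shell
# List the top OOM-killer candidates by current badness score:
for p in /proc/[0-9]*; do
  score=$(cat "$p/oom_score" 2>/dev/null) || continue
  comm=$(cat "$p/comm" 2>/dev/null)
  printf '%6s  %s\n' "$score" "$comm"
done | sort -rn | head
```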

I've yet to see degraded performance on my 8G machine due to OOM as, other than VMs, I don't really run anything that uses enough RAM to fill all 8GB. (On my 8GB machine, I have PORTAGE_TMPDIR on tmpfs, and since building doesn't have to do writeback to the slower disk, it is indeed faster than simply letting the cache handle everything.)
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
Page 1 of 2