mole Tux's lil' helper

Joined: 07 Nov 2009 Posts: 84
Posted: Sun Nov 12, 2023 7:25 pm Post subject: [SOLVED] 45.3GB trimmed, but no space is recovered |
I've an NVMe drive that appears to have 45GB "missing":
Code: | /dev/nvme0n1p4 851G 806G 2.1G 100% /data |
fstrim isn't set to run automatically. When I run fstrim as root it reports:
Code: | fstrim -v /data
/data: 45.3 GiB (48638926848 bytes) trimmed |
but the space is not recovered. This happens every time fstrim is run: always about 45GB is reported as trimmed, but the space never appears.
I've re-mounted the partition and rebooted but it makes no difference.
The output of lsblk --discard is:
Code: |
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda 0 0B 0B 0
├─sda1 0 0B 0B 0
├─sda2 0 0B 0B 0
└─sda3 0 0B 0B 0
nvme0n1 0 512B 2T 0
├─nvme0n1p1 0 512B 2T 0
├─nvme0n1p2 0 512B 2T 0
├─nvme0n1p3 0 512B 2T 0
└─nvme0n1p4 0 512B 2T 0
|
So trimming should be supported, but fstrim appears to behave as if it were run with the --dry-run option.
On another PC, it seems to work (or at least do something)
Code: | /dev/nvme0n1p5 354G 131G 205G 39% /home |
Code: | fstrim -v /home
/home: 223 GiB (239429185536 bytes) trimmed |
That gives an extra 1GB without deleting any files:
Code: | /dev/nvme0n1p5 354G 131G 206G 39% /home |
It doesn't appear to be df -h reporting an incorrect value, as software gives critical warnings regarding disk space ("available=2.74GiB") and shuts down when 2GiB is reached.
Both filesystems are ext4; the only difference is the p5 /home partition has been filled slowly with normal use, while the p4 /data partition has been filled quickly - over a day or so.
I've tried mounting /dev/nvme0n1p4 both with and without the discard option; it's normally mounted without it.
gdisk reports the partition size as 865.3 GiB (i.e. 929.1 GB), and the df manpage describes the -h flag as giving sizes in "powers of 1024", so I don't think units are involved.
The fstrim claims of GiB trimmed do seem larger than the final free space on the second machine though.
Would fstrim exclude very recent data? Or is there just something I'm missing?
Last edited by mole on Mon Nov 13, 2023 7:28 am; edited 1 time in total |
krotuss Apprentice

Joined: 01 Aug 2008 Posts: 250
Posted: Sun Nov 12, 2023 10:54 pm Post subject: |
fstrim has nothing to do with increasing the free space reported by the df utility; it merely notifies the drive about which blocks are empty.
As for the used/free/total space discrepancy: the difference is about 42.9GB, which is roughly 5%, a common amount of reserved space on a filesystem. Although I am not 100% sure that that is really the case, or how df reports it on ext4. |
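If the reserved-space theory is right, it can be checked directly. A minimal sketch, assuming the device name from the post above (tune2fs needs root and the real device; the arithmetic runs anywhere):

```shell
# The ext4 superblock records a reserved block count that df
# excludes from the "available" column for ordinary users.
tune2fs -l /dev/nvme0n1p4 2>/dev/null | grep -i 'reserved block count' \
  || echo "(needs root and the actual device)"

# Back-of-the-envelope check: the ext4 default reserve is 5%,
# and 5% of the 851G filesystem is right around the missing ~43G.
echo "approx reserve: $((851 * 5 / 100))G"
```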
mole Tux's lil' helper

Joined: 07 Nov 2009 Posts: 84
Posted: Mon Nov 13, 2023 7:27 am Post subject: |
Yes, I misunderstood what fstrim actually does and was off down a rabbit hole. It was the reserved space as you suggested:
tune2fs showed:
Code: | Block count: 226823345
Reserved block count: 11341167
Overhead clusters: 3842112
|
after Code: | tune2fs -m 0 /dev/nvme0n1p4 |
df -h shows Code: | /dev/nvme0n1p4 851G 806G 46G 95% /data |
The partition is for data only, no system processes, and I'll live with fragmentation until the larger disk arrives |
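The tune2fs numbers line up with gdisk, too. A quick sketch, assuming the usual 4 KiB ext4 block size (the tune2fs output above doesn't show it):

```shell
block_size=4096                          # assumed ext4 block size in bytes
reserved=$((11341167 * block_size))      # "Reserved block count" from tune2fs
total=$((226823345 * block_size))        # "Block count" from tune2fs

echo "reserved: $((reserved / 1024 / 1024 / 1024)) GiB"  # ~43 GiB, the "missing" space
echo "total:    $((total / 1024 / 1024 / 1024)) GiB"     # ~865 GiB, matching gdisk
```

That ~43 GiB reserve is almost exactly the space that reappeared after tune2fs -m 0.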
logrusx Advocate


Joined: 22 Feb 2018 Posts: 3029
Posted: Mon Nov 13, 2023 10:13 am Post subject: |
Add the discard option for your root fs in /etc/fstab so that you don't need to run fstrim manually and don't cause unnecessary wear on the SSD. Here's mine:
Code: | UUID=.. / ext4 noatime,discard 0 1 |
Best Regards,
Georgi |
irets Apprentice


Joined: 17 Dec 2019 Posts: 234
Posted: Tue Nov 14, 2023 3:21 pm Post subject: |
logrusx,
Are there any downsides to having the discard mount option on a SSD root?
Should I always use that? |
NeddySeagoon Administrator


Joined: 05 Jul 2003 Posts: 55304 Location: 56N 3W
Posted: Tue Nov 14, 2023 4:08 pm Post subject: |
Irets,
That's a hard one. It depends on how the SSD firmware implements its garbage collection.
In some SSDs, it's an immediate instruction; in others, the firmware takes notes and does the garbage collection later, when there is a lot of it to do.
It's not easy to tell from the outside.
Immediate garbage collection increases the wear, as the erase block size is much bigger than the read/write block size.
So blocks still in use need to be moved out of the way of the region to be erased.
Using fstrim in a cron job, or as part of your update cycle, puts the actual erasing back in your control, regardless of how the drive implements it.
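For example, a root crontab entry along these lines (the schedule and path are illustrative; util-linux also ships an fstrim.timer unit for systemd users):

```shell
# m h dom mon dow  command
# Trim all mounted filesystems that support discard, Sundays at 03:00;
# fstrim --all silently skips filesystems without discard support.
0 3 * * 0  /sbin/fstrim --all
```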
It's an open question whether the excess write amplification makes any practical difference to drive life.
One drive I have has a projected write life of 180 years. I won't care if it's worn out then :)
In 10 years it will be too small/slow/old and will be replaced. Write life is not the problem it once was.
My first SSD, over 10 years old now, failed in September. Not from write life, but from a sudden rash of bad blocks.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
logrusx Advocate


Joined: 22 Feb 2018 Posts: 3029
Posted: Wed Nov 15, 2023 7:55 am Post subject: |
Irets wrote: | logrusx,
Are there any downsides to having the discard mount option on a SSD root?
Should I always use that? |
I can't tell; Neddy has already shared some thoughts. But what I can tell you is that not having any discard/trim mechanism set up is the worst option of them all.
toralf Developer


Joined: 01 Feb 2004 Posts: 3943 Location: Hamburg
Posted: Wed Nov 15, 2023 9:11 am Post subject: |
logrusx wrote: |
I can't tell; Neddy has already shared some thoughts. But what I can tell you is that not having any discard/trim mechanism set up is the worst option of them all.
|
I do disagree. Trim might be beneficial, but is not mandatory IMO. |
logrusx Advocate


Joined: 22 Feb 2018 Posts: 3029
Posted: Wed Nov 15, 2023 2:09 pm Post subject: |
toralf wrote: | logrusx wrote: |
I can't tell; Neddy has already shared some thoughts. But what I can tell you is that not having any discard/trim mechanism set up is the worst option of them all.
|
I do disagree. Trim might be beneficial, but is not mandatory IMO. |
The argument I've heard in favor of trim/discard is that otherwise the firmware unnecessarily moves around blocks which should already have been discarded, thus contributing to SSD wear. The irony is that it does this to distribute wear evenly. Also, at some point the SSD will find itself full, and I guess it will be unable to write anything unless free blocks are discarded.
What is the argument supporting the claim that trim is not necessary?
Hu Administrator

Joined: 06 Mar 2007 Posts: 23516
Posted: Wed Nov 15, 2023 4:01 pm Post subject: |
The drive ought to perform any needed migrations before it reaches the point that it has no free space in which to work. The argument that trim is not necessary is based on the idea that the drive ought to be rated for enough write-erase cycles that even without trim, the drive is removed from service (whether because it dies of electronics failures or is retired for age) before the write-erase load renders it unusable.
Whether that will happen in practice depends both on the workload and the quality of the drive. I could believe that certain low-end drives have few enough write-erase cycles that a heavy load (such as constantly writing and deleting files, as fast as the drive can handle requests) could kill the drive before it is retired. On the other hand, if a drive is used sparingly (such as a read-mostly workload) or was built to withstand heavy load, it has a good chance of being removed from service before it breaks.
None of this argues that trim cannot improve the situation (assuming the drive's firmware handles trim well; some don't); it only argues that good drives ought to be usable enough even without trim. |
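To put rough numbers on that argument (the endurance rating and write volume here are hypothetical, but typical of a consumer drive):

```shell
tbw=600          # hypothetical endurance rating: 600 TB written
daily_gb=20      # hypothetical average host writes per day, in GB

days=$(( tbw * 1000 / daily_gb ))
echo "about $(( days / 365 )) years to exhaust the rated writes"  # ~82 years
```

At anything like that rate the drive is retired for capacity or age long before endurance matters; a workload that constantly rewrites the drive changes the arithmetic completely.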
logrusx Advocate


Joined: 22 Feb 2018 Posts: 3029
Posted: Wed Nov 15, 2023 4:26 pm Post subject: |
Hu wrote: | it only argues that good drives ought to be usable enough even without trim. |
To my knowledge, trim informs the drive which blocks are free; otherwise it doesn't know they can be overwritten. Normal usage of a drive will sooner or later lead to a situation where the drive thinks there are no free blocks left to write to.
Maybe there are cases where this won't happen in the foreseeable future, but here we're talking about regular use, if I'm not mistaken. I also don't see any benefit in not trimming the device.
NeddySeagoon Administrator


Joined: 05 Jul 2003 Posts: 55304 Location: 56N 3W
|
Posted: Wed Nov 15, 2023 5:14 pm Post subject: |
|
|
logrusx,
Quote: | To my knowledge, trim informs the drive which blocks are free. Otherwise it doesn't know they can be overwritten. Normal usage of a drive will sooner or later lead to a situation a drive thinks there are no free blocks to be written on. |
That can't be correct as LUKS on SSD works. Part of the LUKS setup, by default, is to fill the container with random data, which writes the entire LUKS container.
Adding user data still works on SSD though.
Conversely allowing trim on a LUKS volume erases the free space, so the location of user data stands out.
The file system allocates that block to this file.
Trimming permits the drive to erase 'free' space ahead of the need to write it, which maintains write speed once every block has been written once.
It gets more complex with drives that have 'over-provisioning', as blocks in the over-provisioned pool cannot, by definition, hold user data.
Thus the drive is free to swap erased blocks from the over-provisioned pool to the user pool when it needs an erased block, then erase the 'dirty' block as it swaps it into the over-provisioned pool.
None of this avoids the write amplification that is a feature of SSD wear levelling.
Even blocks containing data that never changes take part in wear levelling. e.g. LBA 0, where the partition table is.
There was a well known firmware bug in some drives where the partition table would be erased :) |
Hu Administrator

Joined: 06 Mar 2007 Posts: 23516
|
Posted: Wed Nov 15, 2023 5:15 pm Post subject: |
|
|
I expect that the drive keeps a reserve of sectors that are not addressable by the OS, which it can use as a scratch space during wear leveling because the OS can never make those sectors dirty.
The only benefits to not trimming are (1) if you have an encrypted device and want to make it unclear which areas are free or (2) if your drive's implementation of trim is of such poor quality that trimming causes more problems than it solves.
As Neddy mentions, some early drives were bad about taking trim as an immediate command, such that the drive was compelled to set aside all other work and perform the trim immediately. I think I also read about drives that could only handle a trim if no other commands were in the queue from the OS to be processed. There may be other ways in which poor quality implementations made their users unhappy. |
logrusx Advocate


Joined: 22 Feb 2018 Posts: 3029
|
Posted: Wed Nov 15, 2023 5:22 pm Post subject: |
|
|
Neddy, Hu, thank you for taking the time to explain and dispel the wrong knowledge I have :)