Gentoo Forums :: Kernel & Hardware

Filesystem for SSD? (page 3 of 4)

Fitzcarraldo (Advocate)
Joined: 30 Aug 2008 | Posts: 2034 | Location: United Kingdom
Posted: Sun Oct 13, 2019 11:01 am

The_Great_Sephiroth wrote:
Mike, I ruled out ext4 because it is a journaled system.

It is possible to create an ext4 partition without journalling:

Code:
mkfs.ext4 -O ^has_journal /dev/sda2

It is also possible to disable (and enable) journalling in an existing unmounted ext4 partition:

Code:
tune2fs -O ^has_journal /dev/sda2

If a user does want ext4 to journal, it is even possible to configure ext4 to place the journal on a different device/partition.
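
For example, an external journal can be attached like this (just a sketch; /dev/sdb1 as the journal partition and /dev/sda2 as the filesystem are placeholder device names, and the journal device must be created with the same block size as the filesystem):

Code:
mke2fs -O journal_dev /dev/sdb1         # create a dedicated journal device
tune2fs -J device=/dev/sdb1 /dev/sda2   # attach it to the (unmounted) ext4 filesystem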

So ruling out using ext4 on an SSD for reasons of journalling is not really a valid argument, because journalling is optional in ext4.

Code:
# dmesg | grep EXT4
[    1.947113] EXT4-fs (sda2): mounted filesystem without journal. Opts: (null)
[    2.063456] EXT4-fs (sda2): mounted filesystem without journal. Opts: (null)
[    5.351241] EXT4-fs (sda2): re-mounted. Opts: (null)
# blkid
/dev/sda1: UUID="49CA-4AC5" TYPE="vfat" PARTUUID="22be1d7f-92b3-4c4f-b2d3-d4d160a5139f"
/dev/sda2: UUID="85440e0c-2335-4746-97be-7ad3efc9a43b" TYPE="ext4" PARTUUID="f50d5459-b956-f348-b23b-224b127bbb6b"
/dev/sda3: UUID="bf467dd7-9d64-4b0a-a732-50bfa16f7c97" TYPE="swap" PARTUUID="047bbada-08c5-c142-b604-dfe874d65cf3"

_________________
Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC udev elogind & KDE on both.

Fitzcarraldo's blog

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Sun Oct 13, 2019 2:41 pm

Lots of detailed info here. I am now reading about things I had no clue about. I did not realize that as of 2015 some SSDs could start losing data in as little as seven days without power. I also did not realize that SLC and MLC are more resistant to bit-rot than TLC. My work laptop gets used enough, but I am concerned about bit-rot now. While F2FS is light-years faster than BTRFS, ext4, or other filesystems on an SSD, only BTRFS offers bit-rot protection. However, BTRFS will not allow DUP mode on an SSD, so there is no protection. The SSD is supposed to check for data corruption, but what if it finds some? Does the SSD keep a good backup copy? If not, do I just lose the data and have to restore from backups?

The more I read the more I regret purchasing an SSD over an HDD. Data integrity is God here. SSDs seem to have so much against them, but everybody loves speed. I am seriously considering never using an SSD again. At least until they are as reliable as an HDD. I can do DUP mode on an HDD and never lose a single bit of data. What do I do on an SSD to prevent bit-rot?

*EDIT*

I forgot to mention that while I considered making two identical-sized partitions on the SSD and doing BTRFS RAID1 with them, I have chosen not to because I am fairly sure that most SSDs will see identical data and just reference the data for the second copy. This means that if the actual copy on the SSD is corrupted, both copies are since only one exists. Am I correct here?
_________________
Ever picture systemd as what runs "The Borg"?

Fitzcarraldo (Advocate)
Joined: 30 Aug 2008 | Posts: 2034 | Location: United Kingdom
Posted: Sun Oct 13, 2019 3:13 pm

I too am wary about the UBER (Uncorrectable Bit Error Rate) in SSDs. Has much changed in the last three or four years with SSD technology? The last two articles I read on SSD reliability were both from 2016 (see links below). Have you seen any recent articles on the latest SSDs?

CyberStreams - 2016’s SSD (Solid State Drive) Reliability Report
Quote:
Overall, SSD flash drives experience significantly lower replacement rates (within their rated lifetime) than hard disk drives. The only catch is that they experience significantly higher rates of uncorrectable errors than hard disk drives.


ZDNet - SSD reliability in the real world: Google's experience (February 25, 2016)
Quote:
KEY CONCLUSIONS
• Ignore Uncorrectable Bit Error Rate (UBER) specs. A meaningless number.
• Good news: Raw Bit Error Rate (RBER) increases slower than expected from wearout and is not correlated with UBER or other failures.
• High-end SLC drives are no more reliable than MLC drives.
• Bad news: SSDs fail at a lower rate than disks, but UBER rate is higher (see below for what this means).
• SSD age, not usage, affects reliability.
• Bad blocks in new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to die or chip failure.
• 30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment.

Quote:
But it isn't all good news. SSD UBER rates are higher than disk rates, which means that backing up SSDs is even more important than it is with disks. The SSD is less likely to fail during its normal life, but more likely to lose data.


The_Great_Sephiroth wrote:
*EDIT*

I forgot to mention that while I considered making two identical-sized partitions on the SSD and doing BTRFS RAID1 with them, I have chosen not to because I am fairly sure that most SSDs will see identical data and just reference the data for the second copy. This means that if the actual copy on the SSD is corrupted, both copies are since only one exists. Am I correct here?

I don't use BTRFS, but RAID1 does not protect against data corruption. RAID1 is not a backup solution.
_________________
Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC udev elogind & KDE on both.

Fitzcarraldo's blog

Anon-E-moose (Watchman)
Joined: 23 May 2008 | Posts: 6097 | Location: Dallas area
Posted: Sun Oct 13, 2019 4:52 pm

I've been running a Samsung 840 EVO since Dec 2014 and have had zero errors of any kind.
I have several 850s that are a few years newer, but they show the same reliability.

I still back up my drives onto a mirrored RAID and one offline drive, but that's just a standard precaution IMO.

Unless you're buying an el-cheapo SSD, I wouldn't worry about it.

Note the above is about standard SATA SSDs, not the newer NVMe-style SSDs. I have little experience with those (I only have one in a notebook and it gets minimal use).
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland

mike155 (Advocate)
Joined: 17 Sep 2010 | Posts: 4438 | Location: Frankfurt, Germany
Posted: Sun Oct 13, 2019 4:57 pm

The_Great_Sephiroth wrote:
I did not realize that as of 2015 some SSDs could start losing data in as little as seven days without power.

I've never heard about that and it's hard to believe. Please post a link.

NeddySeagoon (Administrator)
Joined: 05 Jul 2003 | Posts: 54208 | Location: 56N 3W
Posted: Sun Oct 13, 2019 4:58 pm

The_Great_Sephiroth,

Provided you can detect bit errors, you can guard against them.
Undetected errors result in data corruption.
When an error is detected, one of two things happens:
a) the error is corrected, as happens with ECC RAM, or
b) the error is detected but nothing can be done except flag it (fail the read).

For the sake of completeness, ECC RAM can detect and correct all single bit errors and detect but not correct two bit errors.
For higher error counts, anything can happen.

In the case of your raid1, a failed read on one copy would result in the other copy being read.
mdadm would rewrite the failed read too, so it was good next time.

Read Partial-response maximum-likelihood and be afraid for your data on rotating rust.
Be very afraid. :)

SSDs do error correction too. The causes and consequences of errors reading SSDs and magnetic hard drives are quite different, so different error correction techniques are required.

If you want to go the raid1 route, you need 3 mirrors. With only 2 mirrors, when one fails you no longer have a second copy.
Raid is not backup anyway, but you are not using raid here as a backup in the conventional sense; it's to ensure data integrity.
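
For what it's worth, creating a three-way mirror with mdadm is straightforward (a sketch; the partitions /dev/sda1, /dev/sdb1 and /dev/sdc1 are placeholders):

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
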
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

msst (Apprentice)
Joined: 07 Jun 2011 | Posts: 259
Posted: Sun Oct 13, 2019 5:09 pm

I do think you are exaggerating and creating some sort of unnecessary panic for yourself here, but if

Quote:
Data integrity is God here.

Quote:
but I am concerned about bit-rot now


Then use BTRFS RAID 1 or 10 with regular scrubbing, and in addition use borgbackup for incremental, independent backups. Problem solved. If there was any.
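
For instance (purely a sketch; /mnt/data and /mnt/backup/borg-repo are placeholder paths), regular scrubbing plus an independent borg backup could look like:

Code:
btrfs scrub start -B /mnt/data                         # verify all data/metadata checksums
borg init --encryption=repokey /mnt/backup/borg-repo   # one-time repository setup
borg create --stats /mnt/backup/borg-repo::'{hostname}-{now}' /mnt/data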

Anon-E-moose (Watchman)
Joined: 23 May 2008 | Posts: 6097 | Location: Dallas area
Posted: Sun Oct 13, 2019 5:18 pm

msst wrote:
I do think you are ... creating some sort of unnecessary panic for yourself here


+++

It's computer equipment; something could always happen, whether SSD or HDD. I've had several HDDs (over the years) refuse to spin up or have bad sectors appear out of the blue. That's why you do backups. I still use HDDs for backup media and general media storage. I use the SSD as the main drive in both my desktop and laptop, because it's a lot faster, but I still do backups of both the SSDs and the HDDs. There are no guarantees in life, except that there are no guarantees.
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland

szatox (Advocate)
Joined: 27 Aug 2013 | Posts: 3129
Posted: Sun Oct 13, 2019 5:28 pm

Quote:
In the case of your raid1, a failed read on one copy would result in the other copy being read.
mdadm would rewrite the failed read too, so it was good next time
I'd be careful with that.
I've tested mdraid 6 against data corruption on 1 out of 4 drives. Scrubbing it simply recalculated the parity, destroying the test data.
So, if the hard drive reports a read error, you're fine. But there are no sanity checks against silent corruption of a single stripe.

Fitzcarraldo (Advocate)
Joined: 30 Aug 2008 | Posts: 2034 | Location: United Kingdom
Posted: Sun Oct 13, 2019 5:35 pm

NeddySeagoon wrote:
In the case of your raid1, a failed read on one copy would result in the other copy being read.
mdadm would rewrite the failed read too, so it was good next time.

A failed read on one of the drives, yes. But in the case of a successful read of corrupt data from one of the RAID1 drives, the RAID software would not know which of the two copies holds the correct data. Or is that impossible with SSDs? In the case of using mdadm with HDDs it is possible to 'scrub' a RAID1 array to check for such errors: https://wiki.archlinux.org/index.php/RAID#Scrubbing

Arch Linux Wiki wrote:
It is good practice to regularly run data scrubbing to check for and fix errors. Depending on the size/configuration of the array, a scrub may take multiple hours to complete.

To initiate a data scrub:

Code:
# echo check > /sys/block/md0/md/sync_action

The check operation scans the drives for bad sectors and automatically repairs them. If it finds good sectors that contain bad data (the data in a sector does not agree with what the data from another disk indicates that it should be, for example the parity block + the other data blocks would cause us to think that this data block is incorrect), then no action is taken, but the event is logged (see below). This "do nothing" allows admins to inspect the data in the sector and the data that would be produced by rebuilding the sectors from redundant information and pick the correct data to keep.

Arch Linux Wiki wrote:
Note: Users may alternatively echo repair to /sys/block/md0/md/sync_action but this is ill-advised since if a mismatch in the data is encountered, it would be automatically updated to be consistent. The danger is that we really do not know whether it is the parity or the data block that is correct (or which data block in case of RAID1). It is luck-of-the-draw whether or not the operation gets the right data instead of the bad data.

Arch Linux Wiki wrote:
Due to the fact that RAID1 and RAID10 writes in the kernel are unbuffered, an array can have non-0 mismatch counts even when the array is healthy. These non-0 counts will only exist in transient data areas where they do not pose a problem. However, we cannot tell the difference between a non-0 count that is just in transient data or a non-0 count that signifies a real problem. This fact is a source of false positives for RAID1 and RAID10 arrays. It is however still recommended to scrub regularly in order to catch and correct any bad sectors that might be present in the devices.
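
After a check completes, the result can be read back from sysfs (a sketch, assuming the array is md0):

Code:
cat /proc/mdstat                      # progress of a running check
cat /sys/block/md0/md/mismatch_cnt    # mismatches found by the last check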

_________________
Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC udev elogind & KDE on both.

Fitzcarraldo's blog

NeddySeagoon (Administrator)
Joined: 05 Jul 2003 | Posts: 54208 | Location: 56N 3W
Posted: Sun Oct 13, 2019 5:56 pm

Fitzcarraldo,

Now we are down in the weeds of error detection and correction.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Sun Oct 13, 2019 8:58 pm

The link to the information about temperature and being able to lose data in seven days is posted here. Also, BTRFS RAID1 does normally prevent bit-rot, in that if one disk rots it detects this and corrects the data from the other copy. That does not happen with RAID1 through mdadm, LVM, or any other conventional means.

At this point I believe I will go with F2FS, simply because it is many times faster than BTRFS and BTRFS cannot prevent bit-rot on a modern SSD. I have to hope that the SSD (a WD Blue in this case) is good enough to detect and correct errors, and back up to my BTRFS RAID10 system weekly. My biggest concern is that I save a file on Tuesday and when I back it up Saturday I am backing up a corrupted file. What I may do is see if I can hook something in so that whenever a file in my home directory is written, an SHA-1 sum is calculated and stored in a file with the same name and a .sha1 extension. Then I could easily write a backup shell script which only backs up files that match their SHA-1 sum and tells me at the end which ones, if any, do not match, so I can make the call on backing them up.
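
Something along these lines might do the checksum side of it (just a sketch under those assumptions; it keeps a <file>.sha1 next to every file under $HOME and reports mismatches, and a legitimately edited file will also show as a mismatch unless its .sha1 is refreshed on every write, which is what the write hook would be for):

Code:
#!/bin/sh
# Record a .sha1 beside each file and report files that no longer match it.
find "$HOME" -type f ! -name '*.sha1' | while read -r f; do
    if [ -f "$f.sha1" ]; then
        sha1sum -c --status "$f.sha1" || echo "MISMATCH: $f"
    else
        sha1sum "$f" > "$f.sha1"    # new file: record its checksum
    fi
done
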
_________________
Ever picture systemd as what runs "The Borg"?

Anon-E-moose (Watchman)
Joined: 23 May 2008 | Posts: 6097 | Location: Dallas area
Posted: Sun Oct 13, 2019 9:13 pm

WD Blues aren't really enterprise-class drives, whether SSD or HDD.

Quote:
My biggest concern is that I save a file on Tuesday and when I back it up Saturday I am backing up a corrupted file


You're no more likely to have that happen on an ssd than an hdd, or with btrfs vs ext4 vs f2fs.
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland

NeddySeagoon (Administrator)
Joined: 05 Jul 2003 | Posts: 54208 | Location: 56N 3W
Posted: Sun Oct 13, 2019 9:24 pm

The_Great_Sephiroth,

Code:
$ eix tripwire
* app-admin/tripwire
     Available versions:  2.4.3.7 {libressl selinux ssl static +tools}
     Homepage:            http://www.tripwire.org/
     Description:         Open Source File Integrity Checker and IDS

_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

mike155 (Advocate)
Joined: 17 Sep 2010 | Posts: 4438 | Location: Frankfurt, Germany
Posted: Sun Oct 13, 2019 10:11 pm

The_Great_Sephiroth wrote:
The link to the information about temperature and being able to lose data in seven days is posted here.


Very interesting link, thanks! Especially the JEDEC PowerPoint presentation (linked inside the ExtremeTech article).

Do you think that the tables in the ExtremeTech article show that an SSD loses data after 52 weeks @ 30°C or after 14 weeks @ 40°C? That would be wrong!

I'm still trying to figure out what those numbers really mean. I will post an explanation as soon as I understand them.


Last edited by mike155 on Sun Oct 13, 2019 10:19 pm; edited 1 time in total

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Sun Oct 13, 2019 10:15 pm

Neddy, I will look into tripwire ASAP, thank you!

Moose, that is not entirely true. I currently use BTRFS in DUP mode on an HDD. To have the same problem, not one but TWO sectors would need to fail or rot, and they would have to be the EXACT sectors holding the two copies of the file's data. Those odds are astronomically low. Running BTRFS in single mode, I agree: the same data could be corrupted by a single sector failing or rotting. Your statement is true for ext4, however. BTRFS DUP mode puts two copies of the metadata and two copies of the data for each file on the disk, and not in adjoining sectors either. This is why I fell in love with BTRFS. Sadly, every recent test (some as recent as September this year) shows BTRFS far behind in performance on regular SATA SSDs, likely due to CoW. Ext4, XFS, and F2FS all beat it. F2FS is regularly the fastest filesystem on a SATA SSD, with ext4 right behind it, often tied. However, in testing, ext4 wears out flash media faster than F2FS, so at the same speed the SSDs using F2FS had their life extended by almost 70%. But none of these filesystems have the bit-rot protection I am used to.
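
For reference, a sketch of how DUP is requested explicitly (the device and mount point are placeholders; as far as I know, older mkfs.btrfs versions default metadata to single when they detect a non-rotational device, which is the SSD behaviour mentioned above, but it can still be asked for):

Code:
mkfs.btrfs -m dup -d dup /dev/sdb1                         # duplicate metadata and data at mkfs time
btrfs balance start -mconvert=dup -dconvert=dup /mnt/data  # or convert an existing, mounted filesystem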

I guess it is like this. I have not had ANY data corruption or loss since switching to BTRFS and now I have to give that up because I went with a physical media that defeats BTRFS in several ways. It's like wearing pants your entire life and then having to go to work in only your underpants. You're a tad nervous about it!

*EDIT*

I forgot to ask. What scheduler would be best for an SSD? I have read that deadline may be good for a SATA drive, but not for an NVMe drive. In fact, that is echoed on the Debian Wiki.
_________________
Ever picture systemd as what runs "The Borg"?

tholin (Apprentice)
Joined: 04 Oct 2008 | Posts: 203
Posted: Sun Oct 13, 2019 10:38 pm

The_Great_Sephiroth wrote:
The link to the information about temperature and being able to lose data in seven days is posted here.

The article says "In worst-case scenarios or high storage temps, the data on an enterprise drive can start to fail within seven days". The slides go on to state that the worst-case scenario involves completely depleting the P/E cycles of the chips before the retention test. That's important because higher P/E cycle counts mean decreased retention time. The SSD I use on my Gentoo desktop has been in use for 6 years 9 months and I've depleted 8% of the rated P/E cycles. Under normal use you'll never get close to the rated P/E cycles. The worst-case storage temperature in that test was 55°C, which is also bad for retention.
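
(For anyone who wants to check their own drive, a sketch: the consumed P/E cycles usually appear as a vendor-specific wear attribute in the SMART data, so the exact attribute name varies.)

Code:
smartctl -A /dev/sda | grep -Ei 'wear|used|media'     # SATA: e.g. Wear_Leveling_Count
smartctl -A /dev/nvme0 | grep -i 'percentage used'    # NVMe: reported directly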

That article was shared a lot some years back because of the click bait headline but under normal conditions SSDs retain data for many years.

mike155 (Advocate)
Joined: 17 Sep 2010 | Posts: 4438 | Location: Frankfurt, Germany
Posted: Sun Oct 13, 2019 10:59 pm

tholin wrote:
That article was shared a lot some years back because of the click bait headline but under normal conditions SSDs retain data for many years.

I completely agree.

There's a good explanation at: https://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention
Anandtech wrote:
All in all, there is absolutely zero reason to worry about SSD data retention in typical client environment. Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs.

@The_Great_Sephiroth: stop worrying and enjoy your SSD :)

eccerr0r (Watchman)
Joined: 01 Jul 2004 | Posts: 9675 | Location: almost Mile High in the USA
Posted: Mon Oct 14, 2019 3:19 am

mike155 wrote:
@The_Great_Sephiroth: stop worrying and enjoy your SSD :)

... and stop spreading misinformation about ext4fs journaling reducing SSD life by 70%.

Just because it does so for MTD devices with no/poor wear leveling doesn't mean it's the same for all SSDs.
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Mon Oct 14, 2019 2:06 pm

I never said ext4 with journaling reduced life by 70%. I said that, in multiple tests I have been following, F2FS has extended life by up to about 70%. This includes testing against BTRFS, XFS, ZFS, and others. I am at work now, but when I get home I will link those tests. Results vary slightly, but all show extended life with the filesystem designed for SSDs.
_________________
Ever picture systemd as what runs "The Borg"?

mysterious (n00b)
Joined: 14 Oct 2019 | Posts: 1
Posted: Mon Oct 14, 2019 5:13 pm    Post subject: sys-fs/f2fs-tools-1.12.0-r1 boot issue

f2fs is awesome and I use it for the root filesystem. However, you should be aware that there is an issue with sys-fs/f2fs-tools-1.12.0-r1: boot fails with this version of the f2fs tools. The fix for this issue was to revert to version 1.11.0. Recently, the "unstable" version sys-fs/f2fs-tools-1.13.0 was released, but I haven't tried it yet.

Here is a link to a forum post discussing the 1.12.0-r1 boot failure: sys-fs/f2fs-tools-1.12.0: breaks boot on f2fs root

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Mon Oct 14, 2019 8:38 pm

I am on a break and using my Galaxy Tab A 8 to write this. I have a Bluetooth keyboard here, but forgive me if I make any typos; I am not as good on the small keyboard as I am on a normal one.

Here is the thread on Reddit where a user tested filesystems on four SD cards and F2FS showed an increase in lifespan of up to about 70%. I read this one last. Let me find the other tests and I will update this post with links as well. Just give me some time, as I am not on my laptop or desktop!

*UPDATE*

Here is a slightly older review of F2FS in comparison with ext4. Note that the big thing here is speed difference.
_________________
Ever picture systemd as what runs "The Borg"?


Last edited by The_Great_Sephiroth on Mon Oct 14, 2019 8:51 pm; edited 1 time in total

NeddySeagoon (Administrator)
Joined: 05 Jul 2003 | Posts: 54208 | Location: 56N 3W
Posted: Mon Oct 14, 2019 8:41 pm

The_Great_Sephiroth,

SD cards are not SSDs.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

mike155 (Advocate)
Joined: 17 Sep 2010 | Posts: 4438 | Location: Frankfurt, Germany
Posted: Mon Oct 14, 2019 8:45 pm

Quote:
a user tested filesystems on four SD cards and F2FS showed increased life up to about 70%.

If I had an SD card, I would also prefer F2FS over ext4. :)

But we are talking about SSDs, aren't we? SSDs and SD cards are entirely different.

The_Great_Sephiroth (Veteran)
Joined: 03 Oct 2014 | Posts: 1602 | Location: Fayetteville, NC, USA
Posted: Mon Oct 14, 2019 8:53 pm

Yes, but I have read about how F2FS works with the FTL to improve both performance and lifespan. I just updated my last post and am reviewing my browser history to find more of the good reading material I have been going through. I use F2FS on a 16GB USB stick for Portage on my old laptop as it stands, and it IS faster than ext4 was on the same stick.
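
(In case anyone wants to do the same, a sketch; the device node is a placeholder and /usr/portage was the default tree location at the time.)

Code:
mkfs.f2fs -l portage /dev/sdc1
# /etc/fstab
LABEL=portage   /usr/portage   f2fs   noatime   0 0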

*EDIT*

So what about the scheduler? I am under the impression that I should use deadline for an SSD, not CFQ, since CFQ was designed to minimize head movement, or so I have read.
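
(Not an answer, just a sketch of how to see and switch what is in use; sda is a placeholder, and on kernels using the multi-queue block layer the choices are mq-deadline/bfq/kyber/none rather than cfq/deadline/noop.)

Code:
cat /sys/block/sda/queue/scheduler                   # the active scheduler is shown in brackets
echo mq-deadline > /sys/block/sda/queue/scheduler    # switch at runtime, as root
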
_________________
Ever picture systemd as what runs "The Borg"?