XFS on steroids

Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.
122 posts
Post by prymitive » Tue Sep 16, 2008 8:33 pm

TSP__ wrote: I've been using XFS for a while now... I'm thinking of tweaking my fstab a bit. I used mkfs.xfs without options to make my / and also my /home. Is it safe to add

Code: Select all

logbufs=8
right now? I only use noatime in fstab for XFS on both partitions. Any other hints?

Cheers!
Yes, it's safe. Adding nobarrier will speed up writes a lot, but in case of power loss you may lose more data, since it will still be in RAM. If you have a laptop, you've got backup power already included, so it's safe then.
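As an illustration only (the device and mount point below are placeholders, and nobarrier should be left out on machines without battery backing), the resulting fstab line could look like:

```
# /etc/fstab - sketch only; device and mount point are placeholders
/dev/sda3   /home   xfs   noatime,logbufs=8,nobarrier   0 2
```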
Post by rada » Wed Sep 17, 2008 5:24 am

Using XFS for my /home partition, I have problems whenever there is a crash: with aMule and uTorrent, the .part.met files for aMule [sources for a partially downloaded file] and the resume.dat file for uTorrent [all the loaded torrents] are truncated to zero bytes. Any way around this? No, eh? I changed that partition back to ext3 because of this issue.
Post by kernelOfTruth » Wed Sep 17, 2008 9:21 am

rada wrote: Using XFS for my /home partition, I have problems whenever there is a crash: with aMule and uTorrent, the .part.met files for aMule [sources for a partially downloaded file] and the resume.dat file for uTorrent [all the loaded torrents] are truncated to zero bytes. Any way around this? No, eh? I changed that partition back to ext3 because of this issue.
That's NOT an issue ;)

It's a specific feature of XFS.
https://github.com/kernelOfTruth/ZFS-fo ... scCD-4.9.0
https://github.com/kernelOfTruth/pulsea ... zer-ladspa

Hardcore Gentoo Linux user since 2004 :D
Post by rada » Wed Sep 17, 2008 1:30 pm

figured as much. oh well.
Post by TSP__ » Thu Sep 18, 2008 7:47 pm

prymitive wrote:
TSP__ wrote: I've been using XFS for a while now... I'm thinking of tweaking my fstab a bit. I used mkfs.xfs without options to make my / and also my /home. Is it safe to add

Code: Select all

logbufs=8
right now? I only use noatime in fstab for XFS on both partitions. Any other hints?

Cheers!
Yes, it's safe. Adding nobarrier will speed up writes a lot, but in case of power loss you may lose more data, since it will still be in RAM. If you have a laptop, you've got backup power already included, so it's safe then.
Thanks for this info. BTW: nobarrier doesn't get coloured in fstab when using vim with syntax on, which is what caught my attention. Since I'm running Gentoo on a laptop, this option seems good for me.

Cheers!
Post by DigitalCorpus » Fri Sep 26, 2008 6:23 pm

I'm using Reiser4 and XFS for my partitions in my setup. I originally formatted my primary partition for server use with this:

Code: Select all

mkfs.xfs -f -d agcount=24 -l internal,size=128m -L MediaServer /dev/sda7
The partition is a 500 GiB chunk of disk space on my 640 GB drive. I ran into a problem when I was making use of the partition and copying files over to it. Throughput was good, but whether I was scp-ing a few large (4 to 7 GiB) files to the disk or actively mirroring a website, the latency I experienced was horrible! I have a Q6700, and the disk is a brand-new Seagate Barracuda 7200.11 SATA 3Gb/s 640 GB. Since I've had no problems with Reiser4 under disk activity, I did some research into XFS. I'm using the anticipatory scheduler, btw. Mount options remain the same in my fstab:

Code: Select all

logbufs=8,noatime
It is only stated once or twice in this thread, but agcount and agsize are the more important options when it comes to interactivity of the filesystem. Splitting a 500 GiB partition 24 ways means it is separated into ~21 GiB logical chunks, each of which queues requests from active processes. For a server with a bunch of little files, I'd imagine having multiple pending requests in a 21 GiB region can take a while to process. The XFS docs suggest having at least one allocation group per 4 GiB of disk space used. The minimum agsize is 16 MiB. I considered the specs of my drive: it has a 32 MB cache, and at the beginning of the partition I get read speeds of 115 MiB/s according to hdparm -t.

Taking all of this into account I backed up my partition and reformatted with the following settings:

Code: Select all

mkfs.xfs -f -d agsize=128m -l size=32m -L MediaServer /dev/sda7
Note I did not specify internal for the journal since I only have one drive in the system. I realize that this may be a bit of overkill, as it is equivalent to setting agcount=16016, but instead of taking 6 to 10 seconds to load my phpsysinfo page under disk activity, I get my usual 0.5-0.6 second initial load time. I had a bunch of friends load the page as well, with the same results. I've also noticed a lot less disk thrashing on large files.
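As a quick sanity check of the arithmetic in this post (a sketch assuming the partition is exactly 500 GiB; mkfs.xfs derives the real numbers from the device):

```shell
# Allocation-group arithmetic for an assumed 500 GiB partition.
part_mib=$((500 * 1024))                                      # 512000 MiB
echo "agcount=24  -> agsize  = $((part_mib / 24)) MiB per AG" # ~21333 MiB (~21 GiB)
echo "agsize=128m -> agcount = $((part_mib / 128)) AGs"       # 4000 AGs
```

By this arithmetic, agsize=128m on 500 GiB works out to roughly 4000 allocation groups; either way, it is far finer-grained than the original 24.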

I have not used bonnie to test throughput, but the anecdotal evidence is strong. From the whole experience, my opinion is to set agcount/agsize based on disk size, not on the number of CPUs/cores you have in your system.

Edited disk activity, scheduler, and agcount=16016.
Atlas (HDTV PVR, HTTP & Media server)
http://mobrienphotography.com/
Post by arth1 » Wed Oct 01, 2008 2:42 am

biggyL wrote: Hello All,

I'd like to share an xfsdump script (I wrote a while ago) that I'm using to rotate 1 full and 6 incremental backups during the week.
I'm using xfsdump to make the dumps and xfsinvutil to prune (manage) sessions on the date of the xfsdump.

[chop]
Any comments are very appreciated.
Well, I have written a script myself that does an automated dump of all XFS volumes on a system that have the dump flag set in /etc/fstab (that's the 0 or 1 in the second-to-last field), and handles incremental backups and automated pruning of the dump inventory.
It also calls /usr/local/sbin/xfsbackup.local, which is a user-provided script that can be used to set extended attributes on files and directories that should NOT be dumped.
("chattr -R +d /var/tmp" is an example of what you might put there.)

Anyhow, you can find "xfsbackup" at http://www.broomstick.com/tech/xfsbackup

Place it in /usr/local/sbin (or a location of your choice), edit the commented defaults at the top (like the destination for the backup, which I have set to /var/backup/, an NFS mount on my host, but which should obviously be changed to suit your use), and set up a cron job for it.

I use a staggered Tower Of Hanoi approach for backups, which is a reasonable compromise between backup/restore time and disk usage.

Code: Select all

1	3	1-31/16	*	*	/usr/local/sbin/xfsbackup -l 0
1	3	9-31/16	*	*	/usr/local/sbin/xfsbackup -l 2
1	3	5-31/8	*	*	/usr/local/sbin/xfsbackup -l 4
1	3	3-31/4	*	*	/usr/local/sbin/xfsbackup -l 6
1	3	2-31/2	*	*	/usr/local/sbin/xfsbackup -l 8
This makes a full backup at 3:01 AM on the 1st and 17th of every month, and a staggered incremental backup otherwise (levels 0, 8, 6, 8, 4, 8, 6, 8, 2, 8, 6, 8, 4, 8, 6, 8).
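The dump level that crontab selects for each day of the month can be reproduced with a short shell loop (a sketch; the day ranges are taken straight from the crontab entries, using cron's first-last/step syntax):

```shell
#!/bin/sh
# Reproduce the dump level chosen by the crontab above for days 1-16.
# Within this window the five entries never fire on the same day, so a
# simple first-match chain reproduces the schedule.
levels=""
for d in $(seq 1 16); do
  if   [ $(( (d - 1) % 16 )) -eq 0 ]; then l=0                   # 1-31/16
  elif [ $d -ge 9 ] && [ $(( (d - 9) % 16 )) -eq 0 ]; then l=2   # 9-31/16
  elif [ $d -ge 5 ] && [ $(( (d - 5) % 8 )) -eq 0 ]; then l=4    # 5-31/8
  elif [ $d -ge 3 ] && [ $(( (d - 3) % 4 )) -eq 0 ]; then l=6    # 3-31/4
  else l=8; fi                                                   # 2-31/2
  levels="$levels $l"
done
echo "$levels"   # 0 8 6 8 4 8 6 8 2 8 6 8 4 8 6 8
```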

For now there's no corresponding restore script, but the procedure is as follows: if using compression, uncompress the backup files first, then look at the xfsrestore man page -- it's not that hard to figure out. It's saved my neck a couple of times when hard drives died.
Post by arth1 » Wed Oct 01, 2008 3:04 am

vinboy wrote: WHAT THE HELL!!! I thought my HDD was going to die.
The HDD is brand new.

I formatted my external HDD (500 GB), connected through USB 2.0.

With XFS (used settings in the first post):
- When writing to the HDD, the max speed was 20 MB/s
- The HDD sounded like it was going to explode! The head was moving here and there all the time!

With EXT2:
- Max speed 29 MB/s <---- 50% improvement over XFS.
- The writing operation is so quiet, you hardly notice anything.

Please advise what was going on?
This is almost certainly due to allocation groups (AGs). XFS divides each partition into several "sub-partitions", which improves speed on RAID systems and with multiple CPUs, makes it less likely that you'll lose the entire volume in case of disk corruption (only up to a fraction corresponding to the number of allocation groups), and also reduces the chance of disk failure, because the load is spread out over the entire disk.
So far, so good. But here's the problem you likely see:

A hard drive is much faster near the start of the disk than it is near the end. The drive platters read much like an LP (remember those?), starting at the outside, and moving inwards. The outermost tracks thus move much faster past the drive head, and can contain more data per rotation, which leads to faster speeds. A drive being 3x as fast near the start of the disk as the end is not uncommon.

With lots of allocation groups, the load will be spread out over the entire disk. So you write not only to the faster outer tracks, but also to the slower inner tracks. This means that the drive will be slower when empty or near empty. EXT2/3 will start at the start of the disk, and write inwards. Thus, on an empty disk, EXT2/3 will often be faster, simply because it writes to the faster outer sectors first.
However, as the disk fills up, that advantage will disappear. At around 1/2-2/3 full, the advantage is negated, and when close to full, XFS has a distinct advantage.

Speed tests should really not be performed on empty drives, unless you plan to keep the drive almost empty at all times. Fill it up with as much random data as you expect having during normal operations, and then test the speed. Then you get a far more realistic test of the speed.

Speaking of allocation groups... I strongly recommend against lowering the number to 2 like the OP suggests. The speed advantage is really only there for empty disks (because with two AGs you write to the start and the middle of the disk, and never near the slower end), but you risk losing up to half the drive if there is irrecoverable corruption in the b-tree. For a drive that's 50-80% full, as most drives become over time, there is really no speed advantage to speak of, and even a slowdown as you get closer to 100%. Plus, with a quad core (or a dual core with hyperthreading), you lose the advantage of multiple operations being prepared in parallel. At the very least, don't go below 4 AGs (or 8 on a quad core with HT), and set it even higher if you think the disk will become very nearly full.
Post by cpu » Wed Oct 01, 2008 9:07 am

I've chosen XFS for my /home partition on a server too, but I have some problems with it:
1. Every power-down causes some data loss - I found this when I used xfs_repair - does anyone have a solution for this? How do I examine XFS?
2. I often have data corruption when I transfer files (15-20 GB) from my local network to the server via Samba. Yesterday I even had a hard lockup of my server at the end of a file transfer...

Thanks in advance for help
Post by arth1 » Thu Oct 02, 2008 4:39 pm

cpu wrote: I've chosen XFS for my /home partition on a server too, but I have some problems with it:
1. Every power-down causes some data loss - I found this when I used xfs_repair - does anyone have a solution for this? How do I examine XFS?
2. I often have data corruption when I transfer files (15-20 GB) from my local network to the server via Samba. Yesterday I even had a hard lockup of my server at the end of a file transfer...

Thanks in advance for help
Try asking in one of the appropriate fora -- the description for this one clearly states:
Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.
When you ask, don't forget to post the relevant /etc/fstab entry as well as the output from "xfs_info /".
Post by DigitalCorpus » Wed Oct 22, 2008 8:53 am

I switched to CFQ for my I/O scheduler to test things out. Much better responsiveness. So I thought I'd decrease the number of allocation groups to see if the advice given here was applicable. I set agsize to 1 GiB on reformat and ran the same scenario I will be hitting several times a week: copying 6 to 8 GiB files from one partition to another while serving files via Apache. Well, even with CFQ and the reduced allocation groups (down to about 1/4 of what I had for the 500 GiB partition), I saw a visible increase in the latency of reading small files from the disk, regardless of filesystem type. I'm on amd64, gentoo-sources patched with Reiser4, on a SATA II 640 GB Seagate disk. Given that I'm on amd64, this might just be the whole-system responsiveness issue most have complained of, though since it was easy to get rid of, I thought not. Does anyone have any suggestions? I'm thinking of switching these two large partitions over to ext3, but the amount of space the superblocks take up puts me off.

In my anecdotal observations I see that, if the goal is interactivity and responsiveness, agcount/agsize should be set not based on the number of CPUs/cores a user has or the theoretical number of simultaneous reads and writes, but on a multiple of the read/write speed of the disk mechanism itself. I'm still new to Linux and Gentoo, so I haven't gotten around to using bonnie to test various scenarios.

With 6 SATA ports on my current motherboard and continuing plans of recording HDTV, XFS (over LVM when I get there) seems the most logical solution, so I'd like to learn to utilize this filesystem properly for both throughput and responsiveness. I guess it is analogous to a regular kernel versus a fully preemptible kernel, if I'm not mistaken.
Post by snIP3r » Sun Nov 23, 2008 11:34 am

hi all!

i have a question about the xfs_fsr tool. i use a partition that is encrypted via dm-crypt. i have this fragmentation:

Code: Select all

area52 ~ # xfs_db -c frag -r /dev/mapper/stuff
actual 128404, ideal 70103, fragmentation factor 45.40%
i want to know if it is safe to defrag the whole encrypted partition? has anyone successfully run something like my config?
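For what it's worth, the percentage xfs_db prints is just 1 - ideal/actual; the figure above can be reproduced from the two extent counts:

```shell
# Fragmentation factor from the xfs_db output above:
# actual 128404 extents vs. ideal 70103 extents.
awk 'BEGIN { printf "%.2f%%\n", (1 - 70103 / 128404) * 100 }'   # 45.40%
```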

thx in advance
snIP3r
Intel i3-4130T on ASUS P9D-X
Kernel 6.6.52-gentoo SMP
-----------------------------------------------
if your problem is fixed please add something like [solved] to the topic!
xfs performance tweaks

Post by kowal » Mon Jan 05, 2009 12:19 am

Good read
http://everything2.com/index.pl?node_id=1479435
Post by snIP3r » Wed Jan 07, 2009 8:47 am

thx for the interesting page but this does not answer my questions...
Post by rada » Tue May 05, 2009 2:44 am

Using xfs_fsr should be fine on an encrypted XFS partition, as long as there is no corruption in the filesystem.
Post by Master One » Fri Jul 17, 2009 9:38 am

I've always stuck with ext3, but since it is again recommended in the workaround.org ISPmail tutorial, I am curious about XFS again:

1. Does XFS make any sense when it comes down to a lot of small files (like /var/vmail with all the emails in maildir format)?
2. Has anybody tried XFS on a netbook with an Atom N270, 1 or 2 GB RAM and a sloppy 16 GB SSD?

BTW, nowadays I always use the filesystem on top of LVM, which often sits on top of a LUKS-encrypted partition, which is in turn on either a software or hardware RAID1. Does that have any influence? Wasn't it a typical recommendation in the past NOT to use XFS on a software RAID?

For me, it's either ext3 or XFS for a general-purpose filesystem; what's good for your server should also be good for your workstation / laptop / netbook. Can it be?

EDIT: Just played around a little, with the following conclusions:

- Still no barriers on LVM ("Disabling barriers, not supported by the underlying device")
- It is still not possible to change the log size after fs creation ("~# xfs_growfs -L 16384 MOUNTPOINT" -> "xfs_growfs: log growth not supported yet")

The first issue seems bad, because it was mentioned that with nobarrier a larger data loss is to be expected in case of an unclean shutdown / power-off; the second issue is bad if you use a distro installer in which you cannot edit the filesystem creation options (I was thinking of the Debian Installer). Something like "not supported yet" does not really make such an old and mature filesystem look stable and production-ready...

EDIT2: Did some read-up in the XFS FAQ on xfs.org, and found some interesting info concerning disk write cache and the barrier/nobarrier mountoption:

- If you have a single drive, it's good to leave barriers & disk write cache on.
- If you have a hardware-raid-controller with battery backed controller cache and cache in write back mode, it is advised to use the nobarrier mountoption, and to disable the individual disks' write caches.

Now what should one do concerning the barrier/nobarrier and disk write cache options, if using

- a Software-RAID1?
- LVM on top of a Software-RAID1?
- LVM on top of a luks-encrypted Software-RAID1?
- LVM on a Hardware-RAID1 with battery backed controller cache and cache in write back mode?
- LVM on top of a luks-encrypted Hardware-RAID1 with battery backed controller cache and cache in write back mode?

As mentioned, XFS on top of LVM leads to disabled barriers anyway, so are you supposed to disable the disks' write caches in every case where nobarrier is used?

It gets even more confusing if virtualization is used, which makes me believe one is better and safer off sticking with good old ext3 instead... :roll:
The mental tortures of the CIA
Post by erm67 » Mon Apr 12, 2010 1:52 pm

Has anyone experimented with the lazy-counters feature of XFS? It looks promising and is quite a recent addition.
[XFS] Lazy Superblock Counters

When we have a couple of hundred transactions on the fly at once, they all
typically modify the on disk superblock in some way.
create/unlink/mkdir/rmdir modify inode counts; allocation/freeing modify
free block counts.

When these counts are modified in a transaction, they must eventually lock
the superblock buffer and apply the mods. The buffer then remains locked
until the transaction is committed into the incore log buffer. The result
of this is that with enough transactions on the fly the incore superblock
buffer becomes a bottleneck.

The result of contention on the incore superblock buffer is that
transaction rates fall - the more pressure that is put on the superblock
buffer, the slower things go.

The key to removing the contention is to not require the superblock fields
in question to be locked. We do that by not marking the superblock dirty
in the transaction. IOWs, we modify the incore superblock but do not
modify the cached superblock buffer. In short, we do not log superblock
modifications to critical fields in the superblock on every transaction.
In fact we only do it just before we write the superblock to disk every
sync period or just before unmount.
mkfs.xfs wrote: lazy-count=value

This changes the method of logging various persistent counters in the superblock. Under metadata intensive workloads, these counters are updated and logged frequently enough that the superblock updates become a serialisation point in the filesystem. The value can be either 0 or 1.

With lazy-count=1, the superblock is not modified or logged on every change of the persistent counters. Instead, enough information is kept in other parts of the filesystem to be able to maintain the persistent counter values without needing to keep them in the superblock. This gives significant improvements in performance on some configurations. The default value is 0 (off), so you must specify lazy-count=1 if you want to make use of this feature.
xfs_admin wrote: -c 0|1 Enable (1) or disable (0) lazy-counters in the filesystem. This
operation may take quite a bit of time on large filesystems as
the entire filesystem needs to be scanned when this option is
changed.

With lazy-counters enabled, the superblock is not modified or
logged on every change of the free-space and inode counters.
Instead, enough information is kept in other parts of the
filesystem to be able to maintain the counter values without
needing to keep them in the superblock. This gives significant
improvements in performance on some configurations and metadata
intensive workloads.
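A minimal sketch of how the two tools are invoked, based on the man-page excerpts above (/dev/sdXN is a placeholder, the filesystem must be unmounted for xfs_admin -c, and older xfs_info versions may want the mount point rather than the device):

```shell
# Illustrative only: /dev/sdXN is a placeholder device.
mkfs.xfs -l lazy-count=1 /dev/sdXN     # enable at mkfs time
xfs_admin -c 1 /dev/sdXN               # or toggle on an existing (unmounted) fs
xfs_info /dev/sdXN | grep lazy-count   # confirm the setting
```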
Ok boomer
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Post by lightvhawk0 » Wed May 05, 2010 12:06 am

I turned on lazy counters and restored my backup to my xfs drive

Code: Select all

meta-data=/dev/md0               isize=256    agcount=16, agsize=7599984 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=121599744, imaxpct=25
         =                       sunit=16     swidth=32 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =external               bsize=4096   blocks=59367, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=131072 blocks=0, rtextents=0
The restore took about nine minutes before I turned on lazy-count=1; after I reformatted and restored my backup again, it knocked an entire minute off.


EDIT: Just a note - I moved the log to an external device and now my system is much quieter.
If God has made us in his image, we have returned him the favor. - Voltaire
Post by kernelOfTruth » Sat Dec 11, 2010 1:39 am

for those folks that use one (or more) of those new hard drives with the "Advanced Format" (4 KiB sectors):

make sure you have set

mkfs.xfs -s size=4096

when creating the filesystem, and before that, that your partitions are aligned to MiB boundaries (via gparted) or to multiples of 4 KiB.
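A quick way to check alignment is to look at the partition's start sector (reported in 512-byte units) and confirm it is a multiple of 8; a sketch, with 2048 standing in for the value read from `fdisk -l` or /sys/block/sdX/sdXN/start:

```shell
# 4 KiB alignment check: start sectors are in 512-byte units, so a
# start divisible by 8 is 4 KiB aligned.  2048 is a stand-in value
# (2048 * 512 bytes = 1 MiB, the usual modern partition offset).
start_sector=2048
if [ $(( start_sector % 8 )) -eq 0 ]; then
  echo "start sector $start_sector is 4 KiB aligned"
else
  echo "start sector $start_sector is NOT 4 KiB aligned"
fi
```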
Post by Enlight » Tue Aug 23, 2011 10:41 pm

Hi folks,

For those of you who don't know it, since 2.6.35 XFS has had a new mount option, '-o delaylog', which improves metadata operations a lot. Since 2.6.39 this option is on by default, and basically the default setup is probably the best you can get for non-specific usage (even noatime is useless, because filesystems all use relatime by default now).

Results of decent benchmarks can be seen on slides 24 and following of this paper: http://www.redhat.com/summit/2011/prese ... s_2011.pdf

Basically, XFS now competes with btrfs and ext4 at file creation - and we mean small files here (investigations are under way to make it even better) - and XFS is now the fastest filesystem for deletion (no, I made no typo).


edit: BTW, XFS was already 3 times faster than ext4 and 6 times faster than btrfs at iterating through the created files (see slide 20), so if you figure that each created file will be read at least once, the conclusion should be obvious!

Hope you will enjoy your xfs filesystem even more than before!
Post by rada » Tue Sep 06, 2011 6:49 pm

This link has some interesting ideas for optimizing XFS for a raid partition. https://raid.wiki.kernel.org/index.php/RAID_setup#XFS
Post by JeffBlair » Sat Jan 21, 2012 4:50 am

OK, I'm about to re-do my RAID5 array and want to tweak my drive.
This will be for serving out Blu-ray/DVDs to a couple of PCs. So here's what I'm going to run so far.
It's on a dual-core Intel, by the way, running x64, and the array will be about 10 TB at the largest... unless I get a new server case. ;)


mkfs.xfs -l internal,size=128m -d agcount=20 -b size=8192 lazy-count=1 /dev/sdc1


and, of course, the normal "noatime,logbufs=8,nobarrier,nodiratime" in fstab.


So, does that look right for serving out 5 GB files?
Also, would it be better to move the journal file to another drive? And if so, how would I do that?

Thanks for all the help, guys.
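For comparison, a hedged sketch of the same command with two likely fixes: lazy-count is a suboption of -l rather than a standalone argument, and a block size above the page size (4 KiB on x86-64) will prevent the filesystem from mounting on Linux:

```shell
# Sketch of a corrected invocation (same device as the post above):
# lazy-count moved under -l, and -b size capped at the 4 KiB page size.
mkfs.xfs -l internal,size=128m,lazy-count=1 -d agcount=20 -b size=4096 /dev/sdc1
```

Also note that nodiratime is redundant once noatime is set.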
© 2001–2026 Gentoo Foundation, Inc.

Powered by phpBB® Forum Software © phpBB Limited
