Gentoo Forums
XFS on steroids
Gentoo Forums Forum Index :: Documentation, Tips & Tricks
prymitive
Apprentice


Joined: 13 Jun 2004
Posts: 260

PostPosted: Tue Sep 16, 2008 8:33 pm

TSP__ wrote:
I've been using XFS for a while now... I'm thinking of tweaking my fstab a bit. I used mkfs.xfs without options to make my / and also my /home. Is it safe to add

Code:

logbufs=8


right now? I only use noatime in fstab for XFS on both partitions. Any other hints?

Cheers!


Yes, it's safe. Adding nobarrier will speed up writes a lot, but in case of power loss you may lose more data, since it will still be sitting in RAM. If you have a laptop, you already have backup power built in, so it's safe there.
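For reference, a hypothetical fstab entry with those options (device and mount point are made up, not from anyone's actual setup):

```
/dev/sda3   /home   xfs   noatime,logbufs=8   0 0
```

Append nobarrier to that option list only if you have battery backing, per the caveat above.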
rada
Apprentice


Joined: 21 Oct 2005
Posts: 202
Location: Ottawa, Canada

PostPosted: Wed Sep 17, 2008 5:24 am

Using XFS for my /home partition, I have problems whenever there is a crash: with aMule and uTorrent, the .part.met files for aMule [sources for a partially downloaded file] and the resume.dat for uTorrent [all the loaded torrents] are truncated to 0 bytes. Any way around this? No, eh? I switched that partition back to ext3 because of this issue.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Wed Sep 17, 2008 9:21 am

rada wrote:
Using XFS for my /home partition, I have problems whenever there is a crash: with aMule and uTorrent, the .part.met files for aMule and the resume.dat for uTorrent are truncated to 0 bytes. Any way around this? [chop]


that's NOT an issue ;)

it's a specific feature of XFS: it journals metadata but not file data, so after a crash a file whose data never reached the disk can come back zero-filled.
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
rada
Apprentice


Joined: 21 Oct 2005
Posts: 202
Location: Ottawa, Canada

PostPosted: Wed Sep 17, 2008 1:30 pm

figured as much. oh well.
TSP__
n00b


Joined: 16 Sep 2008
Posts: 21

PostPosted: Thu Sep 18, 2008 7:47 pm

prymitive wrote:
TSP__ wrote:
[chop]

Yes, it's safe. Adding nobarrier will speed up writes a lot, but in case of power loss you may lose more data, since it will still be sitting in RAM. If you have a laptop, you already have backup power built in, so it's safe there.


Thanks for this info. BTW: nobarrier doesn't get coloured in fstab when using vim with syntax on, which is what caught my attention. Since I'm running Gentoo on a laptop, this option seems good for me.

Cheers!
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 283

PostPosted: Fri Sep 26, 2008 6:23 pm

I'm using Reiser4 and XFS for my partitions in my setup. I originally formatted my primary partition for server use with this:
Code:
mkfs.xfs -f -d agcount=24 -l internal,size=128m -L MediaServer /dev/sda7

The partition is a 500 GiB chunk of disk space on my 640 GB drive. I ran into a problem when I was making use of the partition and copying files over to it. Throughput was good, but whether I was scp-ing a few large (4 to 7 GiB) files to the disk or actively mirroring a website, the latency I experienced was horrible! I have a Q6700 and the disk is a brand-new Seagate Barracuda 7200.11 SATA 3Gb/s 640 GB. Since I've had no problems with Reiser4 under disk activity, I did some research into XFS. I'm using the anticipatory scheduler, btw. Mount options remain the same in my fstab:
Code:
logbufs=8,noatime


It is only stated once or twice in this thread, but agcount and agsize matter more when it comes to interactivity of the filesystem. Splitting a 500 GiB partition 24 ways separates it into ~21 GiB logical chunks, each of which queues requests from active processes. For a server with a bunch of little files, I'd imagine multiple pending requests in a 21 GiB region can take a while to process. The XFS docs suggest having at least one allocation group per 4 GiB of disk space used, and the minimum agsize is 16 MiB. I considered the specs of my drive: it has a 32 MB cache, and at the beginning of the partition I get read speeds of 115 MiB/sec according to hdparm -t.
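As a quick sanity check of those numbers (my own arithmetic, not quoted from any doc), shell math gives the AG count the 4 GiB rule implies, and the per-AG size that agcount=24 actually produces:

```shell
part_gib=500
# one allocation group per 4 GiB of space, per the rule of thumb above:
echo $((part_gib / 4))          # -> 125 AGs minimum by that rule
# size of each AG when the 500 GiB partition is split 24 ways:
echo $((part_gib * 1024 / 24))  # -> 21333 MiB, i.e. roughly 21 GiB per AG
```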

Taking all of this into account I backed up my partition and reformatted with the following settings:
Code:
mkfs.xfs -f -d agsize=128m -l size=32m -L MediaServer /dev/sda7

Note I did not specify internal for the journal since I only have one drive in the system. I realize this may be a bit of overkill, as it is equivalent to setting agcount=16016, but instead of taking 6 to 10 seconds to load my phpsysinfo page under disk activity, I get my usual 0.5-0.6 second initial load time. I had a bunch of friends load the page as well, with the same results. I've also noticed a lot less disk thrashing on large files.

I have not used bonnie to test throughput, but the anecdotal evidence is strong. From the whole experience, my opinion is to set agcount/agsize based on disk size, not on the number of CPUs/cores in your system.

Edited disk activity, scheduler, and agcount=16016.
_________________
Atlas (HDTV PVR, HTTP & Media server)
http://mobrienphotography.com/
arth1
n00b


Joined: 29 Nov 2006
Posts: 24

PostPosted: Wed Oct 01, 2008 2:42 am

biggyL wrote:
Hello All,

I'd like to share an xfsdump script (I wrote it a while ago) that I'm using to rotate 1 full and 6 incremental backups during the week.
I'm using xfsdump to make the dumps and xfsinvutil to prune (manage) sessions by the date of the xfsdump.

[chop]
Any comments very appreciated.


Well, I have written a script myself that does an automated dump of all XFS volumes on a system that have the dump flag set in /etc/fstab (that's the 0 or 1 in the second-to-last field), and handles incremental backups and automated pruning of the dump inventory.
It also calls /usr/local/sbin/xfsbackup.local, a user-provided script that can be used to set extended attributes for files and directories that should NOT be dumped.
("chattr -R +d /var/tmp" is an example of what you might put there.)

Anyhow, you can find "xfsbackup" at http://www.broomstick.com/tech/xfsbackup

Place it in /usr/local/sbin (or location of choice), edit the commented defaults at the top (like destination for the backup, which I have set to /var/backup/ which is an NFS mount on my host, but obviously should be changed to suit your use), and set up a cron job for it.

I use a staggered Tower Of Hanoi approach for backups, which is a reasonable compromise between backup/restore time and disk usage.

Code:

1   3   1-31/16   *   *   /usr/local/sbin/xfsbackup -l 0
1   3   9-31/16   *   *   /usr/local/sbin/xfsbackup -l 2
1   3   5-31/8   *   *   /usr/local/sbin/xfsbackup -l 4
1   3   3-31/4   *   *   /usr/local/sbin/xfsbackup -l 6
1   3   2-31/2   *   *   /usr/local/sbin/xfsbackup -l 8


This makes a full copy at 3:01 AM on the 1st and 17th of every month, and a staggered incremental backup otherwise (level 0, 8, 6, 8, 4, 8, 6, 8, 2, 8, 6, 8, 4, 8, 6, 8).
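The level-per-day pattern those cron lines produce can be sketched as a tiny helper (a hypothetical function for illustration, not part of xfsbackup):

```shell
# Hypothetical helper mirroring the crontab above: xfsdump level for a
# given day of the month (1-31), matching the staggered Tower of Hanoi.
dump_level() {
  local d=$1
  if   (( (d - 1) % 16 == 0 ));           then echo 0  # days 1, 17
  elif (( d >= 9 && (d - 9) % 16 == 0 )); then echo 2  # days 9, 25
  elif (( d >= 5 && (d - 5) % 8 == 0 ));  then echo 4  # days 5, 13, 21, 29
  elif (( d >= 3 && (d - 3) % 4 == 0 ));  then echo 6  # days 3, 7, 11, ...
  else                                         echo 8  # even days
  fi
}
```

Each day of the month matches exactly one of the five cron entries, so the helper's first-match order reproduces the schedule.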

For now there's no corresponding restore script, but the procedure is as follows: if using compression, uncompress the backup files first, then look at the xfsrestore man page; it's not that hard to figure out. It's saved my neck a couple of times when hard drives died.
arth1
n00b


Joined: 29 Nov 2006
Posts: 24

PostPosted: Wed Oct 01, 2008 3:04 am

vinboy wrote:
WHAT THE HELL!!! I thought my HDD was going to die.
The HDD is brand new.

I formatted my external HDD (500 GB), connected through USB 2.0.

With XFS (using the settings in the first post):
- When writing to the HDD, the max speed was 20 MB/s
- The HDD sounded like it was going to explode! The head was moving here and there all the time!

With EXT2:
- Max speed 29 MB/s <---- 50% improvement over XFS.
- The writing operation is so quiet you hardly notice anything.

Please advise: what was going on?


This is almost certainly due to allocation groups (AGs). XFS divides each partition into several "sub-partitions", which improves speed on RAID systems and machines with multiple CPUs, makes it less likely that you'll lose the entire volume in case of disk corruption (but only up to a fraction corresponding to the number of allocation groups), and also reduces the chance of disk failure, because the load is spread out over the entire disk.
So far, so good. But here's the problem you likely see:

A hard drive is much faster near the start of the disk than it is near the end. The drive platters read much like an LP (remember those?), starting at the outside, and moving inwards. The outermost tracks thus move much faster past the drive head, and can contain more data per rotation, which leads to faster speeds. A drive being 3x as fast near the start of the disk as the end is not uncommon.

With lots of allocation groups, the load will be spread out over the entire disk. So you write not only to the faster outer tracks, but also to the slower inner tracks. This means that the drive will be slower when empty or near empty. EXT2/3 will start at the start of the disk, and write inwards. Thus, on an empty disk, EXT2/3 will often be faster, simply because it writes to the faster outer sectors first.
However, as the disk fills up, that advantage will disappear. At around 1/2-2/3 full, the advantage is negated, and when close to full, XFS has a distinct advantage.

Speed tests should really not be performed on empty drives, unless you plan to keep the drive almost empty at all times. Fill it up with as much random data as you expect having during normal operations, and then test the speed. Then you get a far more realistic test of the speed.

Speaking of allocation groups... I strongly recommend against lowering the number to 2 as the OP suggests. The speed advantage is really only there for empty disks (because you write to the start and the middle of the disk with two AGs, and never near the slower end), but you risk losing up to half the drive if there is irrecoverable corruption to the b-tree. For a drive that's 50-80% full, like most drives become over time, there is really no speed advantage to speak of, and even a slowdown as you get closer to 100%. Plus, with a quad core (or a dual core with hyperthreading), you lose the advantage of multiple operations being prepared in parallel. At the very least, don't go below 4 AGs (or 8 with a quad core with HT), and set it even higher if you think the disk will become very nearly full.
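As a sketch of that advice (placeholder device and label, not a tested recipe), a single-disk partition expected to run fairly full might be formatted with, say:

```
mkfs.xfs -f -d agcount=8 -L Data /dev/sdX1
```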
cpu
Tux's lil' helper


Joined: 09 Nov 2003
Posts: 122
Location: POLAND/ZG

PostPosted: Wed Oct 01, 2008 9:07 am

I've chosen XFS for my /home partition on my server too, but I have some problems with it:
1. Every power-down causes some data loss - I found this when I used xfs_repair - does anyone have a solution for this? How do I examine XFS?
2. I often have data corruption when I transfer files (15-20 GB) from my local network to the server via Samba. Yesterday I even had a hard lockup of my server at the end of a file transfer...

Thanks in advance for help
arth1
n00b


Joined: 29 Nov 2006
Posts: 24

PostPosted: Thu Oct 02, 2008 4:39 pm

cpu wrote:
I've chosen XFS for my /home partition on my server too, but I have some problems with it:
[chop]


Try asking in one of the appropriate fora -- the description for this one clearly states:

Quote:
Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.


When you ask, don't forget to post the relevant /etc/fstab entry as well as the output from "xfs_info /".
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 283

PostPosted: Wed Oct 22, 2008 8:53 am

I switched to CFQ as my I/O scheduler to test things out. Much better responsiveness. So I thought I'd decrease the number of allocation groups to see if the advice given here was applicable. I set agsize to 1 GiB on reformat and ran the same scenario I will be hitting several times a week: copying 6 to 8 GiB files from one partition to another while serving files via Apache. Well, even with CFQ and the reduced allocation groups (down to about 1/4 of what I had for the 500 GiB partition), I saw a visible increase in the latency of reading small files from the disk, regardless of filesystem type. I'm on amd64, gentoo-sources patched with Reiser4, on a SATA II 640 GB Seagate disk. Given that I'm on amd64, this might just be the whole-system responsiveness issue most have complained of, though since it was easy to get rid of, I thought not. Does anyone have any suggestions? I'm thinking of switching these two large partitions over to ext3, but the amount of space the superblocks take up puts me off.

In my anecdotal observations I see that, if the goal is interactivity and responsiveness of a system, agcount/agsize should be set not based on the number of CPUs/cores a user has, or the theoretical number of simultaneous reads and writes, but on a multiple of the read/write speed of the disk mechanism itself. I'm still new to Linux and Gentoo, so I haven't gotten around to using bonnie to test various scenarios.

With 6 SATA ports on my current motherboard and continuing plans for recording HDTV, XFS (over LVM when I get there) seems the most logical solution, so I'd like to learn to utilize this filesystem properly for both throughput and responsiveness. I guess it's analogous to a regular kernel versus a fully preemptible kernel, if I'm not mistaken.
_________________
Atlas (HDTV PVR, HTTP & Media server)
http://mobrienphotography.com/
snIP3r
l33t


Joined: 21 May 2004
Posts: 853
Location: germany

PostPosted: Sun Nov 23, 2008 11:34 am

hi all!

i have a question about the xfs_fsr tool. i use a partition that is encrypted via dm-crypt. i have this fragmentation:

Code:

area52 ~ # xfs_db -c frag -r /dev/mapper/stuff
actual 128404, ideal 70103, fragmentation factor 45.40%


i want to know if it is safe to defrag the whole encrypted partition? has someone with a setup like mine done this successfully?

thx in advance
snIP3r
_________________
Intel i3-4130T on ASUS P9D-X
Kernel 5.15.88-gentoo SMP
-----------------------------------------------
if your problem is fixed please add something like [solved] to the topic!
kowal
n00b


Joined: 23 Apr 2003
Posts: 40

PostPosted: Mon Jan 05, 2009 12:19 am    Post subject: xfs performance tweaks

Good read
http://everything2.com/index.pl?node_id=1479435
snIP3r
l33t


Joined: 21 May 2004
Posts: 853
Location: germany

PostPosted: Wed Jan 07, 2009 8:47 am

thx for the interesting page but this does not answer my questions...
_________________
Intel i3-4130T on ASUS P9D-X
Kernel 5.15.88-gentoo SMP
-----------------------------------------------
if your problem is fixed please add something like [solved] to the topic!
rada
Apprentice


Joined: 21 Oct 2005
Posts: 202
Location: Ottawa, Canada

PostPosted: Tue May 05, 2009 2:44 am

using xfs_fsr should be fine on the encrypted XFS partition, as long as there is no corruption in the filesystem.
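In practice that would be something like the following sketch (mount point is a placeholder; xfs_fsr operates on the mounted filesystem, so the dm-crypt layer beneath it is transparent):

```
# defragment the mounted filesystem, with verbose output
xfs_fsr -v /mnt/stuff
```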
Master One
l33t


Joined: 25 Aug 2003
Posts: 754
Location: Austria

PostPosted: Fri Jul 17, 2009 9:38 am

I've always stuck with ext3, but since XFS was again recommended in the workaround.org ISPmail tutorial, I'm curious about it again:

1. Does XFS make any sense, when it comes down to a lot of small files (like /var/vmail with all the emails in maildir format)?
2. Anybody tried XFS on a netbook with an Atom N270, 1 or 2 GB RAM and a sloppy 16GB SSD?

BTW, nowadays I always use the filesystem on top of LVM, which often sits on top of a LUKS-encrypted partition, which is either on a software or hardware RAID1. Does that have any influence? Wasn't it a typical recommendation in the past NOT to use XFS on software RAID?

For me it's either ext3 or XFS as a general-purpose filesystem; what's good for your server should also be good for your workstation / laptop / netbook. Right?

EDIT: Just played around a little, with the following conclusions:

- Still no "barriers" on LVM ("Disabling barriers, not supported by the underlying device")
- It is still not possible to change the log size after fs creation ("~# xfs_growfs -L 16384 MOUNTPOINT" -> "xfs_growfs: log growth not supported yet")

The first issue seems bad, because it was mentioned that with nobarrier a larger data loss is to be expected in case of an unclean shutdown / power-off; the second issue is bad if you use a distro installer that doesn't let you edit the filesystem creation options (I was thinking of the Debian Installer). Something like "not supported yet" does not really present such an old and mature filesystem as stable and production-ready...

EDIT2: Did some reading up in the XFS FAQ on xfs.org, and found some interesting info concerning the disk write cache and the barrier/nobarrier mount option:

- If you have a single drive, it's good to leave barriers and the disk write cache on.
- If you have a hardware RAID controller with battery-backed controller cache, with the cache in write-back mode, it is advised to use the nobarrier mount option and to disable the individual disks' write caches.

Now what should one do concerning the barrier/nobarrier and disk write cache options, if using

- a Software-RAID1?
- LVM on top of a Software-RAID1?
- LVM on top of a luks-encrypted Software-RAID1?
- LVM on a Hardware-RAID1 with battery backed controller cache and cache in write back mode?
- LVM on top of a luks-encrypted Hardware-RAID1 with battery backed controller cache and cache in write back mode?

As mentioned, XFS on top of LVM leads to disabled barriers anyway, so are you supposed to disable the disks' write caches in every case where nobarrier is used?
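For the battery-backed hardware RAID case, the FAQ's advice would look something like this sketch (device names invented; whether the same holds through the LVM/LUKS stackings is exactly the open question):

```
# disable the write cache on each member disk behind the controller
hdparm -W0 /dev/sda /dev/sdb
# mount the XFS volume with barriers off
mount -o nobarrier /dev/vg0/data /srv/data
```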

It gets even more confusing if virtualization is used, which makes me believe that one is better and safer off sticking with good old ext3 instead... :roll:
_________________
Las torturas mentales de la CIA
erm67
l33t


Joined: 01 Nov 2005
Posts: 653
Location: EU

PostPosted: Mon Apr 12, 2010 1:52 pm

Has anyone experimented with the lazy-counters feature of XFS? It looks promising, and it's quite a recent addition.

Quote:
[XFS] Lazy Superblock Counters

When we have a couple of hundred transactions on the fly at once, they all
typically modify the on disk superblock in some way.
create/unlink/mkdir/rmdir modify inode counts, allocation/freeing modify
free block counts.

When these counts are modified in a transaction, they must eventually lock
the superblock buffer and apply the mods. The buffer then remains locked
until the transaction is committed into the incore log buffer. The result
of this is that with enough transactions on the fly the incore superblock
buffer becomes a bottleneck.

The result of contention on the incore superblock buffer is that
transaction rates fall - the more pressure that is put on the superblock
buffer, the slower things go.

The key to removing the contention is to not require the superblock fields
in question to be locked. We do that by not marking the superblock dirty
in the transaction. IOWs, we modify the incore superblock but do not
modify the cached superblock buffer. In short, we do not log superblock
modifications to critical fields in the superblock on every transaction.
In fact we only do it just before we write the superblock to disk every
sync period or just before unmount.


mkfs.xfs wrote:
lazy-count=value

This changes the method of logging various persistent counters in the superblock. Under metadata intensive workloads, these counters are updated and logged frequently enough that the superblock updates become a serialisation point in the filesystem. The value can be either 0 or 1.

With lazy-count=1, the superblock is not modified or logged on every change of the persistent counters. Instead, enough information is kept in other parts of the filesystem to be able to maintain the persistent counter values without needing to keep them in the superblock. This gives significant improvements in performance on some configurations. The default value is 0 (off) so you must specify lazy-count=1 if you want to make use of this feature.


xfs_admin wrote:
-c 0|1 Enable (1) or disable (0) lazy-counters in the filesystem. This
operation may take quite a bit of time on large filesystems as
the entire filesystem needs to be scanned when this option is
changed.

With lazy-counters enabled, the superblock is not modified or
logged on every change of the free-space and inode counters.
Instead, enough information is kept in other parts of the
filesystem to be able to maintain the counter values without
needing to keep them in the superblock. This gives significant
improvements in performance on some configurations and metadata
intensive workloads.

_________________
Ok boomer
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
lightvhawk0
Guru


Joined: 07 Nov 2003
Posts: 388

PostPosted: Wed May 05, 2010 12:06 am

I turned on lazy counters and restored my backup to my xfs drive

Code:
meta-data=/dev/md0               isize=256    agcount=16, agsize=7599984 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=121599744, imaxpct=25
         =                       sunit=16     swidth=32 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =external               bsize=4096   blocks=59367, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=131072 blocks=0, rtextents=0


it took about nine minutes to restore before I turned on lazy-count=1;
after I reformatted and restored my backup again, it knocked an entire minute off.


EDIT: just a note, I moved the log to an external device and now my system is much quieter.
_________________
If God has made us in his image, we have returned him the favor. - Voltaire
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Dec 11, 2010 1:39 am

for those folks who use one (or more) of those new hard drives with "Advanced Format" (4 KiB sectors):

make sure you have set:

mkfs.xfs -s size=4096

when creating the partition and

before that:

that your partitions are aligned to MiB (via gparted) or multiples of 4 KiB
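A sketch of both steps (disk name is a placeholder; a partition starting at 1 MiB is automatically aligned to a multiple of 4 KiB):

```
parted -s /dev/sdX mklabel gpt mkpart primary 1MiB 100%
mkfs.xfs -s size=4096 /dev/sdX1
```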
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
Enlight
Advocate


Joined: 28 Oct 2004
Posts: 3519
Location: Alsace (France)

PostPosted: Tue Aug 23, 2011 10:41 pm

Hi folks,

For those of you who don't know it: since 2.6.35, XFS has had a new mount option, '-o delaylog', which improves metadata operations a lot. From 2.6.39 this option is on by default, and basically the default setup is probably the best you can get for non-specific usage (even noatime is of little use, because filesystems all use relatime by default now).

Results of decent benchmarks can be seen on slides 24 onward of this paper: http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf

Basically, XFS now competes with btrfs and ext4 at file creation, and we mean small files here (investigations are under way to make it even better), and XFS is now the fastest filesystem for deletion
(no, I made no typo).


edit: btw, XFS was already 3 times faster than ext4 and 6 times faster than btrfs at iterating through the created files (see slide 20), so given that each created file is presumably read at least once, the conclusion should be obvious!

Hope you will enjoy your xfs filesystem even more than before!
rada
Apprentice


Joined: 21 Oct 2005
Posts: 202
Location: Ottawa, Canada

PostPosted: Tue Sep 06, 2011 6:49 pm

This link has some interesting ideas for optimizing XFS for a raid partition. https://raid.wiki.kernel.org/index.php/RAID_setup#XFS
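The core of that page is matching XFS's stripe geometry to the array; for a hypothetical 4-disk RAID5 with a 64 KiB chunk (so three data-bearing disks), that would look like:

```
# su = chunk size, sw = number of data disks (4 drives - 1 parity)
mkfs.xfs -d su=64k,sw=3 /dev/md0
```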
JeffBlair
Apprentice


Joined: 23 May 2003
Posts: 175
Location: USA, Lone star state

PostPosted: Sat Jan 21, 2012 4:50 am

OK, I'm about to re-do my RAID5 array and want to tweak my drives.
This will be for serving out Blu-ray/DVDs to a couple of PCs. So here's what I'm going to run so far.
It's on a dual-core Intel, by the way, running x64, and the array will be about 10T at the largest... unless I get a new server case. ;)


mkfs.xfs -l internal,size=128m,lazy-count=1 -d agcount=20 -b size=8192 /dev/sdc1


and, of course the normal "noatime,logbufs=8,nobarrier,nodiratime" in fstab


So, does that look right for serving out 5gig files?
Also, would it be better to move the journal file to another drive? And, if so, how would I do that?

Thanks for all the help guys.
Page 5 of 5