TheEldest n00b
Joined: 19 Aug 2008 Posts: 7
Posted: Fri Sep 26, 2008 3:38 pm Post subject: Which filesystem for Gentoo NAS using software RAID5?
I've got a Gentoo box set up as a network attached storage server. It's running an Athlon 64 2.0GHz (512K L2) and 1GB of memory. I've got an old-as-sin 15GB hard drive for the OS and 3x 640GB Western Digital drives (2 platters each!) in RAID5 using md.
I've got Samba running for my wife and myself (she's got a mac; I've got a pc) as well as TorrentFlux to download files directly to the server.
It's currently using ext3 on the RAID. Is this the best choice?
We don't write much to the array except via TorrentFlux, and it's mostly large files.
So: mostly large files being read over software RAID. What's the best filesystem?
Cyker Veteran
Joined: 15 Jun 2006 Posts: 1746
Posted: Fri Sep 26, 2008 6:09 pm
ext3 is fine. I don't know how much difference you'd see between the alternatives, but it's worth tweaking things like the stripe size and inode size (larger values are faster for big file writes but slower for small ones), and drescherjm gave me an excellent tip that boosts write speeds massively:
https://forums.gentoo.org/viewtopic-t-673067-highlight-raid5.html
There's also readahead tuning (hdparm, or blockdev --setra), which can be matched to your specific read/write patterns to boost performance.
All of these will boost performance far more than the filesystem choice, I reckon.
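[Editor's note] The readahead tuning mentioned above can be sketched as follows. This is a minimal illustration, not a recommendation: `/dev/md0` and the 2 MiB value are assumptions, and the right value depends on your workload, so benchmark before and after.

```shell
#!/bin/sh
# Readahead is specified in 512-byte sectors, so convert MiB to sectors first.
mib_to_sectors() {
    echo $(( $1 * 1024 * 1024 / 512 ))
}

RA=$(mib_to_sectors 2)          # 2 MiB readahead for large sequential reads
echo "readahead in sectors: $RA"

# Apply it (root required); uncomment on a real system:
# blockdev --setra "$RA" /dev/md0
# blockdev --getra /dev/md0      # verify the new value
```

Larger readahead helps mostly-sequential reads of big files (the workload described in this thread) but can waste memory and hurt random-read latency, so it is not a universal win.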
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Sat Sep 27, 2008 3:06 am
Here is some sample data from today, when I was trying to help someone tune a RAID card.
Here are some more results for stripe_cache_size on a reiserfs filesystem. Increasing the 1024 to 2048 or 4096 may help.
# free -m
             total       used       free     shared    buffers     cached
Mem:          2010       1992         17          0         53       1647
-/+ buffers/cache:        291       1718
Swap:         4424          0       4423
datastore0 ~ # cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 79
model name : AMD Athlon(tm) 64 Processor 3200+
stepping : 2
cpu MHz : 2000.000
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext
fxsr_opt rdtscp lm 3dnowext 3dnow up pni cx16 lahf_lm svm cr8_legacy
bogomips : 4021.83
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
Here I am using software RAID 6 with 6 x 320 GB Seagate 7200.10 drives.
datastore0 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md0 : active raid1 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
256896 blocks [6/6] [UUUUUU]
md2 : active raid6 sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1] sda4[0]
1199283200 blocks level 6, 256k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid6 sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
46909440 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
The first test starts with the system default stripe cache:
datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 141.612 s, 60.7 MB/s
datastore0 ~ # echo 1024 > /sys/block/md1/md/stripe_cache_size
datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 112.452 s, 76.4 MB/s
datastore0 ~ # echo 2048 > /sys/block/md1/md/stripe_cache_size
datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 65.093 s, 132 MB/s
For reproducibility, I ran again with 1024:
datastore0 ~ # echo 1024 > /sys/block/md1/md/stripe_cache_size
datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 109.951 s, 78.1 MB/s
Now 4096:
datastore0 ~ # echo 4096 > /sys/block/md1/md/stripe_cache_size
datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 59.2806 s, 145 MB/s
_________________
John
My gentoo overlay
Instructions for overlay
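[Editor's note] The sweep above can be scripted as a loop. This is a sketch under assumptions: the md device name, sizes, and test file are placeholders, and the 4 KiB page size is assumed. Note that the stripe cache costs roughly one page per cache entry per member disk, so very large values eat RAM.

```shell
#!/bin/sh
# Approximate RAM used by the md stripe cache, in KiB,
# assuming 4 KiB pages: entries * disks * 4.
stripe_cache_kib() {
    echo $(( $1 * $2 * 4 ))
}

echo "4096 entries on 6 disks: $(stripe_cache_kib 4096 6) KiB"

# The sweep itself (root required); uncomment on a real system.
# conv=fsync makes dd flush before reporting, for more honest numbers.
# for size in 256 1024 2048 4096; do
#     echo "$size" > /sys/block/md1/md/stripe_cache_size
#     echo "stripe_cache_size=$size"
#     dd if=/dev/zero of=/bigfile bs=1M count=8192 conv=fsync 2>&1 | tail -n 1
#     rm -f /bigfile
# done
```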
Last edited by drescherjm on Sat Sep 27, 2008 3:18 am; edited 2 times in total
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Sat Sep 27, 2008 3:14 am
Now a second test, this time on a 5-drive RAID 5 using 4 Seagate 7200.11s and one 7200.10. This machine is an Athlon X2 5000 with 8GB of memory and uses the same M2N motherboard as the machine above:
datastore2 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 44.8402 s, 192 MB/s
datastore2 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
md0 : active raid1 sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
256896 blocks [5/5] [UUUUU]
md1 : active raid5 sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
46909440 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
datastore2 ~ # mount | grep /dev/md1
/dev/md1 on / type xfs (rw,noatime,sunit=128,swidth=512,ikeep,noquota)
And the tuning parameter I used:
echo 1024 > /sys/block/md1/md/stripe_cache_size
Now to show that the 8GB is not just caching everything:
datastore2 samba-temp # dd if=/dev/zero of=bigfile bs=1M count=15000
15000+0 records in
15000+0 records out
15728640000 bytes (16 GB) copied, 83.324 s, 189 MB/s
datastore2 samba-temp # rm bigfile
datastore2 samba-temp # dd if=/dev/zero of=bigfile1 bs=1M count=15000
15000+0 records in
15000+0 records out
15728640000 bytes (16 GB) copied, 85.7504 s, 183 MB/s
samba-temp is on md2 (not shown in the mdstat output above). It is also RAID 5, but this one has LVM on it (only one PV), so the XFS filesystem sits on LVM, which sits on RAID 5.
_________________
John
My gentoo overlay
Instructions for overlay
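[Editor's note] The mount line above shows sunit=128,swidth=512 (in 512-byte sectors), which follows directly from the array geometry: a 64 KiB chunk on 4 data disks. A sketch of that arithmetic, including the equivalent ext3 options for the original poster's setup, is below; the numbers are taken from this thread and the mke2fs flags are standard, but verify against your own chunk size and disk count.

```shell
#!/bin/sh
# Derive filesystem alignment values from md array geometry.
CHUNK_KIB=64          # md chunk size from /proc/mdstat
DATA_DISKS=4          # RAID 5 on 5 drives = 4 data + 1 parity

# XFS takes sunit/swidth in 512-byte sectors (2 sectors per KiB).
sunit=$(( CHUNK_KIB * 2 ))
swidth=$(( sunit * DATA_DISKS ))
echo "xfs: sunit=$sunit swidth=$swidth"

# ext2/3 takes stride/stripe-width in filesystem blocks (4 KiB here).
stride=$(( CHUNK_KIB / 4 ))
stripe_width=$(( stride * DATA_DISKS ))
echo "ext3: mke2fs -E stride=$stride,stripe-width=$stripe_width"
```

For the thread starter's 3-drive RAID 5, DATA_DISKS would be 2.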
TheEldest n00b
Joined: 19 Aug 2008 Posts: 7
Posted: Mon Sep 29, 2008 4:50 pm Post subject: Question about dd
Ok, I've never used dd.
And before I go testing all willy-nilly: is that a non-destructive test? (I'd assume it is, and that you're only writing to unused blocks, but it's better to be safe than sorry ...)
Can I use that command on a disk that's storing stuff?
(Thanks for the bit about stripe_cache_size.)
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Mon Sep 29, 2008 5:58 pm
This just creates a large file on the filesystem, so it is nondestructive. See bigfile and bigfile1; those are the output filenames. You could include an entire path there as well (like I did with /bigfile); in these examples I mostly changed directory to the destination first.
_________________
John
My gentoo overlay
Instructions for overlay
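[Editor's note] The safety distinction is worth spelling out: dd is only nondestructive when of= points at an ordinary file; pointed at a block device it overwrites the device. A small sketch (paths are illustrative, the file is written to a temp directory and removed afterward):

```shell
#!/bin/sh
# SAFE: of= is a regular file on a mounted filesystem, so dd just
# creates (or overwrites) that file.
DIR=$(mktemp -d)
TARGET="$DIR/bigfile"
dd if=/dev/zero of="$TARGET" bs=1M count=4 2>/dev/null
SIZE=$(wc -c < "$TARGET")
echo "wrote $SIZE bytes to $TARGET"
rm -f "$TARGET"
rmdir "$DIR"

# DESTRUCTIVE: of= is the block device itself. This would wipe the
# array's contents; never run it on a disk holding data.
# dd if=/dev/zero of=/dev/md1 bs=1M
```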
unic.ori n00b
Joined: 02 Apr 2008 Posts: 18
Posted: Tue Feb 10, 2009 2:17 pm
Hello, if I perform these tests, I get 65 MB/s on my RAID 5 with 3x 1.5 TB Seagate drives.
The CPU (AMD 3800 X2) is at 100% on both cores.
Code:
dd if=/dev/zero of=/mnt/md1/bigfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 16.4443 s, 65.3 MB/s

Code:
cat /sys/block/md1/md/stripe_cache_size
8192

Code:
Personalities : [linear] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdc[0] sda[2] sdb[1]
2930276992 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>