Gentoo Forums
XFS on steroids
Gentoo Forums Forum Index :: Documentation, Tips & Tricks
Sachankara
l33t


Joined: 11 Jun 2004
Posts: 696
Location: Stockholm, Sweden

PostPosted: Sun Dec 10, 2006 11:29 pm

irondog wrote:
I'm using XFS on LVM2. Is this a problem?

Code:
Filesystem "dm-1": Disabling barriers, not supported by the underlying device
XFS mounting filesystem dm-1
Ending clean XFS mount for filesystem: dm-1
Nope. :)
_________________
Gentoo Hardened Linux 2.6.21 + svorak (Swedish dvorak)
irondog
l33t


Joined: 07 Jul 2003
Posts: 715
Location: Voor mijn TV. Achter mijn pc.

PostPosted: Mon Dec 11, 2006 9:02 am

Sachankara wrote:
Nope. :)
Nope? Wouldn't I have barriers otherwise, i.e. when not using LVM? Or are barriers worthless?
_________________
Alle dingen moeten onzin zijn.
whiskas
n00b


Joined: 11 Sep 2003
Posts: 58
Location: Bucharest, Romania

PostPosted: Mon Dec 11, 2006 9:22 am

I'm using XFS on top of some EVMS volumes.
I'm getting the same "barriers not supported" message when mounting, but otherwise everything works all right.
I'm a little curious about the in-kernel device-mapper implementation and why it doesn't implement write barriers on top of the real hardware. Perhaps someone with more knowledge could provide more information on this...?
Thaidog
Veteran


Joined: 19 May 2004
Posts: 1036
Location: Hilton Head, SC

PostPosted: Sun Dec 17, 2006 1:13 am

What is the status of XFS realtime subvolumes on Linux? (GIO... etc.)

Does 2.6 support any of that and if so how do you implement it?
irondog
l33t


Joined: 07 Jul 2003
Posts: 715
Location: Voor mijn TV. Achter mijn pc.

PostPosted: Sun Dec 17, 2006 12:07 pm

What could be the possible consequence of not having write barriers enabled? Anyone?

I have to say there is very little information to be found about JFS, so I decided to find out myself. I'm testing a combination of JFS / XFS now. The overall feeling is quite good, but that's also the case whenever I start using a newly formatted ext3 filesystem. So let's see what happens as time goes by.
_________________
Alle dingen moeten onzin zijn.
Thaidog
Veteran


Joined: 19 May 2004
Posts: 1036
Location: Hilton Head, SC

PostPosted: Mon Dec 18, 2006 3:01 am

irondog wrote:
What could be the possible consequence of not having write barriers enabled? Anyone?

I have to say there is very little information to be found about JFS, so I decided to find out myself. I'm testing a combination of JFS / XFS now. The overall feeling is quite good, but that's also the case whenever I start using a newly formatted ext3 filesystem. So let's see what happens as time goes by.


Without barriers, some situations leave the log data sitting in the drive's cache instead of being written to disk first. If your system crashes or loses power, your disk could lose serious amounts of data or metadata, to the point of not being recoverable.
biggyL
Tux's lil' helper


Joined: 31 Jan 2005
Posts: 120
Location: Israel

PostPosted: Sat Feb 10, 2007 2:55 pm

Hello All,

I have a Pentium II (Deschutes) with a first 10 GB disk (/dev/hda) and a second 60 GB disk (/dev/hdc).
After reading this thread and some SGI docs and FAQs, I came up with these options for creating the filesystems and mounting the disks:

1) To create XFS on hda:

Code:
# mkfs.xfs -l internal,size=128m -d agcount=2 /dev/hda


I've also seen the "-d unwritten=0" option:
mkfs – Unwritten Extents
• Unwritten extents are used to support pre-allocation.
• Default is enabled.
• To disable unwritten extents you would use:
# mkfs -d unwritten=0 device
• Filesystem write performance may be negatively affected for unwritten file extents,
since extra filesystem transactions are required to convert extent flags for the
range of the file written.

So my question:
Is it safe to add the -d unwritten=0 option to increase performance like this (or will I lose some essential functionality)?:

Code:
# mkfs.xfs -l internal,size=128m -d agcount=2 -d unwritten=0 /dev/hda


2) To prevent data loss in case of a power outage (disabling the write-back cache):
Add the following to local.start:
Code:
# hdparm -W0 /dev/hda
# hdparm -W0 /dev/hdc
# blktool /dev/hda wcache off
# blktool /dev/hdc wcache off


Right?
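As a sketch of how a startup script might check first whether the cache is even enabled (the function name and the " write-caching =  1 (on)" output line are assumptions about hdparm's usual output, not something from this thread):

```shell
# Sketch: parse the 0/1 write-caching flag out of `hdparm -W` output so a
# startup script can act only when the cache is still enabled. The
# " write-caching =  1 (on)" line format is an assumption; check your
# hdparm version.
wcache_state() {
    sed -n 's/.*write-caching *= *\([01]\).*/\1/p'
}

# Intended use (needs the real device, so not run here):
#   hdparm -W /dev/hda | wcache_state
```

If it prints 1, the write-back cache is still on and the `hdparm -W0` call above is worth running.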

3) Mount options:

On this thread it's suggested that the mount options should be "noatime,logbufs=8".

But what about the "osyncisdsync" mount option?

• osyncisdsync
– Writes to files opened with the O_SYNC flag set will behave as if the O_DSYNC flag
had been used instead.
– This can result in better performance without compromising data safety.
– However timestamp updates from O_SYNC writes can be lost if the system crashes.
Use osyncisosync to disable this setting.

So do you think it is safe to add "osyncisdsync" mount option to fstab?


I'd appreciate any comments/answers.
jsosic
Guru


Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

PostPosted: Fri Feb 23, 2007 9:25 am

biggyL wrote:

I've also seen the "-d unwritten=0" option:
mkfs – Unwritten Extents
• Unwritten extents are used to support pre-allocation.
• Default is enabled.
• To disable unwritten extents you would use:
# mkfs -d unwritten=0 device
• Filesystem write performance may be negatively affected for unwritten file extents,
since extra filesystem transactions are required to convert extent flags for the
range of the file written.

So my question: Is it safe to add the -d unwritten=0 option to increase performance like this (or will I lose some essential functionality)?


I think the performance gains from disabling unwritten extents are negligible, so I would rather leave that option out. Write performance is affected because the FS needs to convert extents from unwritten to written across the size of the file being written. If someone could run a few tests, that would be nice, but I don't think it's some magic option.

biggyL wrote:

2) To prevent data loss in case of a power outage (disabling the write-back cache):
Add the following to local.start:
Code:
# hdparm -W0 /dev/hda
# hdparm -W0 /dev/hdc
# blktool /dev/hda wcache off
# blktool /dev/hdc wcache off



Disabling the write-back cache also significantly reduces performance; but then again, test it first, maybe it's not such a big deal.

biggyL wrote:
3) Mount options:

On this thread it's suggested that the mount options should be "noatime,logbufs=8"
But what about "osyncisdsync" mount option.

• osyncisdsync
– Writes to files opened with the O_SYNC flag set will behave as if the O_DSYNC flag
had been used instead.
– This can result in better performance without compromising data safety.
– However timestamp updates from O_SYNC writes can be lost if the system crashes.
Use osyncisosync to disable this setting.

So do you think it is safe to add "osyncisdsync" mount option to fstab?


I haven't tested this option either... Could someone run Bonnie and post results?
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
octoploid
n00b


Joined: 21 Oct 2006
Posts: 65

PostPosted: Fri Feb 23, 2007 6:35 pm

jsosic wrote:

biggyL wrote:
3) Mount options:

On this thread it's suggested that the mount options should be "noatime,logbufs=8"
But what about "osyncisdsync" mount option.

• osyncisdsync
– Writes to files opened with the O_SYNC flag set will behave as if the O_DSYNC flag
had been used instead.
– This can result in better performance without compromising data safety.
– However timestamp updates from O_SYNC writes can be lost if the system crashes.
Use osyncisosync to disable this setting.

So do you think it is safe to add "osyncisdsync" mount option to fstab?


I haven't tested this option either... Could someone run Bonnie and post results?


There is no need to add "osyncisdsync" to fstab, because it has been enabled by default since 2002...
_________________
Myself and mine gymnastic ever
biggyL
Tux's lil' helper


Joined: 31 Jan 2005
Posts: 120
Location: Israel

PostPosted: Sat Feb 24, 2007 2:55 pm

octoploid

Right,


Code:
[esandeen@neon linux-2.6.20]$ grep -r osyncisdsync fs/xfs/xfs_vfsops.c
        } else if (!strcmp(this_char, "osyncisdsync")) {
        "XFS: osyncisdsync is now the default, option is deprecated.");
[esandeen@neon linux-2.6.20]$


jsosic

Here is a response from Timothy Shimmin from SGI (at least his e-mail address is @sgi.com :)) to my "unwritten=0" query:

My understanding (although I'm not familiar with that code), is that unwritten extents are used in space preallocation.
So unless you reserve space for a file it will not have an effect.
And if you do, then setting "unwritten=0" will speed up writes because it doesn't need to flag the unwritten extents and write out the extra transactions for this.
If the unwritten extents aren't flagged as such, then there can be a security issue where one can read old data (others' data) in these unwritten parts.
In fact, the security issue on preallocation (1997-98 sgi-pv#705217) was what motivated the idea of flagging extents as unwritten in the first place.
----------

So my choice is to set "unwritten=0" on this particular machine (a PII whose only console-access user is root). :D
prymitive
Apprentice


Joined: 13 Jun 2004
Posts: 260

PostPosted: Thu Mar 15, 2007 8:48 pm

I tried XFS but it was so slooooooooow. Good thing I figured it out: mount your XFS partition with nobarrier. Barriers have been the default since 2.6.17, and they flush the write cache too often.
dentharg
Guru


Joined: 10 Aug 2004
Posts: 438
Location: /poland/wroclaw

PostPosted: Wed Apr 25, 2007 6:56 am

I am really interested in data journaling in XFS. Is it in there somewhere? I have had quite a few files lost on XFS when the power goes down (those files were filled with garbage). Has anything changed in that regard?
fik
n00b


Joined: 27 Feb 2006
Posts: 13

PostPosted: Mon Apr 30, 2007 5:37 pm

prymitive wrote:
I tried XFS but it was so slooooooooow. Good thing I figured it out: mount your XFS partition with nobarrier. Barriers have been the default since 2.6.17, and they flush the write cache too often.


You are right, "nobarrier" speeds up the write performance of XFS significantly :D, as evidenced by my small benchmark:

time emerge bash

nobarrier,logbufs=8 : 2:48.237
logbufs=8           : 3:24.496
nobarrier           : 2:50.432
default             : 3:23.414

(I added a step to fill the cache with other data before each run and did 4 runs; the real time of the quickest run is reported.)
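The timing method described above can be sketched roughly like this in bash (`fastest` is a hypothetical helper name; the cache-fill step and the actual `emerge bash` run are omitted):

```shell
#!/bin/bash
# Sketch of the benchmark method above: run a command four times and report
# the fastest wall-clock ("real") time. The cache-fill step before each run
# is omitted; `fastest` is a made-up name, not a real tool.
fastest() {
    local best="" t
    for _ in 1 2 3 4; do
        # bash's `time -p` writes "real N.NN" to stderr; capture it with awk
        t=$( { time -p "$@" > /dev/null 2>&1; } 2>&1 | awk '/^real/ { print $2 }' )
        if [ -z "$best" ] || awk -v a="$t" -v b="$best" 'BEGIN { exit !((a + 0) < (b + 0)) }'; then
            best=$t
        fi
    done
    echo "$best"
}

# Example: fastest sleep 1   -> prints roughly 1.00
```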

But is the "nobarrier" safe? :?
blkdragon
n00b


Joined: 20 Nov 2006
Posts: 50

PostPosted: Tue May 01, 2007 2:06 am

Hey, I've been trying out XFS, but I don't know what options to put in fstab for it...

I'm running a P4 1.6 GHz with a 20 GB and a 6.5 GB HDD, and XFS is on the 6.5 GB HDD...

any suggestions?
_________________
Lovin' Choc Cake...

Toshiba Portege 7020CT
PII, 366Mhz, 192Mb RAM, 30GB HDD, XP & Gentoo

Hewlett Packard 8833
PIII, 1Ghz, 512Mb RAM, 80+40Gb HDD, XP & Gentoo

Non-Branded Computer
P4, 1.6Ghz, 512Mb RAM, 160Gb HDD, XP & Gentoo
biggyL
Tux's lil' helper


Joined: 31 Jan 2005
Posts: 120
Location: Israel

PostPosted: Tue May 01, 2007 8:22 am

blkdragon
Below is my fstab as an example:

Code:
/dev/hda1               /                       xfs             noatime,nodiratime,logbufs=8    1 1
/dev/hda2               none                    swap            sw                              0 0
/dev/hdc3               /var/tmp/portage        xfs             noatime,nodiratime,logbufs=8    0 0
/dev/hdc2               /data                   xfs             noatime,nodiratime,logbufs=8    0 0
/dev/hdc1               none                    swap            sw                              0 0
/dev/cdrom              /mnt/cdrom              iso9660         noauto,ro                       0 0
/dev/fd0                /mnt/floppy             auto            noauto                          0 0

# NOTE: The next line is critical for boot!
proc                    /proc           proc            defaults        0 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will
#  use almost no memory if not populated with files)
shm                     /dev/shm        tmpfs           nodev,nosuid,noexec     0 0


Last edited by biggyL on Tue May 01, 2007 8:31 am; edited 1 time in total
biggyL
Tux's lil' helper


Joined: 31 Jan 2005
Posts: 120
Location: Israel

PostPosted: Tue May 01, 2007 8:30 am

Hello All,

I'd like to share an xfsdump script (written a while ago) that I'm using to rotate 1 full and 6 incremental backups during the week.
I'm using xfsdump to make the dumps and xfsinvutil to prune (manage) the sessions by the date of the dump.

Code:
# cat /scripts/xfsdump.sh
#!/bin/bash

# The weekday number from `date +%w` doubles as the dump level and file suffix.
DATE=`/usr/bin/date +%m/%d/%Y`
/usr/bin/xfsinvutil -n -m "file`/usr/bin/date +%w`" -M "`/usr/bin/uname -n`:/" $DATE
/usr/bin/rm /data/backups/backup`/usr/bin/date +%w`.file
/usr/bin/xfsdump -e -l `/usr/bin/date +%w` -L "dump hda1(/) of `/usr/bin/uname -n`.`/bin/dnsdomainname` at `/usr/bin/date +%F` level `/usr/bin/date +%w`" -M "file`/usr/bin/date +%w`" -f /data/backups/backup`/usr/bin/date +%w`.file /
/usr/bin/cp -R /var/lib/xfsdump/. /data/backups/xfs_inventory_backup/


This is a sample cronjob:
Code:
10 1 * * * (/scripts/xfsdump.sh | /bin/mailx -s "`/usr/bin/uname -n`.`/bin/dnsdomainname` daily xfsdump level `/usr/bin/date +\%w` status" leonk@mydomain.com)



Any comments very appreciated.

Enjoy
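For readers puzzling out the backticks: the whole scheme hangs off `date +%w`, which yields the weekday number (0 = Sunday). A minimal sketch of the naming convention (`dump_file` is a hypothetical helper, not part of the script above):

```shell
# Sketch of the naming scheme the script relies on: `date +%w` gives the
# weekday number (0 = Sunday .. 6 = Saturday), which serves both as the
# xfsdump level (0 = full, 1-6 = incremental) and as the backup file suffix.
# `dump_file` is a hypothetical helper for illustration only.
dump_file() {
    echo "/data/backups/backup$1.file"
}

dump_file "$(date +%w)"   # e.g. on a Wednesday: /data/backups/backup3.file
```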
timbo
Apprentice


Joined: 29 Jul 2002
Posts: 231
Location: New Zealand

PostPosted: Sat Jun 02, 2007 1:55 am

Is this bad?...

Code:

dinglemouse ~ # /usr/bin/xfs_db -c frag -r /dev/hda4
actual 321901, ideal 1129, fragmentation factor 99.65%
dinglemouse ~ # xfs_fsr /dev/hda4
/media start inode=0
insufficient freespace for: ino=1107: size=8175333596: ignoring
dinglemouse ~ # /usr/bin/xfs_db -c frag -r /dev/hda4
actual 246182, ideal 1126, fragmentation factor 99.54%
dinglemouse ~ #


After something like four hours of thrashing the HDD for a 0.09% improvement... this is on my MythTV videos partition.

Code:

dinglemouse ~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3             9.2G  5.4G  3.5G  61% /
udev                  252M  2.7M  249M   2% /dev
/dev/hda4             223G  216G  7.1G  97% /media
/dev/sda1             459G  192G  244G  45% /media/TheStore
shm                   252M     0  252M   0% /dev/shm
dinglemouse ~ #     


Regards
Tim
8)
_________________
Linux User: 303160
fik
n00b


Joined: 27 Feb 2006
Posts: 13

PostPosted: Thu Jun 28, 2007 11:28 am

timbo wrote:
Is this bad?...

Code:

dinglemouse ~ # /usr/bin/xfs_db -c frag -r /dev/hda4
actual 321901, ideal 1129, fragmentation factor 99.65%
dinglemouse ~ # xfs_fsr /dev/hda4
/media start inode=0
insufficient freespace for: ino=1107: size=8175333596: ignoring
dinglemouse ~ # /usr/bin/xfs_db -c frag -r /dev/hda4
actual 246182, ideal 1126, fragmentation factor 99.54%
dinglemouse ~ #





I think your partition is heavily fragmented, and you cannot defragment it because there is not enough free space. First move some files off to make room and then re-run xfs_fsr. It seems you need at least 8175333596 bytes, i.e. 7.6 GB, instead of the 7.1 GB actually free; and the more free space you have, the faster xfs_fsr will be.
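For what it's worth, the factor xfs_db reports appears to be just (actual - ideal) / actual extents; a quick sketch (`frag_factor` is a hypothetical helper) reproduces the numbers above:

```shell
# The fragmentation factor xfs_db prints looks to be (actual - ideal) / actual,
# where "actual" and "ideal" are extent counts. `frag_factor` is a
# hypothetical helper for illustration only.
frag_factor() {
    awk -v a="$1" -v i="$2" 'BEGIN { printf "%.2f%%\n", (a - i) / a * 100 }'
}

frag_factor 321901 1129   # -> 99.65%, matching the first xfs_db run above
```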

Or you may try shake http://vleu.net/shake/
vipernicus
Veteran


Joined: 17 Jan 2005
Posts: 1462
Location: Your College IT Dept.

PostPosted: Sun Jul 01, 2007 7:08 pm

I don't see how XFS is faster than other filesystems.

On ext3:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~31s

On XFS:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~58s

This is on a SATA-II drive. For my everyday workload, it seems that XFS is the SLOWEST filesystem (next to NTFS that is).

I tried standard XFS and then I tried to follow this guide as well, but I still get about the same performance either way.

The only thing my SATA controller does not support is NCQ. Does XFS perform drastically better with NCQ support?
_________________
Viper-Sources Maintainer || nesl247 Projects || vipernicus.org blog
prymitive
Apprentice


Joined: 13 Jun 2004
Posts: 260

PostPosted: Sun Jul 01, 2007 7:20 pm

vipernicus wrote:
I don't see how XFS is faster than other filesystems.

On ext3:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~31s

On XFS:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~58s

This is on a SATA-II drive. For my everyday workload, it seems that XFS is the SLOWEST filesystem (next to NTFS that is).

I tried standard XFS and then I tried to follow this guide as well, but I still get about the same performance either way.

The only thing my SATA controller does not support is NCQ. Does XFS perform drastically better with NCQ support?


Did you try mounting with -o nobarrier?
vipernicus
Veteran


Joined: 17 Jan 2005
Posts: 1462
Location: Your College IT Dept.

PostPosted: Mon Jul 02, 2007 5:20 am

prymitive wrote:
vipernicus wrote:
I don't see how XFS is faster than other filesystems.

On ext3:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~31s

On XFS:
time tar -xvjf linux-2.6.21.tar.bz2
takes ~58s

This is on a SATA-II drive. For my everyday workload, it seems that XFS is the SLOWEST filesystem (next to NTFS that is).

I tried standard XFS and then I tried to follow this guide as well, but I still get about the same performance either way.

The only thing my SATA controller does not support is NCQ. Does XFS perform drastically better with NCQ support?


Did you try mounting with -o nobarrier?

Wasn't in the guide, what is its use?
_________________
Viper-Sources Maintainer || nesl247 Projects || vipernicus.org blog
a7thson
Apprentice


Joined: 08 Apr 2006
Posts: 175
Location: your pineal gland

PostPosted: Mon Jul 02, 2007 1:01 pm

vipernicus wrote:
prymitive wrote:
vipernicus wrote:
I don't see how XFS is faster than other filesystems.


Did You tried with mount -o nobarrier ?

Wasn't in the guide, what is its use?


This became an issue around kernel 2.6.17, when write barriers became a default option; some kind soul pointed me to a thread on LKML (http://lkml.org/lkml/2006/5/19/33) where an XFS developer discusses the merits/tradeoffs of write barriers and comments on some benchmark results. They (the SGI XFS devs) later addressed this in the XFS FAQ (http://oss.sgi.com/projects/xfs/faq.html), where the official word is:
Code:

Write barrier support.

Write barrier support is enabled by default in XFS since 2.6.17. It is disabled by mounting the filesystem with "nobarrier". Barrier support will flush the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution, however, you should check the system logs to ensure it was successful. Barriers will be disabled and reported in the log if any of the 3 scenarios occurs:

    * "Disabling barriers, not supported with external log device"
    * "Disabling barriers, not supported by the underlying device"
    * "Disabling barriers, trial barrier write failed"

If the filesystem is mounted with an external log device then we currently don't support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn't support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.

Q. Should barriers be enabled with storage which has a persistent write cache?

Many hardware RAID have a persistent write cache which preserves it across power failure, interface resets, system crashes, etc. Using write barriers in this instance is not warranted and will in fact lower performance. Therefore, it is recommended to turn off the barrier support and mount the filesystem with "nobarrier".

Hope that's (somewhat) helpful and gives some context. Not sure it's quite the answer you're looking for, however.
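To follow the FAQ's advice to "check the system logs", one could grep the kernel log for those three messages; a minimal sketch (`barrier_warnings` is a hypothetical helper name):

```shell
# Scan kernel-log text for the barrier-disable messages quoted in the FAQ
# above. Prints any matching lines; exits 0 whether or not something is found.
# `barrier_warnings` is a hypothetical helper for illustration only.
barrier_warnings() {
    grep 'Disabling barriers' || true
}

# Intended use (needs a live kernel log, so not run here):
#   dmesg | barrier_warnings
```

No output means none of the three disable messages appeared, i.e. barriers were not silently turned off at mount time.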
_________________
Sager NP6165 | i7-3610QM | GeForce 650M
Gigabyte GA-880GMA-USB3 | AMD 1100T | Radeon HD 4250
MSI P6N-SLI | Q6600 | Sapphire HD 6870
vipernicus
Veteran


Joined: 17 Jan 2005
Posts: 1462
Location: Your College IT Dept.

PostPosted: Mon Jul 02, 2007 3:43 pm

a7thson wrote:
Not sure it's quite the answer you're looking for, however.

No, that's great information, I just needed to know if removing it would cause more harm than good.

I mounted the partition with nobarrier, and wow, big difference.

real 0m31.367s

Edit:
Although, ext3 w/ -o noatime,commit=60,data=writeback I get:

real 0m26.831s
_________________
Viper-Sources Maintainer || nesl247 Projects || vipernicus.org blog
a7thson
Apprentice


Joined: 08 Apr 2006
Posts: 175
Location: your pineal gland

PostPosted: Mon Jul 02, 2007 4:03 pm

LOL - glad it helped. I'm laughing because I nearly junked XFS as well back then, after upgrading to a >2.6.16 kernel, not realizing that the new feature had been enabled by default. FYI: as you can read, it's not recommended to disable write barriers on a system with no RAID or other provisions for recovery. But honestly, I've run a laptop with most of the filesystem on XFS for a long time with little trouble and no recovery issues to speak of, despite lockups, kernel oopses, and booting unstable/testing kernels [such as viper-sources :-D] on that machine.
vipernicus wrote:
a7thson wrote:
Not sure it's quite the answer you're looking for, however.

No, that's great information, I just needed to know if removing it would cause more harm than good.

I mounted the partition with nobarrier, and wow, big difference.

real 0m31.367s


Quote:

Edit:
Although, ext3 w/ -o noatime,commit=60,data=writeback I get:

:-D I run / on ext3 with similar options, actually. Mostly I entrust larger files, torrent/p2p downloads, media and raw media, plus distfiles, packages, and /home to XFS.
_________________
Sager NP6165 | i7-3610QM | GeForce 650M
Gigabyte GA-880GMA-USB3 | AMD 1100T | Radeon HD 4250
MSI P6N-SLI | Q6600 | Sapphire HD 6870
vinboy
n00b


Joined: 18 Jun 2006
Posts: 69

PostPosted: Sat Mar 01, 2008 3:51 am

WHAT THE HELL!!! I thought my HDD was going to die.
The HDD is brand new.

I formatted my external HDD (500 GB), connected through USB 2.0.

With XFS (using the settings in the first post):
-When writing to the HDD, the max speed was 20 MB/s.
-The HDD sounded like it was going to explode! The head was moving here and there all the time!

With ext2:
-Max speed 29 MB/s <---- 50% improvement over XFS.
-The writing operation is so quiet you hardly notice anything.

Please advise: what was going on?
All times are GMT
Page 3 of 5