Gentoo Forums

[SOLVED] Which filesystem should I use?
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Thu Aug 28, 2014 11:40 am    Post subject: [SOLVED] Which filesystem should I use?

I'm currently building a 15 TB hardware RAID 5 array (HP Smart Array P410).

Any recommendations for a filesystem?
Was running ext4 in the past but thought I could/should try something else...


Last edited by shuuraj on Sun Aug 31, 2014 1:38 pm; edited 1 time in total
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Thu Aug 28, 2014 11:54 am    Post subject:

To give some more information:

I want to use encryption.
All kinds of data will be stored on this volume, mostly big files though.
apathetic
n00b


Joined: 28 Aug 2014
Posts: 36

PostPosted: Thu Aug 28, 2014 11:56 am    Post subject:

ZFS might be a good choice.
Roman_Gruber
Advocate


Joined: 03 Oct 2006
Posts: 3846
Location: Austro Bavaria

PostPosted: Thu Aug 28, 2014 2:16 pm    Post subject:

Okay.

I suggest you use an LVM container with LUKS on top of it. I have been using that setup for a while.

Please stack it in this fashion:
LVM container => LUKS => ext4

LVM has some awesome features that could be useful later, when you want to move your data or swap hardware and so on...

I moved my ext4 (inside LUKS, inside LVM) while the system was running and kept doing my work.

BTW, it is my /.
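
A minimal sketch of that stack, assuming the array shows up as /dev/sda and using made-up volume names:

Code:
# LVM container first
pvcreate /dev/sda
vgcreate data /dev/sda
lvcreate -l 100%FREE -n storage data

# LUKS on top of the logical volume
cryptsetup luksFormat /dev/data/storage
cryptsetup luksOpen /dev/data/storage storage_crypt

# ext4 inside the LUKS container
mkfs.ext4 /dev/mapper/storage_crypt
mount /dev/mapper/storage_crypt /mnt/data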
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Thu Aug 28, 2014 2:28 pm    Post subject:

Thank you for your input.

I do not need LVM at all.

I'm just curious whether there's a filesystem that is more reliable or performs better in my setup.
mrbassie
l33t


Joined: 31 May 2013
Posts: 772
Location: over here

PostPosted: Thu Aug 28, 2014 4:03 pm    Post subject:

I would also recommend at least looking into zfs.
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3135

PostPosted: Thu Aug 28, 2014 7:52 pm    Post subject:

I think JFS was said to be good with large filesystems. Never tried it though; so far I'm happy with ext2/3.
merky1
n00b


Joined: 22 Apr 2003
Posts: 51

PostPosted: Thu Aug 28, 2014 8:44 pm    Post subject:

szatox wrote:
I think JFS was said to be good with large filesystems. Never tried it though; so far I'm happy with ext2/3.


I've used JFS for over a decade with no problems, but there are drawbacks. The largest is that no one is maintaining it anymore. XFS seems to have had a recent revival, with RHEL now using it as the default. JFS has been shown to be "more efficient", but XFS adds things like online defrag that JFS will most likely never gain.
_________________
ooo000 WoooHooo 000ooo
The Doctor
Moderator


Joined: 27 Jul 2010
Posts: 2678

PostPosted: Thu Aug 28, 2014 8:45 pm    Post subject:

I'd stick with ext4.

Most other filesystems have some peculiarities. For example, you cannot shrink a JFS partition. Btrfs is simply too new to consider stable, and ReiserFS seems to be suffering from some rather bizarre corruption lately. It never really struck me as being too reliable either, but that might just be me. I know there are users who swear by it.

ZFS might be worth looking into, but it really doesn't seem to be too 'mainline' yet and can have some strange issues.

As for performance, I doubt you will notice any difference between modern filesystems. If you benchmarked them, maybe.
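
If you do want numbers, a crude sequential test is enough to show how close they are; the mount point and sizes below are placeholders:

Code:
# sequential write, flushed to disk so the number is honest
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 conv=fdatasync
# drop the page cache, then sequential read
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bigfile of=/dev/null bs=1M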
_________________
First things first, but not necessarily in that order.

Apologies if I take a while to respond. I'm currently working on the dematerialization circuit for my blue box.
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Fri Aug 29, 2014 8:12 am    Post subject:

Thanks for your thoughts guys.

ZFS and its features are in fact really interesting:
    RAID function: I don't need this since I'm using a RAID controller
    Checksums: I don't need these since I'm using a RAID controller/ECC RAM
    Copy-on-write/snapshots: does sound interesting, but I don't have the slightest idea how it works
    RAM: it needs a lot of RAM, which I dislike



I really think I will stick to ext4 if I don't find arguments for another fs.
mrbassie
l33t


Joined: 31 May 2013
Posts: 772
Location: over here

PostPosted: Fri Aug 29, 2014 7:29 pm    Post subject:

shuuraj wrote:
Thanks for your thoughts guys.

ZFS and its features are in fact really interesting:
    RAID function: I don't need this since I'm using a RAID controller
    Checksums: I don't need these since I'm using a RAID controller/ECC RAM
    Copy-on-write/snapshots: does sound interesting, but I don't have the slightest idea how it works
    RAM: it needs a lot of RAM, which I dislike

I really think I will stick to ext4 if I don't find arguments for another fs.


RAID-Z is better than hardware RAID from what I understand (in other words, I've read that it is). It certainly appears more flexible according to the documentation.

How the snapshots work: data that exists within a snapshot cannot be deleted or overwritten unless the snapshot is manually destroyed. Changes to snapshotted data are written somewhere else on the disk(s), and the modified file costs only the size of the changes written since the snapshot, not the size of the original file plus the changes. The same holds at the snapshot level: your initial snapshot takes up zero space until the data starts to change. If you snapshot a filesystem with 100 1 GB files and afterwards add 1 KB to one of those files, your snapshot will now require 1 KB of space.

In terms of how it's useful: say you take one snapshot a day via cron, and on a given day something (or everything) gets removed accidentally with no other way to get it back. Just roll back to yesterday's snapshot; the data was never really gone, it only appeared gone on the live filesystem. This is also very quick. Destroy an old snapshot, or all your snapshots, and the old versions of the data are gone forever without affecting the most recent, and you get all that space back.

Snapshots can be mounted read-only and files extracted from them, or they can be cloned and the clone mounted read-write as a second version of your live filesystem, e.g. in a container/jail. (If I understand correctly.)
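
In practice that boils down to a few commands; the pool/dataset names here (tank/data) are just examples:

Code:
# take a daily snapshot, e.g. from cron
zfs snapshot tank/data@2014-08-29
# roll the live filesystem back to it after an accident
zfs rollback tank/data@2014-08-29
# or mount a read-write clone of it instead
zfs clone tank/data@2014-08-29 tank/data-restore
# destroy it when no longer needed and reclaim the space
zfs destroy tank/data@2014-08-29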

As for RAM, it actually doesn't require a lot; it certainly likes it, though, and will suck it up if available. This can be limited by setting a value in /etc/modprobe.d/zfs.conf. I've got 4 GB on my laptop and I think I limited the ARC (RAM cache) to half a gig or a gig. No problems. Again, ZFS likes RAM, but it doesn't need a ton of it unless you use dedup, and even then it depends on how much data you have. I believe 2 GB of RAM for every TB of data is the recommendation there. But dedup isn't switched on by default and isn't recommended yet. Dedup does actually work, btw, and at write time, same as compression.
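
That ARC limit is a single line in /etc/modprobe.d/zfs.conf; the value below is an assumed 512 MB cap:

Code:
# cap the ZFS ARC at 512 MB (value is in bytes)
options zfs zfs_arc_max=536870912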

Anyway, enough of my waffling. Long day.
hasufell
Retired Dev


Joined: 29 Oct 2011
Posts: 429

PostPosted: Sat Aug 30, 2014 1:09 pm    Post subject:

apathetic wrote:
ZFS might be a good choice.

-1 (at least on Linux)

I was using it for months on my Gentoo box and I experienced a lot of I/O-related slowdowns and micro-freezes in certain use cases. I have enough RAM and CPU power, so that's not the issue. After switching to ext4 I literally felt the difference. And before you ask... no, it was not a slow HDD, it was an SSD.
vaxbrat
l33t


Joined: 05 Oct 2005
Posts: 731
Location: DC Burbs

PostPosted: Sat Aug 30, 2014 6:12 pm    Post subject: Is this a rack?

Considering this is a P410, are you going to let the firmware do the logical volume or keep the drives separate? Do you have more than one server/array combo? If so, consider looking at Ceph:

http://wiki.gentoo.org/wiki/Ceph

If you go this route, don't let the Smart Array do the volume management; you will be at HP's mercy for monitoring the drive health. Build the individual drives as btrfs so that you have a Ceph OSD for each drive. People poo-poo btrfs as being unstable, but I am more than comfortable with it now. I wrote that wiki entry and I'm in the process of breaking up my legacy btrfs RAID arrays into multiple OSDs.
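
Formatting the drives individually for that is simple; the device names below are just examples:

Code:
# one btrfs filesystem per physical drive, one future OSD each
for dev in /dev/sd{b,c,d,e,f}; do
    mkfs.btrfs -L "osd-${dev##*/}" "$dev"
done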
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Sat Aug 30, 2014 9:21 pm    Post subject:

Quote:
After switching to ext4 I literally felt the difference. And before you ask... no, it was not a slow HDD, it was an SSD.


Meaning ZFS was a lot superior in performance to ext4? Or vice versa?

Quote:
Considering this is a P410, are you going to let the firmware do the logical volume or keep the drives separate? Do you have more than one server/array combo? If so, consider looking at Ceph:

I wanted to let the controller create the logical volume.

BTW: it's a VM on an ESXi host.
vaxbrat
l33t


Joined: 05 Oct 2005
Posts: 731
Location: DC Burbs

PostPosted: Sat Aug 30, 2014 11:14 pm    Post subject:

Does the ESXi hypervisor give you much in the way of choices other than ext4? It's a shame you don't run native libvirt and KVM, because Ceph would let you put the VM containers on RADOS block devices. Those shard across the cluster and thus allow near-instant migration of VM guests. I don't think VMware has a hook into librados at the moment.
hasufell
Retired Dev


Joined: 29 Oct 2011
Posts: 429

PostPosted: Sun Aug 31, 2014 3:08 am    Post subject:

shuuraj wrote:
Quote:
After switching to ext4 I literally felt the difference. And before you ask... no, it was not a slow HDD, it was an SSD.

Meaning ZFS was a lot superior in performance to ext4? Or vice versa?

That the micro-freezes and slowdowns were gone, so zfsonlinux sucks.
shuuraj
n00b


Joined: 13 Jan 2014
Posts: 38

PostPosted: Sun Aug 31, 2014 1:37 pm    Post subject:

Quote:

shuuraj wrote:
Quote:
After switching to ext4 I literally felt the difference. And before you ask... no, it was not a slow HDD, it was an SSD.

Meaning ZFS was a lot superior in performance to ext4? Or vice versa?

That the micro-freezes and slowdowns were gone, so zfsonlinux sucks.



Thank you very much. I created an ext4 partition. This seems to be the most reliable fs after all.
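
Creating it boils down to a one-liner; the device name below is an example, and the mkfs.ext4 defaults handle a volume this size (the old 16 TiB limit only bites above that):

Code:
# label is optional; defaults are fine at 10.9 TB
mkfs.ext4 -L data /dev/sda1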

Quote:

If you go this route, don't let the Smart Array do the volume management. You will be at HP's mercy for monitoring the drive health.

FYI

~ # esxcli hpssacli cmd -q "controller slot=1 show config detail"

Smart Array P410 in Slot 1
Bus Interface: PCI
Slot: 1
Serial Number: PACCR9SXZ3BP
Cache Serial Number: PACCQ9SYAJ9K
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: C
Firmware Version: 6.40
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 25% Read / 75% Write
Drive Write Cache: Enabled
Total Cache Size: 256 MB
Total Cache Memory Available: 144 MB
No-Battery Write Cache: Enabled
Battery/Capacitor Count: 0
SATA NCQ Supported: True
Number of Ports: 2 Internal only
Driver Name: hpsa
Driver Version: 5.5.0.58-1OEM
Driver Supports HP SSD Smart Path: False

Array: A
Interface Type: SATA
Unused Space: 0 MB
Status: OK
Array Type: Data



Logical Drive: 1 Size: 10.9 TB
Fault Tolerance: 5
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 1024 KB
Status: OK
Caching: Enabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C7B4A22A41F28C226CD84
Disk Name: vmhba1:C0:T0:L0
Mount Points: None
Logical Drive Label: A25A5E21PACCR9SXZ3BPBC86
Drive Type: Data
LD Acceleration Method: Controller Cache

physicaldrive 1I:0:2
Port: 1I
Box: 0
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 3 TB
Native Block Size: 4096
Rotational Speed: 5400
Firmware Revision: 80.00A80
Serial Number: WD-WMC4N0997999
Model: ATA WDC WD30EFRX-68E
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 32
Maximum Temperature (C): 41
PHY Count: 1
PHY Transfer Rate: 3.0Gbps

physicaldrive 2I:0:5
Port: 2I
Box: 0
Bay: 5
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 3 TB
Native Block Size: 4096
Rotational Speed: 5400
Firmware Revision: 80.00A80
Serial Number: WD-WMC4N1776291
Model: ATA WDC WD30EFRX-68E
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 32
Maximum Temperature (C): 42
PHY Count: 1
PHY Transfer Rate: 3.0Gbps

physicaldrive 2I:0:6
Port: 2I
Box: 0
Bay: 6
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 3 TB
Native Block Size: 4096
Rotational Speed: 5400
Firmware Revision: 80.00A80
Serial Number: WD-WMC4N1723442
Model: ATA WDC WD30EFRX-68E
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 35
Maximum Temperature (C): 41
PHY Count: 1
PHY Transfer Rate: 3.0Gbps

physicaldrive 2I:0:7
Port: 2I
Box: 0
Bay: 7
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 3 TB
Native Block Size: 4096
Rotational Speed: 5400
Firmware Revision: 80.00A80
Serial Number: WD-WCC4N1367839
Model: ATA WDC WD30EFRX-68E
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 31
Maximum Temperature (C): 36
PHY Count: 1
PHY Transfer Rate: 3.0Gbps

physicaldrive 2I:0:8
Port: 2I
Box: 0
Bay: 8
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 3 TB
Native Block Size: 4096
Rotational Speed: 5400
Firmware Revision: 80.00A80
Serial Number: WD-WCC4N1357804
Model: ATA WDC WD30EFRX-68E
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 31
Maximum Temperature (C): 36
PHY Count: 1
PHY Transfer Rate: 3.0Gbps

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
Device Number: 250
Firmware Version: RevC
WWID: 500143800619043F
Vendor ID: PMCSIERA
Model: SRC 8x6G
madchaz
l33t


Joined: 01 Jul 2003
Posts: 993
Location: Quebec, Canada

PostPosted: Sun Oct 05, 2014 5:44 am    Post subject:

Just some notes on what was said earlier.

Performance-wise, using software-defined RAID generally gives better results than doing it in hardware. This is especially true during rebuilds. It also makes things a lot more flexible. The main advantage is that the kernel is in charge of the I/O and can therefore prioritize the workload over the rebuild. I've installed entire systems this way while the RAID was building, and you'd never have been able to tell the difference. With hardware RAID, it's another story.
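
For comparison, building the same five-disk array in software is one mdadm command (device names assumed), and the array is usable while it builds:

Code:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
# watch the build progress
cat /proc/mdstat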

Using LUKS for encryption is also a very good idea.
_________________
Someone asked me once if I suffered from mental illness. I told him I enjoyed every second of it.
www.madchaz.com A small candle of a website. Has my lab specs on it.