unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Fri Feb 06, 2009 12:27 am
Post subject: Heavy encryption performance problems
Hello,
I have a TrueCrypt 6.1 encrypted container. My problem is that this container is really slow. I replaced my complete hardware because the CPU was at 100% every time I used the encrypted container. Now CPU usage is only about 35% on an AMD X2 with 2x2200 MHz and 1.5 GB RAM, and it is still really slow.
I have tested it with dm-crypt and /dev/loop too; same problem.
The HDD where the container is placed gives this hdparm output:
Code:
hdparm -tT /dev/md1
/dev/md1:
Timing cached reads: 280 MB in 2.00 seconds = 139.66 MB/sec
Timing buffered disk reads: 292 MB in 3.02 seconds = 96.80 MB/sec
The container:
Code:
hdparm -tT /dev/mapper/truecrypt1
/dev/mapper/truecrypt1:
Timing cached reads: 302 MB in 2.01 seconds = 150.30 MB/sec
Timing buffered disk reads: 72 MB in 3.03 seconds = 23.75 MB/sec
I use AES-256 for encryption and the AES-i586 module is loaded. Sometimes I only get 1 MB/s for short periods.
Code:
vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
3 0 0 421084 23604 1013596 0 0 39424 40 19367 8123 0 32 68 0
0 0 0 381896 23612 1052984 0 0 39040 24 19509 8177 0 33 66 0
2 1 0 341020 23636 1092204 0 0 39360 0 19194 8043 2 34 52 12
0 0 0 301036 23644 1132792 0 0 40544 0 20040 8387 1 33 64 1
0 0 0 261920 23644 1172672 0 0 39808 0 19653 8226 0 34 66 0
0 0 0 222184 23644 1212464 0 0 39920 0 19444 8132 0 32 67 1
2 0 0 182544 23652 1251936 0 0 39548 24 19543 8236 0 33 65 1
1 0 0 142524 23652 1291992 0 0 39940 0 19653 8309 0 33 67 0
1 0 0 102236 23652 1332328 0 0 40320 16 19745 8271 0 33 67 0
1 0 0 64156 23652 1370240 0 0 38012 0 18895 7880 0 32 68 0
0 0 0 32576 19044 1406184 0 0 39428 0 19656 8190 0 33 67 0
0 0 0 30864 15732 1411392 0 0 39296 24 19493 7913 0 34 66 0
0 0 0 32180 15732 1409452 0 0 39040 0 19222 8263 0 34 66 0
3 0 0 32288 15724 1409772 0 0 36096 0 18120 7646 0 31 69 0
0 0 0 30700 15724 1411560 0 0 9516 0 5988 9466 0 21 57 22
I use an mdadm-created RAID5 array for the container. With a normal hard drive I get around 25 MB/s.
I am not a Linux crack; maybe I have forgotten some of the must-haves.
I would be happy with around 35 MB/s.
Maybe my CPU is still too slow?
Thanks for any help...
Abraxas (l33t; Joined: 25 May 2003; Posts: 814)
Posted: Fri Feb 06, 2009 9:16 pm
What is the value of /proc/sys/kernel/random/entropy_avail?
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Fri Feb 06, 2009 10:37 pm
Try twofish-lrw-benbi:wp256 with a 256-bit key size; that's what I ended up with for my RAID.
Also check /sys/block/md0/md/stripe_cache_size, and raise it to at least 1024.
Using dm-crypt (twofish) on 5 devices with RAID5 on top, I'm getting 150 MB/sec in a quick hdparm test now.
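A sketch of how that suggested setup might be created with the plain-mode cryptsetup syntax of the era (the mapping name and device are placeholders; the cipher spec is the one named above, so verify it against your kernel's dm-crypt support before relying on it):

```shell
# Hypothetical names: map the raid device through dm-crypt with the
# cipher spec suggested above (twofish in LRW mode, 256-bit key).
cryptsetup --cipher twofish-lrw-benbi:wp256 --key-size 256 create cryptraid /dev/md0

# Raise the md stripe cache; the change takes effect immediately.
echo 1024 > /sys/block/md0/md/stripe_cache_size
cat /sys/block/md0/md/stripe_cache_size
```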
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Sat Feb 07, 2009 9:29 am
I can't switch away from AES-256 that easily, because the container is really huge. But I will test it on a small new container.
Code:
cat /proc/sys/kernel/random/entropy_avail
3466
I have used Google and have seen numbers around 11000. Is 3466 enough?
Code:
cat /sys/block/md1/md/stripe_cache_active
0
Is it correct that this is "0"? I can't change this value (permission denied).
I raised stripe_cache_size with nano from 256 to 1024, but after a reboot it is back to 256. How can I set it correctly?
Now I get a CPU usage of around 67% and ~40% more speed:
Code:
hdparm -tT /dev/mapper/truecrypt1
/dev/mapper/truecrypt1:
Timing cached reads: 524 MB in 2.00 seconds = 261.79 MB/sec
Timing buffered disk reads: 100 MB in 3.03 seconds = 32.99 MB/sec
Code:
hdparm -tT /dev/md1
/dev/md1:
Timing cached reads: 524 MB in 2.00 seconds = 261.93 MB/sec
Timing buffered disk reads: 300 MB in 3.00 seconds = 99.93 MB/sec
That means the encryption eats about 60% of the possible disk performance. Are there any more great tips?
Is it certain that the encryption process uses both CPUs? Can I run a test to find out?
I have tested encryption on my non-RAID device, and it is slow there too. Is there a similar switch for "normal" devices?
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Sat Feb 07, 2009 9:43 am
/sys/block/md1/md/stripe_cache_active is the actively used stripe cache. You can run "watch cat /sys/block/md1/md/stripe_cache_active" to monitor it while you do a copy, to see whether it maxes out the cache.
You can put "echo 1024 > /sys/block/md0/md/stripe_cache_size" in /etc/conf.d/local.start to make the change "permanent" (not really permanent, it just sets it again on every boot).
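On Gentoo of that era, the local.start approach looks roughly like this (a minimal sketch, assuming md1 as in this thread; the script runs once at the end of every boot):

```shell
# /etc/conf.d/local.start
# sysfs values do not survive a reboot, so re-apply the stripe cache size here.
echo 1024 > /sys/block/md1/md/stripe_cache_size
```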
If you have TrueCrypt on top of /dev/md1 it will not use both your CPUs unless the encryption algorithm was written for that, because /dev/md1 is one device.
I have encryption (device mapper) on /dev/sda, /dev/sdb, /dev/sdc, etc., and /dev/md0 consists of /dev/mapper/encrypted-drive1 and so on. This results in 5 kcryptd threads in use under load.
You can use top under load to see whether it's maxing your CPU.
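The layering described above can be sketched as follows (all names are placeholders, and building the array this way destroys any existing data, so treat it as an outline only):

```shell
# Encrypt each member disk first (plain-mode dm-crypt; names are hypothetical).
cryptsetup create enc-sda /dev/sda
cryptsetup create enc-sdb /dev/sdb
cryptsetup create enc-sdc /dev/sdc

# Then build the RAID5 array out of the encrypted mappings.
# Each mapping gets its own kcryptd thread, so the crypto work
# can spread across CPUs instead of being serialized on one device.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/mapper/enc-sda /dev/mapper/enc-sdb /dev/mapper/enc-sdc
```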
Paczesiowa (Guru; Joined: 06 Mar 2006; Posts: 593; Location: Oborniki Śląskie, Poland)
Posted: Sat Feb 07, 2009 3:35 pm
neuron, using twofish with LRW doesn't seem right. Twofish is less secure than Serpent and slower than AES. There are already known problems with LRW mode (nothing to worry about, but still...). And IIRC, under TrueCrypt only XTS uses native kernel code; other modes use TrueCrypt code running in userspace, which is slower.
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Sat Feb 07, 2009 9:28 pm
Paczesiowa wrote:
"neuron, using twofish with lrw doesn't seem right. twofish is less secure than serpent, and slower than aes. there are already known problems with lrw mode (nothing to worry about, but still...). and iirc under truecrypt only xts uses native kernel code, other modes use truecrypt code working in userspace which is slower."

When I tested twofish-asm, it was considerably faster than aes-asm on 64-bit, and LRW's speed advantage was also huge. The data being encrypted doesn't need protection over the entire database, and it's not really an issue unless someone manages to get the entire thing decrypted.
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Sun Feb 08, 2009 2:12 pm
If I watch /sys/block/md1/md/stripe_cache_active, it is always "0"; I have never seen another number there.
Another problem: if I copy files over Samba I get "speed jumps". I get around 20 MB/s, but when copying many files the speed sometimes drops to 11 MB/s for a few seconds. Switching to an Intel gigabit LAN card brought a performance boost, but it is still too slow: on average I only get 15 MB/s.
With small files I get a constant 24 MB/s; only with large files do I get the jumps.
CPU load is around 65%. Is there a way to view the PCI/PCIe load?
EDIT: after some tests it seems this has nothing to do with the encryption. It happens when I copy to the Linux server; when I copy from it, the speed is constant...
BTW: thanks for all the help here.
Last edited by unic.ori on Sun Feb 08, 2009 3:03 pm; edited 1 time in total
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Sun Feb 08, 2009 2:54 pm
This is my top output when unpacking something on my encrypted RAID:
Code:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5255 root 15 -5 0 0 0 S 14 0.0 10:07.50 md0_raid5
5208 root 15 -5 0 0 0 S 11 0.0 10:04.73 kcryptd
5240 root 15 -5 0 0 0 S 11 0.0 10:10.35 kcryptd
5216 root 15 -5 0 0 0 S 11 0.0 10:06.81 kcryptd
5196 root 15 -5 0 0 0 S 10 0.0 10:04.04 kcryptd
5231 root 15 -5 0 0 0 S 10 0.0 10:05.16 kcryptd
Monitoring stripe_cache_active, I see it around 700-1024 (and I have the max set to 1024).
Do you have a write-intent bitmap on the RAID array? If so, there's your problem.
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Sun Feb 08, 2009 9:26 pm
I had to use Google, but:
Code:
mdadm -G /dev/md1 -b none
mdadm: no bitmap found on /dev/md1

It is strange that my cache activity is always 0... so I don't think I have that option set.
@neuron: which CPU do you have?
On a normal partition I get these transfer jumps with Samba too... I will test it on the local machine now.
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Sun Feb 08, 2009 9:29 pm
I'm on a 3.4 GHz quad core.
My guess: with a 65% CPU load, you have 100% on one CPU (the encryption) plus some extra load from disk activity. So you're only encrypting in a single thread, which is what's slowing you down, and why your mdadm stripe cache isn't filling.
If you test on the RAID without encryption and then monitor stripe_cache_active, you should see some activity. If not, maybe you've managed to turn off buffering/caching somewhere?
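One way to check the single-thread theory (a sketch; exact kernel thread names vary by kernel version) is to look at the dm-crypt worker threads while a copy is running:

```shell
# Count the dm-crypt worker threads currently present.
ps -eLf | grep -c '[k]cryptd'

# Watch per-thread CPU usage under load; with TrueCrypt layered on a single
# md device you would expect one busy thread pinned near 100% of one core.
top -H -b -n 1 | grep -i -E 'kcryptd|truecrypt'
```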
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Sun Feb 08, 2009 11:56 pm
Hi,
okay, your CPU is a little faster.
I have tested on the unencrypted part of the array, and there I can see that the cache is used. So for some reason the cache isn't used when I access the container on that drive.
***
After some more tests, I get 25 MB/s when copying files from the server, at this CPU load:
Code:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 30824 21056 1382348 0 0 471 2690 1506 2036 1 7 88 4
0 0 0 30876 21056 1382388 0 0 0 0 18 20 0 0 100 0
0 0 0 30332 21064 1382980 0 0 7296 0 950 553 0 7 90 3
2 0 0 30172 21064 1383868 0 0 50548 0 6488 3587 0 53 40 6
1 0 0 30920 21064 1383880 0 0 51200 0 6675 3622 0 55 41 3
2 0 0 30848 21064 1384508 0 0 51328 0 6732 3632 0 56 40 4
1 0 0 30880 20956 1385156 0 0 51200 40 6646 3576 1 55 40 4
1 0 0 31536 20684 1385388 0 0 49024 36 6423 3420 0 52 40 8
2 0 0 31004 20308 1386984 0 0 50428 12 6656 3565 1 54 40 5
3 0 0 31388 19924 1387696 0 0 50948 0 6656 3530 0 54 42 4
3 0 0 30604 19540 1389664 0 0 51324 0 6639 3585 0 55 41 4
1 0 0 31116 19148 1390008 0 0 51200 0 6708 3671 1 55 40 4
2 0 0 31456 18660 1391048 0 0 50052 52 6606 3620 0 54 40 5
1 0 0 31092 18276 1392524 0 0 50304 12 6595 3607 0 54 42 4
2 0 0 30460 17888 1394336 0 0 53632 0 7045 3706 1 58 38 4
1 0 0 30452 17500 1395724 0 0 54272 0 7252 3714 1 59 38 2
2 1 0 31268 17116 1395732 0 0 53888 0 7221 3738 0 58 37 5
3 0 0 30756 16744 1397332 0 0 51324 36 6916 3441 0 55 38 7
1 1 0 30404 16360 1398372 0 0 52980 28 7024 3547 0 56 37 6
3 0 0 30812 15848 1399268 0 0 53500 0 7281 3634 0 58 35 6
2 0 0 30856 15464 1399964 0 0 54144 0 7206 3666 1 58 36 5
....
0 0 0 31232 13316 1403228 0 0 0 40 31 30 0 0 100 0
0 0 0 31232 13316 1403228 0 0 0 12 26 16 0 0 100 0
0 0 0 31232 13328 1403216 0 0 0 36 29 36 0 0 99 1
0 0 0 31232 13328 1403240 0 0 0 12 18 10 0 0 100 0
And to the server I only get around 16 MB/s on average.
CPU load:
Code:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 31372 13344 1403236 0 0 5923 2027 1880 1904 1 18 77 5
0 0 0 31356 13344 1403244 0 0 0 0 105 156 0 0 100 0
1 0 0 151988 13344 1282344 0 0 0 0 80 19 0 1 99 0
0 0 0 704988 13508 729768 0 0 324 0 9960 14000 1 23 73 3
0 2 0 678124 13676 756504 0 0 424 0 13286 20212 1 8 72 18
0 2 0 674016 14576 759780 0 0 2440 0 1041 2385 0 3 2 95
3 0 0 631512 14760 800264 0 0 412 33040 14286 17821 2 55 24 19
1 1 0 592216 14764 840168 0 0 8 24352 13091 17528 2 64 34 1
0 2 0 577800 16196 853828 0 0 6036 36 3295 6247 0 16 10 73
1 1 0 555888 19436 872620 0 0 15516 0 2882 6959 0 10 8 83
5 1 0 520504 20356 905808 0 0 4052 51920 8685 21138 1 69 4 26
2 0 0 485576 20360 940144 0 0 0 39496 11927 15046 2 67 30 1
1 0 0 443540 20368 981492 0 0 0 45352 13860 16524 3 67 29 0
2 0 0 403008 20376 1021388 0 0 0 37164 13849 17327 2 66 32 0 # on this line the CPU load is the "66" plus the "2" before it
1 0 0 363752 20376 1060416 0 0 0 32928 13888 17482 2 56 42 0
3 0 0 325924 20388 1096420 0 0 8 45424 11578 15537 2 57 40 1
3 0 0 288120 20388 1133172 0 0 0 37080 12888 16582 2 57 41 0
1 0 0 250696 20388 1171516 0 0 0 32960 12353 15719 2 55 43 0
2 0 0 228832 20404 1192832 0 0 0 42728 8605 12101 1 45 53 0
0 0 0 202560 20404 1218872 0 0 0 18460 14940 21151 2 26 72 0
3 0 0 160136 20412 1259308 0 0 0 28892 16466 21506 2 46 51 0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 1 0 116152 20416 1301416 0 0 8 37880 14979 18526 2 66 26 6
4 0 0 85948 20416 1333180 0 0 0 24740 9541 16582 0 69 3 27
1 0 0 51876 20424 1366976 0 0 0 32648 13302 20937 2 55 43 0
5 0 0 30612 20420 1386416 0 0 0 51900 12725 17204 2 67 30 0
0 0 0 30504 20428 1387172 0 0 0 19160 7484 11658 2 39 55 4
0 0 0 30436 20428 1387172 0 0 4 0 25 10 0 0 100 1 # strange: nearly no CPU usage here!
0 0 0 30436 20428 1387172 0 0 0 0 12 6 0 0 100 0
0 0 0 30436 20432 1387168 0 0 0 32 47 42 0 0 99 1
0 0 0 30436 20440 1387172 0 0 0 48 17 20 0 0 100 0
0 0 0 30436 20448 1387192 0 0 0 20 26 28 0 0 100 0
0 0 0 30436 20448 1387192 0 0 0 0 13 6 0 0 100 0
0 0 0 30612 20448 1387196 0 0 0 0 42 34 0 1 100 0
0 0 0 30612 20448 1387196 0 0 0 0 13 10 0 0 100 0
0 0 0 30612 20448 1387196 0 0 0 0 11 10 0 0 100 0
0 0 0 30612 20456 1387196 0 0 0 40 23 25 0 0 100 0
0 0 0 30612 20456 1387196 0 0 0 0 10 6 0 0 100 0
0 0 0 30612 20456 1387196 0 0 0 0 9 6 0 0 100 0
2 0 0 30648 20460 1385956 0 0 0 24740 8528 10760 1 38 61 0
4 0 0 31112 20452 1384680 0 0 0 45352 13415 17186 2 58 40 0
0 0 0 31800 20444 1384228 0 0 0 33168 13919 17278 2 66 33 0
3 0 0 31624 20452 1382552 0 0 0 41312 13165 17036 1 56 43 0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 30832 20448 1383908 0 0 0 32976 14004 17818 2 62 36 0
0 0 0 31304 20460 1382396 0 0 0 8340 2165 3003 1 13 86 0
0 0 0 31288 20460 1382396 0 0 0 0 14 8 0 0 100 0
0 0 0 31288 20460 1382396 0 0 0 0 16 12 0 0 100 0
0 0 0 31288 20460 1382400 0 0 0 0 12 6 0 0 100 0
0 0 0 31288 20468 1382400 0 0 0 36 21 18 0 0 100 0
0 0 0 31288 20472 1382400 0 0 0 28 36 28 0 0 100 0
0 0 0 31288 20472 1382400 0 0 0 0 15 10 0 0 100 0
0 0 0 31464 20472 1382400 0 0 0 0 41 28 0 1 100 0
0 0 0 31464 20472 1382400 0 0 0 0 8 8 0 0 100 0
0 0 0 31464 20472 1382400 0 0 0 0 11 10 0 0 100 0
1 0 0 31224 20480 1382400 0 0 0 29692 1663 2658 0 15 85 0
0 0 0 31464 20480 1382400 0 0 0 16980 1378 2215 0 13 87 0
0 0 0 31464 20480 1382400 0 0 0 0 12 17 0 0 100 0
0 0 0 31464 20480 1382400 0 0 0 0 15 14 0 0 100 0
0 0 0 31380 20480 1382492 0 0 0 0 3443 5334 0 3 97 0
1 0 0 30544 20484 1382748 0 0 0 56 16498 25634 1 13 85 1
2 0 0 30324 20480 1381128 0 0 0 37116 16108 21194 2 60 39 0
3 0 0 31684 20484 1377256 0 0 0 49396 16437 18778 2 78 21 0
2 0 0 30252 20476 1379148 0 0 0 45396 12939 15754 3 78 20 0
1 1 0 30728 20464 1378620 0 0 0 24716 12944 19198 2 73 17 9
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
5 0 0 30840 12760 1383768 0 0 0 32876 9708 12648 2 39 59 0
5 0 0 30660 12776 1383456 0 0 0 41576 13218 16158 2 68 30 0
3 0 0 31064 12768 1384048 0 0 0 32972 13637 17502 2 63 35 0
3 0 0 31500 12772 1383172 0 0 0 45296 13239 16832 1 65 33 0
0 0 0 31972 12772 1383524 0 0 0 32984 14095 17683 2 63 34 0
1 0 0 31696 12772 1382548 0 0 0 45336 12825 16235 2 65 33 0
2 0 0 31652 12772 1382288 0 0 0 41208 14236 17239 2 66 32 0
1 0 0 30744 12772 1383636 0 0 0 37060 13618 17372 2 64 34 0
2 0 0 31372 12776 1383112 0 0 0 41280 13738 16926 2 70 27 0
1 0 0 31864 12772 1382500 0 0 0 37132 13504 17204 1 58 41 0
4 0 0 31432 12776 1382544 0 0 0 45356 12756 16523 2 59 39 0
2 0 0 30776 12784 1384248 0 0 4 55880 10750 13928 3 70 27 0
1 0 0 31080 12784 1384052 0 0 0 5292 15620 24037 1 17 82 0
3 0 0 31456 12780 1381516 0 0 0 37180 16981 20846 3 63 34 0
2 0 0 31772 12776 1380516 0 0 0 51420 15164 16771 3 80 12 5
2 0 0 31288 12776 1381956 0 0 0 41236 11527 14919 2 82 17 0
1 1 0 31360 12796 1383228 0 0 68 37188 12231 15195 3 79 18 0
3 0 0 31488 12796 1382620 0 0 0 37144 12648 18913 1 57 41 0
3 0 0 31420 12792 1383032 0 0 0 41200 12667 16138 2 57 40 0
3 0 0 31248 12796 1383044 0 0 0 37308 13848 16931 2 65 33 0
3 0 0 30412 12796 1383948 0 0 0 41176 12482 15985 2 63 35 0
Strangely, it seems that sometimes the CPU has nothing to do while I write to the server. There must be another bottleneck besides the CPU that I haven't found yet.
Without encryption I get a much better Samba speed after some tweaks: 25 MB/s to the server and 32 MB/s from the server, with at most 10% CPU load.
So I think I need a more powerful CPU than an AMD 3800 X2 (2x2000 MHz). Too bad. I don't have much hope of finding the other bottleneck, but I'm not giving up yet.
Now I'm thinking about reinstalling my server with 64-bit Gentoo. I hope that will give me some more speed.
This time I will write a tutorial, since I have to look up so many things every time I reinstall, because I forget most of them over time.
Some info for other readers:
I don't see a huge performance difference between dm-crypt and TrueCrypt; dm-crypt seems to be ~5% faster on my machine. Since TrueCrypt has easy-to-use container support, I would prefer TrueCrypt if you want to use containers (for 6.1 you have to use the layman overlay).
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Mon Feb 09, 2009 8:29 am
Which mdadm options did you create the array with? Chunk size, and what options are you passing to the filesystem? Make sure those are correct, as they matter a lot.
It seems you're encrypting a bit, then stopping to write it out, then encrypting again. It should be possible to put the writes into cache, so you can encrypt and write to disk at the same time.
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Mon Feb 09, 2009 10:02 pm
I have mostly used default values.
So here is my mdadm config for md1. Maybe you have a tip for me.
Code:
cat /proc/mdstat
Personalities : [linear] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdc[0] sda[2] sdb[1]
2930276992 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
and:
Code:
mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Mon Jan 26 14:42:15 2009
Raid Level : raid5
Array Size : 2930276992 (2794.53 GiB 3000.60 GB)
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Mon Feb 9 23:56:48 2009
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : e97945de:cda04930:1aa942d0:17d8c605
Events : 0.66563
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 16 1 active sync /dev/sdb
2 8 0 2 active sync /dev/sda
Filesystem information: I don't know what you need here, or where I can get/set more filesystem information. I use ext4 as the filesystem.
Code:
hdparm /dev/md1
/dev/md1:
readonly = 0 (off)
readahead = 512 (on)
geometry = 732569248/2/4, sectors = 5860553984, start = 0
drescherjm (Advocate; Joined: 05 Jun 2004; Posts: 2792; Location: Pittsburgh, PA, USA)
Posted: Tue Feb 10, 2009 1:39 pm

One of your problems is that you need more disks; RAID5 with only 3 disks is not the best setup.
_________________
John
My gentoo overlay
Instructions for overlay
drescherjm (Advocate; Joined: 05 Jun 2004; Posts: 2792; Location: Pittsburgh, PA, USA)
Posted: Tue Feb 10, 2009 1:44 pm

Quote:
"So, i think that i need a more powerful cpu than a AMD3800 X2 (2x2000MHZ)."

Is that an AM2 chip? If so, you should be able to upgrade to a 2.9 GHz (5600) or 3.1 GHz (6000) version pretty easily.
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Tue Feb 10, 2009 1:44 pm
Chunk sizes have to align with the filesystem. You could try these settings:
Code:
# raid5, chunksize 128, stride = chunk / blocksize = 128 / 4k = 32
mdadm --create --chunk=128 -l 5 -n 5 /dev/...
mkfs.ext3 -b 4096 -E stride=32 /dev/mapper/raid

And for a 256 kB chunk:
Code:
# raid5, chunksize 256, stride = chunk / blocksize = 256 / 4k = 64
mdadm --create --chunk=256 -l 5 -n 5 /dev/...
mkfs.ext3 -b 4096 -E stride=64 /dev/mapper/raid
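The stride arithmetic above generalizes to stride = chunk size / block size, and ext3/ext4 also accept a stripe-width of stride times the number of data disks. For the 3-disk, 64k-chunk array in this thread, the numbers work out as follows (a worked check, not from the original post):

```shell
# stride = RAID chunk size / filesystem block size (both in kB).
# stripe_width = stride * data disks; RAID5 with n disks has n-1 data disks.
chunk_kb=64; block_kb=4; disks=3
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * (disks - 1) ))
echo "stride=$stride stripe_width=$stripe_width"   # stride=16 stripe_width=32
```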
unic.ori (n00b; Joined: 02 Apr 2008; Posts: 18)
Posted: Tue Feb 10, 2009 3:07 pm
It's an S939.
As I have no place to put my 2.7 TB of data, I can't make a new filesystem.
Where can I find the stride size of my existing filesystem?
Code:
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 390f1a85-84da-4a19-9f60-f2b61f3b9bdb
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype $
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 183148544
Block count: 732569248
Reserved block count: 36628462
Free blocks: 14851058
Free inodes: 183148526
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 849
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Mon Jan 26 18:21:07 2009
Last mount time: Tue Feb 10 14:14:32 2009
Last write time: Tue Feb 10 14:14:32 2009
Mount count: 45
Maximum mount count: 100
Last checked: Mon Jan 26 18:21:07 2009
Check interval: 15552000 (6 months)
Next check after: Sat Jul 25 19:21:07 2009
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 1dab5124-ab6c-435d-b6eb-2c2be1a7048d
Journal backup: inode blocks
Journal size: 128M
At least the block size seems to be right.
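For reference, a stride chosen at mkfs time is stored in the superblock, so it should show up in a header dump if it was ever set (a hedged sketch; the dump above has no such line, which suggests the filesystem was created without -E stride, and the device name assumed here is the mapped container from earlier in the thread):

```shell
# Show only superblock fields and look for "RAID stride" / "RAID stripe width";
# no output means the hint was never set at mkfs time.
dumpe2fs -h /dev/mapper/truecrypt1 | grep -i 'raid'

# Newer tune2fs can set it on an existing filesystem, e.g. stride 16 for a
# 64k chunk with 4k blocks (verify against your e2fsprogs version first).
tune2fs -E stride=16 /dev/mapper/truecrypt1
```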
neuron (Advocate; Joined: 28 May 2002; Posts: 2371)
Posted: Tue Feb 10, 2009 3:51 pm
Sorry, but I have no idea how to get the chunk size from an existing filesystem.