squareHat Tux's lil' helper
Joined: 28 Apr 2003 Posts: 89 Location: London
Posted: Tue Feb 03, 2004 12:06 am Post subject: Building a Fast Network Storage Box
I've been using Gentoo for a while.
I would like to boot and run all my Gentoo boxes off a fast, reliable and cheap network storage device. Needless to say, this new box must run Gentoo.
I have little experience with NFS.
To start with I want to put together some suitable hardware.
I have a spare 400MHz Pentium II, I have plenty of 100Mbit Ethernet cards, and I want to use cheap IDE disks.
I wish to set up RAID for reliability.
Ideally I want to get similar speeds to local IDE disk, 20mbits per sec. (Am I dreaming? :-)
I am thinking I should use AFS.
Is this basic hardware up to the job? Do I need gigabit Ethernet?
Feel free to comment with any suggestions.
Thanks
steveb Advocate
Joined: 18 Sep 2002 Posts: 4564
Posted: Tue Feb 03, 2004 12:27 am
the requirement: Quote: | Ideally I want to get similar speeds to local IDE disk 20mbits per sec. (Am I dreaming:-) | is probably a typo.
you probably want a 20MB/s transfer. and that is "dreaming" with 100Mbit NICs.
because 100Mbit/s = 102400Kbit/s = 104857600bit/s
and to help you do the math:
104857600 bits / 8 = 13107200 bytes
13107200 bytes / 1024 = 12800 kilobytes
12800 kilobytes / 1024 = 12.5 megabytes
okay... 12.5MB/s is the THEORETICAL transfer rate for 100Mbit/s.
if you want to use this for planning or calculation, then don't expect more than 80% of this THEORETICAL speed. in reality you could sometimes get more than 12.5MB/s (it depends on whether your NICs (on both ends) can compress the data in real time; i have seen such NICs, but i doubt you have them).
however... 80% of 12.5MB/s gives you about 10MB/s. now if your NIC supports full duplex (on both ends), then you could end up sending and receiving 10MB/s at the same time, which gives you about 20MB/s combined. but this is pure theory. in reality you will probably not get that much, and you are not the only one using the connection to the server (more connections to 1 NIC in the server = less for each connection to the server).
gigabit ethernet would get you somewhere close to a factor of 10 over 100Mbit. but then your server needs to read from disk that fast, and when you connect 2 clients to 1 server, don't expect both of them to get 100MB/s, since the hard disk is nowhere near that speed anyway.
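The arithmetic above can be sketched in a few lines of Python (the 80% efficiency factor is only the rule of thumb from this post, not a measured value):

```python
# Theoretical vs. planning throughput of a network link,
# using 1024-based unit conversions as in the post above.

def link_throughput_mb(link_mbit, efficiency=0.8):
    """Return (theoretical, planning) throughput in MB/s."""
    bits_per_sec = link_mbit * 1024 * 1024  # Mbit -> bit
    bytes_per_sec = bits_per_sec / 8        # bit -> byte
    mb_per_sec = bytes_per_sec / 1024 / 1024
    return mb_per_sec, mb_per_sec * efficiency

theoretical, planning = link_throughput_mb(100)
print(theoretical)  # 12.5  (MB/s on the wire)
print(planning)     # 10.0  (MB/s as a planning figure)
```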
cheers
SteveB
squareHat Tux's lil' helper
Joined: 28 Apr 2003 Posts: 89 Location: London
Posted: Tue Feb 03, 2004 2:23 pm
Gigabit Ethernet shouldn't break the bank, so let's assume that.
Will a PII 400MHz do the job, or will it now become the bottleneck?
steveb Advocate
Joined: 18 Sep 2002 Posts: 4564
Posted: Tue Feb 03, 2004 5:20 pm
squareHat wrote: | Gigabit ethernet shouldn't break the bank, so let's assume that.
Will a PII 400 Meg do the job, or will it now become the bottleneck. | a PII@400MHz is not the bottleneck. but you must consider PCI as the interface for the NIC, because the PCI bus has a speed of 33MHz and a bus width of 32 bits.
i'll do the math for you again:
33MHz * 32 bits = 33000000 * 32 = 1056000000 bits/s = 132000000 bytes/s = 128906.25 kilobytes/s = 125.885 megabytes/s
i used a divisor of 1024 for the math, which means you have about 125MB/s of transfer on the PCI bus. if you have more than one device on the PCI bus and they communicate at the same time as the gigabit NIC, then the speed will drop below 125MB/s (i think all the devices share the 125MB/s; i am not 100% sure whether DMA access to the PCI device lets you work around this limit...)
now you only need disks in the server which let you transfer data that fast from the disk to the NIC on the server, and then to the NIC on your pc.
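The same calculation as a short sketch (33 MHz clock and 32-bit width are the classic PCI figures; divisors are 1024-based as above):

```python
# Peak bandwidth of a classic 33 MHz, 32-bit PCI bus.

def pci_bandwidth_mb(clock_hz=33_000_000, width_bits=32):
    """Return the peak PCI transfer rate in MB/s (1024-based)."""
    bits_per_sec = clock_hz * width_bits  # 1,056,000,000 bit/s
    bytes_per_sec = bits_per_sec / 8      # 132,000,000 byte/s
    return bytes_per_sec / 1024 / 1024    # ~125.9 MB/s

print(round(pci_bandwidth_mb(), 3))  # 125.885
```

Note this is the peak for the whole bus; every PCI device (NIC, disk controller, etc.) shares it.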
cheers
SteveB
squareHat Tux's lil' helper
Joined: 28 Apr 2003 Posts: 89 Location: London
Posted: Mon Feb 16, 2004 10:42 am
So: with gigabit Ethernet, a 400MHz Pentium II, only one PCI gigabit Ethernet card on the PCI bus, and 2 IDE drives (both configured as masters).
And AFS.
Would I get similar performance to having a local IDE disk?
Or would I be very disappointed?
ctford0 l33t
Joined: 25 Oct 2002 Posts: 774 Location: Lexington, KY,USA
Posted: Mon Feb 16, 2004 1:12 pm
It depends on the age of the computers. I have an AMD K6-2 400MHz machine (just as an example of technology from that time). The best performance I can get out of the hard drives is around 15MB/s for disk reads in the hdparm test.
Chris
jesterspet Apprentice
Joined: 05 Feb 2003 Posts: 215 Location: Atlanta
Posted: Mon Feb 16, 2004 1:34 pm
Another possibility here would be to go with multiple wireless NICs, but only if you know how to (or are willing to learn how to) secure a wireless hub/router.
The speeds of wireless can be better than their wired counterparts. _________________ (X) Yes! I am a brain damaged lemur on crack, and would like to buy your software package for $499.95
Mjo n00b
Joined: 26 Jan 2004 Posts: 16 Location: Sweden
Posted: Mon Feb 16, 2004 2:14 pm
squareHat: If you are going to access the storage from several computers concurrently, access times on the hard disks will become important for speed, especially if you are using some kind of striping RAID (or writing to a mirrored RAID).
If you use striping, all your hard disks will need to seek, and if two or more users are accessing data on the same drives, your disks will be seeking almost constantly.
Just a warning; this is something I didn't think about when building my first RAID system.
I would also of course recommend a lot of memory in the file server to serve as hard disk cache, but you've probably thought about this already.
secondshadow Guru
Joined: 23 Jun 2003 Posts: 362
Posted: Mon Feb 16, 2004 2:57 pm
Just to add to what Mjo said: with this constant seeking, the quality and speed of the drives becomes far from trivial. Constant seeking can be very hard on disks, which is why network arrays like this are quite often created (in environments where reliability matters) using RAID 5 with a spare drive or two lying around (yes, they all have to be the same size and should have similar performance, so buying them at the same time so you get the same model is preferred). This way if one drive fails, which it undoubtedly will, you can simply replace the drive and the array will rebuild it.
At that point you're (probably) talking about a hardware array, and you start getting outside the realm of cheap. Something to consider, though: cheap drives = nightmare. I've had several old Maxtors fail on me within about a year to a year and a half because they were the cheap ones. Investigate the quality of your drives before you invest in them.
Also, large caches on the drives might help alleviate SOME of the stress imposed by a network storage environment, I think. Anyone with more experience here should correct me if I'm wrong.
secondshadow Guru
Joined: 23 Jun 2003 Posts: 362
Posted: Tue Feb 17, 2004 3:55 am
Just found this while hunting for SATA drive prices; it might interest you:
http://www.newegg.com/app/viewProductDesc.asp?description=22-156-001&depa=1
Tritton Technologies 120GB Network Attached Storage (NAS), Model TRI-NAS120, Retail
Specifications:
Capacity: 120GB
File Sharing Protocols: CIFS, HTTP, NFS, AFP (AppleShare)
Network Clients Supported: Windows, UNIX, Linux, Mac
Security: User (Name & Password), Sharing Level (read/write)
Ports: one 10/100BaseT LAN port
Features: E-mail Notification, Event Log, SNMP MIB II Support, Web Page Based Management, S.M.A.R.T. Support, Ultra ATA Drives up to 250GB
Remark: Retail (see pictures for details)
This might be an adequate solution for you if 100Mbit is still a viable option. I would venture to say that gigabit devices like this one also exist. Worth a look, I suppose.
squareHat Tux's lil' helper
Joined: 28 Apr 2003 Posts: 89 Location: London
Posted: Tue Feb 17, 2004 4:18 pm
My goal was to build it myself from old stuff plus a few new bits (the disks and the Ethernet card),
i.e. make it cheap and learn something in the process.
Once I get it working I will post a how-to... if people are interested.
secondshadow Guru
Joined: 23 Jun 2003 Posts: 362
Posted: Tue Feb 17, 2004 4:45 pm
Ah, okay. Well, just from my personal experience I will say that with 100Mbit you won't see anywhere NEAR the maximum transfer rates on average. If you really want performance even remotely close to local disk, you might consider gigabit Ethernet.
Also, you may want to look at some cheapish SATA drives along with a hardware controller that supports command queueing, which would help alleviate some of the wear and tear on the drives. The problem here is that that isn't cheap by any means. The drives aren't really too bad if you go with 7,200rpm instead of 10,000rpm, but a hardware controller with command queueing isn't cheap ($200 on the low end... and I'm not sure the $200 ones support command queueing to begin with).
I think the biggest problem right now is that speed across a network isn't too great, and if you have several clients online all the time then you are looking at really bad wear on the disks, which is where RAID 5 and a hardware controller would help... and as a side effect of RAID 5, if one disk goes bad you pop in another identical disk and the array rebuilds the failed disk onto the new one. But again, you're talking about not-quite-cheap when looking at RAID 5. I saw some cheap 30GB UDMA disks on Pricewatch the other night, though, which may help offset the cost of a controller if you decide to go that route.
johnmc Tux's lil' helper
Joined: 04 Oct 2003 Posts: 81 Location: Kansas City, MO
Posted: Wed Feb 18, 2004 2:17 am
One thing about RAID 5: try not to use more than about 5 physical drives in it. The overhead of writing parity across too many drives can degrade performance. I forget the equation that describes the problem.
Personally (and if money isn't that tight) I'd consider an alternative to RAID 5 such as mirrored stripe sets (RAID 0+1, or something like that). You might see better performance, especially disk write performance, by going this route. And a good controller with lots of cache.
The downside is that instead of using n+1 disks you have to use n+n disks.
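The n+1 versus n+n trade-off is easy to sketch (the disk counts and sizes below are hypothetical, just for illustration):

```python
# Usable capacity for the same pool of identical disks:
# RAID 5 loses one disk's worth to parity; RAID 0+1 stores every
# block twice, so only half the disks count toward capacity.

def raid5_usable_gb(disks, size_gb):
    return (disks - 1) * size_gb   # n+1 disks -> n disks usable

def raid01_usable_gb(disks, size_gb):
    return (disks // 2) * size_gb  # n+n disks -> n disks usable

for d in (4, 6, 8):
    print(d, raid5_usable_gb(d, 120), raid01_usable_gb(d, 120))
# 4 disks: 360 vs 240 GB; 6 disks: 600 vs 360 GB; 8 disks: 840 vs 480 GB
```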
I've heard stories that RAID 5 coupled with 1GB of onboard controller cache gives pretty good performance, so I'm going to experiment soon with some new Hitachi equipment we have at work, I think the 9500 series array with about 30 72GB drives. Ah, good times. _________________ Pass the ribs!
----------------------------
17" G4 1.5GHZ Powerbook
Toshiba 5205-S703
----------------------------
"Survived the Dotcom crash and happily toiling for the print media again"