View Post

Poster: dunno Date: May 31, 2004 10:26pm
Forum: petabox Subject: my shot at massive network storage.

looking at my brother's Sun A5000 (I do not kid, he's got two of these things [passes all understanding... but hey]), I came up with this. the A5000 got me thinking because it's a rackmounted SAN array box that holds 14 half-height SCSI drives, 7 per side. also for inspiration: the Gateway 840 series DAS box, 2U rackmounted, single sided, holding 12 SATA drives and translating the ATA signals onto between 1 and 4 SCSI channels... for a cool $22,000 minus drives.

so, find a friendly mechanical engineer, or whatnot, and get him to build you a metal box that fits in a 2U space, holds 12 1" drives, and is about 15cm in whichever dimension isn't height or width. find the OEM that provides the backplanes for Gateway, or if those don't work, some other one. slap one stage at the front of a 2U space, slap another stage at the rear of the same space, and in between put a PSU and 2 pieces from this list:
http://castle.pricewatch.com/search/searchmc.idq?cr=athlon+XP&qc=%22ATHLON%22*+AND+%22XP%22*+AND+%40ctd+306&i=306&ct=Computer&c=Motherboard+Combos&mi=N&m=N
plus a power supply (or 4, redundant or not) that can power the drives (pay top dollar for quality).

put in some 8-port SATA cards ($100 each) and 4-port ones ($50 each), 12 drives per Athlon XP 2400 or so.

unit cost:
$9,600 for 24 Hitachi 400GB 7.2Krpm drives
$144 for 2 Athlon XP 2400 mobo/fan combos
$38 for 2 256MB PC2100 modules
$256 for 4 Promise 4-port SATA cards
$35 for 1 Intel PRO/1000T
$527 for case & misc
$10,600 total

make a linux config that has two responsibilities: doing the networking, and making those 24 drives into a RAID 5 array at 75% utilization:
400 / 1.024 = 390.625 GB usable per drive
390.625 GB * 24 drives = 9.375 TB raw per unit
9.375 TB * 0.75 = 7.031 TB usable per unit
1000 / 7.031 = 142.23 units per PB
15 units per rack * $10,600 = $159,000 per rack, * 10 racks = ~$1.6 million per PB. $400K for tape seems pretty optimistic.
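
for anyone who wants to poke at the numbers, here's the same arithmetic as a small Python sketch; every figure in it (drive size, 75% utilization, $10,600 per unit, 15 units per rack, 10 racks) is taken from this post, nothing is measured off real hardware:

# back-of-the-envelope capacity/cost math -- all inputs are the figures
# quoted in this post
DRIVE_GB = 400            # Hitachi 400GB drive (marketing GB)
DRIVES_PER_UNIT = 24
UTILIZATION = 0.75        # RAID 5 + overhead factor used above
UNIT_COST = 10600         # USD per 2U unit
UNITS_PER_RACK = 15
RACKS = 10

usable_gb_per_drive = DRIVE_GB / 1.024                           # 390.625
raw_tb_per_unit = usable_gb_per_drive * DRIVES_PER_UNIT / 1000   # 9.375
usable_tb_per_unit = raw_tb_per_unit * UTILIZATION               # ~7.031
units_per_pb = 1000 / usable_tb_per_unit                         # ~142.2

cost_per_rack = UNITS_PER_RACK * UNIT_COST                       # 159,000
cost_per_pb = cost_per_rack * RACKS                              # ~1.59 million

print(f"usable TB/unit: {usable_tb_per_unit:.3f}")
print(f"units per PB:   {units_per_pb:.1f}")
print(f"cost per rack:  ${cost_per_rack:,}")
print(f"cost per PB:    ${cost_per_pb:,}")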

network boot these computers off a few maintenance computers, with maybe an array of CD-ROMs each holding the OS, or something. undervolt/underclock the processors.

realistic?
$1,600,000 per rack for 105 TB compared to
$1,200,905 per rack for 78.125 TB

Reply

Poster: dunno Date: Jun 2, 2004 10:33pm
Forum: petabox Subject: Re: my shot at massive network storage.

on second thought, RAID shouldn't be implemented at the array level; I'd put it at the node level.
RAIN: redundant array of inexpensive nodes.

node:
400GB HDD, qty 12, $4,800
Athlon XP 2400 & mobo, qty 1, $72
256MB PC2100, qty 1, $19
8-port SATA card, qty 1, $100
4-port SATA card, qty 1, $50
Intel PRO/1000T, qty 2, $70
case & misc, qty ?, $189 (roughly half; the cost of the 2U case would be split between two nodes)
total per node: $5,300

storage: 4,687.5 GB

for every 6 nodes (or whatever), one will be a hot spare, the other 5 will each be a virtual drive, with a controller treating them like they're in a RAID 5 array.

if a node drops out, it will immediately be replaced, and the networking will all be redundant.
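
to put numbers on that grouping, here's a quick Python sketch using the node figures above (6-node groups, 1 hot spare, RAID 5 across the remaining 5, so 4 nodes' worth of usable space; the $5,300 node cost is just the component total listed above):

# RAIN layout sketch: groups of 6 nodes, 1 hot spare, the other 5 in a
# RAID-5-like set, so one node's worth of capacity goes to parity
NODE_USABLE_GB = 4687.5    # 12 drives * 390.625 GB, from above
NODE_COST = 5300           # USD, sum of the component list above
GROUP_SIZE = 6
HOT_SPARES = 1
PARITY_NODES = 1

data_nodes = GROUP_SIZE - HOT_SPARES - PARITY_NODES        # 4
usable_tb_per_group = data_nodes * NODE_USABLE_GB / 1000   # 18.75
cost_per_group = GROUP_SIZE * NODE_COST                    # 31,800

groups_per_pb = 1000 / usable_tb_per_group                 # ~53.3
cost_per_pb = groups_per_pb * cost_per_group               # ~1.70 million

print(f"usable TB per group: {usable_tb_per_group:.2f}")
print(f"groups per PB:       {groups_per_pb:.1f}")
print(f"cost per PB (nodes): ${cost_per_pb:,.0f}")

so roughly $1.7 million per PB before front-ends and networking, which is where the "more expensive" point below comes from.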

HA servers will be at the head of one or more arrays of nodes. they will be very, very redundant. e.g. http://www.necsam.com/servers/files/320La_product_guide.pdf

maybe some non-x86 server (redundant again) as the front-end. HPPA, NEC SH, Fujitsu, Sun, SGI, whatever.

it'd be more expensive, but it'd be a true enterprise alternative, though the software would need quite an investment to hold up at US$2 million+.

Reply

Poster: andyj Date: Jun 13, 2004 10:55pm
Forum: petabox Subject: Re: my shot at massive network storage.

I am the engineer on a remarkably similar project.

Some of the greatest challenges are physical and mechanical. Moving away from a standard 19" racking system towards dedicated disk blocks (thermally coupled to copper bus bars) has increased storage density and ease of maintenance. Power supply requirements are a serious issue too; 60kW is a lot of power (our system comes in at 34kW/PB), and the individual fan-cooled switched-mode PSUs have been replaced in the current design by a high-efficiency 12V DC bus. We are looking at local hydrogen fuel cells as petanode backup supplies. Internally, we are still weighing forced-air (open) cooling vs microbore gas cooling. Reliability dictates that the number of fans be minimised.
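
as a rough illustration of how a per-PB power budget like that adds up, here's a sketch using the unit counts from the first post; the per-drive and per-node wattages are purely assumed for illustration and don't come from this thread:

# rough power budget per PB -- drive/node wattages below are assumptions
# for illustration only, not figures from this thread
UNITS_PER_PB = 142              # ~142 two-node 2U units per PB, from the first post
NODES_PER_PB = 2 * UNITS_PER_PB
DRIVES_PER_NODE = 12

WATTS_PER_DRIVE = 9.0           # assumed: typical 7200rpm SATA drive
WATTS_PER_NODE_OVERHEAD = 50.0  # assumed: CPU + mobo + NICs

drive_kw = NODES_PER_PB * DRIVES_PER_NODE * WATTS_PER_DRIVE / 1000
node_kw = NODES_PER_PB * WATTS_PER_NODE_OVERHEAD / 1000

print(f"drives: {drive_kw:.1f} kW")
print(f"nodes:  {node_kw:.1f} kW")
print(f"total:  {drive_kw + node_kw:.1f} kW per PB, before PSU and cooling losses")

tweak the wattage guesses and the PSU/cooling losses and you can land anywhere between the 34kW and 60kW per-PB figures mentioned above.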

Reply

Poster: foundation Date: Nov 10, 2004 11:52pm
Forum: petabox Subject: Re: my shot at massive network storage.

sounds like you are going for a more-expensive, reliable-components model rather than the Google-style model (if it dies, it doesn't matter, because we have multiple copies elsewhere and it was a cheap component anyway).

In the past I've worked for a company that made reliable blade based server systems. Are you running dual DC buses? If so, do you consider the OR circuit a single point of failure?

Re: the fewer fans: if you don't end up going with DC bus power (-48V is the telecom standard), you could look at the new fanless, more efficient AC power supplies such as the Antec Phantom, or industrial-grade equivalents (search for CompactPCI power supplies).

You've probably looked at all this but I'm procrastinating on my real work at the moment so I'm posting...

Reply

Poster: angelbassmuffin Date: Dec 21, 2005 1:11pm
Forum: petabox Subject: Re: my shot at massive network storage.

mmmmmmmmmmmmmmmmm