Poster: foundation Date: Nov 11, 2004 1:18am
Forum: petabox Subject: Re: Filesystem

So do you worry about viruses/worms/careless users deleting or corrupting a file, which then gets rsynced so it's deleted or corrupted on the other box too? Or do you use something like rsync's backup mode, which keeps versions?

Also, if a box goes down, do you have a spare box that then gets rsynced from the mirror to become the new host? Or does that have to happen manually?

Poster: brewster Date: Nov 11, 2004 1:40am
Forum: petabox Subject: Re: Filesystem

We use rsync with a backup directory, which lets us watch for changes and get things back from the trash bin if needed.
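
A minimal sketch of that kind of setup, written as a Python wrapper around rsync (the rsync flags are real; the paths, the "mirror" hostname, and the dated-trash layout are illustrative assumptions, not the Archive's actual configuration):

    import datetime
    import subprocess

    # Mirror the primary box, but instead of losing anything that was
    # deleted or overwritten, keep it in a dated trash directory on the
    # mirror so it can be inspected or restored later.
    trash = "/backup/trash/" + datetime.date.today().isoformat()  # assumed layout

    subprocess.run([
        "rsync", "-a", "--delete",
        "--itemize-changes",                  # log each change, so we can watch for surprises
        "--backup", "--backup-dir=" + trash,  # deleted/overwritten files land here
        "/primary/data/",                     # assumed source path
        "mirror:/primary/data/",              # assumed destination host:path
    ], check=True)

Because the backup directory is dated, each day's run writes to a fresh trash directory, so earlier days' deletions aren't overwritten by later runs.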

Poster: dunno Date: Jun 16, 2005 1:55pm
Forum: petabox Subject: Re: Filesystem

I assume the UDP broadcast system for finding which nodes have what is easier to implement than, say, a couple of small dedicated boxes with a database of all the file locations... but since every lookup interrupts every node, it seems that unless you have a small number of large files, the UDP approach... well, I'll just say that it seems like a time bomb.
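
For concreteness, a broadcast locator of that kind might look something like the sketch below; the WHO-HAS/I-HAVE wire format and the port number are invented for illustration, not the petabox's actual protocol:

    import socket

    PORT = 9999  # assumed port; not the real petabox protocol

    def who_has(key, timeout=1.0):
        """Broadcast a query and collect the addresses of nodes holding `key`."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(b"WHO-HAS " + key.encode(), ("255.255.255.255", PORT))
        holders = []
        try:
            while True:                     # every node on the LAN sees the query
                data, addr = s.recvfrom(1024)
                if data == b"I-HAVE " + key.encode():
                    holders.append(addr[0])
        except socket.timeout:
            pass
        return holders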

Just a spur-of-the-moment thought, but you could have a two-tier data system: the first tier is JBOD and is generally the front end, and the second tier holds the same dataset as the first, except that it uses RAID 5 at some level... maybe it could also be staggered time-wise: the backup tier could run 2 days behind the first tier, with a fairly reliable pool keeping the changelog between the first tier and the 2-day-old backup tier. That way you'd have all your information in two places, and you'd have some measure of protection against virus-type corruption that bypasses safeguards like redundancy... ah well.
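
A rough sketch of the staggered part, under the assumption (mine, purely for illustration) that the changelog is an append-only file of JSON lines like {"ts": ..., "op": ..., "path": ...}: a repair pass applies only the entries that are at least two days old to the second tier, so a virus or bad deletion has a two-day window in which it can be caught before it reaches the backup:

    import json
    import time

    LAG = 2 * 24 * 3600  # the backup tier stays two days behind

    def apply_to_tier2(entry):
        # Placeholder: in practice this would copy the path to (or delete
        # it from) the RAID-5 tier, e.g. via rsync.
        print("tier2:", entry["op"], entry["path"])

    def apply_due_entries(changelog_path):
        """Apply changelog entries older than LAG to the second tier."""
        cutoff = time.time() - LAG
        with open(changelog_path) as log:
            for line in log:
                entry = json.loads(line)
                if entry["ts"] > cutoff:
                    break            # log is time-ordered; the rest is too fresh
                apply_to_tier2(entry)

A real version would also record how far through the log it has gotten, so entries aren't re-applied on the next pass.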

Poster: foundation Date: Jul 15, 2005 5:08am
Forum: petabox Subject: Re: Filesystem

At my company (not the Archive) we're implementing a large storage system for images (almost entirely write once, read many for some, and read almost never for the rest), and we are looking at MogileFS. MogileFS uses MySQL to track file locations and automatically maintains the required number of copies, so you can say "at all times I want 2 copies of this data and 3 copies of that data." When you lose a server, it detects that a copy is inaccessible and starts replicating a new one. It does the transfers over HTTP or NFS. Because we wrote the front end ourselves, we don't need a POSIX-compliant filesystem; we can just use the client libraries. Something to consider for people implementing large systems, and a way to avoid RAID (really it's RAID-ish over the network).