[plug] Filesystems for slow networks
Brad Campbell
brad at wasp.net.au
Fri Feb 20 09:56:13 WST 2009
Adon Metcalfe wrote:
> I use Glusterfs ;P
>
> http://www.gluster.org/
>
> basically it has a feature called AFR (Automatic File Replication), which
> replicates a file to all servers in the AFR group when it is
> created/accessed, if available (it uses extended attributes to keep track
> of this). Because replication only happens when a file is accessed, the
> network isn't saturated after a period of downtime; it only syncs the
> files people access that Gluster figures out are out of sync. I'm using
> 1.3.10 from Intrepid's repo, though 2.0 is meant to be wayyyy better (it
> can do all sorts of cool stuff, see:
> http://www.gluster.org/docs/index.php/Whats_New_v2.0, specifically
> atomic write support).
>
> And it's incredibly simple to configure: one config file for the server,
> one config file for the client. Replication can be specified on either
> the client or the server, and it uses an existing filesystem with
> extended attributes as its data store, so it's really easy to move data
> into :)
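For anyone following along, a minimal client-side AFR spec in the 1.3.x volume-file
syntax looks roughly like the sketch below. The hostnames, the export directory and
the volume names are hypothetical; this mirrors two remote bricks through a
cluster/afr translator:

```
# glusterfs-client.vol -- sketch only, names/hosts are made up
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host host1.example   # first server
  option remote-subvolume brick      # exported volume name on host1
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host host2.example   # second server
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr                   # replicate across both bricks
  subvolumes remote1 remote2
end-volume
```

Each server then needs its own matching spec exporting a local directory as
`brick` via protocol/server; adding a third replica is just another
protocol/client volume listed under `subvolumes`.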
This looks like it might do what I need too.

A few of us are setting up a distributed "home-office" system so we can all work from home and
share resources. One of the big requirements is a replicated file store at each house. The idea
is along the lines of the single file-server you would have in a normal office, but given the
speed and latency of home ADSL connections I thought it would work better if we could cluster
the storage with local distributed replicas. I'd not found anything that looked remotely
suitable until you pointed out Glusterfs. This *looks* like it might do what we need, but I
can't find any information on using more than one distributed replica. Can you elaborate a
little on your use case for the FS?
Brad
--
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.