[plug] Filesystems for lots of inodes

Byron Hammond byronester at gmail.com
Wed Jan 8 13:39:55 AWST 2020


I'm keeping my eye on this thread with great interest.

I'm really curious to see what your findings are and how you got there.

Cheers

from my Tablet

On Tue, 7 Jan. 2020, 8:16 pm Brad Campbell, <brad at fnarfbargle.com> wrote:

> On 4/1/20 12:40 pm, Brad Campbell wrote:
> > G'day All,
> >
> > I have a little backup machine that has a 4TB drive attached.
> >
> > Every night it logs into all my machines and does a rotating hardlink
> > rsync to back them up.
> >
> > Currently there are about 36 directories and each of those has ~105
> > hardlinked backups.
> >
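> > That rotating-hardlink scheme is typically built on rsync's
> > --link-dest; a minimal sketch, with the host and paths hypothetical:
> >
> >     rsync -a --delete \
> >         --link-dest=/backup/host1/day.1 \
> >         root@host1:/ /backup/host1/day.0
> >
> > Anything unchanged since day.1 is hardlinked rather than copied, so
> > each generation costs little data but a full tree of directory
> > entries and link counts.
> >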
>
> So I thought I was clever in developing a methodology that would allow
> me to run a few tests with new filesystems using existing data.
>
> So, I started with the filesystem on the USB drive and cloned it into
> a (4TB) file, hosted on a quick(ish) RAID-6, for loopback mounting.
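>
> (Something along these lines, with the device and paths hypothetical:
>
>     dd if=/dev/sdb of=/raid6/usb-clone.img bs=64M conv=sparse,noerror
>     mount -o loop,ro /raid6/usb-clone.img /mnt/clone
>
> conv=sparse keeps the unwritten regions from eating space in the
> image file.)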
>
> I thought I'd dump/restore that onto a spare partition I keep for
> testing on a scratch RAID-5, but found that dump/restore was good for
> about 2GB/hr in this scenario. So after about 6GB I killed that.
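>
> (That pipe would have been along these lines, devices hypothetical:
>
>     dump -0 -f - /dev/loop0 | (cd /mnt/scratch && restore -r -f -)
>
> restore rebuilds the tree one inode at a time, which is presumably
> where the hours went with this many hardlinks.)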
>
> I thought I could use tar in a pipe to replicate instead, and that
> worked a lot faster (compared to dump/restore, anyway). I got ~100GB
> into that and then had a re-think about making sure I only had to do
> this once, so I used tar piped through pigz and then over the network
> onto another array on a test box.
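>
> (Roughly, with the test box hostname and paths hypothetical:
>
>     tar -cf - -C /mnt/clone . | pigz | ssh testbox 'cat > /array/backup.tar.gz'
>
> GNU tar tracks inode numbers so each set of hardlinks is recorded
> once, and pigz spreads the compression across all cores.)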
>
> The theory was that having the whole thing as one big tar.gz meant I
> could untar it rapidly onto different filesystems for testing.
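>
> (i.e. for each candidate filesystem, something like:
>
>     pigz -dc /array/backup.tar.gz | tar -xf - -C /mnt/testfs
>
> with /array and /mnt/testfs hypothetical; tar recreates each hardlink
> with link(), which is pure random metadata IO.)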
>
> Nice in theory, but more than 48 hours later it's only nearly done. Raw bandwidth
> is fine. When it hits the static backup directory and large files I get
> full network bandwidth (~300MB/s), but it bogs down once it hits the
> rotating hardlinks again.
>
> Interestingly, with 800MB/s of disk bandwidth and 300MB/s of network,
> it's absolutely limited by the random IO performance of the disks. With
> ~10MB/s of actual data throughput, the disks are pinned at 100%
> utilisation with huge latencies, just dealing with the fragmentation.
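>
> (The saturation is easy to watch with, e.g.:
>
>     iostat -x 1
>
> %util sits at 100 and await (or r_await/w_await on newer sysstat)
> climbs while the MB/s columns stay in single digits.)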
>
> So, I *will* test these filesystems, but just getting this thing
> archived in a format where I can replicate it in less than 2 days has
> proven "challenging" so far.
>
>
> --
> An expert is a person who has found out by his own painful
> experience all the mistakes that one can make in a very
> narrow field. - Niels Bohr
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://lists.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.org.au
> PLUG Membership: http://www.plug.org.au/membership
>

