<div dir="auto">I'm keeping my eye on this thread with great interest.<div dir="auto"><br></div><div dir="auto">I'm really curious to see what your findings are and how you got there.</div><div dir="auto"><br></div><div dir="auto">Cheers<br><br><div data-smartmail="gmail_signature" dir="auto">from my Tablet</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 7 Jan. 2020, 8:16 pm Brad Campbell, <<a href="mailto:brad@fnarfbargle.com">brad@fnarfbargle.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 4/1/20 12:40 pm, Brad Campbell wrote:<br>
> G'day All,
>
> I have a little backup machine that has a 4TB drive attached.
>
> Every night it logs into all my machines and does a rotating hardlink
> rsync to back them up.
>
> Currently there are about 36 directories and each of those has ~105
> hardlinked backups.
>
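
(For context, each night's run is essentially rsync with --link-dest
pointed at the previous snapshot. Hostnames, dates and paths below are
placeholders rather than the actual script:

  # hardlink anything unchanged against last night's tree
  rsync -aH --delete --link-dest=/backup/hostA/2020-01-06 \
      root@hostA:/ /backup/hostA/2020-01-07

so every dated directory looks like a full backup, but unchanged files
are just extra hardlinks back to earlier runs.)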

So I thought I was clever in developing a methodology that would let me
run a few tests with new filesystems using existing data.

So, I started with a filesystem on the USB drive and cloned that into a
(4TB) file for loopback mounting, hosted on a quick(ish) RAID-6.
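
Roughly this shape, with the device and paths as placeholders rather
than my exact commands:

  # clone the USB drive's filesystem into a sparse image file
  dd if=/dev/sdX1 of=/raid6/usb-clone.img bs=64M conv=sparse status=progress
  # attach the image to a loop device and mount it
  losetup --find --show /raid6/usb-clone.img
  mount /dev/loop0 /mnt/clone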

I thought I'd dump/restore that onto a spare partition I keep for
testing on a scratch RAID-5, but found that dump/restore was good for
about 2GB/hr in this scenario. So after about 6GB I killed that.
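
That was the classic pipe, something along these lines (source and
target are placeholders):

  # level-0 dump of the source filesystem, restored into the target mount
  dump -0 -f - /dev/loop0 | (cd /mnt/scratch && restore -r -f -)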

I thought I could use tar in a pipe to replicate instead, and that
worked a lot faster (compared to dump/restore anyway). I got ~100G into
that and had a re-think about making sure I only had to do this once,
so I used tar piped through pigz and then over the network onto another
array on a test box.
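
Something of this shape, with hostnames and paths as placeholders:

  # stream the whole tree, compress on all cores, land it on the test box
  tar -cf - -C /mnt/clone . | pigz | ssh testbox 'cat > /array/backup.tar.gz'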

The theory there was that having the whole thing as one big tar.gz
meant I could untar it rapidly onto different filesystems and test it
out.
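
i.e. on the test box, for each filesystem under test, something like:

  # unpack the archive onto whichever filesystem is being trialled
  pigz -dc /array/backup.tar.gz | tar -xpf - -C /mnt/testfs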

Nice in theory, but more than 48 hours later it's still only nearly
done. Raw bandwidth is fine: when it hits the static backup directory
and large files I get full network bandwidth (~300MB/s), but it bogs
down once it hits the rotating hardlinks again.

Interestingly, with 800MB/s of disk and 300MB/s of network it's
absolutely limited by the random IO capability of the disks. With
~10MB/s of actual data moving, the disks are pinned at 100% utilisation
with huge latencies, just dealing with the fragmentation.
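
That observation is just from watching the array with something like:

  # per-device extended stats once a second; %util and await tell the story
  iostat -x 1

while the untar runs.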

So, I *will* test these filesystems, but just getting this thing
archived in a format where I can replicate it in less than 2 days has
proven "challenging" so far.


-- 
An expert is a person who has found out by his own painful
experience all the mistakes that one can make in a very
narrow field. - Niels Bohr