<div dir="ltr">LizardFS is a bag of hurt with dead development. Proceed with hella caution if you go that route. I hope it changes and becomes worth pursuing though.<div><br></div><div>MFSpro is justifiable around 50TiB and up, until then it's not really worth it.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 25, 2021 at 3:22 PM William Kenworthy <<a href="mailto:billk@iinet.net.au">billk@iinet.net.au</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Thanks Ben and Paul - this backs up my readings/experience.</p>
<p>I will shortly need a new archive drive because I have lest than
80Gb left on the 2Tb WD green I have been using for a few years.
As performance isn't an issue I will likely go with a Seagate
Barracuda this time (still debating shingled or not because this
use is more cost sensitive than performance on writing new data
across a network - so low priority, busy, but not excessively so
when in use - I am happy to allow time for the shingling
resilvering to complete as long as it doesn't impact time to
actually backup the data too much.)</p>
<p>Moosefs is more difficult to quantify whats needed - currently:</p>
<p>8 hosts (8 HDD, 1x M2.SSD, 6x arm32, 1x arm64 and 1x intel - all
odroid using gentoo)</p>
<p>~21Tb space, 3/4 in use. I could delete some as there is
duplicate data stored so if I lose a drive I can reclaim space
easily as well as decrease the goal in some places.</p>
<p>As well, I am using storage classes. High use data has mostly 1
chunk on the intel/SSD for performance and others on HDD's. I
have sc's ranging from 1 to 4 copies with 2, 3 and 4 in common use
... for example things like VM's where there are hot spots with
temp file creation I have 2 copies (2SH) whereas backups and user
data have 4 copies 4HHHH or 4SHHH depending on priority (eg,
/home). Currently I have one WD Green drive I would already toss
if in a commercial system, and two Seagate NAS drives I am not
totally happy with.<br>
</p>
<p>For these, definitely non-shingled (CMR) 7200rpm around 4TB seems
ideal - but is a NAS optimised drive useful or a waste for
moosefs? - vibration of nearby drives is the only thing I can
think of. Some are bound together (5x odroid HC2) and some are in
pairs in relatively heavy PC case baymounts (removed/pinched -
from my sons ongoing gaming PC build :) placed on a desk. I am
staring to lean towards the WD blacks for this, but the HGST lines
WD are starting to integrate are interesting though more expensive
... <br>
</p>
<p>I would love to have MFSpro but cant justify it as super uptime
isn't necessary, EC isn't really attractive at my scale and
multiple masters isn't essential as I have plenty of alternative
systems I could bring in quickly ... though I am watching lizardfs
and just might jump to it to get the multiple masters that is in
the free tier.<br>
</p>
<p>BillK</p>
<p><br>
</p>
<div>On 25/4/21 1:19 pm, Benjamin wrote:<br>
</div>
<blockquote type="cite">
<div dir="auto">+1 to all of it, cheers Paul.
<div dir="auto"><br>
</div>
<div dir="auto">I think it's worth going for the cheapest
externals you can get, shucking them, then using MooseFS since
you're already planning to.</div>
<div dir="auto"><br>
</div>
<div dir="auto">I'd use copies=3 and if you're storing more than
50TB talk to me about mfspro.</div>
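<div dir="auto"><br>
</div>
<div dir="auto">(As a sketch, that's just "mfssetgoal -r 3 /mnt/mfs"
against the mount point - or the storage-class equivalent on MooseFS
3.)</div>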
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, 25 Apr 2021, 13:03
Paul Del, <<a href="mailto:p@delfante.it" target="_blank">p@delfante.it</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">Hello Bill
<div><br>
</div>
<div>My 2 cents worth<br>
<div><br>
</div>
<div>I am sure you know the common things that can
increase your hard drives life and performance:</div>
<div>Temperature</div>
<div>Humidity</div>
<div>VIbration</div>
<div>Heavy Writes</div>
<div>Heaving Logging</div>
<div>Clean/Reliable power</div>
<div>Data throughput</div>
<div><br>
</div>
<div>The rust hard drives I have seen the most failures
with are: (I recommend avoiding)</div>
<div>WD Green</div>
<div>WD Blue</div>
<div>Hitachi Deskstar</div>
<div>(Not The server drives)</div>
<div>
<div><br>
</div>
<div>The rust hard drives I recommend the most are:</div>
<div>WD Black 7200rpm or better</div>
<div>Seagate 7200pm or better</div>
<div>(Not Red, Blue, Green, Purple)</div>
<div><br>
</div>
<div>If you are doing the moose distribute setup</div>
<div>You could always choose two different brands/types</div>
<div><br>
</div>
<div>if you want to know more specific things about
which hard drive failures. Check out this from
backblaze, I am sure there's more around. Which is one
Benjamin sent around ages ago.</div>
<div><a href="https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/" rel="noreferrer" target="_blank">https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/</a><br>
</div>
<div><a href="https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/" rel="noreferrer" target="_blank">https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/</a><br>
</div>
<div><br>
</div>
<div>Thanks Paul</div>
<div><br>
</div>
<div><br>
</div>
<div>On Sat, 24 Apr 2021, 09:02 William Kenworthy, <<a href="mailto:billk@iinet.net.au" rel="noreferrer" target="_blank">billk@iinet.net.au</a>>
wrote:<br>
<br>
> Just musing on what changes I could make to
streamline my systems:<br>
><br>
> After a recent stray "r m - r f " with a space
in it I ended up<br>
> removing both most of my active data files, VM's
etc ... and the online<br>
> backups - ouch!<br>
><br>
> I have restored from offline backups and have
noticed a ~10years old WD<br>
> green drive showing a few early symptoms of
failing (SMART).<br>
><br>
> With the plethora of colours now available (!)
now what drive is best for<br>
> a:<br>
><br>
> 1. moosefs chunkserver (stores files for
VM's, data including the<br>
> mail servers user files, home directories and of
course the online<br>
> borgbackup archives - the disks are basically
hammered all the time.)<br>
><br>
> 2. offline backups (~2tb data using
borgbackup to backup the online<br>
> borgbackup repo, used twice a week for a few
minutes at a time.)<br>
><br>
> My longest serving drives are WD greens 2Tb which
until now have just<br>
> keep ticking along. The failing drive is a WD
Green - I have run<br>
> badblocks on it overnight with no errors so far
so it might have<br>
> internally remapped the failed sectors ok - I am
using xfs which does<br>
> not have badblock support. Most drives spent
previous years in btrfs<br>
> raid 10's or ceph so they have had a hard life!<br>
><br>
> Newer WD Reds and a Red pro have failed over the
years but I still have<br>
> two in the mix (6tb and 2tb)<br>
><br>
> Some Seagate Ironwolfs that show some SMART
errors Backblaze correlate<br>
> with drive failure and throw an occasional USB
interface error but<br>
> otherwise seem OK.<br>
><br>
> There are shingled, non-shingled drives,
surveillance, NAS flavours etc.<br>
> - but what have people had success with? - or
should I just choose my<br>
> favourite colour and run with it?<br>
><br>
> Thoughts?<br>
><br>
> BillK<br>
</div>
</div>
</div>
<img alt="" style="display: flex;" src="https://mailtrack.io/trace/mail/68659f62480c3e88ffa2f3ae2fde66bb4c88a16f.png?u=7051199" width="0" height="0"></div>
_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au" rel="noreferrer" target="_blank">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au" rel="noreferrer" target="_blank">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer noreferrer" target="_blank">http://www.plug.org.au/membership</a></blockquote>
</div>
<br>
<fieldset></fieldset>
<pre>_______________________________________________
PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank">plug@plug.org.au</a>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a>
Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank">committee@plug.org.au</a>
PLUG Membership: <a href="http://www.plug.org.au/membership" target="_blank">http://www.plug.org.au/membership</a></pre>
</blockquote>
</div>
_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer" target="_blank">http://www.plug.org.au/membership</a></blockquote></div>