<div dir="auto"><div>Nearly all are either rebadged reds (in the case of WD) or literally just IronWolf Pros or Exos. I shucked 62 drives recently and out of 50 Seagates, like 3 were IWP and the rest were all enterprise Exos.<div dir="auto"><br></div><div dir="auto">Surely it's binned to some extent but I'd swear by shucked drives in terms of value if you're using redundancy anyways, especially moose.</div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 26 Apr 2021, 12:05 William Kenworthy, <<a href="mailto:billk@iinet.net.au">billk@iinet.net.au</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<p>Hi Ben, <br>
</p>
<p>I have heard of shucking drives but have not gone down that
route as it seems too fraught - judging by how cheap some of them
are, I do suspect they might be inferior (software? hardware?).
I am using USB because hardware like the Odroid XU4 and C4
only has USB connectors (no SATA, no IDE etc.) ... even the HC2
uses an admittedly good quality internal USB3-to-SATA bridge. There are
some interesting threads on the XU4 forums on why Linux UAS and
USB in general perform so badly compared to other systems (very
common, poor quality adaptors, often using Seagate firmware that
may not be standards compliant), with people blaming the
hardware - in LinuxLand the pursuit of perfection sometimes gets
in the way of just getting the job done at all.<br>
</p>
<p>BillK<br>
</p>
<div>On 26/4/21 8:46 am, Benjamin wrote:<br>
</div>
<blockquote type="cite">
<div dir="auto">Ah. Well, the reason to get externals is because
they're trivial to "shuck", turning them into internal drives.
It becomes difficult to warranty them, and the reliability is
obviously not as good as "real NAS drives", but if you're
getting more than 3 drives they definitely become worth it as
you essentially get 50% more TB/$. If you only have 1-2 drives,
it's not worth the risks.
<div dir="auto"><br>
</div>
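<div dir="auto">For a concrete feel for that number, here's a quick back-of-envelope comparison (the prices below are made-up placeholders, not quotes - plug in whatever your local shops actually charge):</div>
<div dir="auto"><pre>
# $/TB comparison between a shuckable external and an internal NAS/enterprise
# drive of the same capacity.  Prices are invented placeholders.
def per_tb(price_dollars, capacity_tb):
    return price_dollars / capacity_tb

external_14tb = per_tb(419.0, 14)   # shuckable external (assumed price)
nas_14tb      = per_tb(629.0, 14)   # NAS/enterprise internal (assumed price)

print(f"external:       ${external_14tb:.2f}/TB")
print(f"NAS/enterprise: ${nas_14tb:.2f}/TB")
print(f"extra TB per dollar from shucking: {nas_14tb / external_14tb - 1:.0%}")
</pre></div>
<div dir="auto"><br></div>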
<div dir="auto">mfspro becomes worth it once the cost of the
licencing is low enough to offset the equivalent cost of
drives due to the erasure coding which is a really, really
awesome implementation. The break even point is 50TiB mostly
because I think that's the minimum they sell. I personally use
it to great effect, but YMMV.</div>
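<div dir="auto"><br></div>
<div dir="auto">As a rough sketch of why erasure coding changes the economics (the 8+2 data/parity split below is just an illustrative choice, not necessarily how an MFSpro cluster would actually be configured):</div>
<div dir="auto"><pre>
# Raw capacity needed to hold 50 TiB of user data: plain 3-copy replication
# versus an illustrative 8 data + 2 parity erasure-coded layout.
user_data_tib = 50.0

copies = 3
raw_replication = user_data_tib * copies

data_parts, parity_parts = 8, 2
raw_ec = user_data_tib * (data_parts + parity_parts) / data_parts

print(f"replication x{copies}:     {raw_replication:.0f} TiB raw")
print(f"erasure coding {data_parts}+{parity_parts}: {raw_ec:.1f} TiB raw")
print(f"raw capacity saved:   {raw_replication - raw_ec:.1f} TiB")
</pre></div>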
<div dir="auto"><br>
</div>
<div dir="auto">18TB and up CMR drives are fine, I haven't
noticed any latency issues with any of my use cases.</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, 25 Apr 2021, 19:57
William Kenworthy, <<a href="mailto:billk@iinet.net.au" target="_blank" rel="noreferrer">billk@iinet.net.au</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<p>Thanks Shaun,</p>
<p>Good points. I have MFS set up as a single disk per
host, except in one case where there are two matched WD
Greens. Data flows over a VLAN-segmented network
between hosts, so they are isolated. MFS is a chunkserver
system and isn't even close to RAID in concept, so it avoids
RAID's issues. The MFS recommendations, which I largely
followed, are to use raw disks with XFS - if you have a
JBOD, LVM or RAID underlying a storage area it will defeat
some of the failure detection and mitigation strategies
MFS uses. With MFS a complete host failure will only take
down that host's storage, and MFS will completely self-heal
around the failure without data loss as long as there is
spare space and enough recovery time. Currently I can
take any two or three of the smaller chunkservers completely
offline at the same time with no lost data or effect on
users, and once healed the data redundancy is restored.<br>
</p>
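<p>As a toy illustration of that "spare space" condition (host names and sizes below are invented, and the real master decides chunk by chunk, so this is only a rough sanity check, not how MooseFS itself works):</p>
<pre>
# Toy check: after taking some chunkservers offline, do the survivors have
# enough free space to re-create the chunk copies that lived on them?
# Host sizes are invented; MooseFS rebalances chunk by chunk in practice.
hosts = {            # name: (used_TB, total_TB)
    "hc2-a": (2.5, 4.0),
    "hc2-b": (2.5, 4.0),
    "hc2-c": (2.0, 4.0),
    "intel": (1.5, 2.0),
}

def healing_margin(offline):
    lost = sum(hosts[h][0] for h in offline)
    free = sum(total - used for h, (used, total) in hosts.items() if h not in offline)
    return free - lost          # positive: survivors can absorb the lost copies

for down in (["hc2-a"], ["hc2-a", "hc2-b"]):
    margin = healing_margin(down)
    verdict = "ok" if margin >= 0 else "not enough spare space"
    print(f"offline {down}: margin {margin:.1f} TB, {verdict}")
</pre>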
<p>I have a habit of collecting castoffs and re-purposing
hardware, so very little of my gear is a similar purchase
in timing or type - something that MFS deals with quite
elegantly as it's mostly independent of operating
systems/hardware - I am even mixing 32-bit and 64-bit
operating systems on arm, arm64 and intel, and while I
currently use Gentoo/openrc there is no reason I can't use
a different Linux on each host :)</p>
<p>I would think the response times for 8TB and above are
because they are mostly SMR, not the data density per se?
Can you confirm? I don't think it's a problem with CMR
drives. WD and Seagate have been caught out sneaking SMR
drives into NAS lines (where resilvering and SMR are a real
problem) and other product lines, and have suffered some
consumer backlash because of it - both companies now publish
lists of which drives and models are SMR or CMR.<br>
</p>
<p>One point I would highlight is that USB-connected disks can be
a problem (reliability of the connection; throughput is
fine), particularly if UAS and no-name
adaptors are involved. Unfortunately for me, all bar the Intel host
with an M.2 NVMe drive use either built-in USB3 or
no-name USB3 adaptors, so I can speak from experience ...</p>
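<p>A quick way to see which adaptors are on which driver (a Linux-only sketch using the standard sysfs paths; nothing here is specific to my setup):</p>
<pre>
# List USB mass-storage interfaces and whether each is bound to the newer
# "uas" driver or the older "usb-storage" driver.
from pathlib import Path

base = Path("/sys/bus/usb/devices")
for iface in sorted(base.glob("*:*")):              # interface nodes look like "2-1:1.0"
    cls = iface / "bInterfaceClass"
    if not (cls.is_file() and cls.read_text().strip() == "08"):   # 08 = mass storage
        continue
    dev = base / iface.name.split(":")[0]           # parent device node, e.g. "2-1"
    vid = (dev / "idVendor").read_text().strip()
    pid = (dev / "idProduct").read_text().strip()
    drv = iface / "driver"
    driver = drv.resolve().name if drv.exists() else "unbound"
    print(f"{iface.name}  {vid}:{pid}  driver={driver}")
</pre>
<p>If a particular bridge misbehaves under UAS, the usual workaround is the usb-storage quirks mechanism (a VID:PID:u entry forces that device back onto plain usb-storage) - whether that actually helps depends on the adaptor.</p>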
<p>BillK</p>
<p><br>
</p>
<div>On 25/4/21 6:38 pm, <a href="mailto:plug_list@holoarc.net" rel="noreferrer noreferrer" target="_blank">plug_list@holoarc.net</a>
wrote:<br>
</div>
<blockquote type="cite">
<p>My 2 cents (and apologies if this has been covered
already):</p>
<p>I went the other route of building a NAS and serving
storage off the NAS, instead of a vSAN or distributed file
system approach. My experience/thoughts with consumer-grade
hardware on my NAS (using mdadm and ZFS):<br>
</p>
<ol>
<li>Run drives of the same speed, ideally within the same RAID
group (not sure if MooseFS counters this by using
RAM as cache?). I have been caught out thinking I
was getting a 7.2K RPM drive only to find the manufacturer had
changed drive speeds between different sizes in the
same series of drives (WD Red, I think).
Personally I dislike 5.9K RPM drives ... unless they're
in a big enterprise SAN/S3 solution.<br>
</li>
<li>Use different brands and <b>batch numbers</b> - the last
thing you want is a bad batch where they all start
failing around the same time - e.g. buying 5 x WD
Blues from the same store at the same time is a bad idea
(and yes, it's a pain).<br>
</li>
<li>8TB-and-above drives have long response latency
(due to density), so be careful which
configuration you use and make sure it can handle long
rebuild times (see the rough rebuild-time arithmetic below this list)</li>
<li>I have had drives die from HGST, Seagate and WD over
the years ... the HGSTs died the quickest and were a pain to
replace under warranty, from memory.<br>
</li>
</ol>
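<p>To put rough numbers on the rebuild-time point above (the throughput figure is assumed, and real rebuilds with random I/O, user load or SMR drives take considerably longer than this best case):</p>
<pre>
# Best-case rebuild time if the rebuild is limited only by one drive's
# sustained sequential throughput (assumed 180 MB/s here).
def rebuild_hours(capacity_tb, sustained_mb_s=180):
    return capacity_tb * 1e6 / sustained_mb_s / 3600

for cap in (4, 8, 14, 18):
    print(f"{cap} TB: about {rebuild_hours(cap):.0f} h best case")
</pre>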
<p>-Shaun<br>
</p>
<div>On 25/04/2021 3:26 pm, Benjamin wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">It's not worth getting anything other
than cheapest non-SMR drives IMO for nearly any use
case... you can get performance by aggregating enough
drives anyways</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, Apr 25, 2021
at 3:25 PM Benjamin <<a href="mailto:zorlin@gmail.com" rel="noreferrer noreferrer" target="_blank">zorlin@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">LizardFS is a bag of hurt with dead
development. Proceed with hella caution if you go
that route. I hope it changes and becomes worth
pursuing though.
<div><br>
</div>
<div>MFSpro is justifiable at around 50TiB and up;
below that it's not really worth it.</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, Apr 25,
2021 at 3:22 PM William Kenworthy <<a href="mailto:billk@iinet.net.au" rel="noreferrer noreferrer" target="_blank">billk@iinet.net.au</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Thanks Ben and Paul - this backs up my
readings/experience.</p>
<p>I will shortly need a new archive drive
because I have less than 80GB left on the
2TB WD Green I have been using for a few
years. As performance isn't an issue I will
likely go with a Seagate Barracuda this time.
I'm still debating shingled or not, because this
use is more cost-sensitive than performance-sensitive:
new data is written across the network, so the drive is
busy but not excessively so when in use, and I am happy
to allow time for the SMR drive's internal rewriting to
complete as long as it doesn't slow down actually
backing up the data too much.</p>
<p>For MooseFS it is more difficult to quantify what's
needed - currently:</p>
<p>8 hosts (8 HDDs, 1x M.2 SSD; 6x arm32, 1x
arm64 and 1x intel - all Odroid, running
Gentoo)</p>
<p>~21TB of space, 3/4 in use. I could delete
some, as there is duplicate data stored, so if
I lose a drive I can reclaim space easily, as
well as decrease the goal in some places.</p>
<p>As well, I am using storage classes. High-use
data mostly has one chunk on the intel/SSD
for performance and the others on HDDs. I have
storage classes ranging from 1 to 4 copies, with 2, 3
and 4 in common use ... for example, for things
like VMs where there are hot spots with
temp file creation I have 2 copies (2SH),
whereas backups and user data have 4 copies
(4HHHH or 4SHHH depending on priority, e.g.
/home). Currently I have one WD Green drive
I would already toss if it were in a commercial
system, and two Seagate NAS drives I am not
totally happy with.<br>
</p>
<p>For these, definitely non-shingled (CMR),
7200rpm, around 4TB seems ideal - but is a
NAS-optimised drive useful or a waste for
MooseFS? Vibration from nearby drives is the
only thing I can think of. Some are bound
together (5x Odroid HC2) and some are in
pairs in relatively heavy PC-case bay mounts
(removed/pinched from my son's ongoing
gaming PC build :) placed on a desk. I am
starting to lean towards the WD Blacks for
this, but the HGST lines WD is starting to
integrate are interesting, though more
expensive ... <br>
</p>
<p>I would love to have MFSpro but can't
justify it, as very high uptime isn't necessary,
EC isn't really attractive at my scale, and
multiple masters aren't essential as I have
plenty of alternative systems I could bring
in quickly ... though I am watching LizardFS
and just might jump to it to get the
multiple masters included in its free tier.<br>
</p>
<p>BillK</p>
<p><br>
</p>
<div>On 25/4/21 1:19 pm, Benjamin wrote:<br>
</div>
<blockquote type="cite">
<div dir="auto">+1 to all of it, cheers
Paul.
<div dir="auto"><br>
</div>
<div dir="auto">I think it's worth going
for the cheapest externals you can get,
shucking them, then using MooseFS since
you're already planning to.</div>
<div dir="auto"><br>
</div>
<div dir="auto">I'd use copies=3 and if
you're storing more than 50TB talk to me
about mfspro.</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun,
25 Apr 2021, 13:03 Paul Del, <<a href="mailto:p@delfante.it" rel="noreferrer noreferrer" target="_blank">p@delfante.it</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">Hello Bill
<div><br>
</div>
<div>My 2 cents worth<br>
<div><br>
</div>
<div>I am sure you know the common
things that can increase your hard
drives' life and performance:</div>
<div>Temperature</div>
<div>Humidity</div>
<div>Vibration</div>
<div>Heavy Writes</div>
<div>Heavy Logging</div>
<div>Clean/Reliable power</div>
<div>Data throughput</div>
<div><br>
</div>
<div>The spinning-rust hard drives I have
seen the most failures with (and recommend
avoiding) are:</div>
<div>WD Green</div>
<div>WD Blue</div>
<div>Hitachi Deskstar</div>
<div>(not the server drives)</div>
<div>
<div><br>
</div>
<div>The spinning-rust hard drives I
recommend the most are:</div>
<div>WD Black 7200rpm or better</div>
<div>Seagate 7200rpm or better</div>
<div>(Not Red, Blue, Green,
Purple)</div>
<div><br>
</div>
<div>If you are doing the MooseFS
distributed setup,</div>
<div>you could always choose two
different brands/types.</div>
<div><br>
</div>
<div>If you want to know more
specific things about hard
drive failures, check out these stats
from Backblaze (which Benjamin sent
around ages ago) - I am sure
there's more around.</div>
<div><a href="https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/</a><br>
</div>
<div><a href="https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/" rel="noreferrer noreferrer noreferrer" target="_blank">https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/</a><br>
</div>
<div><br>
</div>
<div>Thanks Paul</div>
<div><br>
</div>
<div><br>
</div>
<div>On Sat, 24 Apr 2021, 09:02
William Kenworthy, <<a href="mailto:billk@iinet.net.au" rel="noreferrer noreferrer noreferrer" target="_blank">billk@iinet.net.au</a>>
wrote:<br>
<br>
> Just musing on what changes
I could make to streamline my
systems:<br>
><br>
> After a recent stray "r m
- r f " with a space in it I
ended up<br>
> removing both most of my
active data files, VMs etc ...
and the online<br>
> backups - ouch!<br>
><br>
> I have restored from
offline backups and have noticed
a ~10-year-old WD<br>
> Green drive showing a few
early symptoms of failing
(SMART).<br>
><br>
> With the plethora of
colours now available (!),
what drive is best for:<br>
><br>
> 1. moosefs chunkserver
(stores files for VMs, data
including the<br>
> mail server's user files,
home directories and of course
the online<br>
> borgbackup archives - the
disks are basically hammered all
the time.)<br>
><br>
> 2. offline backups
(~2TB of data, using borgbackup to
back up the online<br>
> borgbackup repo, used twice
a week for a few minutes at a
time.)<br>
><br>
> My longest-serving drives
are 2TB WD Greens which until
now have just<br>
> kept ticking along. The
failing drive is a WD Green - I
have run<br>
> badblocks on it overnight
with no errors so far so it
might have<br>
> internally remapped the
failed sectors OK - I am using
XFS, which does<br>
> not have badblock support.
Most drives spent previous years
in btrfs<br>
> RAID 10s or Ceph so they
have had a hard life!<br>
><br>
> Newer WD Reds and a Red Pro
have failed over the years but I
still have<br>
> two in the mix (6TB and
2TB)<br>
><br>
> Some Seagate IronWolfs
show some SMART errors that Backblaze
correlates<br>
> with drive failure, and
throw an occasional USB
interface error, but<br>
> otherwise seem OK.<br>
><br>
> There are shingled,
non-shingled drives,
surveillance, NAS flavours etc.<br>
> - but what have people had
success with? - or should I just
choose my<br>
> favourite colour and run
with it?<br>
><br>
> Thoughts?<br>
><br>
> BillK<br>
</div>
</div>
</div>
<img alt="" style="display:flex" src="https://mailtrack.io/trace/mail/68659f62480c3e88ffa2f3ae2fde66bb4c88a16f.png?u=7051199" width="0" height="0"></div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank" rel="noreferrer">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank" rel="noreferrer">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer noreferrer" target="_blank">http://www.plug.org.au/membership</a></blockquote></div></div></div>