[plug] musing on HDD types (William Kenworthy)

William Kenworthy billk at iinet.net.au
Mon Apr 26 13:47:04 AWST 2021


That is interesting!

BillK


On 26/4/21 1:40 pm, Benjamin wrote:
> Nearly all are either rebadged Reds (in the case of WD) or literally
> just IronWolf Pros or Exos. I shucked 62 drives recently, and of the
> 50 Seagates only about 3 were IronWolf Pros - the rest were all
> enterprise Exos.
>
> Surely they're binned to some extent, but I'd swear by shucked drives
> in terms of value if you're using redundancy anyway, especially with
> MooseFS.
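>
> A quick way to see what is actually inside an external enclosure
> (before or after shucking) is to query the drive's identity through
> the USB bridge - a minimal sketch, with /dev/sdX as a placeholder;
> some bridges need the "-d sat" pass-through option:
>
>     # model, family and serial as reported by the drive itself
>     smartctl -i /dev/sdX
>     # if the bridge hides the identity, try SAT pass-through
>     smartctl -d sat -i /dev/sdX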
>
>
> On Mon, 26 Apr 2021, 12:05 William Kenworthy, <billk at iinet.net.au> wrote:
>
>     Hi Ben,
>
>     I have heard of shucking drives but have not gone down that route
>     as it seems too fraught - I do suspect they might be inferior
>     (software? hardware?) judging by how cheap some of them are.  I am
>     using USB because hardware like the Odroid XU4 and C4 only has USB
>     connectors (no SATA, no IDE etc.) ... even the HC2 uses an
>     admittedly good quality internal USB3-to-SATA bridge.  There are
>     some interesting threads on the XU4 forums about why Linux UAS,
>     and USB in general, performs so badly compared to other systems
>     (very commonly it is poor quality adaptors, often using Seagate
>     firmware that may not be standards compliant) while people were
>     blaming the hardware - in LinuxLand sometimes the pursuit of
>     perfection gets in the way of just getting the job done at all.
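>
>     For anyone wanting to check what their own adaptor is doing, a
>     minimal sketch (device names are placeholders):
>
>         # which driver each USB device is bound to (uas vs usb-storage)
>         lsusb -t
>         # look for UAS errors or bus resets in the kernel log
>         dmesg | grep -iE 'uas|usb.*reset'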
>
>     BillK
>
>     On 26/4/21 8:46 am, Benjamin wrote:
>>     Ah. Well, the reason to get externals is that they're trivial to
>>     "shuck", turning them into internal drives. It becomes difficult
>>     to claim warranty on them, and the reliability is obviously not
>>     as good as "real NAS drives", but if you're getting more than 3
>>     drives they definitely become worth it, as you essentially get
>>     50% more TB/$. If you only have 1-2 drives, it's not worth the
>>     risks.
>>
>>     MFS Pro becomes worth it once the cost of the licensing is offset
>>     by the drives you save through erasure coding, which is a really,
>>     really good implementation. The break-even point is around
>>     50 TiB, mostly because I think that's the minimum licence they
>>     sell. I personally use it to great effect, but YMMV.
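>>
>>     As a rough sketch of why erasure coding shifts the break-even
>>     point (assuming, for illustration only, an 8+2 EC layout versus
>>     keeping 3 full copies - check the MooseFS Pro docs for the EC
>>     schemes it actually offers):
>>
>>         # raw disk needed to hold 100 TiB of user data
>>         echo $((100 * 3))            # 3 copies -> 300 TiB raw
>>         echo $((100 * (8 + 2) / 8))  # EC 8+2   -> 125 TiB raw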
>>
>>     18 TB and up CMR drives are fine; I haven't noticed any latency
>>     issues with any of my use cases.
>>
>>     On Sun, 25 Apr 2021, 19:57 William Kenworthy, <billk at iinet.net.au> wrote:
>>
>>         Thanks Shaun,
>>
>>             good points.  I have MFS set up as a single disk per
>>         host, except in one case where there are two matched WD
>>         Greens.  Data flows between hosts over a VLAN-segmented
>>         network, so they are isolated.  MFS is a chunkserver system
>>         and isn't even close to RAID in concept, so it avoids RAID's
>>         issues.  The MFS recommendation, which I largely followed, is
>>         to use raw disks with XFS - if you have JBOD, LVM or RAID
>>         underlying a storage area it will defeat some of the failure
>>         detection and mitigation strategies MFS uses.  With MFS a
>>         complete host failure will only take down that host's
>>         storage, and MFS will completely self-heal around the failure
>>         without data loss as long as there is spare space and enough
>>         recovery time.  Currently I can take any two or three of the
>>         smaller chunkservers completely offline at the same time with
>>         no lost data or effect on users, and once healed the data
>>         redundancy is restored.
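>>
>>         As a minimal sketch of the "raw disk with XFS" recommendation
>>         (device, mountpoint and config path are placeholders and may
>>         differ per distro):
>>
>>             mkfs.xfs /dev/sdX                # whole raw disk, no LVM/RAID
>>             mkdir -p /mnt/chunk1
>>             mount /dev/sdX /mnt/chunk1       # add to fstab to persist
>>             echo /mnt/chunk1 >> /etc/mfs/mfshdd.cfg
>>             mfschunkserver reload            # pick up the new path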
>>
>>         I have a habit of collecting castoffs and re-purposing
>>         hardware, so very little of my gear is similar in purchase
>>         timing or type - something that MFS deals with quite
>>         elegantly, as it is mostly independent of operating
>>         systems/hardware - I am even mixing 32-bit and 64-bit
>>         operating systems on arm, arm64 and intel, and while I
>>         currently use Gentoo/openrc there is no reason I can't use a
>>         different Linux on each host :)
>>
>>         I would think the response times for 8 TB and above are
>>         because they are mostly SMR, not the data density per se?
>>         Can you confirm, as I don't think it's a problem with CMR
>>         drives?  WD and Seagate have been caught out sneaking SMR
>>         drives into NAS product lines (where resilvering and SMR are
>>         a real problem) and others, and have suffered some consumer
>>         backlash because of it - both companies now publish lists of
>>         which drives and types are SMR or CMR.
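>>
>>         There is no reliable in-band flag for drive-managed SMR, so
>>         the practical check is the model string against the vendor
>>         CMR/SMR lists - a minimal sketch (note that drive-managed SMR
>>         still reports "none" in the ZONED column, which only catches
>>         host-aware/host-managed zoned drives):
>>
>>             smartctl -i /dev/sdX | grep -i model
>>             lsblk -d -o NAME,MODEL,ZONED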
>>
>>         One point I would highlight is that USB-connected disks can
>>         be a problem (reliability of the connection - throughput is
>>         fine), particularly if UAS and no-name adaptors are involved.
>>         Unfortunately for me, all bar the Intel host with an M.2 NVMe
>>         drive are on either built-in USB3 or no-name USB3 adaptors,
>>         so I can speak from experience ...
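>>
>>         For what it's worth, when a no-name bridge misbehaves under
>>         UAS the usual workaround is to fall back to plain usb-storage
>>         for that one adaptor - a minimal sketch (the VID:PID pair
>>         comes from lsusb and will differ per adaptor; the one below
>>         is just an example):
>>
>>             lsusb                        # note the adaptor's ID, e.g. 174c:55aa
>>             # 'u' = ignore UAS for this device; reload the module or reboot,
>>             # or pass usb-storage.quirks=174c:55aa:u on the kernel command line
>>             echo 'options usb-storage quirks=174c:55aa:u' \
>>                 > /etc/modprobe.d/disable-uas.conf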
>>
>>         BillK
>>
>>
>>         On 25/4/21 6:38 pm, plug_list at holoarc.net wrote:
>>>
>>>         My 2 cents (and apologies if this has been covered already):
>>>
>>>         I went the other route of building a NAS and having storage
>>>         off the NAS instead of a vSAN or distributed file system
>>>         approach. My experience/thoughts with consumer grade
>>>         hardware on my NAS (using mdadm and ZFS) - a quick check for
>>>         points 1 and 2 is sketched below the list:
>>>
>>>          1. Ideally run drives of the same speed etc. in the same
>>>             RAID group (not sure if MooseFS counters this by using
>>>             RAM as cache?). I have been caught out thinking I was
>>>             getting a 7.2K RPM drive only to find the manufacturer
>>>             had changed drive speeds between different sizes in the
>>>             same series (WD Red, I think). Personally I dislike
>>>             5.9K RPM drives... unless they're in a big enterprise
>>>             SAN/S3 solution.
>>>          2. Use different brands and batch numbers - the last thing
>>>             you want is a bad batch where they all start failing
>>>             around the same time; e.g. buying 5 x WD Blues from the
>>>             same store at the same time is a bad idea (and yes, it's
>>>             a pain).
>>>          3. 8 TB and above drives have long response latency (due to
>>>             density), so be careful what configuration you use and
>>>             make sure it can handle long rebuild times.
>>>          4. I have had drives die from HGST, Seagate and WD over the
>>>             years... the HGST drives died the quickest and were a
>>>             pain to replace under warranty, from memory.
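>>>
>>>         As the quick check for points 1 and 2, a minimal sketch that
>>>         prints model, serial and spindle speed for each drive so you
>>>         can spot a slower-spinning unit or a run of consecutive
>>>         serials from the same batch (USB-attached drives may need
>>>         smartctl's -d sat option):
>>>
>>>             for d in /dev/sd?; do
>>>                 echo "== $d"
>>>                 smartctl -i "$d" | grep -E 'Model|Serial|Rotation'
>>>             done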
>>>
>>>         -Shaun
>>>
>>>         On 25/04/2021 3:26 pm, Benjamin wrote:
>>>>         It's not worth getting anything other than the cheapest
>>>>         non-SMR drives IMO for nearly any use case... you can get
>>>>         performance by aggregating enough drives anyway.
>>>>
>>>>         On Sun, Apr 25, 2021 at 3:25 PM Benjamin <zorlin at gmail.com> wrote:
>>>>
>>>>             LizardFS is a bag of hurt with dead development.
>>>>             Proceed with hella caution if you go that route. I hope
>>>>             it changes and becomes worth pursuing though.
>>>>
>>>>             MFS Pro is justifiable at around 50 TiB and up; below
>>>>             that it's not really worth it.
>>>>
>>>>             On Sun, Apr 25, 2021 at 3:22 PM William Kenworthy
>>>>             <billk at iinet.net.au> wrote:
>>>>
>>>>                 Thanks Ben and Paul - this backs up my
>>>>                 readings/experience.
>>>>
>>>>                 I will shortly need a new archive drive because I
>>>>                 have less than 80 GB left on the 2 TB WD Green I
>>>>                 have been using for a few years.  As performance
>>>>                 isn't an issue I will likely go with a Seagate
>>>>                 Barracuda this time.  I am still debating shingled
>>>>                 or not, because this use is more cost sensitive
>>>>                 than performance - it is writing new data across a
>>>>                 network, so low priority, busy but not excessively
>>>>                 so when in use - and I am happy to allow time for
>>>>                 the shingled rewrites to complete as long as they
>>>>                 don't impact the time to actually back up the data
>>>>                 too much.
>>>>
>>>>                 For MooseFS it is more difficult to quantify what's
>>>>                 needed - currently:
>>>>
>>>>                 8 hosts (8 HDDs, 1x M.2 SSD, 6x arm32, 1x arm64 and
>>>>                 1x Intel - all Odroid, running Gentoo)
>>>>
>>>>                 ~21 TB of space, 3/4 in use.  I could delete some,
>>>>                 as there is duplicate data stored, so if I lose a
>>>>                 drive I can reclaim space easily as well as
>>>>                 decrease the goal in some places.
>>>>
>>>>                 As well, I am using storage classes.  High-use data
>>>>                 mostly has 1 chunk on the Intel/SSD host for
>>>>                 performance and the others on HDDs.  I have storage
>>>>                 classes ranging from 1 to 4 copies, with 2, 3 and 4
>>>>                 in common use ... for example, for things like VMs
>>>>                 where there are hot spots with temp file creation I
>>>>                 have 2 copies (2SH), whereas backups and user data
>>>>                 have 4 copies, 4HHHH or 4SHHH, depending on
>>>>                 priority (e.g. /home).  Currently I have one WD
>>>>                 Green drive I would already toss if it were in a
>>>>                 commercial system, and two Seagate NAS drives I am
>>>>                 not totally happy with.
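>>>>
>>>>                 For reference, classes like the above map onto the
>>>>                 mfsscadmin / mfssetsclass tools roughly like this -
>>>>                 a minimal sketch assuming chunkservers labelled S
>>>>                 (SSD) and H (HDD); check mfsscadmin(1) for the
>>>>                 exact label-expression syntax:
>>>>
>>>>                     # run against a mounted MFS, e.g. /mnt/mfs
>>>>                     mfsscadmin /mnt/mfs create S,H 2SH        # 2 copies: 1 SSD + 1 HDD
>>>>                     mfsscadmin /mnt/mfs create S,H,H,H 4SHHH  # 4 copies: 1 SSD + 3 HDD
>>>>                     mfssetsclass -r 4SHHH /mnt/mfs/home       # apply recursively
>>>>                     mfsgetsclass /mnt/mfs/home                # verify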
>>>>
>>>>                 For these, definitely non-shingled (CMR) 7200 rpm
>>>>                 around 4 TB seems ideal - but is a NAS-optimised
>>>>                 drive useful or a waste for MooseFS?  Vibration
>>>>                 from nearby drives is the only thing I can think
>>>>                 of.  Some are bound together (5x Odroid HC2) and
>>>>                 some are in pairs in relatively heavy PC-case bay
>>>>                 mounts (removed/pinched from my son's ongoing
>>>>                 gaming PC build :) placed on a desk.  I am starting
>>>>                 to lean towards the WD Blacks for this, but the
>>>>                 HGST lines WD are starting to integrate are
>>>>                 interesting, though more expensive ...
>>>>
>>>>                 I would love to have MFS Pro but can't justify it,
>>>>                 as super uptime isn't necessary, EC isn't really
>>>>                 attractive at my scale, and multiple masters aren't
>>>>                 essential as I have plenty of alternative systems I
>>>>                 could bring in quickly ... though I am watching
>>>>                 LizardFS and just might jump to it to get the
>>>>                 multiple masters that are in its free tier.
>>>>
>>>>                 BillK
>>>>
>>>>
>>>>                 On 25/4/21 1:19 pm, Benjamin wrote:
>>>>>                 +1 to all of it, cheers Paul.
>>>>>
>>>>>                 I think it's worth going for the cheapest
>>>>>                 externals you can get, shucking them, then using
>>>>>                 MooseFS since you're already planning to.
>>>>>
>>>>>                 I'd use copies=3, and if you're storing more than
>>>>>                 50 TB talk to me about MFS Pro.
>>>>>
>>>>>                 On Sun, 25 Apr 2021, 13:03 Paul Del,
>>>>>                 <p at delfante.it> wrote:
>>>>>
>>>>>                     Hello Bill
>>>>>
>>>>>                     My 2 cents worth
>>>>>
>>>>>                     I am sure you know the common things that can
>>>>>                     affect your hard drives' life and performance:
>>>>>                     Temperature
>>>>>                     Humidity
>>>>>                     Vibration
>>>>>                     Heavy writes
>>>>>                     Heavy logging
>>>>>                     Clean/reliable power
>>>>>                     Data throughput
>>>>>
>>>>>                     The spinning-rust hard drives I have seen the
>>>>>                     most failures with (I recommend avoiding these):
>>>>>                     WD Green
>>>>>                     WD Blue
>>>>>                     Hitachi Deskstar
>>>>>                     (not the server drives)
>>>>>
>>>>>                     The spinning-rust hard drives I recommend the most:
>>>>>                     WD Black, 7200 rpm or better
>>>>>                     Seagate, 7200 rpm or better
>>>>>                     (not Red, Blue, Green, Purple)
>>>>>
>>>>>                     If you are doing the MooseFS distributed setup
>>>>>                     you could always choose two different brands/types.
>>>>>
>>>>>                     If you want to know more specifics about hard
>>>>>                     drive failures, check out these stats from
>>>>>                     Backblaze (which Benjamin sent around ages
>>>>>                     ago) - I am sure there's more around:
>>>>>                     https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/
>>>>>                     https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/
>>>>>
>>>>>                     Thanks Paul
>>>>>
>>>>>
>>>>>                     On Sat, 24 Apr 2021, 09:02 William Kenworthy,
>>>>>                     <billk at iinet.net.au> wrote:
>>>>>
>>>>>                     > Just musing on what changes I could make to
>>>>>                     > streamline my systems:
>>>>>                     >
>>>>>                     > After a recent stray "rm -rf" with a space
>>>>>                     > in it I ended up removing both most of my
>>>>>                     > active data files, VMs etc. ... and the
>>>>>                     > online backups - ouch!
>>>>>                     >
>>>>>                     > I have restored from offline backups and
>>>>>                     > have noticed a ~10-year-old WD Green drive
>>>>>                     > showing a few early symptoms of failing
>>>>>                     > (SMART).
>>>>>                     >
>>>>>                     > With the plethora of colours now available
>>>>>                     > (!), what drive is best for:
>>>>>                     >
>>>>>                     >     1. a moosefs chunkserver (stores files
>>>>>                     > for VMs, data including the mail server's
>>>>>                     > user files, home directories and of course
>>>>>                     > the online borgbackup archives - the disks
>>>>>                     > are basically hammered all the time.)
>>>>>                     >
>>>>>                     >     2. offline backups (~2 TB of data, using
>>>>>                     > borgbackup to back up the online borgbackup
>>>>>                     > repo, used twice a week for a few minutes at
>>>>>                     > a time.)
>>>>>                     >
>>>>>                     > My longest serving drives are 2 TB WD
>>>>>                     > Greens, which until now have just kept
>>>>>                     > ticking along.  The failing drive is a WD
>>>>>                     > Green - I have run badblocks on it overnight
>>>>>                     > with no errors so far, so it might have
>>>>>                     > internally remapped the failed sectors OK -
>>>>>                     > I am using XFS, which does not have badblock
>>>>>                     > support.  Most drives spent previous years
>>>>>                     > in btrfs RAID 10s or Ceph, so they have had
>>>>>                     > a hard life!
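>>>>>                     >
>>>>>                     > For the record, the checks on the suspect
>>>>>                     > Green in command form - a minimal sketch,
>>>>>                     > with /dev/sdX as a placeholder and badblocks
>>>>>                     > in its default non-destructive read-only
>>>>>                     > mode:
>>>>>                     >
>>>>>                     >     smartctl -t long /dev/sdX   # offline self-test
>>>>>                     >     smartctl -a /dev/sdX        # reallocated/pending sectors
>>>>>                     >     badblocks -sv /dev/sdX      # read-only surface scan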
>>>>>                     >
>>>>>                     > Newer WD Reds and a Red Pro have failed over
>>>>>                     > the years, but I still have two in the mix
>>>>>                     > (6 TB and 2 TB).
>>>>>                     >
>>>>>                     > Some Seagate IronWolfs show SMART errors
>>>>>                     > that Backblaze correlates with drive failure
>>>>>                     > and throw an occasional USB interface error,
>>>>>                     > but otherwise seem OK.
>>>>>                     >
>>>>>                     > There are shingled and non-shingled drives,
>>>>>                     > surveillance and NAS flavours, etc. - but
>>>>>                     > what have people had success with? Or should
>>>>>                     > I just choose my favourite colour and run
>>>>>                     > with it?
>>>>>                     >
>>>>>                     > Thoughts?
>>>>>                     >
>>>>>                     > BillK

