[plug] musing on HDD types (William Kenworthy)

Benjamin zorlin at gmail.com
Sun Apr 25 15:25:33 AWST 2021


LizardFS is a bag of hurt with dead development. Proceed with hella caution
if you go that route. I hope it changes and becomes worth pursuing though.

MFSpro is justifiable at around 50TiB and up; below that it's not really
worth it.

On Sun, Apr 25, 2021 at 3:22 PM William Kenworthy <billk at iinet.net.au>
wrote:

> Thanks Ben and Paul - this backs up my readings/experience.
>
> I will shortly need a new archive drive, because I have less than 80GB left
> on the 2TB WD Green I have been using for a few years.  As performance
> isn't an issue I will likely go with a Seagate Barracuda this time.  I am
> still debating shingled or not, because this use is more cost-sensitive
> than write performance across a network - the drive is low priority and
> busy, but not excessively so, when in use - and I am happy to allow time
> for the shingled drive's resilvering to complete as long as it doesn't
> impact the time to actually back up the data too much.
>
> MooseFS is more difficult to quantify in terms of what's needed - currently:
>
> 8 hosts (8x HDD, 1x M.2 SSD; 6x arm32, 1x arm64 and 1x Intel - all Odroid,
> running Gentoo)
>
> ~21TB of space, 3/4 in use.  I could delete some, as there is duplicate
> data stored, so if I lose a drive I can reclaim space easily, as well as
> decrease the goal in some places.
>
> As well, I am using storage classes.  High-use data mostly has one chunk
> on the Intel/SSD host for performance and the others on HDDs.  I have
> storage classes ranging from 1 to 4 copies, with 2, 3 and 4 in common use
> ... for example, for things like VMs, where there are hot spots with temp
> file creation, I have 2 copies (2SH), whereas backups and user data have 4
> copies, 4HHHH or 4SHHH, depending on priority (eg, /home).  Currently I
> have one WD Green drive I would already toss if it were in a commercial
> system, and two Seagate NAS drives I am not totally happy with.
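>
> For anyone curious, this is roughly how those classes get defined - a
> minimal sketch only, assuming MooseFS 3.0 syntax: my chunkservers carry
> the labels S (SSD) and H (HDD) via the LABELS line in mfschunkserver.cfg,
> and the paths below are just examples, not my real layout:
>
>     # "S,H" = one copy on an S server plus one on an H server;
>     # "S,3H" = one on S plus three on H; "4H" = four copies on H servers
>     mfsscadmin create -K S,H 2SH
>     mfsscadmin create -K S,3H 4SHHH
>     mfsscadmin create -K 4H 4HHHH
>     # then apply a class recursively to the relevant trees
>     mfssetsclass -r 2SH /mnt/mfs/vms
>     mfssetsclass -r 4SHHH /mnt/mfs/home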
>
> For these, definitely non-shingled (CMR), 7200rpm, around 4TB seems ideal
> - but is a NAS-optimised drive useful or a waste for MooseFS? - vibration
> from nearby drives is the only thing I can think of.  Some are bound
> together (5x Odroid HC2) and some are in pairs in relatively heavy PC case
> bay mounts (removed/pinched from my son's ongoing gaming PC build :)
> placed on a desk.  I am starting to lean towards the WD Blacks for this,
> but the HGST lines WD are starting to integrate are interesting, though
> more expensive ...
>
> I would love to have MFSpro but can't justify it: super uptime isn't
> necessary, EC isn't really attractive at my scale, and multiple masters
> aren't essential as I have plenty of alternative systems I could bring in
> quickly ... though I am watching LizardFS and just might jump to it to get
> the multiple masters that are in its free tier.
>
> BillK
>
>
> On 25/4/21 1:19 pm, Benjamin wrote:
>
> +1 to all of it, cheers Paul.
>
> I think it's worth going for the cheapest externals you can get, shucking
> them, then using MooseFS since you're already planning to.
>
> I'd use copies=3, and if you're storing more than 50TB, talk to me about
> MFSpro.
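>
> (On stock MooseFS the numeric classes 1-9 are predefined, so copies=3 is a
> one-liner - a sketch, with the mount point a placeholder:
>
>     mfssetsclass -r 3 /mnt/mfs
>
> The older mfssetgoal -r 3 does the same on earlier setups.)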
>
> On Sun, 25 Apr 2021, 13:03 Paul Del, <p at delfante.it> wrote:
>
>> Hello Bill
>>
>> My 2 cents worth
>>
>> I am sure you know the common things that can affect your hard drives'
>> life and performance:
>> Temperature
>> Humidity
>> Vibration
>> Heavy Writes
>> Heavy Logging
>> Clean/Reliable power
>> Data throughput
>>
>> The rust hard drives I have seen the most failures with are (I recommend
>> avoiding):
>> WD Green
>> WD Blue
>> Hitachi Deskstar
>> (not the server drives)
>>
>> The rust hard drives I recommend the most are:
>> WD Black 7200rpm or better
>> Seagate 7200rpm or better
>> (not Red, Blue, Green, Purple)
>>
>> If you are doing the MooseFS distributed setup, you could always choose
>> two different brands/types.
>>
>> If you want to know more specific things about hard drive failures, check
>> out these posts from Backblaze - I am sure there's more around.  One of
>> them is the one Benjamin sent around ages ago.
>> https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/
>> https://www.backblaze.com/blog/backblaze-hard-drive-stats-q2-2020/
>>
>> Thanks Paul
>>
>>
>> On Sat, 24 Apr 2021, 09:02 William Kenworthy, <billk at iinet.net.au> wrote:
>>
>> > Just musing on what changes I could make to streamline my systems:
>> >
>> > After a recent stray "rm -rf" with a space in it, I ended up removing
>> > most of my active data files, VMs etc ... and the online backups as
>> > well - ouch!
>> >
>> > I have restored from offline backups, and have noticed a ~10-year-old
>> > WD Green drive showing a few early symptoms of failing (SMART).
>> >
>> > With the plethora of colours now available (!), what drive is best for:
>> >
>> >     1. MooseFS chunkserver (stores files for VMs, data including the
>> > mail server's user files, home directories and of course the online
>> > borgbackup archives - the disks are basically hammered all the time.)
>> >
>> >     2. offline backups (~2TB of data, using borgbackup to back up the
>> > online borgbackup repo, used twice a week for a few minutes at a time -
>> > a sketch of this follows below.)
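>> >
>> > A minimal sketch of that offline pass with borg 1.x - the repo paths
>> > here are placeholders rather than my real layout:
>> >
>> >     # one-time: initialise the offline repo
>> >     borg init --encryption=repokey /mnt/offline/borg
>> >     # twice a week: archive the online borg repo into the offline one
>> >     borg create --stats /mnt/offline/borg::offline-{now:%Y-%m-%d} \
>> >         /srv/mfs/borg-online
>> >     # keep a couple of months of weekly archives
>> >     borg prune --keep-weekly 8 /mnt/offline/borg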
>> >
>> > My longest-serving drives are 2TB WD Greens, which until now have just
>> > kept ticking along.  The failing drive is a WD Green - I have run
>> > badblocks on it overnight with no errors so far, so it might have
>> > internally remapped the failed sectors OK - I am using XFS, which does
>> > not have badblock support.  Most drives spent previous years in btrfs
>> > RAID 10s or Ceph, so they have had a hard life!
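>> >
>> > For reference, the sort of check I mean - a sketch only, with /dev/sdX
>> > a placeholder for the real device:
>> >
>> >     # read-only surface scan (slow - many hours on a 2TB drive)
>> >     badblocks -sv /dev/sdX
>> >     # then see whether the drive has quietly remapped sectors
>> >     smartctl -A /dev/sdX | grep -i -e reallocated -e pending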
>> >
>> > Newer WD Reds and a Red Pro have failed over the years, but I still
>> > have two in the mix (6TB and 2TB).
>> >
>> > Some Seagate IronWolfs show SMART errors that Backblaze correlates with
>> > drive failure, and throw an occasional USB interface error, but
>> > otherwise seem OK.
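>> >
>> > The Backblaze posts single out SMART attributes 5, 187, 188, 197 and
>> > 198 as their failure predictors, so those are the ones worth watching -
>> > e.g. (/dev/sdX again a placeholder):
>> >
>> >     smartctl -A /dev/sdX | \
>> >         awk '$1==5 || $1==187 || $1==188 || $1==197 || $1==198'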
>> >
>> > There are shingled and non-shingled drives, surveillance and NAS
>> > flavours, etc. - but what have people had success with? - or should I
>> > just choose my favourite colour and run with it?
>> >
>> > Thoughts?
>> >
>> > BillK
>> _______________________________________________
>> PLUG discussion list: plug at plug.org.au
>> http://lists.plug.org.au/mailman/listinfo/plug
>> Committee e-mail: committee at plug.org.au
>> PLUG Membership: http://www.plug.org.au/membership
>
>
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://lists.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.org.au
> PLUG Membership: http://www.plug.org.au/membership