<div dir="ltr">Hi Bill,<div><br></div><div>In my case I actually compiled MooseFS from source on my ODROIDs... I'm really hoping that MooseFS can be convinced to create ARM packages like they do for the Raspberry Pi already...</div><div><br></div><div>I found it handy to use "checkinstall" to turn the compiled source into a basic usable package, which I could then distribute to the other nodes.</div><div><br></div><div>I haven't run on Gentoo and haven't experienced the same issues, so unfortunately don't have much to suggest.</div><div><br></div><div>~ B</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Sep 26, 2019 at 10:20 PM Bill Kenworthy <<a href="mailto:billk@iinet.net.au">billk@iinet.net.au</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Ben,<br>
I haven't run it on Gentoo and haven't hit the same issues, so
unfortunately I don't have much to suggest there.

~ B

On Thu, Sep 26, 2019 at 10:20 PM Bill Kenworthy <billk@iinet.net.au> wrote:

Hi Ben,
after listening to your talk I decided to give MooseFS, rather than
LizardFS, a try.

I started with a Gentoo aarch64 Pi 3B with a 1 TB WD Green and expanded
it to two ODROID-HC2s and two existing x86_64 machines, for a total of
1x6 TB, 2x4 TB and 7x2 TB drives (a mix of WD Greens, Reds and Seagate
IronWolfs, some with years of 24/7 on them), plus the 1 TB still on the
Pi.
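For anyone wanting to replicate this: each drive is just a mounted
filesystem that the chunkserver is told about in mfshdd.cfg, one path
per line. Roughly, with illustrative device and mount point names:

    # mount the data drive and register it with the chunkserver
    mount /dev/sda1 /mnt/mfschunks1
    echo /mnt/mfschunks1 >> /etc/mfs/mfshdd.cfg
    chown -R mfs:mfs /mnt/mfschunks1   # adjust user/group to your install
    mfschunkserver start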

After running into gotchas with the ODROID-supplied image not
recognising the MooseFS repositories, and seeing the administrative
nightmare that systemd becomes when something doesn't work out of the
box, I went back to Gentoo and had it all running sweetly in short
order.

Issues:

- VM images going read-only when the cluster is under load, but I am
  hoping that will go away once I settle on the final design and use a
  separate master.
- Stale locks (again, mostly on VM images) that took some chasing down
  and eliminating.
- XFS as the backing store has just worked, but btrfs on the 1 TB filled
  up, causing one cluster of stale locks; otherwise it seems fine. I
  have just converted it to XFS (sketched below) and will see how it
  goes.
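The btrfs-to-XFS conversion was nothing fancy, roughly the following,
with illustrative device names and paths, and assuming you have
somewhere to park the chunk data while the drive is reformatted:

    # stop the chunkserver so the chunk files are quiescent
    mfschunkserver stop

    # park the chunks, reformat, and copy them back
    rsync -a /mnt/mfschunks1/ /mnt/scratch/
    umount /mnt/mfschunks1
    mkfs.xfs -f /dev/sda1
    mount /dev/sda1 /mnt/mfschunks1
    rsync -a /mnt/scratch/ /mnt/mfschunks1/

    # the master re-registers the chunks when the chunkserver returns
    mfschunkserver start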

I think I'll get another ODROID-HC2 for the second 4 TB drive (nice
units, even if only 32-bit) and start retiring the oldest 2 TB drives.

Thanks for your interesting talk.

BillK

On 26/9/19 4:36 pm, Benjamin wrote:
> Hi everyone,
>
> The talk I gave for PLUG is having another showing, this time at
> Pawsey Supercomputing Centre. If you're interested or missed it last
> time, you can register for the event here:
>
> https://pawsey.org.au/event/pawsey-seminar-moosefs-the-elastic-nas/
>
> It's a talk on building a free, open-source, scale-out Network
> Attached Storage system using commodity components and free software.
>
> Enjoy!
> ~ Benjamin
> PLUG Chair 2019

_______________________________________________
PLUG discussion list: plug@plug.org.au
http://lists.plug.org.au/mailman/listinfo/plug
Committee e-mail: committee@plug.org.au
PLUG Membership: http://www.plug.org.au/membership