<div dir="ltr">Hi all, <div><br></div><div>I've been lurking here for a while and lately have gotten myself an Odroid HC2 (<a href="https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/">https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/</a>)</div><div>I'm primarily a Windows user (and .Net Developer) so my Linux exposure is limited (but I do love it. Still trying to hit that critical mass in my knowledge where I don't feel like a gumbie)</div><div><br></div><div>I was a Server support guy a long time ago (again, Windows) and this thread has me interested (given my HC2 purchase for tinkering) and so I looked up Ceph to see if it would be a good fit for tinkering with/learning. I had a look at the Ceph docs and the Installation page launches right into the installation but It seems to start well past the OS. So I'm assuming this can be installed on whatever linux distro you like/have? </div><div><br></div><div>Just wanted to drop a reply and thanks for the interesting thread. Hope no one minds if I post questions if I get stuck with anything. I usually find my way eventually, its a useful IT skill. Love having a job where you get paid to "try shit until it works". </div><div><br></div><div>cheers</div><div>Stephen </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jan 6, 2020 at 9:30 PM Gregory Orange <<a href="mailto:home@oranges.id.au">home@oranges.id.au</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Yep, Ceph certainly has an enterprisey scent about it. We have dozens of nodes, and we have just moved from 2x10Gb to 2x100Gb for the storage network, although we're sticking with Gb for management.<br>
<br>
Choosing the right horse for the right course is part of the wisdom required to deal with IT in a functional way, at any scale.<br>
<br>
<br>
--<br>
Gregory Orange<br>
<br>
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐<br>
On Monday, 6 January 2020 5:49 PM, Bill Kenworthy <<a href="mailto:billk@iinet.net.au" target="_blank">billk@iinet.net.au</a>> wrote:<br>
<br>
> On 6/1/20 11:41 am, Gregory Orange wrote:<br>
><br>
> > We've been using CephFS for over a year for our pilot sync system. It<br>
> > also seems to refuse to lose data, although it's hardly busy with any<br>
> > sort of production load.<br>
> > -- Gregory Orange<br>
> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐<br>
> > On Sunday, 5 January 2020 8:31 PM, Chris Hoy Poy <a href="mailto:chris@hoypoy.id.au" target="_blank">chris@hoypoy.id.au</a><br>
> > wrote:<br>
> ><br>
> > > You would end up running xfs on top of ceph anyway :-) (unless you<br>
> > > don't care about your data, then you could give cephfs a try !)<br>
> > > /Chris<br>
> > > On Sun, 5 Jan 2020, 8:29 pm Gregory Orange, <<a href="mailto:home@oranges.id.au" target="_blank">home@oranges.id.au</a><br>
> > > mailto:<a href="mailto:home@oranges.id.au" target="_blank">home@oranges.id.au</a>> wrote:<br>
> > ><br>
> > > I suppose I should mention Ceph since we're talking about<br>
> > > resilient storage systems, but it's likely out of scope here.<br>
> > > Bare minimum of three physical nodes, scales up real big. Refuses<br>
> > > to lose data, despite our valiant attempts over the past three<br>
> > > years. Mimic version is probably better suited to production<br>
> > > loads than Nautilus given our recent experiences. It's an object<br>
> > > store, so if you want file, that's at least one more layer on top.<br>
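<br>
To make the "object store" point concrete, here is a minimal sketch using the Python librados bindings (python3-rados). The pool name and conf path are assumptions, and it needs a reachable cluster with a keyring to actually run:<br>
<pre>
import rados

# Connect using the usual cluster config (path is an assumption).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("testpool")   # assumed pool name
    try:
        # Objects are just named blobs in a pool - no directories, no files.
        ioctx.write_full("greeting", b"hello from librados")
        print(ioctx.read("greeting"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
</pre>
CephFS and RBD are the extra layers that turn this flat object namespace into a filesystem or a block device.<br>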
> > ><br>
> > ><br>
> > > -- Gregory Orange<br>
> > ><br>
> > > Sent from phone<br>
> > ><br>
> > ><br>
> > ><br>
> > > -------- Original Message --------<br>
> > > On 4 Jan 2020, 13:20, Brad Campbell < <a href="mailto:brad@fnarfbargle.com" target="_blank">brad@fnarfbargle.com</a><br>
> > > <mailto:<a href="mailto:brad@fnarfbargle.com" target="_blank">brad@fnarfbargle.com</a>>> wrote:<br>
> > ><br>
> > ><br>
> > > On 4/1/20 1:01 pm, Bill Kenworthy wrote:<br>
> > ><br>
> > ><br>
> > > > Hi Brad,<br>
> > > ><br>
> > > > I have had a lot of pain from ext4 over the years and have<br>
> > > > really only started using it again seriously recently ... and I<br>
> > > > must admit, it's a lot better than it was, but I will move off<br>
> > > > it when I get time - I've been burnt by it too often.<br>
> > > ><br>
> > > > reiserfs3 was my go-to for inode problems in the past (it's<br>
> > > > still there, and I think maintained) but I moved to btrfs after<br>
> > > > the Hans Reiser saga, and while it has its ups and downs,<br>
> > > > stability under punishment that kills ext3/4, plus live scrub<br>
> > > > and snapshots, made it great.<br>
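<br>
For anyone unfamiliar with the btrfs features mentioned there, a rough sketch of a "live scrub plus read-only snapshot" routine using the standard btrfs-progs commands; the mount point and snapshot directory are made up for illustration:<br>
<pre>
import subprocess
import datetime

MOUNT = "/srv/btrfs"                  # assumed btrfs mount point
SNAPDIR = "/srv/btrfs/.snapshots"     # assumed snapshot directory

def scrub():
    # Verify data/metadata checksums while the filesystem stays online.
    subprocess.run(["btrfs", "scrub", "start", "-B", MOUNT], check=True)

def snapshot(subvol="data"):
    # Take a timestamped read-only snapshot of a subvolume.
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    subprocess.run(
        ["btrfs", "subvolume", "snapshot", "-r",
         f"{MOUNT}/{subvol}", f"{SNAPDIR}/{subvol}-{stamp}"],
        check=True)

if __name__ == "__main__":
    scrub()
    snapshot()
</pre>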
> > > ><br>
> > > > Currently I am moving to moosefs on xfs and am impressed -<br>
> > > particularly<br>
> > > > with xfs so far. Live power off, various failure tests etc.<br>
> > > and I have<br>
> > > > not lost any data.<br>
> > > ><br>
> > > > For backup I use moosefs snapshots and borgbackup as the backup<br>
> > > > software (the main repository is also on moosefs - daily, plus<br>
> > > > some data every 10 minutes - as well as an offline borgbackup on<br>
> > > > a btrfs removable drive, done once a week or so). I previously<br>
> > > > used dirvish for many years, though it had a tendency to eat<br>
> > > > ext4 file systems; it was great on reiserfs and btrfs.<br>
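<br>
A minimal sketch of that kind of borgbackup schedule (frequent archives to a main repository, plus an occasional run to a removable drive). The repository paths, source directories and retention numbers below are assumptions, not the actual setup described:<br>
<pre>
import subprocess
import datetime

MAIN_REPO = "/mnt/mfs/borg-repo"           # assumed: repository on moosefs
OFFLINE_REPO = "/mnt/removable/borg-repo"  # assumed: btrfs removable drive
SOURCES = ["/home", "/etc"]                # assumed source directories

def backup(repo):
    # Create a timestamped archive, then thin out old ones.
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
    subprocess.run(["borg", "create", "--stats",
                    f"{repo}::backup-{stamp}", *SOURCES], check=True)
    subprocess.run(["borg", "prune", "--keep-daily", "7",
                    "--keep-weekly", "4", repo], check=True)

if __name__ == "__main__":
    backup(MAIN_REPO)      # run from cron - daily, or more often for hot data
    # backup(OFFLINE_REPO) # run manually when the removable drive is mounted
</pre>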
> > > ><br>
> > > > Hope this helps with ideas.<br>
> > ><br>
> > ><br>
> > > G'day Bill,<br>
> > ><br>
> > ><br>
> > > It does. Thanks. Interesting how people's experiences differ.<br>
> > > I've always used ext[234], abused them severely and never lost a<br>
> > > byte.<br>
> > ><br>
> > ><br>
> > ><br>
> > > My only foray into an alternative filesystem was helping a mate<br>
> > > with a large btrfs layout, but after it "ran out of space" and<br>
> > > ate about 13T of his data, and the response from the developers<br>
> > > was "yeah, it can do that", we never looked at it again. A bit<br>
> > > like bcache, it always seemed to be "almost there as long as you<br>
> > > only use it in certain circumstances that never expose the corner<br>
> > > cases".<br>
> > ><br>
> > ><br>
> > ><br>
> > > I'll have a serious play with xfs and see how it performs. I know<br>
> > > the little NAS WD Mybooks I've bought over the years have all had<br>
> > > xfs as their main storage pool, but I've always converted them to<br>
> > > ext[234].<br>
> > ><br>
> > ><br>
> > ><br>
> > > I'll add moosefs and borgbackup to my long list of "must take<br>
> > > a look at<br>
> > > that one day".<br>
> > ><br>
> > ><br>
> > ><br>
> > > Regards,<br>
> > > Brad<br>
> > > --<br>
> > > An expert is a person who has found out by his own painful<br>
> > > experience all the mistakes that one can make in a very<br>
> > > narrow field. - Niels Bohr<br>
> > ><br>
> > ><br>
> > ><br>
> ><br>
><br>
> I did try ceph (admittedly a few years ago) and gave it up as a bad joke<br>
> after losing the entire system numerous times as soon as I tried to<br>
> stress it. Check Google, it's still happening!<br>
><br>
> I did have to go down the path of getting more RAM/split networks etc.<br>
> with moosefs to get the performance I wanted, but it is stable. Ceph<br>
> requires this up front as a minimum (despite what the docs were saying).<br>
><br>
> Ceph does have a better design in that chunk placement works on a<br>
> formula (and VM images can use RBD) while moosefs keeps chunk locations<br>
> in memory (proportional to the number of files stored - it turns out to<br>
> be a lot of memory!). This does not scale as well, but with ceph that<br>
> information is redundant - it can be recomputed if the system crashes<br>
> and memory isn't recoverable.<br>
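<br>
A toy illustration of that design difference - placement by formula (the ceph/CRUSH idea: nothing stored per object, so it can be recomputed) versus placement by lookup table (the moosefs idea: master memory grows with the number of chunks). Server names and replica count are made up, and real CRUSH is far more sophisticated than a plain hash:<br>
<pre>
import hashlib

SERVERS = ["cs1", "cs2", "cs3", "cs4"]   # hypothetical chunkservers
REPLICAS = 2

def formula_placement(obj_name):
    # Derive locations from the name alone - recomputable at any time.
    h = int(hashlib.sha256(obj_name.encode()).hexdigest(), 16)
    start = h % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(REPLICAS)]

# Table-based placement: every chunk needs an entry held in RAM.
chunk_table = {}

def table_placement(obj_name):
    if obj_name not in chunk_table:
        chunk_table[obj_name] = formula_placement(obj_name)  # decided once, then remembered
    return chunk_table[obj_name]

print(formula_placement("vm-image-0042"))
print(table_placement("vm-image-0042"))
print(len(chunk_table), "placement entries held in memory")
</pre>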
><br>
> BillK<br>
><br>
> The problem is resources - moosefs can work if lightly loaded, but ceph<br>
> requires a full up-front investment of at least two high-speed networks<br>
> and reasonably powerful servers.<br>
><br>
<br>
<br>
_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer" target="_blank">http://www.plug.org.au/membership</a></blockquote></div>