[plug] xfs errors?

Brad Campbell brad at fnarfbargle.com
Tue Jul 19 18:36:45 AWST 2022


On 19/7/22 17:37, Dean Bergin wrote:
> Hello Brad,
> 
> What an interesting journey you've shared with us. By golly, I've never had this sort of 'fun'!
> 
> Out of curiosity, what is your requirement that made you decide to use xfs over ext[234]?
> 
> I've used xfs in the past when managing storage in JBOD scenarios - since those drives didn't change in size (other than the occasional upgrade/increase in capacity) and were all the same or double capacity, they were easier to manage to a degree - but I switched to ext4 when I built my software RAID + LVM2 setup, as I found that xfs doesn't seem to support shrinking.

G'day Dean,

I detailed that particular journey back in 2020 after I spent literally weeks benchmarking one against the other.

Thread starts here : http://lists.plug.org.au/pipermail/plug/2020-January/084245.html

It's a pathological scenario really, but nonetheless a reality of the system I put together.

> 
> I think with a good backup strategy, filesystem is somewhat less important, but I suppose it depends on the application or your specific requirements.
> 

This *is* the backup strategy. I have some 850 days of versioned backups of every machine on that drive. Have only had to use it a couple of times but those times have been invaluable : "I remember updating that document in Feb and must have accidentally deleted it some time in March. It's September, do you think you might be able to find it?". Bloody marvelous!

When the drive fills up (about every 3 years), I write the start and end dates on it and pop it in the safe and bring up a new one. Not because I have a requirement to, but if in 10 years I want something that is on it I *might* get lucky and find it spins up. If it's critical data, it'll be on that drive and the other 3 drives I'll fill up between now and then and my current machine anyway.

rsync backups with hardlinks are bloody awesome. But lots of them on one drive creates a shit-ton of inodes. Most of the current files have ~800 hard links. That requires a filesystem which manages metadata well, and xfs does it much faster than ext4. On the other hand, as I've just found out ext4 is a case of "slow and steady wins the race". xfs didn't lose any data, but its fsck handles unclean shutdowns a bit less elegantly than ext4.

Even though the old drive has 600G free, I've worked up a new drive today with an xfs v5 filesystem with all new integrity options enabled and I'll probably shelve the old one once the rsync -c is complete.
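The post doesn't list the exact mkfs flags used, but for a v5 filesystem with the integrity features enabled it would look something like the following sketch (the device name is a placeholder; crc and finobt are already defaults on current xfsprogs, rmapbt is the notable opt-in):

```shell
# Hypothetical sketch - exact options used on the new drive aren't
# stated in the post. All metadata options live under -m.
#   crc=1     metadata checksums (the v5 format; default for years)
#   finobt=1  free inode btree (faster inode allocation; default)
#   rmapbt=1  reverse-mapping btree (extra integrity/repair info; opt-in)
mkfs.xfs -m crc=1,finobt=1,rmapbt=1 /dev/sdX1

# With a recent xfsprogs, confirm what actually got enabled:
xfs_info /dev/sdX1
```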

This was an interesting lesson for me, and I'll be putting my backup system's Raspberry Pi 4 and USB-attached disks on the UPS circuit.

I learned another interesting lesson. There are instances (which I'll be trying to reproduce to verify) where Debian updates/upgrades might change the contents of a file but not the size or mtime and therefore rsync without the -c option misses it. I've spotted a few with my rsync -c run and when examining the difference it's certainly "intended" rather than corruption. That's an investigation for the "when I have time" shelf.

Bad day when you don't learn something.

Regards,
Brad
