[plug] xfs errors?

Brad Campbell brad at fnarfbargle.com
Mon Jul 18 09:32:15 AWST 2022

On 16/7/22 20:20, Chris Hoy Poy wrote:
> Yeah, XFS will do things like try to put an entire tree for the disk into
> ram. It's sometimes impossible to run xfs_repair without a bigger disk for
> swap overflow. (Been there, fsck'd that).

So we're over 2.5 days into the xfs_repair now. It probably didn't help that the circuit the drive was plugged
into popped the RCD yesterday morning, so the repair had to start over. Having said that, it pretty rapidly picked
up where it left off.

The drive it is swapping onto was almost new. Had less than 5G read/written at the start of the process.

SMART/Health Information (NVMe Log 0x02)
Data Units Read:                    5,194,264 [2.65 TB]
Data Units Written:                 4,358,205 [2.23 TB]
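As an aside, the bracketed TB figures come straight from the raw counters: the NVMe spec defines one
"Data Unit" as 1000 512-byte sectors (512,000 bytes). A quick sketch of the conversion (the constants
are from the spec; smartctl's own two-digit rounding may differ slightly in the last digit):

```python
# Convert NVMe SMART "Data Units" to decimal terabytes.
# Per the NVMe spec, one data unit = 1000 * 512 bytes = 512,000 bytes.
BYTES_PER_DATA_UNIT = 512 * 1000

def data_units_to_tb(units: int) -> float:
    """Return decimal terabytes for an NVMe data-unit count."""
    return units * BYTES_PER_DATA_UNIT / 1e12

# The counters from the log above:
print(f"Read:    {data_units_to_tb(5_194_264):.2f} TB")
print(f"Written: {data_units_to_tb(4_358_205):.2f} TB")
```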

  3374 root      20   0  109.8g  54.4g  55.1g   1000 D   4.8  87.6 121:02.64 xfs_repair

The patient is a 4TB spinning disk. In the process of cleaning up the filesystem thus far we've swapped
more than half the size of the disk.
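For anyone wanting to watch how far a long repair has eaten into swap, a minimal sketch that reads
/proc/meminfo (Linux-specific; this is just a generic monitoring helper, not something from the
thread - `free -h` or top's swap line tell you the same thing):

```python
# Minimal sketch: report swap usage from /proc/meminfo (Linux only).

def swap_usage() -> tuple[int, int]:
    """Return (swap_used_kib, swap_total_kib) parsed from /proc/meminfo."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.strip().split()[0])  # values are in kiB
    total = fields["SwapTotal"]
    used = total - fields["SwapFree"]
    return used, total

if __name__ == "__main__":
    used, total = swap_usage()
    print(f"swap: {used / 2**20:.1f} GiB used of {total / 2**20:.1f} GiB")
```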

I think the speed limitation is the disk under repair. iostat indicates it's pretty much pegged under
the random I/O usage pattern xfs_repair is using.

> If it's not making sounds, and you physically swap the drives - deep dive
> into your journalling options, but it's probably just truncated a bunch of
> small files after a few bad unmounts / unsynced disconnects.
> It *will* truncate files in its default config on most kernels, but it
> knows it has, so it's often not a big deal.
> I can't remember if it will try to reuse those sectors if it gets crammed
> for space; I suspect it won't until they have been verified clear by an
> xfs_repair.
> But if it's a big drive with a lot of files, it's gonna be a little while.
> The algorithm is likely single threaded and aiming for correctness rather
> than throughput efficiency.

You ain't kidding!

An expert is a person who has found out by his own painful
experience all the mistakes that one can make in a very
narrow field. - Niels Bohr