<div dir="auto">Yeah, XFS will do things like try to pull an entire tree for the disk into RAM. It's sometimes impossible to run xfs_repair without a bigger disk for swap overflow. (Been there, fsck'd that).<div dir="auto"><br></div><div dir="auto">If it's not making sounds, and you've physically swapped the drives - take a deep dive into your journalling options, but it's probably just truncated a bunch of small files after a few bad unmounts / unsynced disconnects.</div><div dir="auto"><br></div><div dir="auto">It *will* truncate files in its default config on most kernels, but it knows it has, so it's often not a big deal.</div><div dir="auto"><br></div><div dir="auto">I can't remember if it will try to reuse those sectors if it gets crammed for space; I suspect it won't until they have been verified clear by an xfs_repair.</div><div dir="auto"><br></div><div dir="auto">But if it's a big drive with a lot of files, it's going to take a little while. The algorithm is likely single-threaded, aiming for correctness rather than throughput.</div><div dir="auto"><br></div><div dir="auto">/Chris</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 16 July 2022, 8:10 pm Brad Campbell, <<a href="mailto:brad@fnarfbargle.com">brad@fnarfbargle.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 16/7/22 20:03, Chris Hoy Poy wrote:<br>
> Yeah.<br>
> <br>
> XFS dumps a lot more detail out about this stuff. I've had good luck recovering files from xfs when it hits this point. <br>
> <br>
> The bad sounds are the worrying indicator, nothing good ever comes of that.<br>
<br>
Oh, when I said "scary sounding" I was referring to the xfs_repair output. The drive is physically fine and passes a SMART long test once a week. Every drive in every system I have/maintain gets at least a full media check weekly, and all RAID arrays get a full scrub monthly. As they say, once caught.<br>
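For the curious, that regimen boils down to something like this crontab sketch - the device names, array name and times here are placeholders for illustration, not my actual setup:

```shell
# Crontab sketch: weekly SMART long self-test, monthly md RAID scrub.
# /dev/sda and md0 are placeholders - substitute your real devices.
# m h dom mon dow   command
0 2 * * 0   /usr/sbin/smartctl -t long /dev/sda
0 3 1 * *   echo check > /sys/block/md0/md/sync_action
```

The scrub line just kicks off the kernel's own md consistency check; progress shows up in /proc/mdstat.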
<br>
I'm more concerned about the errors xfs_repair is being so vocal about. I've had to bring extra swap online now, as it's eaten all 64G of physical RAM and is now >20G into swap.<br>
Thankfully I had some spare space on a reasonably quick NVMe, because it's hitting that hard.<br>
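If anyone else hits this, the other half of the workaround is capping xfs_repair's memory with -m (it takes megabytes). A tiny sketch that reads MemTotal and suggests a cap - the 90% headroom figure and /dev/sdX are illustrative only, not recommendations:

```shell
# Pick an xfs_repair memory cap a little under physical RAM so the repair
# doesn't chew straight through into swap. Sketch only: the 90% figure and
# /dev/sdX are illustrative.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # total RAM in kB
cap_mb=$(( mem_kb * 90 / 100 / 1024 ))                  # ~90% of RAM, in MB
echo "xfs_repair -m ${cap_mb} /dev/sdX"                 # -m caps memory use, in MB
```

Run the printed command against the unmounted device; xfs_repair will spill to its own temporary structures rather than ballooning past the cap.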
<br>
Now I understand why xfs_repair died with a segfault when trying to run it on a 4G Raspberry Pi.<br>
<br>
<br>
> <br>
> If you haven't been running regular scrubs, and the volume is not full - then some bad sectors have turned up on old remnants, lucky you. Sometimes it's hardware, sometimes it's software doing dumb things or disks being disconnected at the wrong time. XFS is a journalling filesystem, but often it's only journalling metadata, not full data. Is that generally enough? "It depends".<br>
> <br>
> A regular read scrub is never a bad idea, as disk sectors will die silently and you only find out when you need them.<br>
> <br>
> I've also had a few experiences where xfs drives have dropped a bunch of bad sectors, which the drive has remapped, and xfs_repair fixed the issues and the drive has been fine for years.<br>
> <br>
> Would I trust the drive with critical data? No. Redundancy is your friend. <br>
> <br>
> XFS and ext4 are the two most well-tested and heavily used filesystems on the <a href="http://kernel.org" rel="noreferrer noreferrer" target="_blank">kernel.org</a> infra, but spurious hardware problems are not unknown and are sometimes meaningless. Doesn't mean you can trust the drive :-) (ugh, drives. So untrustworthy to start with).<br>
> <br>
> /Chris<br>
> <br>
> <br>
> On Sat, 16 July 2022, 7:40 pm Brad Campbell, <<a href="mailto:brad@fnarfbargle.com" target="_blank" rel="noreferrer">brad@fnarfbargle.com</a> <mailto:<a href="mailto:brad@fnarfbargle.com" target="_blank" rel="noreferrer">brad@fnarfbargle.com</a>>> wrote:<br>
> <br>
> G'day All,<br>
> <br>
> Back in 2020 I did a bit of a shootout between ext4 and xfs for an rsync rotating backup repository.<br>
> Hedging bets I ended up with one 4TB drive with each and they've been doing nightly backups since ~Feb 2020.<br>
> <br>
> Let me be clear here : * I'm not having issues with either. *<br>
> <br>
> As in, the backups work, all files appear coherent, I've had no reports of problems from the kernel and frankly it all looks good.<br>
> <br>
> Last night I unmounted both drives and ran e2fsck and xfs_repair respectively just as a "Let's see how it's all doing".<br>
> <br>
> e2fsck ran to completion without an issue. xfs_repair has been spitting out errors constantly for about the last 18 hours.<br>
> <br>
> Fun stuff like:<br>
> entry at block 214 offset 176 in directory inode 1292331586 has illegal name "/606316974.14676_0.srv:2,a"<br>
> entry at block 214 offset 216 in directory inode 1292331586 has illegal name "/606318637.23354_0.srv:2,a"<br>
> entry at block 214 offset 256 in directory inode 1292331586 has illegal name "/606318639.23364_0.srv:2,a"<br>
> entry at block 214 offset 296 in directory inode 1292331586 has illegal name "/606318640.23369_0.srv:2,a"<br>
> entry at block 214 offset 336 in directory inode 1292331586 has illegal name "/606318646.23391_0.srv:2,a"<br>
> entry at block 214 offset 376 in directory inode 1292331586 has illegal name "/606319148.26097_0.srv:2,a"<br>
> entry at block 214 offset 416 in directory inode 1292331586 has illegal name "/606319150.26107_0.srv:2,a"<br>
> entry at block 214 offset 456 in directory inode 1292331586 has illegal name "/606319152.26158_0.srv:2,a"<br>
> entry at block 3 offset 3816 in directory inode 1292331587 has illegal name "/606350201.7742_1.srv:2,Sa"<br>
> entry at block 3 offset 3856 in directory inode 1292331587 has illegal name "/606369099.14439_1.srv:2,Sa"<br>
> imap claims a free inode 1292346502 is in use, correcting imap and clearing inode<br>
> cleared inode 1292346502<br>
> imap claims a free inode 1292439884 is in use, correcting imap and clearing inode<br>
> cleared inode 1292439884<br>
> imap claims a free inode 1292442224 is in use, correcting imap and clearing inode<br>
> cleared inode 1292442224<br>
> <br>
> It started with a continuous whine about inodes with bad magic numbers and lots of scary-sounding stuff during stage 3, and has settled down to this in stage 4.<br>
> <br>
> From the file names I'm seeing, I suspect they're deleted files and directories. As you'd imagine, 2 and a half years of rotating backups sees lots of stuff added, linked and deleted.<br>
> <br>
> I can stop xfs_repair, mount and check the filesystem contents, and it all looks good. When I unmount and re-run xfs_repair it pretty much picks up where it left off. I've had to add an extra 32G of RAM to the machine, and even then I've had to limit xfs_repair to ~58G because it was using all 64G of RAM and heading towards 20G of swap.<br>
> <br>
> I'm new at xfs. Generally when e2fsck reports anything like this, the filesystem is toast. In this case I can't find anything missing or corrupt, but xfs_repair is going bonkers.<br>
> <br>
> This is an xfs V4 filesystem, and I've upgraded to xfsprogs 5.18, but it's all the same really.<br>
> <br>
> I've made an emergency second backup of the systems this drive was backing up in case it all goes south but despite the spew of errors the actual filesystem looks perfectly fine. Has anyone seen anything similar?<br>
> <br>
> Regards,<br>
> Brad<br>
> _______________________________________________<br>
> PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank" rel="noreferrer">plug@plug.org.au</a> <mailto:<a href="mailto:plug@plug.org.au" target="_blank" rel="noreferrer">plug@plug.org.au</a>><br>
> <a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a> <<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a>><br>
> Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank" rel="noreferrer">committee@plug.org.au</a> <mailto:<a href="mailto:committee@plug.org.au" target="_blank" rel="noreferrer">committee@plug.org.au</a>><br>
> PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer noreferrer" target="_blank">http://www.plug.org.au/membership</a> <<a href="http://www.plug.org.au/membership" rel="noreferrer noreferrer" target="_blank">http://www.plug.org.au/membership</a>><br>
> <br>
<br>
</blockquote></div>