<div dir="auto">Yeah.<div dir="auto"><br></div><div dir="auto">XFS dumps a lot more detail out about this stuff. I've had good luck recovering files from xfs when it hits this point. </div><div dir="auto"><br></div><div dir="auto">The bad sounds are the worrying indicator; nothing good ever comes of that.</div><div dir="auto"><br></div><div dir="auto">Drive is likely on its way out.</div><div dir="auto"><br></div><div dir="auto">If you haven't been running regular scrubs, and the volume is not full, then some bad sectors have turned up on old remnants; lucky you. Sometimes it's hardware, sometimes it's software doing dumb things or disks being disconnected at the wrong time. XFS is a journalling system, but often it's only journalling metadata, not full data. Is that generally enough? "It depends".</div><div dir="auto"><br></div><div dir="auto">A regular read scrub is never a bad idea, as disk sectors can die silently until you need them.</div><div dir="auto"><br></div><div dir="auto">I've also had a few experiences where xfs drives have dropped a bunch of bad sectors, which the drive has remapped; xfs_repair fixed the issues and the drive has been fine for years.</div><div dir="auto"><br></div><div dir="auto">Would I trust the drive with critical data? No. Redundancy is your friend. </div><div dir="auto"><br></div><div dir="auto">XFS and ext4 are the two most well-tested and utilised file systems on the <a href="http://kernel.org">kernel.org</a> infra, but spurious hardware problems are not unknown, and sometimes the errors are meaningless. Doesn't mean you can trust the drive :-) (ugh drives.
So untrustworthy to start with).</div><div dir="auto"><br></div><div dir="auto">/Chris</div><br><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Sat, 16 July 2022, 7:40 pm Brad Campbell, <<a href="mailto:brad@fnarfbargle.com">brad@fnarfbargle.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">G'day All,<br>
<br>
Back in 2020 I did a bit of a shootout between ext4 and xfs for an rsync rotating backup repository.<br>
Hedging my bets, I ended up with one 4TB drive of each, and they've been doing nightly backups since ~Feb 2020.<br>
<br>
Let me be clear here: *I'm not having issues with either.*<br>
<br>
As in, the backups work, all files appear coherent, I've had no reports of problems from the kernel and frankly it all looks good.<br>
<br>
Last night I unmounted both drives and ran e2fsck and xfs_repair respectively just as a "Let's see how it's all doing".<br>
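(Nothing exotic about the check itself; roughly the following, with illustrative device names. Note that xfs_repair's -n flag is a no-modify dry run, a safer first pass than letting it repair straight away.)<br>

```shell
# Illustrative device names; both filesystems unmounted first.
umount /mnt/backup-ext4 /mnt/backup-xfs

e2fsck -f /dev/sdb1      # force a full check even if marked clean
xfs_repair -n /dev/sdc1  # -n: no-modify mode, report problems only
```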
<br>
e2fsck ran to completion without an issue. xfs_repair has been spitting out errors constantly for about the last 18 hours.<br>
<br>
Fun stuff like:<br>
entry at block 214 offset 176 in directory inode 1292331586 has illegal name "/606316974.14676_0.srv:2,a"<br>
entry at block 214 offset 216 in directory inode 1292331586 has illegal name "/606318637.23354_0.srv:2,a"<br>
entry at block 214 offset 256 in directory inode 1292331586 has illegal name "/606318639.23364_0.srv:2,a"<br>
entry at block 214 offset 296 in directory inode 1292331586 has illegal name "/606318640.23369_0.srv:2,a"<br>
entry at block 214 offset 336 in directory inode 1292331586 has illegal name "/606318646.23391_0.srv:2,a"<br>
entry at block 214 offset 376 in directory inode 1292331586 has illegal name "/606319148.26097_0.srv:2,a"<br>
entry at block 214 offset 416 in directory inode 1292331586 has illegal name "/606319150.26107_0.srv:2,a"<br>
entry at block 214 offset 456 in directory inode 1292331586 has illegal name "/606319152.26158_0.srv:2,a"<br>
entry at block 3 offset 3816 in directory inode 1292331587 has illegal name "/606350201.7742_1.srv:2,Sa"<br>
entry at block 3 offset 3856 in directory inode 1292331587 has illegal name "/606369099.14439_1.srv:2,Sa"<br>
imap claims a free inode 1292346502 is in use, correcting imap and clearing inode<br>
cleared inode 1292346502<br>
imap claims a free inode 1292439884 is in use, correcting imap and clearing inode<br>
cleared inode 1292439884<br>
imap claims a free inode 1292442224 is in use, correcting imap and clearing inode<br>
cleared inode 1292442224<br>
<br>
It started with a continuous whine about inodes with bad magic and lots of scary sounding stuff during stage 3, and has settled down to this in stage 4.<br>
<br>
From the file names I'm seeing, I suspect they're deleted files and directories. As you'd imagine, two and a half years of rotating backups sees lots of stuff added, linked and deleted.<br>
<br>
I can stop xfs_repair, mount and check the filesystem contents. It all looks good. When I unmount and re-run xfs_repair it pretty much picks up where it left off. I've had to add an extra 32G of ram in the machine and even then I've had to limit xfs_repair to ~58G because it was using all 64G of ram and heading towards 20G of swap.<br>
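(For reference, the cap corresponds to xfs_repair's -m option, which takes a limit in megabytes; device name below is illustrative.)<br>

```shell
# Limit xfs_repair to ~58G of RAM (the -m value is in megabytes)
# so it stays out of swap. Device name is illustrative.
xfs_repair -m 59392 /dev/sdc1
```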
<br>
I'm new at xfs. Generally when e2fsck reports anything like this the filesystem is toast. In this case I can't find anything missing or corrupt, but xfs_repair is going bonkers.<br>
<br>
This is an xfs V4 filesystem, and I've upgraded to xfsprogs 5.18, but it's all the same really.<br>
<br>
I've made an emergency second backup of the systems this drive was backing up in case it all goes south, but despite the spew of errors the actual filesystem looks perfectly fine. Has anyone seen anything similar?<br>
<br>
Regards,<br>
Brad<br>
_______________________________________________<br>
PLUG discussion list: <a href="mailto:plug@plug.org.au" target="_blank" rel="noreferrer">plug@plug.org.au</a><br>
<a href="http://lists.plug.org.au/mailman/listinfo/plug" rel="noreferrer noreferrer" target="_blank">http://lists.plug.org.au/mailman/listinfo/plug</a><br>
Committee e-mail: <a href="mailto:committee@plug.org.au" target="_blank" rel="noreferrer">committee@plug.org.au</a><br>
PLUG Membership: <a href="http://www.plug.org.au/membership" rel="noreferrer noreferrer" target="_blank">http://www.plug.org.au/membership</a><br>
</blockquote></div></div>