[plug] xfs errors?

Dean Bergin dean.bergin at gmail.com
Tue Jul 19 17:37:56 AWST 2022


Hello Brad,

What an interesting journey you've shared with us. By golly, I've never
had this sort of 'fun'.

Out of curiosity, what requirement made you decide to use xfs over
ext[234]?

I've used xfs in the past when managing storage in JBOD scenarios. Since
those drives didn't change in size (other than the occasional
upgrade/increase in capacity) and were all the same or double capacity,
they were easier to manage to a degree. I switched to ext4 when I built my
software RAID + LVM2 setup, though, as I found that xfs doesn't support
shrinking.
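
For what it's worth, this is roughly what that difference looks like in
practice. A minimal sketch only; the volume group, logical volume and
mount point names are made up, and you'd want a backup before shrinking
anything:

    # Shrinking ext4 on LVM2 (the filesystem must be unmounted to shrink).
    # --resizefs runs e2fsck and resize2fs before reducing the LV.
    umount /srv/data
    lvreduce --resizefs -L 500G /dev/vg0/data

    # xfs, by contrast, can only grow (and does so while mounted):
    xfs_growfs /srv/data

The lvreduce --resizefs route is the safer one, since shrinking the LV
before the filesystem is an easy way to lose data.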

I think with a good backup strategy the choice of filesystem matters
somewhat less, but I suppose it depends on the application or your
specific requirements.
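
On the memory front, if you ever have to run that repair again, xfs_repair
can be told to cap its memory usage rather than chewing through swap. A
rough sketch; the 4096 figure is just an example, not a recommendation:

    # -m takes an approximate memory ceiling in megabytes.
    # Lower values trade RAM for a longer repair.
    xfs_repair -m 4096 /dev/mapper/backup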

Anyway, thanks again for sharing.

On Tue, 19 July 2022, 16:56 Brad Campbell, <brad at fnarfbargle.com> wrote:

> Aaaaaaaand after some 70 hours, it's finished.
>
> A second complete run took another ~14 hours just to double-check it was
> clean.
>
> root at bkd:/mnt# /home/brad/src/xfsprogs-5.18.0/repair/xfs_repair
> /dev/mapper/backup
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>          - zero log...
>          - scan filesystem freespace and inode maps...
>          - found root inode chunk
> Phase 3 - for each AG...
>          - scan and clear agi unlinked lists...
>          - process known inodes and perform inode discovery...
>          - agno = 0
> doubling cache size to 11949296
>          - agno = 1
>          - agno = 2
>          - agno = 3
>          - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
>          - setting up duplicate extent list...
>          - check for inodes claiming duplicate blocks...
>          - agno = 0
>          - agno = 1
>          - agno = 2
>          - agno = 3
> Phase 5 - rebuild AG headers and trees...
>          - reset superblock...
> Phase 6 - check inode connectivity...
>          - resetting contents of realtime bitmap and summary inodes
>          - traversing filesystem ...
>          - traversal finished ...
>          - moving disconnected inodes to lost+found ...
> Phase 7 - verify and correct link counts...
> done
>
> The swap took a beating. This was after the first completion:
>
> Data Units Read:                    5,769,811 [2.95 TB]
> Data Units Written:                 4,747,435 [2.43 TB]
>
> This was after the second, clean run above:
>
> Data Units Read:                    8,134,981 [4.16 TB]
> Data Units Written:                 6,665,018 [3.41 TB]
>
> It would appear xfs_repair is *brutal* on memory if you have a complex
> filesystem (~15 billion inodes in 4TB).
>
> A quick check of the fs shows everything intact. Watching the link
> corrections fly past, it was always a -1 count, so I think a large delete
> got all crossed up somewhere and things went off the rails.
>
> lost+found held 78,164 directories containing 3.9 million files, of which
> only 2,671 were unique (link count = 1), and all look like unlinked
> directories. All the file contents I inspected seemed "about right"(tm).
>
> Doesn't look like anything has been lost, but I'll set the next backups to
> do an rsync -c just in case. That was a fun ride.
>
> Regards,
> Brad
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://lists.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.org.au
> PLUG Membership: http://www.plug.org.au/membership
>
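
PS: the rsync -c Brad mentions makes rsync compare full-file checksums
instead of the usual size and mtime check, so silently corrupted files get
re-copied. A dry run first will just list anything that differs (the paths
here are hypothetical):

    # -a archive, -v verbose, -c checksum comparison,
    # -n dry run, -i itemise what would change.
    rsync -avcni /srv/data/ /mnt/backup/data/

Drop the -n once you're happy with what it reports.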