[plug] Tapes

Brad Campbell brad at fnarfbargle.com
Thu Mar 30 17:36:11 AWST 2023


On 15/3/23 13:01, Brad Campbell wrote:
> I've started using a combination of tar and pbzip2 for archiving, but I'm going to have to put a cheap 2TB SSD into the box with the tape drive to stage to, as I can't feed it fast enough (seeing ~130MB/s write speeds).
> As it's an internal drive, my biggest issue has been cooling. Lots of blocking case holes with gaffa tape and re-arranging fans to get the airflow required to keep it cool during a write.

Having had a play for a week or so, I've found pigz -i compresses nearly as well as lbzip2 and is 3 times faster. Admittedly the CPU is an old AMD FX-8350, so it's not exactly a screamer.
I'm using a 996GB source directory of 28 rsync backups from various machines, ranging from ~250MB to ~590GB. I tar and compress these to the staging drive before writing them to tape.

tar through lbzip2 gets them down to 753G in 5 hours 23 minutes. That is limited by compression speed.
tar through pigz -i gets them down to 763G in 1 hour 50 minutes. That's not limited by compression, but by tar reading the data from the "super cheap and nasty budget SSD" I chose to play with.
I bought a Crucial MX500 for the staging/spool drive, and then thought I'd try my luck with the cheapest 2TB SSD I could buy as a source: an SP Data A55, and it's pretty bad.
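
For reference, the two pipelines were nothing more exotic than this (paths are made up, and I've left the thread counts at their defaults):

# stage one backup set to the spool drive as an independent-block gzip
tar -cf - -C /backups machine1 | pigz -i > /spool/machine1.tar.gz

# the bzip2 equivalent for comparison
tar -cf - -C /backups machine1 | lbzip2 > /spool/machine1.tar.bz2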

I have also tested recovery of several archives by blowing holes in them, and the archives created with pigz -i recover as well as those made with the various bzip2 incarnations.
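
The hole-blowing was nothing scientific; just dd over a staged copy at a few hand-picked offsets, along these lines (path and offset are made up):

# overwrite 1MB in the middle of the archive with zeros, without truncating it
dd if=/dev/zero of=/spool/machine1.tar.gz bs=1M seek=200 count=1 conv=notrunc

Then it's just a matter of seeing how much of the tar comes back out the other side.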

> While this is working adequately, I've ordered a proper external enclosure with the right cooling.

I ordered an "untested" Dell external LTO-5 SAS drive on eBay ($240 delivered) because that was about half the price of an empty enclosure. I figured the drive would be a bust, but I'd get a good external SAS enclosure out of it.
Turns out the "untested" drive has no issues and has done less work than the internal HP I bought first. From the logs it looks like someone bought it to read LTO-3 and LTO-4 tapes, and it's only done about 50 tape reads.

About time I had a win.

I have a nice little script that generates an index file from the staging directory, then sequentially writes each tarball to tape. A bulk restore just reads the whole tape; for a selective restore I dump the index, mt fsf to the archive I want as listed in the index, and restore from there.
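
It's not worth posting the whole thing, but the shape of it is roughly this (device name, paths and putting the index at the start of the tape are just how I'd sketch it here):

TAPE=/dev/nst0                      # non-rewinding device, so writes land sequentially

# write: the index as tape file 0, then one tape file per tarball
mt -f $TAPE rewind
ls -1 /spool/*.tar.gz > /spool/index.txt
tar -cf $TAPE -C /spool index.txt
for f in /spool/*.tar.gz; do
    dd if="$f" of=$TAPE bs=256k
done
mt -f $TAPE rewind

# selective restore: pull the index off, then skip straight to the archive wanted
tar -xf $TAPE -C /tmp index.txt
mt -f $TAPE rewind
mt -f $TAPE fsf 3                   # e.g. the third entry in the index
dd if=$TAPE of=/restore/wanted.tar.gz bs=256k

Bulk restore is just rewind and read every tape file in order until the drive reports end of data.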

Drive encryption works well and is easy to set up using stenc. There are no issues with read/write compatibility between either drive, so it looks like I'm off to the races.
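
For anyone trying the same, the stenc side boils down to something like this; the key file path is made up and I'd double-check the flags against stenc(1), as I'm writing them from memory:

# turn on drive-level encryption using the key in /root/tape.key
stenc -f /dev/nst0 -e on -k /root/tape.key

# show what the drive thinks its encryption state is
stenc -f /dev/nst0 --detail

# and switch it off again afterwards
stenc -f /dev/nst0 -e off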

Appreciate all the pointers.

Regards,
Brad
