[plug] Backups

tcleary2 at csc.com.au
Wed Jul 28 12:15:40 WST 2004


My $0.02, as a long-time dump fan:

Dumping, both level 0 and incremental, is "logically easy" because dump, although it needs a quiescent filesystem for a good copy, can also handle multi-volume save sets - you just keep feeding it media as each volume fills up.

This doesn't stop you taking a dump of an active filesystem; you're just playing a game of chance that the files you need were complete and coherent at the moment they were dumped.

Hopefully, if it's cronned, you wouldn't expect the same files to be corrupt night after night - but I guess that depends on how regular the activity on your machine is.
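
Something like these (hypothetical) crontab entries would do it - a level 0 on Sunday night, incrementals the rest of the week:

    # m  h  dom mon dow  command
    30 1 * * 0    /sbin/dump -0u -f /dev/nst0 /home
    30 1 * * 1-6  /sbin/dump -1u -f /dev/nst0 /home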

As it's not a random-access backup, filesystem limits don't apply - it saves "block by block", talking to the raw interface of the destination device.

i.e. if using a tape drive, you'd have to use the "no rewind" device as the target to get more than one dump onto a tape.
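
e.g. (assuming Linux-style device names - other systems name the no-rewind device differently):

    # /dev/nst0 is the no-rewind device: the tape stays positioned after
    # each dump, so the next one is appended rather than overwriting
    dump -0u -f /dev/nst0 /usr
    dump -0u -f /dev/nst0 /home
    mt -f /dev/nst0 rewind    # rewind once the last dump is done

With the rewinding device (/dev/st0) each dump would land at the start of the tape and clobber the previous one.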

Dump is aware of the filesystem it's working on and doesn't save any of the "surplus" (free blocks and the like), but does faithfully keep an image of the data on the filesystem, including links etc.

When restoring, you can choose to do it interactively - in which case you get a restore prompt and can move up and down your filesystem tree, adding only the files you want - but if you want a full restore, it's the reverse of a save: just keep feeding media.
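
A typical interactive session looks something like this (the paths are made up):

    restore -i -f /dev/nst0
    restore > ls                  # browse the dumped filesystem
    restore > cd tom
    restore > add important.txt   # mark files/directories for extraction
    restore > extract             # pull the marked files off the tape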

Incrementals work off the latest dump 0 date stored (or at least it used to be... ;-) in /etc/dumpdates, which is how dump computes its idea of which files to back up - it's a bit like an RDBMS checkpoint.
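
/etc/dumpdates is just plain text, one line per filesystem and level - something like this (dates made up):

    /dev/sda5 0 Sun Jul 25 01:30:00 2004
    /dev/sda5 1 Mon Jul 26 01:30:00 2004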

Same with restoring from incrementals - you feed it the last dump 0, then the 1, then the 2, and so on until you reach the last one you did, and then you have the complete image - everything since the last dump 0 - back on disk.
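
In practice that's just repeated runs of restore -r in the right order, e.g.:

    cd /mnt/newdisk           # the freshly made, empty filesystem
    restore -rf /dev/nst0     # first: the level 0
    restore -rf /dev/nst0     # then the level 1, level 2, ... in order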

And (for me, at least) the great joy of dump is that it's a "small and stupid" utility.

It does one thing well and has few complications, so when it goes off its task you get an error, which is often (but not always...) reasonably meaningful.

With tar/cpio, you have to jump through hoops - loads of switches and obscure gotchas - to "just get a clean image", because tar is really for file data, not for making sure the files you want are coherent, and cpio is for blocks, not structured data.
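
For comparison, a rough (and deliberately simplified) GNU tar/cpio equivalent of a level 0 - and this is before you start worrying about sparse files, device nodes, atimes and so on:

    # GNU tar: stay on one filesystem
    tar -cf /dev/nst0 --one-file-system /home

    # cpio: you have to generate the file list yourself
    find /home -xdev -depth -print0 | cpio -o0 -H newc -O /dev/nst0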

For me, if I want a backup of a filesystem, I'd use dump - but, as you've alluded to, not REALLY for /; mostly /usr, /home and /var.

And I have to say that when I started using journalled filesystems life became easier, 'cos I could bring the fs back online and let the backups run over into the working day without having people scream - like on VAXes.

Joy!    ;-)

HTH,

Regards,

tom.
----------------------------------------------------------------------------------------
Tom Cleary - Security Architect

CSC Perth

"In IT, acceptable solutions depend upon humans - Computers don't 
negotiate."
----------------------------------------------------------------------------------------