[plug] backups
Timothy White
weirdit at gmail.com
Fri Apr 28 21:24:40 WST 2006
On 4/22/06, Stuart Midgley <stuart.midgley at ivec.org> wrote:
> I happily rsync more than 100000 files from scratch. But then again,
> I have 320GB of memory to play with :)
>
> If you want the sort of performance you see from scp, you should use
> the -W flag. No program is going to be great at copying thousands of
> small individual files, so in that case tar should definitely be
> used. BUT, rsync should still work just fine.
>
> One major issue is the use of ssh. You will struggle to see more
> than 10MB/s via ssh; the encryption costs too much CPU time. I tend
> to use rsh where possible, or an rsync server, and happily see 40MB/s
> over our gigabit network. When I rsync across the country, I use a
> highly specialised rsync script which uses the -n flag to get a list
> of files that need syncing, then breaks the list up and gets 30+
> rsyncs going simultaneously. I tend to only see about 1MB/s per file
> stream from Perth to Canberra, so getting 30 streams going
> simultaneously gives me 30+MB/s of aggregate bandwidth.
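For the small-file case, I take it you mean something roughly like the
following (host and paths made up, just checking I follow): -W makes
rsync copy whole files instead of running the delta algorithm, and a
tar pipe avoids the per-file setup cost entirely:

  # whole-file copies, scp-like behaviour (-W = --whole-file)
  # (backuphost and the paths here are made up)
  rsync -avW /data/big/ backuphost:/backup/big/

  # thousands of small files: one tar stream instead of per-file copies
  tar cf - smalldir | ssh backuphost 'tar xf - -C /backup'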
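And the parallel trick I imagine looks vaguely like this sketch
(hypothetical module name and paths, and the exact dry-run output you
have to strip will depend on the rsync version, so this is surely
cruder than your real script):

  #!/bin/sh
  # Guess at the "split the -n output and run 30 rsyncs" approach.
  SRC=/data/
  DEST=rsync://backuphost/backup/   # rsync daemon, so no ssh encryption cost
  STREAMS=30

  # 1. Dry run (-n) to list the files that need syncing, stripping the
  #    header/summary lines and directory entries.
  rsync -avn "$SRC" "$DEST" \
      | sed -e '1d' -e '/\/$/d' -e '/^$/d' -e '/^sent /d' -e '/^total size/d' \
      > /tmp/filelist

  # 2. Break the list into one chunk per stream.
  total=`wc -l < /tmp/filelist`
  split -l $(( (total + STREAMS - 1) / STREAMS )) /tmp/filelist /tmp/chunk.

  # 3. Launch one rsync per chunk simultaneously and wait for them all.
  for chunk in /tmp/chunk.*; do
      rsync -a --files-from="$chunk" "$SRC" "$DEST" &
  done
  wait

Is that roughly the idea?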
I'd love to see how you do all this.
Thanks!
Tim
--
Linux Counter user #273956