[plug] Filesystem for large files?

indy at THE-TECH.MIT.EDU
Fri Feb 21 16:15:48 WST 2003


On Fri, Feb 21, 2003 at 12:29:52PM +0800, Brad Campbell wrote:

> I'm doing some VHS to DVD/VCD conversions at the moment, and the drive 
> I'm capturing to is currently formatted ext2.
> Once the files exceed 2GB, it appears to take progressively longer to 
> write a chunk (about 10MB). The chunk size remains constant across the 
> file write, but over the 4-hour write the chunks take longer and longer 
> to complete. Is this related to ext2? Has anyone got experience 
> handling files of 20GB or larger, and what would be the best filesystem 
> to use for this?
> I was using ext3 and figured it might be something to do with journaling, 
> so I downgraded to ext2, but the problem remains.
> 
> I realise that as I get to the inner tracks of the disk my write speed 
> will decrease, but I'm writing a 20GB file to a blank 80GB disk, so that 
> should not hurt this much.

Brad,

I can't speak directly about ext2 and ext3 (I've never used a Linux machine
for files of this size), but back in the day (when I were a whippersnapper)
this behaviour was considered expected on ext2, which is why we stuck
with SGI at the time. XFS may be worth investigating (although the Linux
port is getting mixed stability reviews), as it has always been optimized
for large files (rather than the small-file optimization common in many
*nix filesystems).
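If you want to put numbers on the slowdown before switching filesystems, a rough sketch like the one below times each fixed-size chunk write and compares the early chunks against the late ones. This is just an illustrative script I'm making up here (the chunk size and count are placeholders — you'd scale them well past the 2GB mark to reproduce your case), not anything from your capture setup:

```python
import os
import tempfile
import time

CHUNK = 1024 * 1024      # 1 MiB per write (placeholder; scale up to ~10 MB to match the capture)
COUNT = 64               # number of chunks (placeholder; raise well past 2 GB to see the effect)

buf = b"\0" * CHUNK
timings = []

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        for _ in range(COUNT):
            t0 = time.monotonic()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())   # force data to disk so we time the disk, not the page cache
            timings.append(time.monotonic() - t0)
finally:
    os.unlink(path)

# If per-chunk time trends upward as the file grows, filesystem block-mapping
# overhead (ext2 walks ever-deeper indirect block chains) is a likely suspect.
print("first 8 chunks avg: %.4fs" % (sum(timings[:8]) / 8))
print("last 8 chunks avg:  %.4fs" % (sum(timings[-8:]) / 8))
```

Running it on the same disk, once with the file staying under 2GB and once going well past it, should show whether the slope really kicks in at the large-file boundary.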

Indy.

-- 
Indranath Neogy
<indy at the-tech.mit.edu>


