[plug] [link] Linus and the fire-hose

Harry McNally harrymc at decisions-and-designs.com.au
Sun Feb 9 11:05:03 WST 2003


On Sat, 8 Feb 2003 20:21:35 +0800 Leon Brooks <leon at brooks.fdns.net> wrote:

> Comment?
> 
>     http://www.linuxjournal.com/comments.php?sid=6601&tid=5838

Hi Leon

Well, I've looked across what I've just written and I guess I've
commented :) It's too hot to be trying to garden today, which
was the grand plan over a beer last night. Here goes ..

I'm not sure that I find the analogy very clear. If the argument is that
computing needs a standard, then $A_LARGE_SOFTWARE_VENDOR would probably
argue this way as well: "We are building a common computing platform, and
if Linux would just roll over, all will be well in the world .. and our
share price" or something like that.

Ok, Linus provides good quality control as an extremely competent
arbiter of what goes into the kernel, but I'd suggest a telling example
of Linux standardisation sits a bit downstream of those kernel decisions.

What impressed me at the conf was Bdale's discussion of the number of
hardware architectures that have Debian ports, and the list is expanding.
No disrespect to other distros, but the (globally distributed) build tools
that support packages across multiple architectures are pretty amazing.
There is a standard code base which is now widely deployed. Other than
specific non-packages (sound on the S390?) you could argue that Linux has
become _hardware_independent_, which can't be said of any other "single
code base". No flamebaiting intended, but please correct me if required :)

Bdale's other observation was that porting to so many architectures
improved the robustness of the code, because each new port "discovered"
the non-portable programmer tweaks: endian issues, bashing specific
I/O addresses (DOS, anyone?) and so on.
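
To make the endian trap concrete, here is a tiny sketch of my own (just
an illustration, not anything from Bdale's talk). Casting a byte buffer
to a wider type reads the bytes in host order, so it only gives the
answer you expect on machines whose byte order matches the wire format;
assembling the value byte by byte works everywhere:

    #include <stdint.h>
    #include <stdio.h>

    /* Non-portable: reinterprets the bytes in host order, so it only
       yields the big-endian value on big-endian machines, and it can
       trap on architectures that are fussy about alignment. */
    static uint32_t read_be32_broken(const unsigned char *p)
    {
        return *(const uint32_t *)p;
    }

    /* Portable: builds the value a byte at a time, so it is endian-
       and alignment-safe on every architecture Debian builds for. */
    static uint32_t read_be32(const unsigned char *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    int main(void)
    {
        unsigned char wire[4] = { 0x12, 0x34, 0x56, 0x78 };
        printf("broken:   0x%08x\n", (unsigned)read_be32_broken(wire));
        printf("portable: 0x%08x\n", (unsigned)read_be32(wire));
        return 0;   /* on x86 the first line prints 0x78563412 */
    }

One test build on a Sparc or PowerPC flushes the broken version out
immediately.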

On one embedded, cross-compiler development project I worked on
(i860 on VME), the very cluey tool chain builder and project technical
lead set up the build to allow compiles for both the embedded target and
Sparc using (where available for each) gcc, Sun cpp, and another compiler
in order to reveal gotchas. At that stage of C++, the Sun compiler found
the most issues, but they all found unique problems at some stage.
I haven't seen this done very often, so I'd be interested to hear whether
it is commonly used and I'm just working on the wrong projects :-)
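
As a cut-down illustration of why a second compiler or target shakes out
unique problems (my own sketch, nothing from that project's code): the
signedness of plain char and the width of long are both left to the
implementation, so code that quietly assumes one answer compiles cleanly
on one toolchain and misbehaves, or at least draws warnings, on another:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Plain char is signed with gcc on x86, but unsigned on Linux
           ARM and PowerPC; a second target exposes the assumption. */
        char c = (char)0x80;
        if (c < 0)
            printf("plain char is signed here (%d)\n", c);
        else
            printf("plain char is unsigned here (%d)\n", c);

        /* Code that assumes long is 32 bits breaks quietly on Alpha
           or UltraSparc, where long is 64 bits. */
        printf("long is %d bits on this target\n",
               (int)(sizeof(long) * CHAR_BIT));
        return 0;
    }

Neither line is wrong C as such, which is exactly why it takes another
compiler or another architecture to notice.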

For Debian, the tool chain has a common source, but the "multiple
compiler" approach still applies, since each architecture has tests
within the tool chain that are unique to that architecture. So the
outcome is enforced portability and more compliant code.

Rusty observed that he was wrong: in fact, "tool chain hackers get the
babes", and the example he gave was pretty mind-boggling.

While Linux adoption was assisted by a hugely deployed common hardware
platform (the crappy old PC architecture), support for other
architectures means a common kernel and other elements can be deployed
widely. This differs from the Plug and Pray development environments for
WinCE and others, which don't share a common core code base.

And something I have been thinking about a bit: achieving a Windows
install on Computer Angels' low-end machines needs Win95 or Win98 (if
non-OEM licences can be found). This is all old, whereas Linux lets us
use the most recent stable releases, which will continue to be
supported. The caveat is XFree86 v3 for older video cards.

We have access to lighter apps and WMs (sorry, KDE and Gnome ppl), so
graphical applications can run at a respectable pace in 32MB RAM on a
100MHz P1. It is possible to apply the "make it lighter" embedded design
approach to the CA distro. We have a new version planned if anyone would
like to eval and configure a package or two, btw :)

I just remembered that one of Rusty's talks was about the successes (or
otherwise) of four kernel hackers setting up partners or friends with
desktop Linux systems. Rusty noted that this was tricky because "hey,
I just don't use this stuff." It made me realise that it takes a
different kind of experience from the command-line kernel hacker's to
set up robust but versatile systems for the novice user.

The recent thread on PLUG about desktops for n00bs still seems to
require a speedy box, so CA have to approach the distro design
differently. A single, lightweight install (even on powerful donated
machines) means common training and a simpler interface for our novice
users. The only addition thus far has been OpenOffice.org on systems
with >=128MB memory.

Just a thought. I'd be interested in comments from more knowledgeable
people, since mine is a view from the periphery. I'm also viewing this
from a hardware twiddler's perspective, since any Linux implementations
for me would be embedded products, probably on non-Intel platforms,
because I dwell in the ultra-low-power twilight zone. "Hand me that
electron. I've got a gate to toggle" :-)

Ha! I just recalled Bdale's "hair dryer" and the discussions about his
power bill, for those who were at that talk.

I'll stop now. All the best
Harry

-- 
linux.conf.au 2003		The Australian Linux Technical Conference
http://linux.conf.au/		22-25 January 2003 in Perth, Western Australia
It was huge.			Adelaide next year. I'm going.

Are you a computer angel?	http://www.ca.asn.au/


