[plug] Systemd, good or bad

Brad Campbell brad at fnarfbargle.com
Mon Sep 22 11:39:37 UTC 2014


On 22/09/14 17:59, Onno Benschop wrote:

> Or you might spin up 100 machines to deal with a chunky load, then
> kill nearly all of them, leaving 2 running, then spin up 40, kill 30,
> spin up 150, etc.
>
> This is not how we grew up running our computers, but on-demand
> computing has made virtual machines fast enough that the raw-speed
> advantage of bare metal is far outweighed by the simplicity of
> spinning up just enough machines to do the job, then killing them off
> when they're no longer needed. It's often cheaper to spin up 100 tiny
> machines than it is to keep 2 big ones running all the time. Amazon
> has pretty pictures to show this, but it's about reducing the waste
> associated with "spare capacity".
>
> This is happening on a massive scale across the globe, in virtual
> data centres everywhere. Shaving a couple of seconds off each
> machine's boot is a *big deal*.
>
> This requires a different mindset. Of course, the machine isn't often
> customised to a particular task; it's a run-of-the-mill Linux box that
> stores its state outside the machine - using some configuration magic,
> or a database, or some other context-sensitive thing. You might build
> your own machine image, with only the bits you need, but making it
> boot fast is important when you're running lots of little machines
> doing lots of little tasks.
>
> One of the ways that Linux is pissing all over Windows in this
> environment is that Linux boots in seconds, whereas Windows boots in
> minutes. (And I'm talking about servers, not desktops.)
>
> So, that's why boot time is important.
>

And I do get that, really I do. I still maintain it's a niche. I'm 
obviously wrong, but I don't think I'll ever think otherwise.


> Of course, if you implement something as far-reaching as systemd,
> then it pays to have the same tools across all your machines, be they
> dedicated bare-metal machines or virtual throw-away ones.
>

This is the bit I don't get. A VM instance is a world apart from the
machine that runs it. I just see this as a case of "if the only tool
you have is a hammer, everything looks like a nail". I don't get the
one-size-fits-all mentality of systemd.

I know I should just shut up and accept it, because short of rolling
my own distribution (or moving to BSD) I'm going to have to use it
sooner or later. I just can't get my head around what the extra
complexity actually buys me, except a boot sequence that is harder to
debug (which, oddly enough, is usually the *thing* you really need to
debug when things go south).



