[plug] Squid Stability Issues

Bernd Felsche bernie at innovative.iinet.net.au
Sun Mar 26 22:39:39 WST 2006


"Peter T Spicer-Wensley" <petersw at svshs.wa.edu.au> writes:

>I am the network manager / tech support / LT coordinator at Swan View SHS.

>I have a query about the performance of Squid 2 on my CentOS 4.2 server
>(P4 3.6 2GB RAM).

>I have been running Squid as a proxy server on Red Hat Linux (5.2) for
>about 10 years and it has been very stable, but I have just moved from an
>aging CLI (headless) config to a new server with a GUI. The performance
>of the Squid service is good but not great.

>I need advice on where to go to get advice on tweaking Squid
>performance.
>My main issue is stability at present. The Squid service keeps being
>brought down by errant clients.

Look at the log files and tell us why Squid is stopping. It may be
exhausting cache capacity or running out of memory. Hundreds of
clients tend to do that.
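
For a first look (assuming the stock CentOS log and cache locations;
adjust to suit):

    tail -50 /var/log/squid/cache.log
    grep -iE 'fatal|warning|error' /var/log/squid/cache.log | tail -20
    free -m                   # is the box swapping?
    df -h /var/spool/squid    # is the cache_dir partition full?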

>I am happy to point any interested pluggers to my old and new squid.conf
>files should they wish to offer any ideas.

The whole machine configuration is important, right down to the
partitioning and choice of filesystems, and whatever else is running
on the Squid server.

>I have about 400 Windows XP SP2 clients (not bad for a state school of
>800 kids) which are often infested with malware, some of which are
>causing the Squid service to choke.

You have the answer: Get rid of the malware.

Set up a triple firewall: one for the Internet/DMZ boundary, a second
for DMZ/LAN, and finally one for LAN/Untrusted, where the eXPletive
clients live.

For a large number of clients, I'd even suggest firewalling subnets,
which should also delay the propagation of malware. Run squid on
each of the subnets' firewalls and use the main squid proxy as a
cache peer. That way, if one gets trashed, only that subnet is
offline.
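
Roughly, each subnet proxy's squid.conf would carry something like
this (hostname and ports invented for illustration):

    # ask the central proxy rather than going direct
    cache_peer mainproxy.school.lan parent 3128 3130
    never_direct allow all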

>The CPU load on the server will go from 1 or 2% load to 100% load for a
>number of seconds as the server is bombarded with requests (from
>dataminers, I think). If this happens long enough, it breaks the Squid
>service. Can I get Squid to protect itself?

Make sure that you've not enabled any Squid cache administration
features. You can do all the administration without having the doors
open on the server.
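
In Squid 2 that means keeping the cachemgr interface shut to everyone
but the server itself; something along these lines:

    acl manager proto cache_object
    acl localhost src 127.0.0.1/32
    http_access allow manager localhost
    http_access deny manager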

You can use squidGuard to block the "call home" addresses as well as
other unwanted sites.
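
A minimal squidGuard setup for that might look like the following
(paths and list names invented; feed the domains file from whatever
Ethereal turns up):

    # /etc/squid/squidGuard.conf
    dbhome /var/lib/squidGuard/db
    dest callhome {
        domainlist callhome/domains
    }
    acl {
        default {
            pass !callhome all
        }
    }

and hook it into squid.conf (Squid 2) with:

    redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf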

What is actually hitting the CPU? A squid process or something else?
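
Something as quick as this will tell you:

    top -b -n 1 | head -15
    ps aux --sort=-%cpu | head -5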

>I am using Ethereal to see where the worst offenders are; however, the
>problem is tracking down offending clients. Since they aren't always in
>use, or even switched on, this is tricky.

Squid's access logs will record which clients are going where...
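
The client address is the third field of the native-format log, so a
one-liner will rank the noisiest machines even when they're switched
off at the moment:

    awk '{print $3}' /var/log/squid/access.log | \
        sort | uniq -c | sort -rn | head -20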

>I have tweaked some things but need a bit of advice to counter these
>dratted infested XP clients.

Use squidGuard to block access from PCs that haven't been cleaned.
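
List the dirty addresses in a src block and bounce them to a "go get
cleaned" page; a sketch (addresses and URL invented):

    src infected {
        ip 10.1.2.34 10.1.2.57
    }
    acl {
        infected {
            pass none
            redirect http://intranet.school.lan/cleanup.html
        }
        default {
            pass all
        }
    }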

>Is there a way to tell Squid to ignore repeated requests above a
>certain level?
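
Not per-request rate-limiting as such, but Squid 2's maxconn ACL will
cap the number of simultaneous connections per client IP, which reins
in the worst offenders; for example:

    acl hogs maxconn 25
    http_access deny hogs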

>I AM cleaning up clients using AdAware and this is helping, but things
>are still occasionally falling in a hole...

Clean up at the firewall level. Scan email and web pages. Either
block other downloads entirely, or allow them only through a "secure"
channel.

Block all outbound ports below 10,000 from untrusted machines. That
will also block a great deal of "call home" traffic.
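
On a Linux firewall that's a couple of iptables rules; a sketch with
an invented untrusted interface eth1 and the proxy at 10.1.0.1:

    # let the untrusted LAN reach the proxy itself...
    iptables -A FORWARD -i eth1 -p tcp -d 10.1.0.1 --dport 3128 -j ACCEPT
    # ...then drop everything else aimed at low ports
    iptables -A FORWARD -i eth1 -p tcp --dport :9999 -j DROP
    iptables -A FORWARD -i eth1 -p udp --dport :9999 -j DROP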

>The Windows Firewall is better than nothing but do others have a better
>solution on the client level?

LTSP.

Windows Firewall is actually a security risk because people believe
that it protects them. No firewall protects completely, and Windows
Firewall often needs to be disabled to run some crucial, legitimate
software. And then it stays disabled.

>(Unfortunately installing client Linux isn't an option - though it
>should be.)

As long as the clients can PXE-boot, you don't need to "install"
anything on the clients.
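
The boot server only has to hand out a kernel and an NFS root; a
dhcpd.conf stanza along these lines does it (addresses and paths are
placeholders):

    subnet 10.1.0.0 netmask 255.255.0.0 {
        range 10.1.100.1 10.1.199.254;
        next-server 10.1.0.10;                        # TFTP server
        filename "/lts/pxelinux.0";                   # PXE loader
        option root-path "10.1.0.10:/opt/ltsp/i386";  # NFS root
    }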

You will need a few chunky servers to provide the "terminal
services" for the clients, but it scales well and uses resources
more effectively. It also means that you can control what's on the
computers. Starting budget is about 200MHz of Pentium and 128MB of
RAM per user.

"Blade" servers handle that sort of thing quite nicely. 
http://www-8.ibm.com/servers/eserver/au/opteron/ls20.html
With an Opteron 280 and 16GB ram, each blade could happily run
between 10 and 50 clients.
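
Rough arithmetic against the 200MHz/128MB budget above: one dual-core
2.4GHz Opteron 280 is about 4800MHz / 200MHz ≈ 24 users of CPU (twice
that with both sockets filled), while 16GB / 128MB ≈ 128 users of RAM,
so the CPU runs out first; that squares with the 10-50 figure.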

The (cheaper) alternative is just a single mini-monster server like
Tyan's VX50
	http://www.tyan.com/products/html/vx50b4881_spec.html

It handles up to 8 processor cores, which should support between 50
and 200 clients doing fairly complex stuff.

I'd rather not have all my eggs in one basket and would use a
handful of smaller servers serving their own gigabit subnets.
You can authenticate users and serve their files from a common
machine.

If you use LTSP, then the network boot configuration can be used to
point terminals at less-busy servers if "theirs" has a high load
average. That works for "sessions". Add more servers or upgrade one
in an area that's always "too busy". Put "retired" servers into a
central pool serving terminals, to provide some temporary relief.
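
In LTSP that's a couple of lts.conf entries; a sketch (MAC and
addresses invented) that pins one lab's terminals to a spare server:

    [Default]
        SERVER = 10.1.0.10

    # terminals in the busy lab boot their sessions from the spare box
    [00:11:22:33:44:55]
        SERVER = 10.1.0.11
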
-- 
/"\ Bernd Felsche - Innovative Reckoning, Perth, Western Australia
\ /  ASCII ribbon campaign | "Laws do not persuade just because
 X   against HTML mail     |  they threaten."
/ \  and postings          | Lucius Annaeus Seneca, c. 4BC - 65AD.



