[plug] web server questions

Jon Miller jlmiller at mmtnetworks.com.au
Sat Sep 20 12:51:16 WST 2003


Thanks for that explanation. So in essence what you are saying is that if there were a device or product that could filter the HTTP traffic, it could only do so after the hit, thereby affecting the traffic at a certain point.  This being the case, have a look at www.riverhead.com.  I got a call from Israel explaining their product: it sits after the router, filtering out spoofed packets and DNS, ICMP and HTTP attacks.  They use an MVP architecture (Multi-Verification Process), which filters the incoming packets so that only the "good stuff" is allowed into the local network.  Based on this setup, wouldn't the network already have felt the hits?  I mean, if you have a 5M connection to the Internet, surely it would experience degradation from the packets travelling from the router to these systems.  The whole MVP does packet filtering, then anti-spoofing, anomaly recognition, protocol analysis and, last, rate limiting.  Can't all of this be set up on a gateway server?  BTW, they stated they use RHL in the router cases.

Jon

Jon L. Miller, MCNE, CNS
Director/Sr Systems Consultant
MMT Networks Pty Ltd
http://www.mmtnetworks.com.au

"I don't know the key to success, but the key to failure
 is trying to please everybody." -Bill Cosby



>>> devenish at guild.uwa.edu.au 12:01:21 PM 20/09/2003 >>>
In message <sf6c3f2c.091 at mmtnetworks.com.au>
on Sat, Sep 20, 2003 at 11:51:03AM +0800, Jon  Miller wrote:
> > Unlike mail servers where one can setup blacklist/blackholes/rbl list
> > is there such a service for web servers?
> 
> Absolutely. There are many ways of doing this. For example:
> 
> - packet and connection filters (e.g. ipchains, tcpwrappers)
> - web server configuration (consult documentation for your web server)
> 
> Apache has directives such as Allow and Deny. It is possible to make it
> much more sophisticated than that, though.
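The Apache Allow/Deny directives mentioned above could look something like this (a sketch only; the directory path is a placeholder taken from the log excerpts in this thread, the addresses are the offending clients from the logs, and the exact syntax depends on your Apache version):

```apache
# Apache 1.3/2.0-style access control for the document root:
# allow everyone except the two clients seen hammering the server.
<Directory "/var/www/html">
    Order Allow,Deny
    Allow from all
    Deny from 61.139.60.84 210.83.18.98
</Directory>
```

Note that, as discussed further down in this message, this only stops Apache from serving the request; the connection and request data have already been received by then.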
> JLM> You answered "absolutely" to the web server question, citing
> ipchains etc., but wouldn't such a filter have to be constantly updated
> with new IP addresses as they appear?

This is effectively what happens with SMTP blacklist/blackholes/rbl
lists (though it happens transparently). Although I don't use ipchains,
I assume it would have no problem with a script that updates the
filter list as hosts are identified. But how does this
"identification" occur? -- I'm now guessing that you are effectively
asking "does anybody keep track of hosts that attempt to find insecure
web forms to send spam". I had not realised that you meant "service" in
that sense. I don't know whether anyone attempts it.
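Such a script could be sketched roughly as follows (assumptions: the Apache error-log format quoted below in this message, the ipchains CLI, and an arbitrary threshold of 10 hits; it prints the DENY rules rather than executing them, so the list can be reviewed before being applied as root):

```shell
#!/bin/sh
# Sketch: extract client IPs that triggered "File does not exist" errors
# more than a threshold number of times, and print one ipchains DENY rule
# per offender.

deny_rules() {  # usage: deny_rules <error.log> <threshold>
    grep 'File does not exist' "$1" 2>/dev/null \
        | sed -n 's/.*\[client \([0-9.]*\)\].*/\1/p' \
        | sort | uniq -c \
        | awk -v t="$2" '$1 > t { printf "ipchains -A input -s %s -j DENY\n", $2 }'
}

deny_rules "${1:-/var/log/httpd/error.log}" "${2:-10}"
```

Run periodically from cron (and with the output actually fed to ipchains), this would automate the "constant updating" at the cost of each offender getting some free hits before crossing the threshold.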

> > I've noticed the following:
> > 
> > /var/log/httpd/error.log
> > [Sat Sep 20 10:01:12 2003] [error] [client 61.139.60.84] File does not exist: /var/www/html/tmpad/banner/itrack.asp
> > [Sat Sep 20 10:01:13 2003] [error] [client 61.139.60.84] File does not exist: /var/www/html/a.htm
> > [Sat Sep 20 10:01:22 2003] [error] [client 210.83.18.98] File does not exist: /var/www/html/search.php
> > [Sat Sep 20 10:01:35 2003] [error] [client 61.139.60.84] File does not exist: /var/www/html/Affiliate/SB/search1.js
> 
> So what? Does this bother you in some way? Could you elaborate?
> JLM> Yes. This is just a small sample; in the log files this goes on
> for hours on end, thus degrading our services. We use a 2M/2M
> connection and at times it feels like a 56kb connection.  The logs
> are flooded with these errors.
[...]
> So what I'm trying to understand is: there must be a way to get our
> bandwidth back and stop this type of traffic from consuming it, or am
> I beating my head against a brick wall?

Sorry to hear it. But at least this explains what you are talking about.

It will be very difficult for you to filter HTTP data to solve the above
problem. The reason is: in order to analyse HTTP data, you need to have
received it in the first place (and thus the resource limitation has
already occurred). You would need to filter at the IP level (e.g. by
recognising IP addresses) so that the TCP connections don't proceed. Be
warned, of course, that if the rate of connection attempts increases
then you will be basically under a denial-of-service type of "attack"
and you would need your upstream service provider to do the filtering
for you.
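For illustration, IP-level blocking with ipchains might look like the following (the addresses are taken from the log excerpts in this thread; the commands are echoed for review rather than applied, since applying them requires root):

```shell
# Block two offending clients at the IP level, so their TCP connections
# are refused before any HTTP data arrives. Pipe the output to sh (as
# root) to actually install the rules.
for ip in 61.139.60.84 210.83.18.98; do
    echo "ipchains -A input -s $ip -j DENY"
done
```

This saves the HTTP-level processing, but as noted above it cannot save the bandwidth consumed on the link itself; only upstream filtering can do that.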

> JLM> Yes, these may be legitimate entries, meaning visitors are looking
> at the client's web pages; no problem there. The client tracks the
> number of hits to their site, and those we know will count. But do
> entries such as the ones below count as well?
> 61.139.60.84 - - [20/Sep/2003:11:43:26 +0800] "GET http://www.uccinema.com/a.htm HTTP/1.0" 404 199
> 220.113.15.29 - - [20/Sep/2003:11:43:33 +0800] "GET http://a.as-eu.falkag.net/dat/dlv/aslmain.js HTTP/1.1" 404 224
> 61.139.60.84 - - [20/Sep/2003:11:43:35 +0800] "GET http://ad.trafficmp.com/tmpad/banner/itrack.asp?rv=1.2&id=2873 HTTP/1.0" 404 217
> 
> Since they are 404 codes I know they are not completing their GET
> requests, because the pages or files do not exist, but the traffic they
> consume is immense.

Looks as though you had an "open proxy", and thus your web server became
"popular" for mischievous deeds. Although you have closed the
proxy, the "attackers" are continuing their attempts (they haven't
caught on to the fact that you no longer provide the proxy service).
:(
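(If the open proxy was Apache's mod_proxy, the directive that closes it is worth double-checking; the following is the standard setting, per the Apache documentation:

```apache
# Disable forward proxying, so requests for absolute URLs (like the
# GET http://... lines in the access-log excerpts above) are refused.
ProxyRequests Off
```

With forward proxying off, those requests fail with 404/400-type responses, which matches the log entries quoted earlier in this message.)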


_______________________________________________
plug mailing list
plug at plug.linux.org.au
http://mail.plug.linux.org.au/cgi-bin/mailman/listinfo/plug

