The folks at LiteSpeed apparently included AOLserver 4.0.7 in their latest Web Server Performance Comparison benchmarks. (Unfortunately, they don’t include a date on the benchmarks, so it’s not easy to tell when they were run, but judging by their home page at the moment, it looks as though it was updated today.)
On one hand, it’s great that they included us in their list of servers for benchmarking. On the other hand, their published results show AOLserver performing poorly. That could very well be true, but based on my own anecdotal experience, the numbers they show seem out of line with what I would have expected from their benchmark server, a 2.4 GHz Xeon running Fedora Core 3 with a 2.6.10 kernel. Their small static file tests show AOLserver maxing out under 4,000 req/sec without keep-alive and under 8,000 req/sec with keep-alive. This still puts AOLserver ahead of Apache, but at less than half the throughput of LiteSpeed’s server. The echo CGI tests show AOLserver maxing out just under 300 req/sec, which is comparable to Apache, but again less than half the throughput of LiteSpeed.
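For anyone who wants to sanity-check numbers like these themselves, here’s a minimal sketch of the keep-alive vs. non-keep-alive comparison in Python. It is not LiteSpeed’s benchmark tool (they don’t say exactly what they used); it just spins up a throwaway local HTTP server and measures requests per second each way, which makes the cost of a fresh TCP connection per request visible:

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 => keep-alive by default
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *_):             # silence per-request log lines
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)   # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def bench(keep_alive, n=200):
    """Issue n GETs and return req/sec; reuse one connection iff keep_alive."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    start = time.perf_counter()
    for _ in range(n):
        if not keep_alive:
            conn.close()                    # force a fresh TCP connection
            conn = http.client.HTTPConnection("127.0.0.1", port)
        conn.request("GET", "/")
        conn.getresponse().read()           # drain the body before reusing
    conn.close()
    return n / (time.perf_counter() - start)

no_ka = bench(keep_alive=False)
ka = bench(keep_alive=True)
print(f"no keep-alive: {no_ka:.0f} req/s, keep-alive: {ka:.0f} req/s")
server.shutdown()
```

Python’s single-threaded `http.server` is obviously no stand-in for AOLserver or LiteSpeed, so the absolute numbers are meaningless; only the relative gap between the two modes is instructive.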
What’s remarkable is the wide gap in throughput between traditional user-space HTTP servers and the Red Hat Content Accelerator (née the TUX HTTP Server). Granted, kernel-space HTTP servers get some unfair advantages, but what’s interesting is that LiteSpeed Web Server 2.0 Pro (LSWS 2.0 Pro in the benchmarks) is comparable to TUX even though, according to LiteSpeed’s feature list, it “[runs] completely in the user space.” So how do they get the kind of performance they get? I’d love to see a whitepaper describing the design techniques they use, but all I could find was this item from their FAQ:
Why LiteSpeed web server is so fast?
Good architecture and heavily optimized code.
(The seemingly Engrish phraseology was theirs, not mine.)
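Since they don’t elaborate, here’s my own guess at the kind of “good architecture” involved: an event-driven, non-blocking accept loop multiplexed over `epoll`/`kqueue`, plus zero-copy `sendfile(2)` so static file bytes never pass through user space. The sketch below (Python’s `selectors` and `os.sendfile`; Linux or macOS assumed) is purely illustrative of those two techniques, not a claim about LiteSpeed’s actual design:

```python
import os
import selectors
import socket
import tempfile
import threading

FILE_SIZE = 65536
fd, path = tempfile.mkstemp()               # stand-in for a real static file
os.write(fd, b"x" * FILE_SIZE)
os.close(fd)

sel = selectors.DefaultSelector()           # epoll on Linux, kqueue on BSD/macOS
lsock = socket.socket()
lsock.bind(("127.0.0.1", 0))
lsock.listen(128)
lsock.setblocking(False)
port = lsock.getsockname()[1]
sel.register(lsock, selectors.EVENT_READ, "accept")

HEADER = b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % FILE_SIZE

def serve_one_response():
    """Run the event loop until one response has been sent, then return."""
    while True:
        for key, _ in sel.select():
            if key.data == "accept":
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, "request")
            else:
                conn = key.fileobj
                conn.recv(4096)             # read (and here, ignore) the request
                # A production loop would stay non-blocking and resume the send
                # on EVENT_WRITE readiness; for brevity this sketch blocks.
                conn.setblocking(True)
                conn.sendall(HEADER)
                with open(path, "rb") as f:
                    sent = 0
                    while sent < FILE_SIZE:  # sendfile(2): no user-space copy
                        sent += os.sendfile(conn.fileno(), f.fileno(), sent,
                                            FILE_SIZE - sent)
                sel.unregister(conn)
                conn.close()
                return

threading.Thread(target=serve_one_response, daemon=True).start()

# Plain socket client: fetch the file once and count the bytes received.
c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"GET / HTTP/1.0\r\n\r\n")
data = b""
while chunk := c.recv(65536):
    data += chunk
c.close()
os.unlink(path)
print(len(data))                             # header bytes + 65536-byte body
```

The same two ideas (readiness-driven I/O instead of a thread per connection, and letting the kernel move file data directly to the socket) are exactly what narrows the gap with an in-kernel server like TUX.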
Looking around for web server design improvement ideas similar to Matt Welsh’s Staged Event-Driven Architecture (SEDA), or the USENIX 2004 paper accept()able Strategies for Improving Web Server Performance, I stumbled across this paper by Vsevolod V. Panteleenko and Vincent W. Freeh titled Web Server Performance in a WAN Environment (PDF). Here’s the abstract:
Abstract: This work analyzes web server performance under simulated WAN conditions. The workload simulates many critical network characteristics, such as network delay, bandwidth limit for client connections, and small MTU sizes to dial-up clients. A novel aspect of this study is the examination of the internal behavior of the web server at the network protocol stack and the device driver. Many known server design optimizations for performance improvement were evaluated in this simulated environment.

We discovered that WAN network characteristics may significantly change the behavior of the web server compared to the LAN-based simulations and make many of the optimizations of the server design irrelevant in this environment. In particular, we found that the small MTU size of dial-up user connections can increase the processing overhead several times. At the same time, the network delay, connection bandwidth limit, and usage of HTTP/1.1 persistent connections do not have a significant effect on the server performance. We have found there is little benefit due to copy and checksum avoidance, optimization of request concurrency management, and connection open/close avoidance under a workload with small MTU sizes, which is common for dial-up users.
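The paper doesn’t spell out its simulation mechanism, but one simple way to reproduce the “small MTU to dial-up clients” condition in a test harness of your own is to cap the TCP maximum segment size on the client socket. A sketch (Linux assumed; `TCP_MAXSEG` and the 536-byte figure are my illustration, not necessarily what the authors did):

```python
import socket

DIALUP_MSS = 536   # 576-byte dial-up MTU minus 40 bytes of IP + TCP headers

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
# TCP_MAXSEG must be set before connect(): the value is advertised in the SYN,
# so even over loopback (huge MTU) segments are capped at dial-up size.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, DIALUP_MSS)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# After the handshake the kernel reports the effective MSS; it can come back
# slightly below 536 once TCP options (e.g. timestamps) are subtracted.
mss = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print("effective MSS:", mss)
conn.close()
cli.close()
srv.close()
```

Pointing a load generator at a server through sockets clamped like this is exactly the regime where the paper says per-packet processing overhead balloons, since the same response now costs several times as many segments.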
Granted, as more and more users move from narrowband (dial-up) to broadband, the interesting points of this paper will become less and less relevant, but it’s still interesting to see the outcome of this kind of research. Of course, with more and more Service-Oriented Architecture (SOA) designs and systems being built today, the majority of HTTP traffic over the WAN may already flow between servers on broadband links rather than to clients on narrowband links. That would be an interesting hypothesis to research and prove.
In the meantime, I guess the bar for AOLserver performance has been raised. Let’s see what we can do to reach it!