Archives for March 2005

tcl-coredumper 0.1 released

I recently blogged about Google releasing some of their code as open source projects. What I didn’t explicitly say in that previous entry was how cool it would be if there were a Tcl interface to the Google coredumper library. Well, now there is!

Thanks to Nathan Folkman, who had the same great idea and started working on what is now called tcl-coredumper. I assisted by doing the autoconf stuff, hacking on the Makefile, and writing the automated tests for it, as well as working on the code itself.

See the official announcement on the AOLSERVER-ANNOUNCE mailing list that was also crossposted to the AOLSERVER mailing list. It includes a link to download the source: tcl-coredumper-0.1-src.tar.gz.

Will the “splintering” of the Interweb result in a Tipping Point?

Daniel L. Smith writes a fantastic essay on the splintering of the Interweb: how data on the net has gravitated away from centralized services (mailing lists, Usenet, etc.) and towards personal publishing (blogs, forums, etc.), and how that has resulted in a “splintering” of information and services on the Internet, giving rise to new tools and applications that would be (and probably should have remained) unnecessary if it weren’t for this splintering.

He goes on to suggest that there will eventually be a tipping point: “we will continue to have an explosion in the splintering behavior of communication on the Net for about another two years or so, and then we’re going to hit a big Tipping Point – people will get fed up with the myriad of online avenues, and many of them will quickly die out (uh, not the people). We’re still in the ‘cool, gee-whiz’ phase, and that will get replaced with the ‘ok, we don’t want to have 50 different online identities anymore’ backlash.” An interesting prediction, which led me to leave the following comment on his blog:

Back in the day, a company set out to make the gnarly techno-elite world of “the Internet” accessible to newbies. You’re familiar with that company: it was called America Online.

What it sounds like you’re pining for is a modern day America Online to come along, integrate all the “noise” and “splintered assets” of the great Web 2.0, and produce a coherent view of it all as a product.

Can lightning really strike twice? :-)

It will be interesting to see who can come along and build a service that reins in all the craptastic junk on the Interweb and makes it usable and useful to the everyday person. I mean, I can’t seriously suggest that someone non-technical, like my parents, “go download and install an RSS aggregator, set up a blog, go do X, Y and Z, then you’ll be able to fully enjoy the Interweb.” They need to be able to take a piece of removable media, stick it in a hole on their computer, and “just have access to it all.”

Good stuff. Keep on blogging.

Indeed, it will be interesting to see who steps up to the challenge of providing such a service, whether it be AOL or someone else. Or, if there is already sufficient momentum to make such a service impossible to create, which would be sad but unsurprising.

LiteSpeed benchmarks include AOLserver 4.0.7

The folks at LiteSpeed apparently included AOLserver 4.0.7 in their latest Web Server Performance Comparison benchmarks. (Unfortunately, they don’t put a date on the benchmarks, so it’s not easy to tell when they were performed, but judging by their home page at the moment, it looks as though they were updated today.)

On one hand, it’s great that they included us in their list of servers for benchmarking. On the other hand, their published results show AOLserver performing poorly. That could very well be true, but based on anecdotal evidence from my own experience, the numbers seem out of line with what I would have expected from their benchmark server, a 2.4 GHz Xeon running Fedora Core 3 with a 2.6.10 kernel. Their small static file tests show AOLserver maxing out under 4,000 req/sec without keep-alive and under 8,000 req/sec with keep-alive. This still puts AOLserver ahead of Apache, but at less than half the throughput of LiteSpeed’s server. The echo CGI tests show AOLserver maxing out just under 300 req/sec, which is comparable to Apache, but again less than half the throughput of LiteSpeed.

What’s remarkable is the wide gap in throughput between traditional user-space HTTP servers and the Red Hat Content Accelerator (née the TUX HTTP Server). Granted, kernel-space HTTP servers get some unfair advantages, but what’s interesting is that LiteSpeed Web Server 2.0 Pro (LSWS 2.0 Pro in the benchmarks) is comparable to TUX, even though, according to LiteSpeed’s feature list, it “[runs] completely in the user space.” So, how do they get the kind of performance they get? I’d love to see a whitepaper describing the design techniques they use, but all I could find was this item from their FAQ:

Why LiteSpeed web server is so fast?

Good architecture and heavily optimized code.

(The seemingly Engrish phraseology was theirs, not mine.)

Looking around for various web server design improvement ideas similar to Matt Welsh’s Staged Event-Driven Architecture (SEDA), or the USENIX 2004 paper accept()able Strategies for Improving Web Server Performance, I stumbled across this paper by Vsevolod V. Panteleenko and Vincent W. Freeh titled Web Server Performance in a WAN Environment (PDF). Here’s the abstract:

Abstract–This work analyzes web server performance under simulated WAN conditions. The workload simulates many critical network characteristics, such as network delay, bandwidth limit for client connections, and small MTU sizes to dial-up clients. A novel aspect of this study is the examination of the internal behavior of the web server at the network protocol stack and the device driver. Many known server design optimizations for performance improvement were evaluated in this simulated environment.

We discovered that WAN network characteristics may significantly change the behavior of the web server compared to the LAN-based simulations and make many of optimizations of the server design irrelevant in this environment. Particularly, we found out that small MTU size of the dial-up user connections can increase the processing overhead several times. At the same time, the network delay, connection bandwidth limit, and usage of HTTP/1.1 persistent connections do not have a significant effect on the server performance. We have found there is little benefit due to copy and checksum avoidance, optimization of request concurrency management, and connection open/close avoidance under a workload with small MTU sizes, which is common for dial-up users.

Granted, as more and more users move from narrowband (dial-up) to broadband, the interesting points of this paper will become less and less relevant, but it’s still interesting to see the outcome of this kind of research. Of course, with more and more Service-Oriented Architecture (SOA) designs and systems being built today, the majority of HTTP traffic over the WAN may already be between servers on broadband links rather than to clients on narrowband links. That would be an interesting hypothesis to research and prove.

In the meantime, I guess the bar for AOLserver performance has been raised. Let’s see what we can do to reach it!

Photo-realistic artist Emily Zasada

While clicking around BlogExplosion I came across Emily Zasada, an artist with a real knack for painting photo-realistic works. You can see pictures of a work in progress from March 2005 (see “Untitled White Wine Painting,” days one through eight). There are more images of her work in her online gallery at Yessy.com, and she has some of her work up for auction at eBay.com. I’m normally not a big fan of art, but this stuff is pretty incredible: her ability to reproduce glass and liquids accurately really impresses me. It’s work like this that makes me wonder how long it will take before computer-generated graphics are truly photo-realistic, the way Emily’s art is.

Get the “Schnappi das kleine Krokodil” MP3 via eDonkey2000

I’ve previously blogged, back in January 2005, about the German sensation surrounding Joy Gruttmann’s “Schnappi das kleine Krokodil” song. Since it seems people are still actively searching for it, and I don’t want to keep updating the previous entry, I’ve started a new one.

Okay, publishing content via BitTorrent is annoying. I’m personally in favor of eDonkey2000, because it’s much easier to publish content with it than with BitTorrent. If you’re looking for the Schnappi MP3, go download the eDonkey2000 client (scroll down for the free “Basic” versions), then use this “ed2k” link to download the MP3:

ed2k://|file|Schnappi das kleine Krokodil.mp3|947187|ebcd0b7f7160ffd9a56f1a622f192507|
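For the curious, the fields in that “ed2k” link are pipe-delimited: the literal word file, the filename, the file size in bytes (947,187 here), and the file’s MD4 hash. Here’s a minimal sketch of pulling those fields apart; this is just my own illustration of the link format, not code from any eDonkey2000 client:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Split an ed2k:// file link into its pipe-delimited fields:
    // ed2k://|file|<name>|<size-in-bytes>|<md4-hash>|
    int main() {
        const std::string link =
            "ed2k://|file|Schnappi das kleine Krokodil.mp3|947187|"
            "ebcd0b7f7160ffd9a56f1a622f192507|";

        std::vector<std::string> fields;
        std::stringstream ss(link);
        std::string field;
        while (std::getline(ss, field, '|')) {
            fields.push_back(field);
        }

        // fields[0] is "ed2k://", fields[1] is "file",
        // then the name, size, and hash follow.
        if (fields.size() >= 5 && fields[1] == "file") {
            std::cout << "name: " << fields[2] << "\n"
                      << "size: " << fields[3] << " bytes\n"
                      << "md4:  " << fields[4] << "\n";
        }
        return 0;
    }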

The only downside of the eDonkey2000 P2P network is that, because publishing content is so simple, it has become inundated with adult content, spam and warez. I see a real opportunity for someone to provide a “filtered eDonkey2000 search” service that eliminates those results from searches, and possibly copyright-infringing material as well; that could make it an excellent P2P platform.

Alternatively, you could always go and buy the CD from Amazon.de, but it seems to be out of stock or something (my German’s not that great). If you want a visual experience of the song, there’s the Schnappi video over at xoop.nl.

MS Solitaire considered Harmful by North Carolina state government?

While reading this thread at Slashdot about the state of North Carolina wanting to eliminate Solitaire and other games from state employees’ computers, I came across this comment:

It’s Welfare (Score:2)
by Art Tatum (6890) on Sunday March 20, @11:16PM (#11995417)

It has for some time been obvious to me that government bureaucracy is the *real* welfare program in America. It’s a jobs program for people who can’t get work in the private sector.

Wow. How true that is …

Google Code: google-coredumper, google-sparsehash, google-goopy, google-perftools

Chris DiBona, previously an editor at Slashdot and now the Open Source Program Manager at Google, announced today that Google has launched its Google Code site, where it has begun releasing some of its code back to the Open Source community, hosted at SourceForge!

The initial list consists of four projects:

  • google-coredumper: CoreDumper — “The coredumper library can be compiled into applications to create core dumps of the running program, without termination. It supports both single- and multi-threaded core dumps, even if the kernel doesn’t natively support multi-threaded core files.”
  • google-sparsehash: Sparse Hashtable — “This project contains several hash-map implementations in use at Google, similar in API to SGI’s hash_map class, but with different performance characteristics, including an implementation that optimizes for space and one that optimizes for speed.”
  • google-goopy: Goopy/Functional — “Goopy Functional is a python library that brings functional programming aspects to python.”
  • google-perftools: Perftools — “These tools are for use by developers so that they can create more robust applications. Especially of use to those developing multi-threaded applications in C++ with templates. Includes TCMalloc, heap-checker, heap-profiler and cpu-profiler.”

Three of the four are of interest to me: coredumper, sparsehash, and perftools.

For a long time, I’ve wanted better coredump capability in Linux, especially for multi-threaded applications such as AOLserver. Google’s contribution could “solve” that problem for me, which would be fantastic. Right now, it’s very difficult to troubleshoot a multi-threaded application on Linux because of this lack of capability, and gdb’s “gcore” just doesn’t cut it. Perhaps the Linux and GDB teams can integrate Google’s contribution back into their respective codebases; we’ll see.
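To give a sense of why this has me excited: the coredumper API boils down to a single call that snapshots the running process without killing it. Here’s a minimal sketch, assuming the google/coredumper.h header from the google-coredumper release and the library linked in:

    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    #include <google/coredumper.h>  // from the google-coredumper package

    // Write a core file of the *running* process without terminating it,
    // which is exactly what gdb's "gcore" struggles with on threaded
    // Linux programs.
    int main() {
        if (WriteCoreDump("core.myprogram") != 0) {
            std::fprintf(stderr, "WriteCoreDump failed: %s\n",
                         std::strerror(errno));
            return 1;
        }
        std::printf("wrote core.myprogram; process is still alive\n");
        return 0;
    }

The resulting core file can then be loaded into gdb for post-mortem-style inspection while the original process keeps running.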

Google’s sparse hashtable implementation could yield some performance improvement for Tcl, which makes extensive use of hash tables. I’d like to see if I can use the Google sparsehash implementation as a drop-in replacement for the Tcl implementation and see what the benchmarks say. This could be big.
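To illustrate, here’s a quick taste of the API; a sketch assuming the google-sparsehash headers are installed. (Actually swapping it in underneath Tcl’s C-level hash table API would be the real experiment, and a bigger job.)

    #include <iostream>
    #include <string>

    #include <google/sparse_hash_map>  // from the google-sparsehash package

    int main() {
        // sparse_hash_map trades a little speed for very low memory
        // overhead; the interface deliberately mirrors SGI's hash_map.
        google::sparse_hash_map<int, std::string> names;

        // A key value that will never be used must be reserved
        // before erase() can be called.
        names.set_deleted_key(-1);

        names[42] = "aolserver";
        names[7] = "tcl";
        names.erase(42);

        std::cout << "7 => " << names[7] << "\n";
        return 0;
    }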

Google’s perftools is somewhat of a misnomer, since the big selling point is their improved memory allocator, which is supposedly “[the] fastest malloc we’ve seen[, and] works particularly well with threads and STL.” This could displace the Tcl threaded memory allocator, if its performance really is superior, or be used underneath the Tcl threaded memory allocator for an additional boost. Experimenting with it and benchmarking it should be fun.
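Part of the appeal is how little code it takes to try: TCMalloc is a link-time swap with no source changes (just link with -ltcmalloc), and the bundled CPU profiler can bracket exactly the region you care about. A minimal sketch, assuming the perftools headers and a binary linked with -lprofiler:

    #include <cstdio>

    #include <google/profiler.h>  // from the google-perftools package

    // Simulated "hot" work to profile.
    static long busywork() {
        long sum = 0;
        for (long i = 0; i < 50 * 1000 * 1000; ++i) {
            sum += i % 7;
        }
        return sum;
    }

    int main() {
        // Profile only the bracketed region; the samples land in the
        // named file, to be examined with perftools' pprof tool.
        ProfilerStart("busywork.prof");
        long result = busywork();
        ProfilerStop();

        std::printf("result: %ld\n", result);
        return 0;
    }

Running pprof against the binary and busywork.prof afterward shows where the time went.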

It’s nice to see Google publish some really valuable stuff back to the Open Source community instead of just lamely throwing us a bone like IBM. This is definitely consistent with Google’s “don’t be evil” philosophy.

Man, this is just awesome. It gives me a whole new range of toys to play with. It’s like Christmas in March!

S5, a Simple Standards-based Slide Show System

Andrew Grumet points out S5 Presents, a nod to Philip Greenspun‘s old WimpyPoint (web.archive.org link) system.

S5 Presents is based on Eric Meyer‘s S5 system: Simple Standards-Based Slide Show System, which is an incredibly simple and cross-platform way of delivering slide-shows inside a web browser. For a demo of the kind of slide show it produces, see the introductory slide-show on S5. I am so thrilled that Eric had the good sense to use XHTML 1.0 instead of OPML 1.0. S5 is a tangible example that reiterates my belief that OPML adds no value above and beyond what we already have available through XHTML alone.

I’m sold: I’m going to start using S5 for my presentation needs wherever I can. I might still have to use Microsoft PowerPoint for some work-related presentations, but S5 could be the killer app. All that’s missing is a decent WYSIWYG slide editor that can “Save As …” S5-conforming XHTML.

please link to me, I’m not a white male!

Jeff Jarvis writes a great response to the FUD in Steven Levy’s recent column in Newsweek.

It sounds to me like Steven Levy is just parroting the mainstream media’s fears about the Blogosphere’s loud and uncontrollable voice; I guess they don’t like having the competition. The irony of it all is that I now consume more MSM than I used to, because I read blogs that link to it.

As Jeff might call me, I’m an unwhite male, so if you’d like to further the cause and promote inclusion, which is what bloggers apparently don’t do according to Steven despite Jeff’s many counterexamples, you can feel free to link to me.

are mobile phone designs really too complex still?

Russell Beattie says this about modern mobile phone interface design:

Mobiles are the ultimate consumer computer. They are meant to be used by 12 year olds, teens, college kids, business people and your mother in law. But right now, the design of the interface is still way too confusing.

Russell definitely gets it. The real problem is that there’s too much choice — too many vendors, models, competing ideas, etc. A small minority of people can cope well with choice, but the majority just can’t. They need two choices, that’s it, either this or that. Too many phones with too many buttons (“… and not enough love to go around …“) — this is the land of confusion. People might even be better off if they only had one choice, but then they’d whine about monopolization and other nonsense instead.

Jonathan Schwartz made this statement after attending 3GSM:

And just in case you missed it, let me say it again: the majority of the world will first experience the internet through their mobile phones. We sometimes forget that 10 times as many people bought handsets last year as PC’s. Round numbers, there were a BILLION wireless devices sold last year, and around 100 million PC’s.

I can just see it now … millions of teenage kids surfing porn on their mobile phones, texting each other in solicitation for phone sex, mobile phone communities where they can pour out their emo whiny angst — is this where I think the future of technology ought to be? If the history of the Internet and the Web is doomed to repeat itself When The (Mobile) Revolution Comes, this is exactly where the future will end up, sadly.

As a parent of young kids, I won’t be looking for more user-friendly phones; instead, I’ll be looking for phones with Parental Controls and anti-virus protection. In essence, I’ll want a company like AOL as my wireless provider, because they’ll look for ways to make my phone safer, simpler and easier to use. Sure, there will be the Yahoo!s and Googles of the industry creating really cool products for the niche geek audience, and they might keep a company like AOL on its toes in the user-friendly consumer space, but which company already has years of experience in this space? There’s a reason AOL earned the pejorative nickname “the training wheels for the Internet” (it’s true), and in the next several years, perhaps AOL can become “the training wheels of the mobile phone industry,” helping millions of people make use of their phones.

Or, maybe Yahoo will beat them to it, with really bright guys like Russell working for them. Or, maybe something completely unexpected will come along and displace mobile technology (as we know it today) entirely; that’d be very, very cool, too.