AOLserver 4.0.10 on Win32 with NSIS installer

The downside of working for companies under confidentiality agreements is that I can’t openly talk about the cool stuff I’m doing all day long, but I still find an hour or two on the weekends to work on AOLserver, which I can talk about all I want.

The latest itch I’ve started scratching is getting AOLserver to build on the Win32 platform using the free and open source MSYS/MinGW toolchain. It’s part of an effort I’ve wanted to complete for a long time: to create a “batteries included” distribution of AOLserver that’s easy to install and get started with. I’ve packaged the build using NSIS, the Nullsoft Scriptable Install System, which provides the familiar Windows application installer (and uninstaller!) experience that many people expect. In other words, getting started with AOLserver on Win32 couldn’t get any easier.

In these early builds, I’ve included:

  • AOLserver 4.0.10
    • nsmysql
    • nssqlite3
  • Tcl 8.4.15
  • Zlib 1.2.3
  • SQLite 3.3.17
  • MySQL Client 5.0.27

I hope to add the AOLserver nsopenssl (for HTTPS/SSL support) and nsodbc (to connect to any Win32 ODBC database) modules as well. I may include support for PHP 5, depending on how much effort that entails.

One big wishlist item that I’d appreciate help with is including some documentation with the installer. Jamie Rasmussen started working on packaging the existing docs up as .CHM files, the format used by Microsoft’s HTML Help Workshop (FAQ). I’d like to finally finish updating and cleaning up the docs we have and include them with the installer as a .CHM, too.

If you’d like to download an interim build, here’s 4.0.10-2 for your experimental pleasure:

Once I’m comfortable with the installer package, I’ll be committing my changes to CVS along with documentation on how to produce the build. Then, I’ll try to create another installer using AOLserver 4.5.

If you have bug reports, comments or feedback about the installer, go ahead and leave them in the comments below. Thanks!


AOLserver 4.0.10 VMware appliance, and AOLserver Conference

I’ve started work that I blogged about previously on a minimal Debian 4.0 (etch) image and AOLserver 4.0.10 installation as a VMware image “appliance.” If people are interested in helping test (you can use the free VMware Player), please email me directly and I’ll share the image with you. Once it’s tested and works well enough, I’ll make it available for public download.

The image contains MySQL and SQLite3 and the necessary AOLserver modules (nsmysql, nssqlite3) installed and configured. Ideally, all one would need to do to have a fully-functioning AOLserver setup is download the appliance image and player and start it up.

This is, I hope, the first step towards a more “batteries included” AOLserver distribution.

Also! Supposing there were to be an AOLserver Conference held in May of 2008, are you interested? (Again, email me directly if you are.) It would be held somewhere in northern New Jersey, close to New York City. If you would be interested in tutorial sessions, what materials would you like to see covered? Would you be willing to teach a tutorial session? Or present a paper? (A formal Call For Papers will be published once details are ironed out.)

If you have questions or suggestions, either email me or leave a comment below. Thanks!

Update: Oh, what the heck–if anyone wants to check out the appliance image and test it, go right ahead! Use your favorite BitTorrent client and get it here: AOLserver 4.0.10 VMware Appliance 2007-05-19 (261.73 MB) [.torrent]


New RPMs available for ]project-open[ V3.2

]project-open[ logo

Frank Bergmann emailed me today to let me know that Christof Damian has
created a set of RPMs for all the prerequisites required to run AOLserver 4.5,
OpenACS and ]project-open[. You can
download them from SourceForge as part of the ]project-open[ V3.2 beta release.

What is ]project-open[?

Borrowing directly from their project’s website:

]po[ is a Web-based ERP/Project Management software for organizations with 2-200 users. ]po[ integrates areas such as CRM, sales, project planning, project tracking, collaboration, timesheet, invoicing and payments.

]project-open[ is one of the largest open-source based web applications in the world, with more than 1,000,000 lines of code. It is used by more than 100 companies in 20 countries to run their businesses.

Frank also tells me that they’re hoping to release V3.2 in the May 2007
timeframe. ]project-open[ looks to be quite a capable web application and if you’re interested in checking it out, the new packaging for V3.2 should make it much simpler to do so.

Congratulations, Frank and the rest of the ]po[ team! Keep up the awesome work.


nsjsapi, server-side JS in AOLserver

After sitting on this code for long enough, I finally committed it to
CVS today: nsjsapi,
a module that uses the Mozilla SpiderMonkey
JavaScript-C engine for server-side JavaScript in AOLserver.

This is just my initial check-in of very basic working code. I’d
call it pre-alpha quality at this stage: it doesn’t do very much (yet)
and it’s likely to be wildly unstable. But, I’m making it available to
the world so folks can check it out, hack on it, give feedback, etc.

The code lives in SourceForge CVS: http://aolserver.com/sf/cvs/nsjsapi.
When it becomes suitable for more widespread use, it’ll get tagged and
released, but for now, it’s a “use at your own risk” kind of thing.


Share your books with Bookmooch

Bookmooch: New life for Old books

I’m always happy to see successful projects that use AOLserver. I know John Buckman has been working hard on his latest creation, Bookmooch, a community for exchanging used books.

Tonight, a co-worker pointed out that this week’s TWiT Inside the Net episode 33 podcast is about Bookmooch! You can download the MP3 (18.4 MB) and listen for yourself. It’s a good show, but I’d like to highlight a portion toward the end.

At 30:53 (30m 53s) into the show, Leo Laporte asks John, “What did you write Bookmooch in, John?”

John responds, “Tcl,” which unsurprisingly resulted in a response of shock and disbelief from Leo and Megan Morrone. Then, John goes on to mention AOLserver at 31:30 and gives it a quick plug (thanks, John!).

Leo asks at 31:54, “You’re kind of swimming upstream, you’re not doing PHP, MySQL, all the traditional LAMP stuff?”

John responds at 32:00, “Magnatune is all PHP and MySQL and when we got Slashdotted I realized that it was not a very scalable platform. I’ve written some articles, one for Sysadmin Magazine, another one for Linux Journal, about surviving Slashdot, and it takes a lot of hardware if you want to scale, a lot of machines. The only way I could do Bookmooch and not charge is to do it on one or very few machines.”

He goes on to say at 32:39 that Bookmooch is powered by a single 1.2 GHz PC with a 120 GB drive supporting 280K hits a day. While this is a respectable amount of traffic, it averages out to only about 3 requests per second, so I’m betting that AOLserver isn’t even breaking a sweat, yet. It’ll be interesting to see how far John can push it before he needs to split traffic across two servers.

I’m really happy that John was able to get on TWiT and talk about Bookmooch, but I especially appreciate the fact that he gave AOLserver almost a full minute of his interview talking about it. I’m not sure how large TWiT’s Inside the Net podcast’s reach is, but I’m sure a lot more people have heard about AOLserver now that John’s mentioned it. Thanks again, John.

Update: Here’s John’s interview in Red Hat Magazine about Magnatune.


Why it might be a bad idea to fork/exec in a multi-threaded application

Recently, the question of doing a Tcl [exec] inside AOLserver came up and I suggested using nsproxy to do it outside the main AOLserver process. This spurred a thread on the mailing list asking exactly why doing a Tcl [exec] inside the main AOLserver process isn’t a good idea. Finally, here’s what I wrote, trying to explain my understanding of the problem:

Considering the activity of this thread, let me contribute what I see as
the most common “problem” for AOLserver and Tcl [exec] …

Traditionally, fork() creates a copy of the process which invoked it,
which includes the memory allocated to that process. exec() overlays a
process with a new image and begins executing it. Since the typical
fork() immediately followed by exec() doesn’t write to its memory space
until the exec(), doing an actual copy of the parent process’s memory
then destroying it when it gets overlaid by the exec() is unnecessary.
So, in modern Unixes, an optimization was introduced: the parent
process’s memory is shared with the child and pages of memory are only
copied when they are written to–commonly referred to as “copy-on-write”.

This optimization is very wise in single-threaded applications: after
the fork() but before the child exec()’s, the parent process can only
do so much (i.e., cause pages to be copied) before the child releases
the memory by performing the exec(). However, in multi-threaded
applications, all of the threads in the parent process executing can
be writing to pages in memory, causing lots of copying to occur. In
the worst case, this can degenerate into almost 100% of the pages
getting copied, which means a fork could “cost” you 2x the memory of the
original parent process, depending on how many threads are active and
what they’re doing.

You might think “but, if I fork() and immediately exec() in the child,
how much can the threads in the parent process do?” Well, in Tcl,
[exec] doesn’t do an immediate fork() and exec(). There’s a handful of
code that’s executed in between. Without doing serious profiling,
there’s probably a non-trivial amount of instructions being executed,
all opportunities for context switching and execution of the threads in
the parent process. This problem is more visible in SMP systems with
many CPUs, where more threads in the parent process can be executed
while the child process is between its fork() and exec().
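
To make the fork()-to-exec() window concrete, here’s a minimal sketch of the bare pattern in Python, whose os module exposes the same primitives. This is my illustration, not code from Tcl or AOLserver, and the name spawn is mine:

```python
import os

def spawn(argv):
    """Fork, then exec argv immediately, keeping the copy-on-write
    window as short as possible."""
    pid = os.fork()
    if pid == 0:
        # Child: every instruction executed here, before the exec,
        # is a chance for threads in the parent to dirty shared pages
        # and force them to be copied.
        try:
            os.execvp(argv[0], argv)  # replaces the child's process image
        except OSError:
            os._exit(127)  # exec failed; never fall back into parent code
    # Parent: wait for the child and return its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Tcl’s [exec] runs considerably more code than this between the fork() and the exec(), which is exactly the window being described.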

I’m not sure if this was “too technical” of an explanation. If it was,
please, don’t hesitate to ask questions. I want everyone to have a
decent understanding of the issue.

Also, to clarify: there’s no “danger” in executing [exec] from within
AOLserver. It “should” work — as long as you have sufficient free
memory for any pages that need to be copied — but, the impact to
performance can be costly. This isn’t a great concern in low-traffic
sites, but is certainly an issue when scaling.

This is one of the many reasons why nsproxy is good: it mitigates the
cost by doing all the fork/exec’ing in a single-threaded process that
has a small memory footprint, entirely outside the process space of the
main AOLserver process.
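
The nsproxy idea can be sketched in a few lines of Python: fork a tiny helper early, while the parent is still small and single-threaded, then route all later fork/exec work through it over a pipe. This illustrates the design only; it is not nsproxy’s actual API, and all names here are mine:

```python
import json
import os
import subprocess

def start_proxy():
    """Fork a small helper process early, while this process is still
    small and single-threaded. Later fork/execs happen inside the
    helper, so copy-on-write copies stay cheap no matter how large or
    threaded the parent becomes (the same idea as nsproxy)."""
    req_r, req_w = os.pipe()
    res_r, res_w = os.pipe()
    if os.fork() == 0:  # helper process: tiny, single-threaded
        os.close(req_w)
        os.close(res_r)
        requests = os.fdopen(req_r, "r")
        responses = os.fdopen(res_w, "w")
        while True:
            line = requests.readline()
            if not line:          # parent closed its end: shut down
                os._exit(0)
            argv = json.loads(line)   # one JSON-encoded argv per line
            done = subprocess.run(argv, capture_output=True, text=True)
            responses.write(json.dumps({"rc": done.returncode,
                                        "stdout": done.stdout}) + "\n")
            responses.flush()
    os.close(req_r)
    os.close(res_w)
    return os.fdopen(req_w, "w"), os.fdopen(res_r, "r")

def proxy_exec(requests, responses, argv):
    """Ask the helper to run argv; returns its exit code and output."""
    requests.write(json.dumps(argv) + "\n")
    requests.flush()
    return json.loads(responses.readline())
```

The key is that start_proxy() must be called before the parent grows large or spawns threads; after that, the expensive process can [exec] as often as it likes without ever forking itself.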


Philip Greenspun is hiring AOLserver talent for photo.net

Philip Greenspun is looking to hire some AOLserver talent for photo.net! He’s looking for a programmer, a Linux sysadmin and an Oracle DBA. If you’re looking for work, especially with AOLserver, Tcl, Linux and Oracle, then let Philip know you’re interested.

What really caught my attention and made me smile was this:

What might some tasks be for the coming months? Upgrade to AOLserver
4.5 (compile some C code).

I’m glad to see that people are actively planning on upgrading to the latest AOLserver 4.5.0 release. Thanks, everyone.

For those who don’t know who he is, Philip Greenspun is the author of several books (Philip and Alex’s Guide to Web Publishing, and Software Engineering for Internet Applications, used for MIT 6.171). As one of the creators of the ArsDigita Community System, he’s also largely to thank for convincing AOL to release AOLserver as open source software. And he’s the curator of one of the oldest online photography communities, photo.net.


AOLserver, Y2038 and bad MaxIdle/MaxOpen configuration examples

Back on May 12, 2006, there were reports amongst some users of AOLserver of their server mysteriously hanging, all coincidentally just around 2006-05-12 21:25 US/Eastern, or 2006-05-13 01:27:28 UTC, to be specific. It turns out that moment, 1,147,483,648 seconds since 1970-01-01 00:00:00 UTC, is exactly 1,000,000,000 (or 10^9) seconds before the end of traditional Unix epoch time, which is stored as a 32-bit signed integer and runs out at 2,147,483,647 seconds. You could call this the first “Y2038” bug, which gets its name from the fact that the end of 32-bit signed Unix time falls in the year 2038, or 2038-01-19 03:14:07 UTC to be exact.

Bas Scheffers was the one who recognized the 10^9 coincidence, and soon after ‘Jesus’ Jeff Rogers realized that the MaxOpen and MaxIdle configuration parameters as shipped by default with ACS and OpenACS are set to 10^9 seconds, like so:

ns_param   maxidle   1000000000
ns_param   maxopen   1000000000

Reducing this number to something reasonable (say, “3600”) and restarting the server resolves the issue.
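
The arithmetic is easy to check for yourself; here’s a quick sketch of mine (not from the original reports) showing exactly when now + maxopen stops fitting in a signed 32-bit integer:

```python
import datetime

INT32_MAX = 2**31 - 1      # 2147483647: the end of 32-bit signed Unix time
MAX_OPEN = 1_000_000_000   # the 10^9-second default shown above

# The first moment at which now + maxopen overflows a signed 32-bit int:
overflow_start = INT32_MAX - MAX_OPEN + 1

print(overflow_start)  # 1147483648
print(datetime.datetime.fromtimestamp(overflow_start, datetime.timezone.utc))
# 2006-05-13 01:27:28+00:00
```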

Other folks have already blogged about this issue, but just in case you missed it, here’s a short round-up:


On iterative, open source documentation (or lack thereof)

AOL has put together developer.aol.com to highlight its contributions to the web community, but one of AOL’s earliest contributions, AOLserver, is absent from the list. (Yes, I did mention this to someone internally at the end of May 2006, and projects that have launched since have already made it onto the list.)

Jeremy Zawodny expresses what I’ve been trying to put into words for a while now about the difficulty of contributing previously closed source, which AOLserver was prior to 1999. The AOLserver community has always been anxious to see more of AOL’s closed source components for AOLserver get released as open source, most famously the “DCI” collection of modules which was invented as part of the AOL Digital City (now AOL CityGuide). So, what’s kept AOL from “being a better open source citizen” (if that’s really what it would mean) and releasing the DCI modules?

Jump-starting an open source community like AOLserver with a sizable and previously closed source codebase presents an interesting challenge. Most likely, the design discussions were held in face-to-face meetings, some now outdated design documents may have been created and the software was constructed under commercial pressures with all the sacrifices and trade-offs that come with them. Much of the knowledge and design rationale is primarily locked up in two artefacts: the initial developers’ brains and the produced code itself. Releasing any portion of the code alone significantly raises the bar of required understanding for participation and contribution. Making those brains available in the form of community participation (i.e., answering questions) means dedicating some non-zero percent of your most valuable asset: your people.

Contrast this with an open source community that starts from scratch. All design discussions are generally held using communication methods that are easily archivable and searchable. Even if no explicit design document artefacts are produced before software construction, a determined software design archeologist could pore over the chat logs, transcripts and mailing list archives to reconstruct the key points that drove the design, using resources that are already publicly available. After the documentation is started this way, the community can continue to refine and contribute to it through distributed collaboration tools, which is why I’m a big fan of Wiki software.

Is this documentation really that necessary? Again, for some people, probably not: the bar of required understanding is low enough for them. But that set of people is quite small. There are folks at AOL who have full access to all our source code who still can’t make heads or tails of our stuff, who need serious hand-holding to make things just work. Imagine the difficulty a member of the community would have without access to all the code and all the people who know it well. For many, making more of our closed source code open would be next to useless. So, where’s the rush to open it up until all the necessary prerequisites (documentation, examples, etc.) are available?

So, given the situation, does this make AOL (or Yahoo!, or Google) poor open source citizens because it hasn’t put a license on more of its code and made it available to the community? Does it necessarily imply that the quality of the code is poor because it’s not easily open sourced? Is there a lesson about gift horse mouth inspection going on, here? I can’t speak for Yahoo! or Google, but take my word for it that the members of the AOLserver community who work at AOL have been continuing to clean up and better document more of the still-closed source AOLserver modules (like the DCI modules) with intent to eventually release them.


Jacob Rosenberg’s brief history of web servers at AOL

Another AOL employee, Jacob Rosenberg, has started blogging. As part of the Operations group, he’s got the unenviable task of making sure our stuff at AOL keeps running. Today, he writes a brief history of the various web servers that have come out of AOL over the past several years, including AOLserver.

Go on, show him some love: subscribe to Jacob’s blog and leave him some comments. Let’s welcome another AOL voice to the blogosphere!
