A perhaps less controversial plan for creating a better VM

The previous post spawned a lot of discussion, a lot of which was surprisingly technical and on-topic. But after talking with some people I realized that OpenJDK can do a lot just on its own. Here’s my wishlist:

OpenJDK->OpenVM plan

  1. Split up source distribution into OpenVM core, place things like Swing into separate source project
  2. OS-specific integration in core; e.g. javax.unix namespace (e.g. Unix domain sockets), javax.windows (similar to python-win32), javax.osx; and allow interested operating system vendors to innovate there. The operating system does matter.
  3. Commit to longer-term VM improvements necessary to allow compilation of C# into extended JVM bytecode (not CIL)
  4. Commit to VM improvements necessary to make Jython/JRuby work well
  5. Stay on top of Linux distribution integration, make sure packagers aren’t carrying patches (this includes JSR 277 work)
  6. Together with the above, branding as OpenVM or something similar to express willingness to be more than just the old “Java” which was JVM+Java language+Swing
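To give item 2 a concrete (and admittedly hypothetical) feel: there is no `javax.unix` namespace today, but the kind of OS-specific facility it would cover, such as Unix domain sockets, is something scripting runtimes already expose. A minimal sketch using Python's standard library, just to show what JVM programs are currently missing:

```python
import os
import socket
import threading
import time

# Unix domain sockets: local IPC addressed by a filesystem path rather
# than a host/port pair -- the kind of OS-specific facility a
# hypothetical javax.unix namespace could expose to JVM programs.
SOCK_PATH = "/tmp/openvm-demo.sock"
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

def server():
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"hello over a unix socket")
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # let the server bind before we connect

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
msg = cli.recv(1024)
cli.close()
t.join()
os.remove(SOCK_PATH)
```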

An Open Letter to Jonathan Schwartz and Miguel de Icaza

Jonathan, you are leading the development of a Free Software, high-quality, multi-language VM runtime with an extensive class library, called OpenJDK.
Miguel, you are leading the development of a Free Software, high-quality, multi-language VM runtime with an extensive class library, called Mono.

How about a merge? We’ll call the new project “OpenVM”, for convenience in this letter.

Let’s jump right in to the advantages for the projects:

Advantages for Mono

In one word – control. Miguel, your original goal with Mono was to bring a modern and Free Software development stack to GNOME and Linux. In many respects, you and the Mono community have been successful, helping spur the creation of useful applications for the Free desktop, as well as getting Mono deployed in interesting applications like Second Life. However, you are largely not in control of your destiny. You’re stuck implementing a clone of what Microsoft creates, and besides the fact that cloning something is much less fun for your engineers, you can’t help but be behind.

By helping to create the OpenVM project, you will regain control. In an OpenVM effort, drawing on the common shared work of several corporations (Sun, Novell, Red Hat, Google, and IBM, to name a few), your engineers get to help design the future of Free Software. You will instantly remove all hesitation that the Free Software community has about your work, and will have been a key part of not one but two cornerstone projects for Free Software (GNOME, and Mono->OpenVM).

Advantages for OpenJDK

Jonathan, you have said you want to take the J out of JVM. By stepping up and adding Mono technology like a high quality C# compiler to this OpenVM effort, in the short term you will regain the eroding market share of the JVM on Windows by allowing interoperability between the growing C# code base and existing Java code. In the longer term, developer attraction to OpenVM will let you accelerate improvements to Java, and reverse gains in C# market share.
Moreover, the communities around agile languages such as Ruby and Python are nearly certain to join an OpenVM effort. Your company will again be at the core of the stack for the vast majority of the computing industry, from the Free Software community to proprietary applications.

From the Free Software side, turning Mono from a Microsoft technology clone into part of a truly Free project would halt the increasing spread of .NET in the community.

Finally, leveraging the Mono team would bring a number of excellent engineers who know the Free desktop very well, having created high-quality bindings for GNOME, and Free applications that many people use.

Advantages for the Free Software community

The Free Software community has long been split between developers using Free and agile languages like Ruby and Python, the Mono-based community, and the huge community of developers who use the formerly-proprietary Java in Free projects like Apache. A combined OpenJDK and Mono would dramatically further the merging of all three of these communities, increasing the control the Free Software community has over the stack and reducing duplication of HTTP libraries, database access libraries, etc.


Obviously, there would be many details to work out in such an effort, like how the class libraries could be merged. My intuition is that initially OpenVM would have both JDK and .NET “personalities”. Over time, the Mono .NET class library would be rebased on top of an evolved JDK class library, and eventually the .NET personality could be relegated to a separate “OpenVM-.NET emulation” project as most applications are ported to use the OpenVM JDK-based class library.

But the details are just that – where there’s a will there is a way. So the open question is – who will register the domain name first?

Hotwire hypershell 0.721 released

Hotwire 0.721 is now available. This release features a lot of changes since 0.710. Immediately visible will be the entirely revamped UI.

New Hotwire 0.721 UI

Full screenshot with object inspector

The goal is to be closer to a shell/terminal interface than before, giving more space to the output of commands while still allowing use of the mouse for operations. Another exciting internal change is that you can now define Hotwire builtins as regular Python functions, but with a decorator. For more about this feature, see this post.
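I won’t reproduce Hotwire’s actual API here (see the linked post for that), but the idea can be sketched with a hypothetical `builtin` decorator that registers a plain Python function in a command registry the shell can dispatch to:

```python
# A sketch of the decorator idea only; Hotwire's real API differs --
# see the linked post. The hypothetical "builtin" decorator registers
# the function under its name so the shell can look it up later.
BUILTINS = {}

def builtin(func):
    """Register a plain Python function as a shell builtin."""
    BUILTINS[func.__name__] = func
    return func

@builtin
def countlines(path):
    """A builtin defined as an ordinary Python function."""
    with open(path) as f:
        return sum(1 for _ in f)
```

When the user types `countlines foo.txt`, the shell would simply look up `BUILTINS["countlines"]` and call it with the parsed arguments.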

Besides the above, there are a lot of other nice changes in this release from a growing list of patch contributors, such as Zeng.Shixin’s contribution of native file icons for Win32:

Native Icons on Win32

As well as Chris Mason’s improvements to the command output search:

Search highlighting

I added a nicer connection status display to the included Unix SSH client:

Connection status display in HotSSH

Mark Williamson has been experimenting with a set of Hotwire extensions to make Hotwire into an interactive Mercurial shell; see his site.

For the detailed release email, see the announcement.

Four languages – one process

The previous entry was cut slightly short because I had to head out to dinner. What I was trying to finish was a bit of code to experiment with JRuby, Jython, Rhino, and the new JSR 223 scripting framework.

My first step was to build JRuby from trunk and the Jython 2.5 branch, as well as JSR223 trunk. After a small patch, I got some code working:

package org.verbum;

import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.SimpleScriptContext;

public class LangFun {
    public static class Hello {
        public void sayHello() {
            System.out.println("Hello world!");
        }
    }

    public static void main(String[] args) throws ScriptException {
        ScriptEngineManager mgr = new ScriptEngineManager();

        Hello greeting = new Hello();

        // Map "greeting" into the global namespace of each engine
        ScriptContext context = new SimpleScriptContext();
        context.setAttribute("greeting", greeting, ScriptContext.ENGINE_SCOPE);

        // Call the method from Java first...
        greeting.sayHello();

        // ...then from JavaScript, Ruby, and Python via JSR 223
        ScriptEngine javaScript = mgr.getEngineByExtension("js");
        javaScript.eval("greeting.sayHello();", context);

        // The JRuby engine exposes engine-scope attributes as $-globals
        ScriptEngine ruby = mgr.getEngineByExtension("rb");
        ruby.eval("$greeting.sayHello", context);

        ScriptEngine python = mgr.getEngineByExtension("py");
        python.eval("greeting.sayHello()", context);
    }
}

To run this, you’ll need to link to jruby.jar, jython.jar, rhino.jar, as well as the respective engines from JSR223: jruby-engine.jar, jython-engine.jar, js-engine.jar.
The idea behind this code is pretty simple – we first create a Java object, with a single method. Then we use the JSR223 interface to instantiate an engine for each of the languages, hook up a context object which maps the variable greeting into the global namespace for each language, then call their respective eval methods. The result is what you’d expect:

Hello world!
Hello world!
Hello world!
Hello world!

Pretty cool! Of course, I’m not really stressing the system here with these simple scripts; but given the dramatic progress (really, look at that graph!) of projects like JRuby, the future is approaching very quickly.

“Ok, but…”, you might say, “what’s this useful for besides sharing libraries?” The general answer is that the single-process, shared-memory model is fundamentally more powerful than the multi-process model communicating over bytestream pipes. You can just do more, with fewer hacks. An example is software like Reinteract – before, it could only support Python. Now, not only could you add an input language chooser to Reinteract; you could actually pass the results of a Python computation more or less seamlessly into Groovy, Java, or whatever. Personally, I specifically want this for Hotwire. Right now we only support Python well, because the project is based on CPython.

Many new languages for the Free Software community

There was a comment on the last entry mentioning Scala. I’ve only looked at it very briefly. But this is just one of many languages that are now truly, finally part of the Free Software community. The Scripting project has a list of the engines written. But that list is far from complete, because some don’t have the engine glue written yet, and because other languages like Scala are more designed as Java replacements that run on the JVM, rather than “scripting”.

And of course through all of this, venerable Java isn’t standing still – it will likely gain closures. And remember – all of this will soon be available (if it isn’t already) via yum install or apt-get install, etc., all of it entirely Free Software, and increasingly integrated with the operating system and your favorite libraries.

A software tsunami

A large underwater earthquake ends up creating an effect called a tsunami. The event can be detected by sensors around the world, but the resulting tsunami isn’t immediately visible; if you’re in the surrounding ocean you’ll notice it, but it’s only when it hits land that you really notice the effect.

On May 8, 2007, there was an effect a lot like an underwater earthquake in the software world. What are we talking about? The complete release of OpenJDK, of course. Since that time, we’ve mostly been in the underwater propagation stage. A lot has been happening behind the scenes such as removing proprietary bits, fixing OS integration, etc. But now, I think we’re close to moving into the stage where the ocean recedes, so you can see the first visible effects.

The original OpenJDK release was a snapshot of the in-development version 7, so it was not quite suitable as a drop-in replacement for software targeting JDK 6. But this February, Sun released the sources to the stable version of the JDK, version 6. Thanks to the combined work of the OpenJDK team and the IcedTea project, this now works effectively as a drop-in replacement for the earlier proprietary JDK releases.

On Fedora 8, this command worked for me:

sudo yum --enablerepo=development install java-1.6.0-openjdk

It pulled in new versions of a few things like zlib, but no big deal. In any case I think you’ll likely see OpenJDK 6 pushed as an update for Fedora 8 too.

Why is this so important?

It’s pretty hard to overestimate the transformative impact Java and the JVM have had on the software industry in the last decade or so. Now, Java is a very well designed language, but what I think is equally (if not more) important is the JVM. The JVM was really pretty far ahead of its time: the optimizing JIT compiler, the class structure, the threading model, concurrent generational garbage collection, etc.

There has been quite a lot of innovation on top of that platform. Ok, that sounds a bit buzzword-y. How about this: People have written a lot of awesome software for the JVM.

The examples seem endless; but let’s mention some:

Impact ahead

But before OpenJDK, most of these projects effectively did not exist to the core of the Free Software community. Even though all of these projects are themselves Free Software, to run them you had to download and install a proprietary JDK. I think for most of us, we might as well have been required to download a Windows or OS X VM. At least that’s the way I felt. It wasn’t very integrated with the operating system. But most importantly, it was a blob which we didn’t ultimately control, and we were right to avoid the Java Trap.

But, those were pre-earthquake times. The integration of the formerly-separate Java/JVM world with the Free Software community is ramping up very quickly. For example, Fedora is close to landing all of the dependencies of JRuby. I don’t think anyone has started on things like Processing or World Wind yet – it could be you!

A shared Free Software runtime

I want to talk specifically about efforts like JRuby, and the newly-invigorated Jython. In an earlier blog entry, we looked at the fragmentation in the Free Software community. Every free language has its own runtime and libraries; and until now, building on the JVM wasn’t an option if you wanted many contributors from the Free Software world. OpenJDK is finally changing that. Now, you can write a library using Java, it can be sensibly integrated with Free operating systems like Fedora, and can be consumed by anything on the JVM, which includes Python and Ruby, as well as new languages like Groovy. We’d actually be sharing more than just the OS kernel and C library.

Crucially, this is a platform we now control, because it’s Free Software. And the original upstream is not just giving some standards or source code drops, they are actively helping us. In fact, “they” are “us” now!

Part 2 of this entry

Making use of those blank new tabs

I’ve just uploaded a new version of the Firefox Journal to Firefox Addons. It’s in the “sandbox” right now, which means you need to create an AMO account and enable the sandbox to download it from there.

In this new release, because the awesomebar does search so well, this version of the journal removes the search interface; it’s now focused on helping you get back to sites you recently visited quickly, as well as providing an automatically-generated list of your 5 most-frequently visited web sites.

Here’s an updated screenshot. For more links and background information on the journal, see the home page.

Drawing power from the sky, part 2

In a previous entry, we briefly looked at Amazon’s Web Services from a high level. Now that I have my application running reasonably well and debugged, I wanted to write a bit more about my experience. I’m not claiming true expertise here, but I’ve done my best to learn what I can and hope I can pass on some of this in a comprehensible way.

So it has certainly been a learning experience trying to understand the overall picture for how one writes an application on top of these APIs. As background material in this domain, I found the Google FS, Bigtable, and Map/Reduce papers useful, as well as the Building Scalable Websites O’Reilly book by a Flickr developer. If you don’t read anything else, read the Map/Reduce paper at least – every programmer should understand it. Coming from the perspective of an OS/desktop developer, I personally found the GFS and Bigtable papers the most interesting as compared to the POSIX file APIs and SQL, respectively.
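For readers who haven’t gotten to the Map/Reduce paper yet, the core idea fits in a few lines: a map function emits key/value pairs from each input record, the framework groups the values by key, and a reduce function folds each group. A toy single-process sketch of the model (the real system distributes both phases across many machines), using the paper’s canonical word-count example:

```python
from collections import defaultdict

def mapreduce(inputs, mapper, reducer):
    # "Map" phase: emit (key, value) pairs from each input record,
    # grouping the values by key as we go.
    groups = defaultdict(list)
    for record in inputs:
        for key, value in mapper(record):
            groups[key].append(value)
    # "Reduce" phase: fold the values collected for each key.
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: map emits (word, 1), reduce sums the ones.
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

result = mapreduce(["a b a", "b a"], mapper, reducer)
# result == {"a": 3, "b": 2}
```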

“Everything you know is wrong.” – Marble Madness


One of the most important things I’ve learned is that there are two kinds of “scalable”. One is the kind of “scalable” that MySQL clustering, JBoss clustering, etc. offer. These systems take you from one machine to small values of N. They’re typically based on UDP broadcasts or the like.

The other kind of scale is called “web scale” – this is where your application is a completely distributed system, running in multiple data centers. No one machine is truly critical. Your application just gets faster as you add machines.

What’s the tradeoff between “scalable” and “web scale”? The answer is pretty simple. Your application has a number of facets such as reliability, consistency, and availability. Researchers have essentially come to the conclusion that you can’t have all 3 at the same time as load increases, and of the 3 you almost certainly want to sacrifice (immediate) consistency. If I write a new review of a book on Amazon, someone hitting Reload on the same page a few seconds later might not see it. If I receive an email in GMail, it might fail to be in the search index. If I delete a picture from Flickr, it might still be in my “photostream” display. But one property the system can have is eventual consistency, on the order of minutes or even seconds perhaps, but not immediate. My review should eventually appear to the person pressing refresh, the email will appear in the search index, and my picture will truly be gone (from the UI, anyways).
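The stale-read-then-convergence behavior can be shown with a toy model (not modeled on any particular system): writes land on a primary immediately and propagate to replicas asynchronously, so a read from a lagging replica is stale until propagation catches up:

```python
class EventuallyConsistentStore:
    """Toy model: one primary, N replicas, asynchronous propagation."""
    def __init__(self, num_replicas):
        self.primary = {}
        self.replicas = [{} for _ in range(num_replicas)]
        self.pending = []  # writes not yet applied to the replicas

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))  # propagate later, not now

    def read(self, key, replica=0):
        # Readers hit a replica, which may not have seen the write yet.
        return self.replicas[replica].get(key)

    def propagate(self):
        # The asynchronous replication step; a real system runs this
        # continuously in the background.
        for key, value in self.pending:
            for replica in self.replicas:
                replica[key] = value
        self.pending = []

store = EventuallyConsistentStore(num_replicas=2)
store.write("review:42", "Great book!")
stale = store.read("review:42")  # None: the reload-a-few-seconds-later case
store.propagate()                # replication catches up
fresh = store.read("review:42")  # "Great book!": eventual consistency
```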

On Relational Databases

If you look at almost any modern web development framework like JBoss Seam, TurboGears, Rails, etc etc., at the heart is a relational database for storage. Using a relational database lets you effectively push most of the hard problems like persistent storage and concurrent access onto an external system (though that could be in-process using SQLite). There are some very smart people who developed SQLite, MySQL, Postgres, etc. A relational database gives you a lot for free, and unless you know what you’re doing, you should probably not attempt to store your data directly using say the POSIX file APIs (and this is true on the desktop side too, but that’s another blog entry).
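To underline what “a lot for free” means: with an in-process engine like SQLite you get durable storage, transactions, and indexed queries in a few lines, none of which you would want to reimplement over raw files. A minimal example with Python’s built-in sqlite3 module:

```python
import sqlite3

# SQLite handles durability, locking, and query planning for us;
# we just declare the schema and issue SQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bugs (id INTEGER PRIMARY KEY, summary TEXT, open INTEGER)"
)

# Transactions for free: either all of these rows land, or none do.
with conn:
    conn.execute("INSERT INTO bugs (summary, open) VALUES (?, ?)",
                 ("crash on start", 1))
    conn.execute("INSERT INTO bugs (summary, open) VALUES (?, ?)",
                 ("typo in docs", 0))

open_bugs = conn.execute("SELECT summary FROM bugs WHERE open = 1").fetchall()
# open_bugs == [("crash on start",)]
```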

In other words, for a lot of applications, relational databases are exactly the right solution. You can get very far with a relational/small cluster system; Wikimedia is an example here, though their job is obviously made much easier by the fact that reads truly dominate access.

Also worth mentioning is the approach of partitioning a relational database; though at this point you’re starting to move away from the normalized relational model, since you can no longer perform an operation over your entire data set.
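In practice, partitioning is little more than a routing function from key to database, which is exactly why whole-dataset operations stop being a single SQL statement. A sketch with plain dictionaries standing in for the per-shard databases:

```python
# A sketch of horizontal partitioning (sharding): each shard is an
# independent database, and a hash of the key picks the shard.
NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(user_id):
    return shards[hash(user_id) % NUM_SHARDS]

def put(user_id, record):
    shard_for(user_id)[user_id] = record

def get(user_id):
    return shard_for(user_id).get(user_id)

def count_all():
    # The cross-shard operation: no single database sees all the data,
    # so "queries over the entire data set" mean visiting every shard.
    return sum(len(shard) for shard in shards)

for uid in range(100):
    put(uid, {"name": "user%d" % uid})
```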

So you’ve decided to write a distributed application

It should come as no surprise that there’s no magic API to code to in order to create a web-scale system. What you need to do depends on your application. You need to have an understanding of how data flows into your system, what kinds of operations you need to perform on it, where and how you can sacrifice consistency, etc. There are also implementation- and deployment-level concerns, like load balancing and tuning, that matter a lot.

However, there is a sort of general approach of asynchronous, cached data generation. In the relational model, every web page view creates a transaction which gets the data it needs, then returns. This way your users always see up-to-date information. But in the web scale model, you can massively denormalize by precomputing data. The advantage is a web page request goes nowhere near a database; you simply pull the cached data from storage close to the edge web server. When you get a mutation in your system, you trigger an asynchronous (e.g. map/reduce) process to regenerate all the cached data. This may not be instant, but with some intelligence in your system you should still be able to have eventual consistency.

Writing a bug tracker

Let’s say you’re writing a bug tracker. In a traditional web app, you’d have a form which would do a POST to send an updated comment. In the handler for that POST you open a transaction, and append a new row to your Comments table. But in a distributed system, you could generate cached data for each bug. A GET request for that bug is just a simple transformation of the cached data (perhaps XSLT, or maybe you save the final HTML). But the POST request to add a comment puts it into a reliable work queue (like SQS). Another machine reads from this queue and triggers a regeneration of the cached bug data – which could be more than just the cached data for that particular bug. Let’s say you want to be able to display a list of all open bugs. One way to handle this would be to store in S3 a key /open/ for each bug. Then a query of open bugs maps to a listing of all keys with that prefix. But probably what you really want for these kinds of metadata operations is a system like Hypertable or SimpleDB.
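The bug tracker flow can be sketched end to end, with an in-process queue standing in for SQS and a dictionary standing in for S3:

```python
import queue

# Stand-ins for the real services: a dict for S3-style key/value
# storage, an in-process queue for an SQS-style work queue.
storage = {}             # key -> cached, precomputed page data
work_queue = queue.Queue()
comments = {}            # the source of truth: bug id -> comment list

def handle_post(bug_id, comment):
    # The POST handler does no page generation; it records the comment
    # and enqueues a regeneration job.
    comments.setdefault(bug_id, []).append(comment)
    work_queue.put(bug_id)

def worker():
    # Another machine would run this loop, reading jobs and rebuilding
    # the cached data for each mutated bug.
    while not work_queue.empty():
        bug_id = work_queue.get()
        body = "".join("<li>%s</li>" % c for c in comments[bug_id])
        storage["bug/%d" % bug_id] = "<ul>%s</ul>" % body

def handle_get(bug_id):
    # The GET path never touches the source of truth, only the cache.
    return storage.get("bug/%d" % bug_id)

handle_post(1, "crashes on startup")
worker()
page = handle_get(1)
# page == "<ul><li>crashes on startup</li></ul>"
```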

A new kind of operating system?

If you look at Amazon’s APIs, they provide the root services you’d expect of an operating system: storage (key/value as opposed to hierarchical), IPC (reliable/persistent), and task creation (Xen image as opposed to process/thread). The challenge though is rewriting applications to use new APIs.


I mentioned in the previous entry that I thought storage was the most important of these 3. There are a lot of messaging systems out there (in particular AMQP looks pretty good), and virtualization APIs abound. But there are many fewer high-profile storage APIs. Storage is hard, and rewriting applications for a new form of storage is not at all trivial. That’s why so much is invested in POSIX-compatible storage systems – if you have that, then you instantly have Apache, rsync, and probably on the order of hundreds of thousands of other useful applications. But there are limits to the POSIX API; the Google FS paper discusses those. Just as there are limits to the relational model. What is going to be interesting over the next few years is to see which of these new APIs start to win application developers, and how the traditional database-based free software development stacks adapt.