λ will save us, or, Applicative trumps imperative in the large

Sunday, 26 Nov 2006

There are some unlikely and potentially ill-considered turns of thought here, but bear with me and it might be interesting. Or not.

A few days ago, Joel Spolsky got a lot of link flow for a rant about the Windows Vista UI design as exemplified by the shutdown interface (overly cute suggestion for a simpler design included). This compelled a programmer who used to work on that part of Vista to come out of the woodwork with a fascinating glimpse into the innards of a doomed company, namely Microsoft:

[W]e had dependencies on the shell team (the guys who wrote, designed and tested the rest of the Start menu), and on the kernel team (who promised to deliver functionality to make our shutdown UI as clean and simple as we wanted it). The relevant part of the shell team was about the same size as our team, as was the relevant part of the kernel team.

[…]

Windows has a tree of repositories: developers check in to the nodes, and periodically the changes in the nodes are integrated up one level in the hierarchy. At a different periodicity, changes are integrated down the tree from the root to the nodes. In Windows, the node I was working on was 4 levels removed from the root. The periodicity of integration decayed exponentially and unpredictably as you approached the root so it ended up that it took between 1 and 3 months for my code to get to the root node, and some multiple of that for it to reach the other nodes. It should be noted too that the only common ancestor that my team, the shell team, and the kernel team shared was the root.

So in addition to the above problems with decision-making, each team had no idea what the other team was actually doing until it had been done for weeks.

Positively Lovecraftian.
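The mechanics of that tree are worth pausing over, because the latency compounds in a very concrete way. Here is a minimal sketch of the propagation delay, with purely hypothetical integration periods (the actual Windows figures are not public): a change has to catch one integration window per level on the way up to the root, then one per level coming back down to a sibling node.

    -- Toy model of the branch tree described above; all numbers are
    -- hypothetical stand-ins, not actual Windows figures.
    upPeriods :: [Double]              -- days between integrations at each level,
    upPeriods = [7, 14, 28, 56]        -- doubling as you approach the root

    -- Expected wait at each window is half its period; worst case is the full period.
    expectedToRoot, worstToRoot :: Double
    expectedToRoot = sum (map (/ 2) upPeriods)   -- about 52 days
    worstToRoot    = sum upPeriods               -- 105 days

    -- Reaching a sibling team's node means coming back down the same levels again.
    worstRoundTrip :: Double
    worstRoundTrip = 2 * worstToRoot             -- 210 days

    main :: IO ()
    main = mapM_ print [expectedToRoot, worstToRoot, worstRoundTrip]

Even with these made-up periods, a typical change needs a month or two just to reach the root (the same ballpark as the quoted one to three months), and the round trip to a sibling team's node is several times longer, which lines up neatly with "no idea what the other team was actually doing until it had been done for weeks."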

In the comments, after others offer some sarcasm about this process, an ex-manager from those teams chimes in:

The people who designed the source control system for Windows were not idiots. They were trying to solve the following problem:

  • Thousands of developers
  • Promiscuous dependency taking between parts of Windows without much analysis of the consequences

→ With a single codebase, if each developer broke the build once every two years there would never be a Longhorn build (or some such statistic – I forget the actual number).

There are three obvious solutions to this problem:

  1. Federate out the source tree, and pay the forward and reverse integration taxes (primarily delay in finding build breaks), or…
  2. Remove a large number of the unnecessary dependencies between the various parts of Windows, especially the circular dependencies.
  3. Both 1&2

#1 was the winning solution in large part because it could be executed by a small team over a defined period of time. #2 would have required herding all the Windows developers (and PMs, managers, UI designers…), and is potentially an unbounded problem. (There was much work done analyzing the internal structure of Windows, which certainly counts as a Microsoft trade secret so I am not at liberty to discuss it.)
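The build-break statistic above is easy to sanity-check. A back-of-the-envelope sketch, with a made-up headcount (the commenter only says "thousands of developers"):

    -- Rough version of the build-break argument; the headcount is my guess,
    -- the "once every two years" rate is the commenter's.
    developers :: Double
    developers = 4000

    breaksPerDevPerDay :: Double
    breaksPerDevPerDay = 1 / (2 * 365)     -- one break per developer every two years

    expectedBreaksPerDay :: Double
    expectedBreaksPerDay = developers * breaksPerDevPerDay   -- roughly 5.5 breaks a day

At that rate a single, unpartitioned trunk is broken several times every working day, so it would indeed almost never stay green long enough to cut a Longhorn build; the federated tree trades that problem for the integration latency described earlier.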

So both the overall architecture of Vista and the process by which it was produced bear an uncanny resemblance to spaghetti imperative code full of globals. It’s a miracle that they managed to ship anything.

This becomes even more striking if you read the last part of his comment:

Note: the open source community does not have this problem (at least not to the same degree) as they tend not to take dependencies on each other to the same degree, specifically:

  • rarely take dependencies on unshipped code
  • rarely make circular dependencies
  • mostly take dependencies on mature stable components.

I guess you know at this point what I was getting at with the title of this entry.
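To spell it out: promiscuous, unrecorded dependencies between components are the organisational analogue of global mutable state, and the cure is the same one applicative programming prescribes for code. A minimal, admittedly contrived sketch of the contrast:

    import Data.IORef

    -- Imperative style: one shared mutable cell that any part of the program
    -- may read or scribble on. The coupling is real, but no signature records it.
    imperative :: IO ()
    imperative = do
        mode <- newIORef "ask"              -- stand-in for some global setting
        writeIORef mode "hibernate"         -- one "module" writes it...
        m <- readIORef mode                 -- ...another reads it later, invisibly coupled
        putStrLn ("shutdown mode: " ++ m)

    -- Applicative style: the dependency is an ordinary argument, visible in the
    -- type, testable in isolation, and impossible to mutate behind your back.
    shutdownLabel :: String -> String
    shutdownLabel mode = "shutdown mode: " ++ mode

    declarative :: IO ()
    declarative = putStrLn (shutdownLabel "hibernate")

    main :: IO ()
    main = imperative >> declarative

The open-source habits listed above (few dependencies on unshipped code, few cycles, mostly mature components) are the same discipline applied at project scale: dependencies kept explicit, directed, and taken on stable interfaces.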

And with that, I have done my part to swell the echo chamber that is ringing with those two links… so now it’s time for the unlikely turn I promised.

The turn’s destination is Bill Thompson’s anti-Web 2.0 screed, which is also currently echoing. It is strewn with fatuous jabs like “some scripted magic [added] to even the most banal website” and “the stale socialising of Flickr and YouTube.” It certainly wouldn’t help any argument to sound as if it’s coming from a high-brow elitist yammering about the shallow whippersnappers of today, but the technical argument sounds just the same:

Now we must decide whether to put our faith in Ajaxified snakeoil or to look beyond the interface to distributed systems, scalable solutions and a network architecture that will support the needs and aspirations of the next five billion users.

[…]

If we can unlearn the lessons of the old Web and transcend its stateless protocols to achieve real distributed processing over a managed, trustworthy network then the possibilities truly are remarkable.

Shelley Powers takes a good swing at the anti-Ajax side of his growling, so I’ll concentrate on the rest and ask: exactly what is Bill suggesting here? Where are all those “truly scalable distributed systems” that have been hiding under a rock? Not a single distributed object system has managed to work at large scale over the last two decades, and the next two decades look like they will make success even less likely, not more. The web is the first system we have that has proven to work at massive scale. Why on earth would we want to unlearn its lessons? I thought Ryan Tomayko dispensed with this line of thinking a while ago.
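For concreteness, the difference between those two worldviews fits in a few lines (made-up types, no actual protocol involved): a distributed object system dresses a remote lookup up as an ordinary local call, so latency, retries and partial failure have no place in its interface, while the web makes every exchange an explicit, stateless request and response, which is precisely the property that lets it be cached, proxied and scaled.

    -- Distributed-object illusion: a remote lookup posing as a plain function.
    -- Nothing in the type admits that a network sits in the middle.
    type UserId = Int

    lookupUserRPC :: UserId -> String
    lookupUserRPC _ = error "pretends the network cannot fail"

    -- Web style: remoteness is explicit (IO), the exchange is a self-contained
    -- stateless request/response, and failure is an ordinary value.
    data Response = Ok String | NotFound | Unavailable
        deriving Show

    getResource :: String -> IO Response       -- think GET /users/42
    getResource path =                         -- stand-in transport, not real HTTP
        return (if path == "/users/42" then Ok "a user record" else NotFound)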

He goes on:

We can start to build hybrid applications that use modular code and distributed services, some local, some remote. We can introduce yet another level of abstraction – always the solution to any computer science problem – and get our codebase away from processor dependency.

We can build a network that doesn’t care or notice if your libraries are local or remote because the stuff you use regularly is always where you need it to be, cached on your local storage when needed, on a remote server when you’re online. And we can do it all without ceding control to Google, Amazon or even Microsoft.

I have some decade-old news for anyone who believes this. No, to be scalable and future-proof, the web must remain declarative; a World Wide Web of Functions will never happen.