Three Evolutionary Stages of Version Control
I recently chanced upon a conceptual framework for describing the development of version control systems that I have seen nowhere else.
It came out of a job far bigger than anticipated at the outset: an innocent edit to a wiki article that ended up stretching across 4 days, in the course of which I would add relevant points, only to discover some new structure trying to emerge, which, as I was rearranging and rewriting to bring it out, would remind me of yet more relevant points. By the end I was quite tired and just wanted to be done, but the effort also ended up thoroughly crystallising my conceptualisation of version control systems. So I thought I should also give it the additional exposure of my weblog.
Note that these individual points are all well known; I have long been aware of them. But it was only during this work that I came to understand their relations systematically. Remember too that getting from each step to the next in this sequence took a long time, both because it took time to realise that there was a problem in the first place, and because the right solution was not obvious in advance – trivially obvious as it may all seem when you see it laid out like this.
1+1: One Repository, One Working Copy
The design of the earliest systems revolved around versioning a single working copy, directly edited by all users. To prevent attempts at simultaneous modification of a single file, editing was not allowed without checking files out, which only one user at a time could do for any given file.
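To make the mechanism concrete, here is a minimal sketch of such an exclusive check-out scheme in Python; the names and structure are my own invention, not those of any real system of that era:

```python
# A toy lock-based repository (hypothetical, not any real system's API):
# a file can be checked out by only one user at a time, and nobody else
# may edit it until it has been checked back in.

class LockingRepository:
    def __init__(self):
        self.locks = {}  # filename -> user currently holding the lock

    def checkout(self, filename, user):
        holder = self.locks.get(filename)
        if holder is not None and holder != user:
            raise RuntimeError(f"{filename} is checked out by {holder}")
        self.locks[filename] = user  # grant the exclusive lock

    def checkin(self, filename, user, new_contents):
        if self.locks.get(filename) != user:
            raise RuntimeError(f"{user} does not hold the lock on {filename}")
        # ... store new_contents as the file's next revision ...
        del self.locks[filename]  # release the lock for the next editor

repo = LockingRepository()
repo.checkout("main.c", "alice")
# repo.checkout("main.c", "bob")  # would raise: alice still holds the lock
repo.checkin("main.c", "alice", "int main(void) { return 0; }\n")
```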
Having to give each user access to the same machine and file system in order to work on code was natural at the time these systems were designed, in the mainframe era, but today would obviously be a problem. Also, the requirement to check files out was a cause of friction even at the time, since everyone had to wait on one another – not to mention that someone might forget to check a file back in before leaving on vacation.
1+n: One Repository, Many Working Copies
The next evolutionary step was to decouple the repository from the working copy, so that there could then be many working copies. The exemplar in this class of systems, known as centralised VCSs, is CVS. It lifts the obvious restrictions of earlier systems with a design in which access to the repository is mediated by a server. Multiple users can collaborate by each checking out a private working copy of the project.
Note that in CVS, “checking out” no longer implies locking. (In other centralised VCSs, it may; e.g. Visual SourceSafe. In some, such as Perforce, it is optional.) Checking in changes is simply blocked if someone else has already checked in other changes in the meantime. Before the latecomer is allowed to check in their own changes, they have to update their working copy with the upstream changes, resolving any conflicts manually.
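A rough sketch of the essence of this scheme (invented names, not CVS's actual protocol): a check-in is accepted only if the working copy is based on the repository's latest revision; otherwise the latecomer has to update first:

```python
# A toy centralised repository with optimistic check-ins: no locks, but a
# check-in is rejected if anyone else has checked in since the working
# copy was last brought up to date.

class CentralRepository:
    def __init__(self, contents=""):
        self.revisions = [contents]  # revision numbers are list indices

    def head(self):
        return len(self.revisions) - 1

    def checkout(self):
        return self.head(), self.revisions[-1]

    def checkin(self, base_revision, new_contents):
        if base_revision != self.head():
            # Someone else got there first: update, merge, then try again.
            raise RuntimeError("working copy is out of date; update first")
        self.revisions.append(new_contents)
        return self.head()

repo = CentralRepository("v1\n")
base_a, copy_a = repo.checkout()  # alice checks out revision 0
base_b, copy_b = repo.checkout()  # bob checks out revision 0 as well
repo.checkin(base_a, "v1\nalice's change\n")  # accepted as revision 1
# repo.checkin(base_b, "v1\nbob's change\n")  # rejected: bob must update
```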
This works reasonably well. CVS ended up as the de facto standard for a decade.
However, its single-repository nature, subsequently adopted by most following major systems, perpetuates problems harking back to the earlier model – and adds new ones:
Checking in changes under such a system requires a network connection, as do most operations related to the project history. Besides making offline work nearly impossible, this imposes a major performance penalty, since networked operations are inescapably slow. Some systems, like Subversion, try to selectively speed up some of these operations by keeping more data in the working copy, but the benefit of this is uneven across operations. Further, high-traffic repositories may require rather beefy servers and connections to sustain the load.
Anything checked in is always public; this means one has to be very careful about the state of commits. It also makes it impossible to touch up history (e.g. to fix common mistakes like forgetting to include a new file in a commit). Branches become a big deal: all commits are publicly visible, no matter how experimental. Also, branch names are forced into a single global namespace, so a lot of thought has to be given to choosing them.
Branching is problematic for more reasons too. Most of these systems do not support branch merging very well: after you do it once, the changes from the merged-in branch are mixed in without any tracking, so later attempts to merge the same branch result in lots of artificial conflicts (the sketch after this list illustrates why). This makes it very difficult to keep branches in sync. But the longer branches go without merging, the more effort it takes to merge them. All this adds up to a large barrier, psychological and otherwise, against branching.
The single-repository nature means that anyone who wants the safety of revision control needs to have write access to the same repository. And since branching is badly supported, everyone with access to the repository is generally going to be working on the same trunk. This means write access has to be given out selectively, to competent people only, resulting in political headaches within projects, while outsiders are forced to create their patches in an unversioned ghetto.
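Here is the sketch promised above, a toy model of a history graph in Python (entirely made up, not how any particular system stores things). Without recording merges, the common ancestor used as the merge base never moves past the original branch point, so changes that were merged once come back as conflicts the next time; the cure, which the next section returns to, is to record the merge itself as part of the history, which moves the base forward:

```python
# Toy history: commits map to their parent commits. merge_base() picks
# the "latest" common ancestor of two commits, which is what a 3-way
# merge uses as its base version.

def ancestors(parents, commit):
    """All ancestors of a commit, including the commit itself."""
    seen, stack = set(), [commit]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, ()))
    return seen

def merge_base(parents, a, b):
    """Of all common ancestors, pick the one deepest in the history."""
    common = ancestors(parents, a) & ancestors(parents, b)
    return max(common, key=lambda c: len(ancestors(parents, c)))

# trunk: T0 -- T1 -- T2    branch: T0 -- B1 -- B2
parents = {"T0": [], "T1": ["T0"], "T2": ["T1"], "B1": ["T0"], "B2": ["B1"]}

# Without merge tracking, the base is T0 every single time, so B1's
# changes (already merged into the trunk once) conflict all over again:
print(merge_base(parents, "T2", "B2"))  # -> T0

# Recording the first merge as a commit with two parents moves the base:
parents["M1"] = ["T2", "B1"]            # trunk merges the branch at B1
parents["T3"] = ["M1"]
print(merge_base(parents, "T3", "B2"))  # -> B1, so only B2 is left to merge
```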
n+n: Many Working Copies, Paired With Equally Many Repositories
The solution to all this was to give each collaborator not only a separate working copy but also a separate repository. Systems in this class, of which BitKeeper was the pioneering solid implementation, are known as distributed version control systems. The technical basis that allows this is algorithmic merging: 3-way merging (in the simplest case) allows combining non-overlapping changes automatically, and merge point tracking allows repeatedly merging branches without unnecessary conflicts.
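To make the 3-way part concrete, here is a deliberately over-simplified merge function (my own toy, not BitKeeper's algorithm). It assumes lines were only changed in place, never inserted or deleted, so the three versions line up line for line; real tools first align the files with a diff, but the decision rule for each region is the same:

```python
# Minimal 3-way merge over aligned lines: take whichever side changed a
# line relative to the common ancestor, and flag a conflict only when
# both sides changed the same line differently.

def merge3(base, ours, theirs):
    merged, conflicts = [], []
    for b, o, t in zip(base, ours, theirs):
        if o == t:        # both sides agree (or neither changed the line)
            merged.append(o)
        elif o == b:      # only their side changed this line
            merged.append(t)
        elif t == b:      # only our side changed this line
            merged.append(o)
        else:             # both changed it, differently: a real conflict
            merged.append(f"<<<<<<< ours\n{o}=======\n{t}>>>>>>> theirs\n")
            conflicts.append((o, t))
    return merged, conflicts

base   = ["red\n",     "green\n", "blue\n"]
ours   = ["red\n",     "GREEN\n", "blue\n"]  # we changed line 2
theirs = ["crimson\n", "green\n", "blue\n"]  # they changed line 1
merged, conflicts = merge3(base, ours, theirs)
print("".join(merged), conflicts)  # crimson / GREEN / blue, no conflicts
```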
Since each collaborator has their own repository and can make commits, the effect is that everyone has their own private branch, with full versioning for local changes, and these branches can be published at their author's discretion and merged by others easily. In fact, each collaborator often has several local branches – since merging is easy and branches never need to be published, it is painless to create short-lived branches for experiments or tests, to use them as part of a general workflow (e.g. starting a new branch for every separate bug fix), or for any other purpose, whether intended for public consumption or not.
Everyone has full offline access to the project history, and all repository operations (except pushing or pulling changes, obviously) take place at full local disk speed.
All this immensely accelerates collaborative development and removes the political headaches surrounding commit access.