iSingularity? (take 2)
Yesterday I wondered:
Where are the people who worry about [the iPad] being the future of computing?
Turns out I was just too impatient by a few hours. Alex Payne voiced the same thought and Steven Frank wrote along very similar lines, though in some fundamental ways I disagree with him. Adam Pash wrote a decent piece at Lifehacker, though I think the issue is better covered by David Megginson’s rather wider concern. But the piece that I was looking for, basically word for word, was Mark Pilgrim’s take.
The reason I disagree with Steven is that I think the Old World/New World dichotomy is a red herring. The only real difference between the two worlds is which UI metaphor predominates in each and which supporting concepts are exposed to the user. Steven talks about “Old Worlders” expecting windows, menus, toolbars and other complexity that presumably corresponds to power. But as an Old Worlder, that’s the least of my worries. In my view, personal computers already present huge barriers to tinkering compared to home computers – barriers that are merely de facto, owing to the sheer complexity of modern systems.
Let me walk down memory lane. I grew up on PCs myself, not home computers. In retrospect, I boggle at how many stumbling blocks the Microsoft ecosystem and culture forced me to overcome. People who grew up on either home computers or Unices had an order of magnitude easier a time getting into computing. Had I been less doggedly curious, that differential could easily have been enough to keep me away. Things haven’t gotten better since; the complexity of modern computers has only increased. But the defining situation of children and teenagers is that they have no money and an infinite supply of time. In the Microsoft ecosystem, those were largely fungible – and so I overcame.
On the iPad? Not a chance. The iPad’s answer to the problems of personal computers is to simplify the UI – which is good. But the complexity under the hood isn’t even a concern, because the iPad legislates a barrier to entry for tinkerers: no one can do anything with it that Apple does not approve of. In Adam Pash’s words, Apple has gotten into the habit of acting like you’re renting hardware.
Now, you can tinker – but you need a Mac and an iPhone dev licence: a large wad of cold hard cash, exactly what children and teenagers don’t have. (Some of them will have parents who understand why this is a good idea and can provide the spare cash. I was out of luck on both counts.) On the iPad, the barrier to entry is so ridiculously high that I would not have been able to surmount it.
In contrast to Steven’s thesis, I posit that the iPad represents no trend reversal; rather, it is poised to be the bend in a hockey-stick curve we have been riding for a long time – as Robert Young points out:
When IBM created the Personal Computer in 1981, it predicted 2,500/year in sales. They based this estimate on a specified use case: users (assumed to be engineers, scientists, etc.) would write programs for their own use, and run same on their Personal Computer. To that end, IBM made available 3 operating systems from which the user could choose the one to his liking: CP/M-86, the UCSD p-System, PC-DOS. It was envisioned to be a mainframe on a desk.
And so it was until… Lotus released 1-2-3, and only for PC-DOS. At that point the light bulb went on around Route 128 and the Valley: what IBM had created was an Office Stove, a device for which the User DIDN’T write the programs to be run, but which could bake all sorts of delightful foodstuffs. That IBM didn’t restrict the BIOS and didn’t secure the OSes made Billy Boy rich. And quite a few programmers.
The iPad is just the most extreme extension of this paradigm: it’s an appliance, but significantly less open to the gaggle of Cooks in the wild. Users, in Steverino’s mind, couldn’t care less whether the Cooks are indentured servants to Apple. They don’t even care that they are locked in to Apple. They just know that the tarts taste good.
The iPad is not a revolution. It is right in line with where we have been going for decades. If it represents anything fundamental, it is the courage to throw out an ill-fitting UI metaphor, the better to serve this direction.
But how would the fundamental experience of the device suffer if Apple shipped a dev environment with the iPad, just as one used to be part of every home computer (including the Apple II)? Is that really an inconceivable proposition? Or heck, it could be a $20 download on the App Store for all I care. That’s no hurdle for a teenager, and not even a big one for a preteen. Why must writing code for the iPad require a dev licence and a Mac? (Obviously: because that makes Apple a lot of money.)
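To make concrete what “a dev environment with every home computer” meant: you switched the machine on and could immediately type in and run a program. Here is a minimal sketch of that kind of first program – in Python for readability, though what the Apple II actually shipped was BASIC in ROM:

    # Roughly the spirit of the classic first BASIC program
    # (10 PRINT "HELLO": 20 GOTO 10), bounded here so it terminates.
    name = input("What's your name? ")
    for _ in range(10):
        print("HELLO, " + name.upper() + "!")

Nothing more than that stood between a curious kid and their first running program.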
The current personal computer is a bad paradigm. What I was hoping for was a move toward something like Alan Kay’s vision – programming simplified to the point where everyone (kids especially) can do it so easily, at least for simple tasks, that it becomes routine. The iPad is the direct opposite of that.
The irony in all this is that for all the lambasting Adobe Flash receives in the Apple sphere (and make no mistake, I am not enamoured of Flash on any level), it let Joe Gregorio’s 13-year-old create his first game, the first of many more. And a successful iPad would close even this unsatisfying avenue.
Is the future we’re getting the one we really want?