Three Gigahertz Forever

Monday, Mar 9, 2015, 15:18

I hear people occasionally mention that CPU clock rates have been stuck around 3 GHz for a while, and that the trend is now toward more cores rather than higher clock rates. But it seems few realise just how long this “while” has stretched.

The last time CPU clock rates went up was when the Pentium 4 Prescott 570J came out and hit 3.8 GHz – in 2005.

CPU clock rates have now been frozen for a decade. Exponential increases in single-threaded performance are well and truly over.

We now live in the titular era: the era of Three Gigahertz Forever.

Meanwhile, Moore’s Law has continued to apply, so we have still been getting ever faster chips despite the new situation. But the gains have moved sideways into multi-threaded performance. That’s not a “trend” any more, it’s the status quo.

Even with that, we have arrived on the outskirts of the era of Dark Silicon. An end to Moore’s Law itself is within sight. While it’s unclear how far off it really is – it’s hard to tell the distance of something you don’t know the size of – we now find the eventual end of Moore’s Law to be a tangible inevitability.

What’s more, power consumption has increasingly gained importance as an optimisation target. Now of course optimising for power consumption largely has the same basic shape as optimising for speed – try to achieve the goal with less work, try to avoid needing to achieve the goal at all. Where power consumption differs is that it can be sensitive to wasted work even in non-hotspot areas of the code.
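As a contrived sketch of that difference: neither snippet below is anywhere near a speed hotspot, yet the first wakes the CPU ten times a second for nothing, while the second lets the machine sleep until something actually happens. (It assumes inotifywait from inotify-tools is available; the /tmp/ready path is just an example.)

# Polling: irrelevant for throughput, yet it wakes the CPU ten times a second.
while [ ! -e /tmp/ready ]; do
    sleep 0.1
done

# Waiting on an event instead: the process sleeps until a file is created in /tmp.
inotifywait -qq -e create /tmp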

So not only can the “just wait for hardware to catch up” approach no longer cover for slow code, but power consumption now demands caring about wasteful work even where it previously never mattered.

The free lunch is over.

Endorsing Scan Tailor

Monday, Mar 9, 2015, 14:50

I can never quite believe how phenomenal this application is. It feels like magic. Like what using computers should be like.

This is what it’s for:

An *interactive* post-processing tool for scanned pages. It performs operations such as page splitting, deskewing, adding/removing borders, and others. You give it raw scans, and you get pages ready to be printed or assembled into a PDF or DJVU file.

Emphasis mine.

It doesn’t try to create perfect documents automatically. It does an astonishing first pass, actually, but that still has many so-so guesses – far from perfect. The automation is not magic. (Or maybe, is only minor magic.) Even so, this first pass is very valuable, because you get to edit a passable starting point, rather than having to do the entire job from scratch. For this editorial work, the program gives you a range of image processing algorithms tailored very specifically for the job, using UI controls that are cast exactly in terms of your senses.

And therein lies the magic. If you have ever whittled away at some menial task using an inadequate tool, then switched to the right one for the job, you’ll know the feeling – suddenly it gets so much easier, it’s as though you were cheating. But the outcome is still entirely of your doing. This is what’s going on here – just drastically amplified by the fact that the material used to fashion the tool is not simple wood or metal but a general purpose computer.

A bicycle for the mind… and the hands of a craftsman.

You start with a dog’s breakfast of a scan, and you get yourself a document that looks like it came out of a vector graphics program – because that’s what the computer makes you capable of doing.

Magic.

Introducing Buftabline

Saturday, Nov 15, 2014, 10:10

Almost ever since I started using Vim, I have been in search of a satisfactory persistent visualisation of the buffer list. I have used many hacks over the years, some written by others, some of my own making. None ever felt right.

Roughly a year ago, I had a sudden moment of clarity: Vim has long had the functionality required to build this feature; it just usually serves a different purpose. Namely, ever since Vim gained support for tabs, it has been able to render them both in text mode and as GUI tabs. Their text mode rendering is exactly what a buffer list would need as well. Why not re- (or ab-)use the tabline for buffer tabs?

I implemented a first stab at this but it had several severe deficiencies. It scratched my own itch just well enough, however, so I dragged my feet for a year.

To my chagrin, I cannot even claim to have had this idea first, much less (in my tardiness) claim first implementation: Airline shipped an implementation of this exact idea at the time it occurred to me independently. Oh well. (Subsequently I discovered several more plugins, all of which seem to be younger than Airline’s implementation.)

But recently I finally got around to finishing up my own script to the point where it is actually releasable. So without further ado: Buftabline.

It is designed with the ideal that it should Just Work, and has no configurable behaviour: drop it into your configuration and you’re done. Needless to say I like my own take better than its competitors’; but I have also contrasted it with each of them in the documentation, so you can make your own call on that.
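For what it’s worth, installation is as frictionless as the rest – for instance, a single clone into Vim’s native package path (a sketch; it assumes a Vim with package support and that the plugin lives at github.com/ap/vim-buftabline, but any plugin manager works just as well):

git clone https://github.com/ap/vim-buftabline ~/.vim/pack/plugins/start/vim-buftabline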

Share and enjoy.

Lingua programmatica

Wednesday, Sep 17, 2014, 07:44

I program in English and always have, because the builtins and standard library of (asymptotically) every programming language are named in English. If you choose identifiers in another language, your source will inevitably be a mishmash of English plus that other language. I find the resulting jumble alternately jarring and grating, in equal measure.
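A contrived shell snippet shows the kind of hybrid I mean (German identifiers here, purely as an example) – the commands and syntax stay English no matter what you do:

# English commands and keywords, German identifiers – the jumble in question.
anzahl_zeilen=$(wc -l < bericht.txt)
if [ "$anzahl_zeilen" -eq 0 ]; then
    echo "Der Bericht ist leer."
fi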

On occasion of Twitter’s recently announced plans

Monday, Sep 8, 2014, 11:07

Thesis: algorithmically curating the feed that your service provides is an implicit admission that your service is a net negative for its average user because its signal-to-noise ratio is too terrible to harvest value from it without programmatic aid.

(Behold, Facebook, what “frictionless sharing” hath wrought.)

Or as I quipped recently:

Nobody hangs out on Twitter any more. It’s too crowded.

Sensible Git mail for the occasional user

Tuesday, Mar 11, 2014, 05:21

This entry is mostly for my own benefit, since – being an occasional user – I keep having to figure this out from scratch. What I was aiming for was a workflow that gives me both convenience and full control.

I don’t like git send-email. The command assembles and sends mail all in one go, so you have to know what it is going to do without ever getting to see it. Running git format-patch manually first only helps a little, since git send-email still performs its own mail assembly on top of its input. It takes practice – using it frequently for a while and looking at the results. I don’t use it enough for that.

Unfortunately the patchmail I do send every once in a while goes to places with many subscribers, and I’m unwilling to rattle their inboxes with my (repeat!) learning process. I want to be sure I’ll send exactly the mail I mean to, on the very first try. Of course I could dry-run my patchmails by sending them to myself first – but that’s fiddly, and always one command-line typo away from making a mistake public.

I also have msmtp set up on my home server with all the details of my SMTP accounts and really don’t want to maintain another copy of that information on my laptop, especially on a per-repository basis.

So for me, the answer is to avoid git send-email entirely.

The key realisation is that once a mail is properly formatted, sending it is nothing more than piping it to /usr/bin/sendmail (or whatever equivalent you employ) – so you actually need only git format-patch.

There is just one little wrinkle to take care of: the “From ” line it generates needs to be removed before its output can be piped to sendmail. This is easily done using formail, which can also split mboxes, making it a convenient companion to git format-patch --stdout.

Bottom line, this replicates git send-email:

git format-patch --stdout origin | formail -I 'From ' -s sendmail -t

(Obviously, you season the git format-patch invocation with --to whomever@example.net etc., to your taste.)
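For instance, a fully seasoned invocation might look like this (the addresses are placeholders, of course):

git format-patch --stdout --to=maintainer@example.net --cc=list@example.net origin |
    formail -I 'From ' -s sendmail -t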

Here, git format-patch produces an mbox-format mail folder, which formail splits into individual mails (“-s …”), and for each mail, deletes the From line (“-I 'From '”) and then pipes the mail to sendmail -t.

But note what I gained here:

I can omit the pipe.

I can just call git format-patch by itself. That allows me to inspect the exact mail that will be sent. In fact, if you don’t pipe the output anywhere, Git will invoke the pager for you and even highlight the diffs within the attachments: excellent. Very convenient for careful reading. Then if I’m satisfied, I add the pipe and out go the patches. (This is much like sending a dry run to myself first, then sending for real, except with all the friction removed.) I know exactly what is being mailed out. There is no black box.
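So a session boils down to running the same command twice – once bare to review, once piped to actually send:

# Review pass: no pipe, so Git pages the mbox with the diffs highlighted.
git format-patch --stdout origin

# Happy with it? Add the pipe and out go the patches.
git format-patch --stdout origin | formail -I 'From ' -s sendmail -t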

And if there is a cover letter I need to edit? Then I just pipe to a file first, before piping to formail. In between those steps, I can edit the file – whether in Vim directly, or using mutt -f mbox.
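A sketch of that variant (series.mbox is just an arbitrary file name):

# Write the series, cover letter included, to a file instead of sending it.
git format-patch --cover-letter --stdout origin > series.mbox

# Edit the cover letter (and anything else) in place...
vim series.mbox    # or: mutt -f series.mbox

# ... then send the whole series exactly as before.
formail -I 'From ' -s sendmail -t < series.mbox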

Of course, this isn’t necessarily for everyone. You need a machine with procmail installed (for formail) and a sendmail-compatible MTA set up. You also need to be comfortable working with mail in the raw. There is, after all, no black box.

But it works for me. Sending patchmail is no longer something I put off.