“Not my favorite language” ♠
So, five years after iOS killed Adobe Flash, Apple releases a language that looks and feels remarkably like ActionScript 3. Just like ActionScript, Swift is locked behind a $1000 paywall. Instead of bindings to proprietary stuff from Adobe, it has bindings to proprietary stuff from Apple.
For Mozilla ♠
Mozilla have stood on principle in the past, by refusing to implement H.264 format video. It made no difference. […] There’s always been an edge of, well, they’re doing the right thing which means that I don’t have to. Firefox should stand on principle here and refuse to play DRMed videos… but of course I’m not going to stop using DRMed video, I’ll just use Safari for that. […] If you dislike Mozilla doing this (which I do, too), then where’s the outcry against Apple and Microsoft and Google for doing the same thing? Where’s the outcry against them for doing it first? Mozilla helps keep the web open for us, but in return we have to help keep the web open for Mozilla. And we aren’t.
I got 5 on it ♠
Earlier today, a virus signature from the [over 25 years old] virus “DOS/STONED” was uploaded into the Bitcoin blockchain, which allows small snippets of text to accompany user transactions with bitcoin. Since this is only the virus signature and not the virus itself, there apparently is no danger to users in any way. However, MSE recognizes the signature for the virus and continuously reports it as a threat, and every time it deletes the file, the bitcoin client will simply re-download the missing blockchain.
Whoever did this, I commend their sense of humour.
A message of hope ♠
RSS is one of the last holdouts of a more open web and it’s been gratifying to see that there’s enough interest in it to sustain some great independent services that care more about the product than eyeballs.
It used to be that the web was a platform – not just its technical underpinnings, but the content itself that was on the web. Then it was gradually reduced to a substrate for supporting a bunch of oil rigs, each isolated. Or so it seems; it is easy to forget that the web still has the platform nature. The magic of the hyperlink has survived, in fact has even been preserved with some care by the new barons. Even Facebook or Google+ ain’t AOL. But how much promise has been lost (cf.). Yet with the nature of the web never having gone anywhere, maybe what was can be again.
Tuesday, Mar 11, 2014, 05:21
This entry is mostly for my own benefit, since – being an occasional user – I keep having to figure this out from scratch. What I was aiming for is a workflow that gives me both convenience and full control.
I don’t like git send-email. The command assembles and sends mail all in one go, so you have to know in advance what it will do. Running git format-patch manually first only helps a little, since git send-email still performs its own mail assembly on top of its input. It needs practice – using it frequently for a while and looking at the results. I don’t use it enough for that.
Unfortunately the patchmail I do send every once in a while goes to places with many subscribers, and I’m unwilling to rattle their inboxes with my (repeat!) learning process. I want to be sure I’ll send exactly the mail I mean to, on the very first try. Of course I could dry-run my patchmails by sending them to myself first – fiddly and always one command line typo away from making any mistake public.
I also have msmtp set up on my home server with all the details of my SMTP accounts and really don’t want to maintain another copy of that information on my laptop, especially on a per-repository basis.
So for me, the answer is to avoid git send-email entirely.
The key realisation is that once a mail is properly formatted, sending it is nothing more than piping it to /usr/bin/sendmail (or whatever equivalent you employ) – so you actually need only git format-patch.
There is just one little wrinkle to take care of: the “From ” line it generates needs to be removed before its output can be piped to sendmail. This is easily done using formail, which can also split mboxes, making it a convenient companion to git format-patch --stdout.
Bottom line, this replicates git send-email:
git format-patch --stdout origin | formail -I 'From ' -s sendmail -t
(Obviously, you season it with --to email@example.com etc. to taste.)
git format-patch produces an mbox-format mail folder, which formail splits into individual mails (“-s …”) and, for each mail, deletes the From line (“-I 'From '”) and then pipes the mail to sendmail.
But note what I gained here: I can omit the pipe.
That allows me to inspect the exact mail that will be sent. In fact, if you don’t pipe the output anywhere, Git will invoke the pager for you and even highlight the diffs within the attachments: excellent. Then if I’m satisfied, I add the pipe and out go the patches. (This is much like sending a dry-run to myself first, then sending for real, except with all the friction removed.) I know exactly what is being mailed out. There is no blackbox.
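For illustration, here is the inspect-then-send cycle run against a throwaway repository. The repo, file names, and commit messages are stand-ins of mine, and sed is used as a stand-in for formail (since procmail may not be installed) – it strips the mbox “From ” line of a single patch the same way, but does not actually send anything:

```shell
set -e
# Throwaway repo so the pipeline can be shown end to end;
# the repo, file, and commit messages are stand-ins
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo v1 > file.txt; git add file.txt; git commit -qm 'base'
git tag base
echo v2 > file.txt; git commit -qam 'fix: the actual change'

# Dry run: no pipe, so the exact outgoing mail can be inspected
git format-patch --stdout base

# When satisfied, add the pipe. sed stands in here for
# `formail -I 'From ' -s sendmail -t`: it deletes the mbox
# "From " separator of a single patch, but sends nothing
git format-patch --stdout base | sed '1{/^From /d;}'
```

With a real series and a working MTA, the second command would be the formail pipeline from above, verbatim.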
And if there was a cover letter I needed to edit? Then I just pipe to a file first, before piping to formail. In between those steps, I can edit the file – whether in Vim directly, or using mutt -f mbox.
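A sketch of that detour through a file, again with a throwaway repo; the series.mbox name and the editor choice are placeholders of mine, and the actual formail/sendmail step is left as a comment since it would really send mail:

```shell
set -e
# Another throwaway repo; names below are stand-ins
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo a > f; git add f; git commit -qm 'base'
git tag base
echo b > f; git commit -qam 'a change'

# Park the series, cover letter included, in a file first
git format-patch --stdout --cover-letter base > series.mbox

# Edit the cover letter in place...
#   vim series.mbox          # or: mutt -f series.mbox
# ...then pipe the file onward exactly as before:
#   formail -I 'From ' -s sendmail -t < series.mbox

grep -c '^From ' series.mbox   # counts the mails: cover letter + patches
```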
Of course, this isn’t necessarily for everyone. You need a machine with procmail installed (for formail) and a sendmail-compatible MTA set up. You also need to be comfortable working with mail in the raw. There is, after all, no blackbox.
But it works for me. Sending patchmail is no longer something I put off.
Monday, Dec 30, 2013, 14:50
Sunday, Dec 22, 2013, 00:32
Hypothesis: ideal line length is in the range where you can glance at the end of the line peripherally while fixating on its start, because (further hypothesis) this allows your focus to travel toward a fixation point rather than carefully trying to stay in the narrow track of the line.
(From this it would follow that large leading only helps by mitigating the difficulty of tracking an over-long line, but leaves the underlying problem of a missing destination fixation point unaddressed.)
Monday, Oct 7, 2013, 09:11
v1 The less words you use to tell a story, the more effective its message will be, and a greater number of people will read it to completion. Anyone who cares about the user experience in regards to their software, will tell you “people don’t read.” Which while being somewhat accurate, really isn’t the case. People do read, they just value their time. That is just as important in blogging. Protect your reader’s time, and deliver the message as quickly as possible.
v2 Many designers will say users don’t read text, and therefore, you should have as little copy as possible. This is a lie. Users do read text. Users protect one thing above all: their time. The more text they have to read, the more time of theirs is lost. Protect their time by delivering strong messages with fewer words.
v3 Reading takes time. The less reading you force someone to do, the more time you save them.
v4 Fewer words create a more powerful message.
(Excuse the full-quote – I will refer to different parts repeatedly, so I thought it necessary.)
This is a perfect illustration of the effect that twitterization has on ideas: all nuance and tangent is brutally sawed off until only platitude survives into writing. Elliot’s v4 is so vacuous as to not be worth saying at all. Still, it is an improvement on v3 – because though longer, v3 somehow manages to be even worse. V1 may be unnecessarily long, but among these it is the only one worth writing, because the only one worth reading, because the only one containing an idea. Even in v2, the idea is already watered down: the writing is clumsier and more redundant than in v1, in spite of the reduction in length.
But all of my criticism so far is too myopic. Step back and you’ll notice the real blunder: Elliot missed his own point! He went for brevity above all else, and his message suffered for it. People do read, he asserted – they just value their time… which v4 is a waste of. Presumably the grim takeaway so far is that if you aren’t sure which parts of your writing can be removed and which need keeping, maybe you should go ahead and remove all of it…
Fine, that was the wrong direction. What might be a better one? How do you nudge the text toward conveying ideas? I admit that because I find the arguments here somewhat ill-matched (not in any way that can’t be fixed, mind you – just not without going into the matter at greater length), I am having some difficulty rewriting the text without altering the message. But if I restrain myself to just run with what’s there for the sake of this exercise, I arrive at something like this:
v2 Waste no words in telling a story and it will be effective and keep more readers’ attention until they finish it. Designers will often say that users don’t read; the reality is they just value their time. This is true in blogging too – because it is true everywhere. Don’t waffle.
And that is where I’d stop.
Purely in terms of metrics, this is notably shorter than Elliot’s v2. I think it also manages to retain all the ideas from v1 in spite of its brevity, though that is clearly subjective. Of course it isn’t (nor can be) anywhere near as short as v3 or v4. But is it more powerful? (V4’s own words!)
Oh, and if Elliot really wanted to be serious about it?
v5 Short is powerful.
Friday, Jun 28, 2013, 22:38 (updated Wednesday, Aug 21, 2013, 06:17)
Recently it occurred to me that I had been using the same main SSH key for almost 15 years. I had minted a second one for GitHub when I signed up there, but that was it. Worse, both of them were only 1024 bit strong! That may have been fine 15 years ago when I minted the first one, but it certainly isn’t now. They were also both DSA keys, which turns out to have a systematic weakness. (Plus, old versions of the FIPS standard only permitted 1024-bit DSA keys.)
This had to be fixed. And I wanted an actual regime for my keys, so I wouldn’t repeat this.
Naturally, my new keys are all RSA and 8192 bit strong. Yes, 8192 – why not? I worried about that slowing down my SSH connections, but I knew adding key length only increases the cost of the handshake phase, and if my SSH connections are taking any longer to set up now, I haven’t noticed. Even if I did notice, SSH now supports connection sharing (which I have enabled), so that only the initial connection to a host would even experience a meaningful delay. And since I combine that with autossh to set up backgrounded master connections to the hosts I shell into frequently, most of my connections are nearly instant, and always will be, irrespective of key strength.
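For reference, connection sharing is a client-side setting; a minimal sketch of the relevant ~/.ssh/config stanza (the socket path and the ten-minute persistence are choices of mine, not prescribed):

```
# ~/.ssh/config: multiplex sessions over one shared connection
Host *
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 10m
```

A backgrounded master connection can then be kept alive with something like autossh -M 0 -f -N somehost.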
So how many keys does it make sense to have?
My first impulse was to mint one key pair for each server I would be shelling into. But as I’ll explain, that isn’t what I ended up doing.
I spent a while reading and thinking.
In terms of private key security, my situation is that on every machine on which I work at a physical console, I run an SSH agent with most or all of my private keys loaded. I also have a few passphrase-less keys on other machines, for use by various scripts. (Logins using such keys are restricted to specific commands.) In all cases, an attacker who gained access to any one of these keys would almost certainly have access to all the other keys on the same machine. So there is no security to be gained from using different SSH keys for different servers from the same client. But it does make sense for each client to have its own private keys.
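Such a command restriction is expressed in the server-side authorized_keys file; a sketch, where the script path, the chosen options, and the key material are all placeholders:

```
# ~/.ssh/authorized_keys on the server: this key may only run one command
command="/usr/local/bin/nightly-sync",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... backup@client
```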
A simple way to encapsulate this as a rule is: never copy a private key to another machine.
In the trivial case, this means one private key for each client machine, with the public key copied to every server to be accessed from that machine. There is, however, a potential privacy concern with this: someone who can compare public keys between several systems can correlate accounts on these systems to each other. Because of this, I share public keys only between servers where I wouldn’t mind if they could be thus correlated.
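Minting such a per-client key might look as follows. The file name and comment are placeholders, and the empty passphrase is only there to keep the sketch non-interactive – a key for interactive use should get a real one:

```shell
set -e
keydir=$(mktemp -d)    # demo directory; real keys go in ~/.ssh
# -N '' skips the passphrase prompt for this sketch only
ssh-keygen -q -t rsa -b 8192 -N '' -C "laptop 2013-06" -f "$keydir/id_rsa_laptop"
ls "$keydir"
# Only the public half ever leaves this machine:
#   ssh-copy-id -i "$keydir/id_rsa_laptop.pub" user@host.example.com
```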
The upshot is that I have one main key pair for use on my home network and on several other machines under my direct control; plus a few more key pairs (e.g. my new GitHub key) used for maybe 3 shell accounts each. (It so happens that I only have a single machine on which I run an SSH agent – my laptop – but not long ago, there were 3.) Lastly, I have deleted all copies of any private keys I had distributed among my machines.