Privacy vs confidentiality in protocols
TLS does not provide privacy. What it does is disable anonymous access to ensure authority. It changes access patterns away from decentralized caching to more centralized authority control. That is the opposite of privacy. TLS is desirable for access to account-based services wherein anonymity is not a concern (and usually not even allowed). TLS is not desirable for access to public information, except in that it provides an ephemeral form of message integrity that is a weak replacement for content integrity.
[…] TLS everywhere is great for large companies with a financial stake in Internet centralization. It is even better for those providing identity services and TLS-outsourcing via CDNs. […]
If the IETF wants to improve privacy, it should work on protocols that provide anonymous access to signed artifacts (authentication of the content, not the connection) that is independent of the user’s access mechanism.
[Slightly reordered for the purposes of quoting.]
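The distinction the quote draws — authenticating the content rather than the connection — can be sketched concretely. Here is a minimal Python illustration of content integrity checked against an out-of-band digest, independent of which mirror, cache, or transport delivered the bytes. This is only the weak hash form; real signed artifacts would use an actual signature scheme such as Ed25519 or GPG, and all names here are illustrative.

```python
import hashlib

def verify_content(data: bytes, expected_sha256: str) -> bool:
    """Verify an artifact against a digest published out of band.

    The check depends only on the bytes themselves, not on who
    served them or over what kind of connection.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"some public document\n"
published_digest = hashlib.sha256(artifact).hexdigest()

print(verify_content(artifact, published_digest))      # True
print(verify_content(b"tampered!", published_digest))  # False
```

Because the verification is attached to the artifact rather than the session, any number of untrusted intermediaries can cache and redistribute the content without being able to alter it undetected.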
Tuesday, Mar 31, 2015, 09:39
Chris Siebenmann is skeptical about Mosh, which spurred me to write down my own feelings about it:
I ♥ Mosh!
When I’m traveling, I generally connect through hotel and conference networks that almost always exhibit wildly fluctuating latencies, often-miserable throughput, and in some cases near-tragic levels of packet loss. Just using Screen/Tmux/dtach over an SSH connection (as a commenter on Chris’ entry suggested) doesn’t come close to cutting it, because SSH itself is utterly unusable there. I’ve had connections take dozens of dozen-second tries to establish at all, only to drop within seconds every time I managed to get one. On such a network, UDP is essential to get even an inkling of a usable connection. And only protocol-integrated predictive terminal emulation can make full use of the benefits UDP offers then.
Then I embark on the journey back, and I throw Mosh out of my setup the minute I step off my return flight (or whatever else I am travelling on). If I have a network connection worth calling that, then the way Mosh operates – built-in non-optional detaching, inability to support SSH’s various forms of forwarding, and server-side per-session dæmons that never go away on error conditions – is grating. I can’t stand using it when I’m back in my usual haunts where I enjoy decent networking. (And it really doesn’t take a lot to count as a decent network in this sense.)
So there you go. I’m very glad it exists, because when I need it, it’s a sanity lifeline. But it cannot do its job unseen, and you’re in a real bad place if you need it. So however grateful I may be, I much prefer to neither need nor use it.
If you think this sounds schizophrenic, that’s because it is. Indeed this is a strange endorsement, I realise that. But let me assure you: when I need Mosh, I am so happy it exists.
Monday, Mar 9, 2015, 15:18
I hear people occasionally mention that CPU clock rates have been stuck around 3 GHz for a while. They say that multicore, rather than higher clock rates, is the trend. But it seems few realise how far this while has stretched.
The last time CPU clock rates went up was when the Pentium 4 Prescott 570J came out and hit 3.8 GHz – in 2005.
CPU clock rates have now been frozen for a decade. Exponential increases in single-threaded performance are well and truly over.
We now live in the titular era: the era of Three Gigahertz Forever.
Meanwhile, Moore’s Law has continued to apply, so we have still been getting ever faster chips despite the new situation. But the gains have moved sideways into multi-threaded performance. That’s not a “trend” any more, it’s the status quo.
Even with that, we have arrived on the outskirts of the era of Dark Silicon. An end to Moore’s Law itself is within sight. While it’s unclear how far off it really is – it’s hard to tell the distance of something you don’t know the size of – we now find the eventual end of Moore’s Law to be a tangible inevitability.
What’s more, power consumption has increasingly gained importance as an optimisation target. Now of course optimising for power consumption largely has the same basic shape as optimising for speed – try to achieve the goal with less work, try to avoid needing to achieve the goal at all. Where power consumption differs is that it can be sensitive to wasted work even in non-hotspot areas of the code.
So not only can the “just wait for hardware to catch up” approach no longer cover for slow code, but power consumption now demands caring about wasteful work even where it previously never mattered.
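One concrete example of work a speed profile barely notices but a power profile does: polling. A hypothetical Python sketch (names and intervals are illustrative) contrasting a polling loop with a blocking wait – the polling version burns CPU wakeups in a part of the code no hotspot profiler would ever flag:

```python
import threading
import time

done = threading.Event()

def poll_until_done(interval: float = 0.01) -> int:
    """Wasteful: wakes the CPU over and over just to find nothing changed."""
    wakeups = 0
    while not done.is_set():
        wakeups += 1
        time.sleep(interval)
    return wakeups

def wait_until_done() -> None:
    """Frugal: the OS keeps the core idle until the event is actually set."""
    done.wait()

# Signal completion after 100 ms, then observe how much busywork polling did.
threading.Timer(0.1, done.set).start()
print(poll_until_done(), "pointless wakeups")
```

Either version finishes at essentially the same wall-clock moment, which is exactly why this kind of waste stays invisible when speed is the only optimisation target.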
The free lunch is over.
Monday, Mar 9, 2015, 14:50
I can never quite believe how phenomenal this application is. It feels like magic. Like what using computers should be like.
This is what it’s for:
An interactive post-processing tool for scanned pages. It performs operations such as page splitting, deskewing, adding/removing borders, and others. You give it raw scans, and you get pages ready to be printed or assembled into a PDF or DJVU file.
It doesn’t try to create perfect documents automatically. It does an astonishing first pass, actually, but that still has many so-so guesses – far from perfect. The automation is not magic. (Or maybe, is only minor magic.) Even so, this first pass is very valuable, because you get to edit a passable starting point, rather than having to do the entire job from scratch. For this editorial work, the program gives you a range of image processing algorithms tailored very specifically for the job, using UI controls that are cast exactly in terms of your senses.
And therein lies the magic. If you have ever whittled away at some menial task using an inadequate tool, then switched to the right one for the job, you’ll know the feeling – suddenly it gets so much easier, it’s as though you were cheating. But the outcome is still entirely of your doing. This is what’s going on here – just drastically amplified by the fact that the material used to fashion the tool is not simple wood or metal but a general purpose computer.
A bicycle for the mind… and the hands of a craftsman.
You start with a dog’s breakfast of a scan, and you get yourself a document that looks like it came out of a vector graphics program – because that’s what the computer makes you capable of doing.
Saturday, Nov 15, 2014, 10:10
Almost ever since I started using Vim, I have been in search of a satisfactory persistent visualisation of the buffer list. I have used many hacks over the years, some written by others, some attempts of my own. None ever felt right.
Roughly a year ago, I had a sudden moment of clarity: Vim has long had the functionality required to build this feature; it just usually serves a different purpose. Ever since Vim gained support for tabs, it has been able to render them both in text mode and as GUI tabs. The text-mode rendering is exactly what a buffer list would need as well. Why not re- (or ab-)use the tabline for buffer tabs?
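The mechanics, for the curious: Vim exposes the text-mode tab bar through the 'tabline' option, which can point at a function returning any string in tab-label format. A minimal sketch of rendering the buffer list there (illustrative only – this is not the plugin’s actual code):

```vim
" Show the bar even with a single tab page, and delegate its contents.
set showtabline=2
set tabline=%!BufferTabline()

function! BufferTabline() abort
  let s = ''
  for b in range(1, bufnr('$'))
    if buflisted(b)
      " Highlight the current buffer as if it were the selected tab.
      let s .= b == bufnr('%') ? '%#TabLineSel#' : '%#TabLine#'
      let s .= ' ' . b . ':' . fnamemodify(bufname(b), ':t') . ' '
    endif
  endfor
  return s . '%#TabLineFill#'
endfunction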
I implemented a first stab at this but it had several severe deficiencies. It scratched my own itch just well enough, however, so I dragged my feet for a year.
To my chagrin, I cannot even claim to have had this idea first, much less (in my tardiness) claim first implementation: Airline shipped an implementation of this exact idea at the time it occurred to me independently. Oh well. (Subsequently I discovered several more plugins, all of which seem to be younger than Airline’s implementation.)
But recently I finally got around to finishing up my own script to the point where it is actually releasable. So without further ado:
It is designed with the ideal that it should Just Work, and has no configurable behaviour: drop it into your configuration and you’re done. Needless to say, I like my own take better than the competitors’; but I have also contrasted it with each of them in the documentation, so you can make your own call on that.