Newsfeed

Fixing a Google Chrome failure to save passwords

Sunday, 2 Jul 2017

This is the kind of post that people used to write back in the heady early days of blogging and a more communal web: putting something out there to help Google help other people.

For some time I had been having an irritating persistent failure with Google Chrome that I could not find an answer for:

  • After logging into some website, Chrome would offer to save the password, as usual.
  • I would click on the save button.
  • Chrome would not show any kind of error.
  • But the password would not be saved: neither would it be filled in automatically the next time I went to the same site, nor would any password at all show up in the list on chrome://settings/passwords.

The list of passwords would just stay blank no matter what I did.

Mysteriously, a handful of passwords did get stored, somehow, somewhere. Chrome could fill those in even as it wouldn’t list them on the settings screen.

Searching the web, almost all answers I could find related to the case of people who are logged into Google within Chrome and use its password syncing service. I don’t – I simply want my passwords saved locally. The few answers I found that seemed to relate to my situation invariably suggested resetting one’s profile. Now, that approach does appear to be more than mere superstition: of the people I found who had this problem, the ones who reported back all wrote that resetting their profile fixed the problem.

So I had a way of making the problem go away – but I also have a lot of data in my profiles. It’s not just bookmarks. I have tweaked many of the settings, individually for each profile (which is the whole point of using them, after all), and more importantly I use a number of extensions that themselves have extensive configurations. Recreating all of that is a big task.

So I went poking around where Chrome stores its user-specific data, inside the user’s home directory:

  • Mac: Library/Application Support/Google/Chrome
  • Linux: .config/google-chrome
  • Windows: AppData\Local\Google\Chrome\User Data

In there, the main profile resides in a directory called Default, while additional profiles are found in Profile 1, Profile 2 etc.

Within each profile, there is a file called Login Data. It’s an SQLite database (as are many of the files in there), and therefore accompanied by a corresponding Login Data-journal file. Deleting that pair of files fixes password saving in the profile in question without affecting the rest of it. (Note: you must quit Chrome first.)

So a full profile reset is not necessary – you can reset just the password storage by deleting just the files that it uses.
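
For the impatient, here is that reset as a small script. A sketch only: it assumes the Linux location and the Default profile, so substitute the path for your platform and profile from the list above, and quit Chrome before running it.

    #!/usr/bin/env perl
    # Sketch: reset just the password storage of one Chrome profile by
    # deleting its Login Data database. Quit Chrome before running this.
    # The path below assumes Linux and the Default profile; adjust to taste.
    use strict;
    use warnings;

    my $profile = "$ENV{HOME}/.config/google-chrome/Default";

    for my $name ( 'Login Data', 'Login Data-journal' ) {
        my $path = "$profile/$name";
        next unless -e $path;
        unlink $path or die "could not delete $path: $!\n";
        print "deleted $path\n";
    }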

This does mean you lose any passwords you had stored previously, unfortunately. But since you cannot really access them any more anyway, that data loss has effectively already happened by the time you delete the files.

My dependence on Big Tech

Saturday, 6 May 2017

Farhad Manjoo:

What’s the order in which you would drop Apple, Amazon, Google, Facebook from your life, if forced to — from first to last.

  1. Facebook is already not part of my life. (That’s a lie, but it’s at a minutes-a-month level. And they are most reluctant minutes.)

  2. Apple… I would greatly miss the quality of various aspects of their products, but using them already requires compromise. So I would make do.

  3. Amazon is mostly of use to me in terms of research: I use it to browse reviews and to find niche products I wouldn’t know how to search for otherwise. It’s not a big part of my life and I try to keep it that way, but losing it would be painful on those occasions where I have come to rely on it.

  4. Google’s loss would be painful every single day. I wish I could sever my dependence on their web search and their maps, so I’ve looked for alternatives repeatedly and in anger. None come close.

(The distances are nowhere near even. Facebook trails the pack by a very large gap; Google leads it with a significant one.)

My scraped feeds are now my scrapped feeds

Sunday, 16 Oct 2016

I used to host a handful of scraped newsfeeds here at /feeds/. Over the years, their number shrank as most scraped sites set up their own feeds, then excitement over feeds evaporated. While I wasn’t looking, subscribers bled away, then disrepair set in.

This is the end of the line. It was time. So it goes.

Useful GitHub Issues overviews

Sunday, 24 Apr 2016

I’ve always found the default, easily available views of GitHub Issues inadequate for my purposes. I want to separate issues by the kind of action I’ll want to take, but the interface is fundamentally oriented around a single list of issues, and by default that is just a big dump of every issue that involves you in some way. Luckily all the buttons are just UI over a query language, and the query language turns out to be just barely powerful enough to allow me to get the overviews I really want.

So here are the queries I arrived at. Together they approximate a basic dashboard. Unfortunately there is not, to my knowledge, a keyword in the query language to refer to “whoever the currently logged in user is”, so I cannot demonstrate them as effectively as I’d like: you will have to manually edit them to substitute your username for mine.

  • is:open user:ap

    This shows all issues filed by anyone against any repositories that I own.

    Semantically, this one is “stuff I need to fix”.

  • is:open author:ap -user:ap

    This shows all issues I have filed against repositories I do not own.

    Semantically, this one is “stuff I need to keep bugging others about”.

  • is:open involves:ap -author:ap -user:ap

    This shows all issues filed against repositories I do not own, which I have merely commented on or been mentioned in.

    Semantically, this one is “stuff I care about as a bystander”.

That collection gives me a reasonable handle on everything I need to take care of one way or another, which I could not get from GitHub’s own built-in views.
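
Since the missing piece is only the username substitution, a trivial script can at least stamp a name into the queries and print ready-made links. A sketch, assuming GitHub’s issue search at github.com/issues accepts the same queries through its q parameter:

    #!/usr/bin/env perl
    # Sketch: stamp a username into the three queries above and print
    # dashboard links. Assumes https://github.com/issues?q=... accepts the
    # same search syntax as the web UI's filter box.
    use strict;
    use warnings;
    use URI::Escape 'uri_escape';

    my $user = shift @ARGV or die "usage: $0 <github-username>\n";

    my @views = (
        [ 'stuff I need to fix'                  => "is:open user:$user" ],
        [ 'stuff to keep bugging others about'   => "is:open author:$user -user:$user" ],
        [ 'stuff I care about as a bystander'    => "is:open involves:$user -author:$user -user:$user" ],
    );

    for my $view ( @views ) {
        my ( $label, $query ) = @$view;
        printf "%-40s https://github.com/issues?q=%s\n", $label, uri_escape( $query );
    }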

Good Mail Sorting

Monday, 21 Dec 2015

When I started writing this, 13 months ago or so, I had been frenetically hacking my mail rig for a couple days. I sat and wrote, because it had been a revelation.

Setting

For years I used a setup on my home server where cron would kick off fetchmail (since replaced by mpop), which would in turn invoke procmail to deliver each received mail, putting it somewhere in my set of folders. Then I read that mail in mutt over an SSH connection.

I love mutt.

Act Ⅰ

My first impetus to do something was frustration over my VoIP telephony voicebox notifications. Those come as mail with the voicemail attached as a sound file. Because I can’t well play them in a mutt running on another machine, I have to get the attachment out and onto my laptop.

Previously I tried to automate this by having procmail send these mails to a script that would extract the attachment and discard the mail itself. I never quite got this working right, so I never actually used it.

This time I succeeded – by instead having the script scan my inbox for new mail after delivery. (It looks for voicebox mail, plucks out the sound file, then deletes the mail.) Since my inbox is a Maildir, this is easy code: a readdir to list the mails, an open to scan them, an unlink to delete the matching ones. So now I never deal with voicebox mails directly any more: I just run this script against the network share when I’ve missed calls, and up pop some files on my laptop, which I can play. Convenient.
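
The shape of it is roughly this – a sketch only, with made-up paths and a hypothetical sender address, leaning on MIME::Parser for the attachment surgery; the real script may differ in the details.

    #!/usr/bin/env perl
    # Sketch: scan a Maildir inbox for voicebox notification mails, have a
    # MIME parser write their decoded parts to an output directory, then
    # delete the mails. Paths and the sender address are assumptions.
    use strict;
    use warnings;
    use MIME::Parser;

    my $maildir = "$ENV{HOME}/Maildir";        # assumed inbox location
    my $outdir  = "$ENV{HOME}/voicemail";      # where the sound files land
    my $sender  = 'voicebox@example.net';      # hypothetical notification sender

    my $parser = MIME::Parser->new;
    $parser->output_dir( $outdir );

    for my $sub ( qw( new cur ) ) {
        opendir my $dh, "$maildir/$sub" or next;
        for my $name ( grep { !/^\./ } readdir $dh ) {
            my $path = "$maildir/$sub/$name";

            # Cheap header scan first, so only matching mails get parsed fully.
            open my $fh, '<', $path or next;
            my $is_voicemail = 0;
            while ( my $line = <$fh> ) {
                last if $line =~ /^\s*$/;          # end of headers
                $is_voicemail = 1 if $line =~ /^From:.*\Q$sender\E/i;
            }
            close $fh;
            next unless $is_voicemail;

            # parse_open decodes the parts into $outdir as a side effect;
            # report the audio ones, then get rid of the mail itself.
            my $entity = $parser->parse_open( $path );
            for my $part ( $entity->parts ) {
                next unless $part->mime_type =~ m{^audio/}i;
                print $part->bodyhandle->path, "\n";
            }
            unlink $path or warn "could not delete $path: $!\n";
        }
        closedir $dh;
    }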

(Not as convenient as clicking a play button in a GUI or web mail client, granted. But using one of those would be far more inconvenient on other counts. Tradeoffs. (Have I mentioned I love mutt?))

Act Ⅱ

So I then realised that I could use the exact same approach to write another script I had unsuccessfully attempted once before – namely, to mark some mail as read before I’ve even seen it.

I used to have a program that I wrote for this – again, invoked from my procmail configuration, delivering mail in procmail’s stead and marking it the way I wanted – but I stopped using it when I noticed that it would drop mail, rarely, under circumstances I never managed to pinpoint.

Now I have this functionality again, and now I can completely trust the code never to lose mail again, because all it does this time around is readdir and rename.
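
In Maildir terms, marking a message as read just means renaming it from new/ into cur/ with the Seen flag appended to its name, which is why readdir and rename suffice. A minimal sketch of that shape, with a made-up folder and rule rather than my actual configuration:

    #!/usr/bin/env perl
    # Sketch: mark matching messages in one Maildir folder as read. A read
    # message lives in cur/ with a ":2," flag suffix; "S" is the Seen flag.
    # The folder path and the rule below are illustrative, not my real ones.
    use strict;
    use warnings;

    my $folder  = "$ENV{HOME}/Maildir/.lists.example";      # assumed layout
    my $pattern = qr/^Subject: .*\[auto-notification\]/im;  # hypothetical rule

    opendir my $dh, "$folder/new" or die "not a Maildir? $folder/new: $!\n";
    for my $name ( grep { !/^\./ } readdir $dh ) {
        my $path = "$folder/new/$name";

        open my $fh, '<', $path or next;
        my $mail = do { local $/; <$fh> };                  # slurp
        close $fh;
        next unless $mail =~ $pattern;

        # Nothing but a rename: if anything above goes wrong, the message
        # simply stays unread in new/ where it was.
        ( my $base = $name ) =~ s/:2,.*\z//;
        rename $path, "$folder/cur/$base:2,S"
            or warn "could not rename $path: $!\n";
    }
    closedir $dh;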

I forgot how nice this setup is when it works. I have mutt set up to show only threads with new mail when I open a mailing list folder, so if the noisy notification traffic I don’t care about is already marked as read, it is essentially invisible to me. (I don’t want to killfile the chaff. I like correct threading so much, and there are sometimes replies to these mails, long chains of them even. I don’t want those floating about with no context.)

Anagnorisis — When mail processing is hard

Here’s the problem: the period during which a mail exists only in main memory, before it is delivered and on disk, is quite high-stakes. Code that handles mail during this period must be absolutely solid, reliable at all times without exception without fail, in order to avoid dropping mail on the floor. You can afford no errors. Any typo that aborts program execution will cost you mail. Any bug that manifests someday in the future will cost you mail. Editing the code and saving it in half-finished form before you’re done will cost you mail if cron kicks off a fetch at that moment. In short, a mistake of any kind tends to default to costing you mail. And so every possible error condition must be covered by failsafes and rescue measures. This is a very hard environment, grim and unforgiving.

However. Once the mail is on disk, and once it’s in a Maildir in particular, things get incomparably relaxed: the likelihood of losing the mail to a small mistake drops precipitously. You have to screw up very hard to manage to lose data. At that point, scripts processing mail just deal with files to move around. It’s not by any means impossible to write data loss bugs then too – but mistakes default to leaving your mail where it was. That’s a stress-free situation.

(Indeed with the Maildir-based scan-and-move approach, the mark-as-read program was basically trivial.)

Act Ⅲ

Now I had to figure out when to run this program. It had to have a way to specify rules for what messages to mark as read, and so it took rules as command line arguments. Then, because different folders need different rules, I wrote a shell script that invoked the program first on one folder with some rules, then another with others, etc.

Now when would I invoke that shell script, though? By hand, like the voicemail extractor? One thing I realised while writing the script was, I could also put the mpop invocation inside it, and then kick off that script from cron instead of invoking just mpop itself. Because of course the best time to run the script is right after new mail has (potentially) arrived.

It took a day until it dawned on me what I had just done.

If I have cron kick off a script that fetches mail and then only afterwards moves some of it around…

… then why do I need procmail at all?

Synthesis — Mail filtering can be easier and better

See, over the years, I have forever wanted to get rid of procmail, because its configuration syntax is so awful. But I have been putting it off ever since I read about Mail::Audit back in… yes, 2001. (Later along came Email::Filter, and later still some aid for dealing with the harsh environment in the form of Email::Pipemailer::DieHandler.)

But I knew that mistakes default to costing you mail, so I stuck with procmail grudgingly instead of trying to replace it with some custom code, because I trusted procmail with my mail’s arrival on disk – if not fully, then at least to an extent that I wouldn’t trust my own attempts. I’m only a dilettante at mail-handling code (not being familiar with the ground it has to cover), and in any case procmail is battle-tested in a way my own attempts will never ever be.

What I didn’t realise is that I don’t need to write code to do what procmail does. I can just have mpop deliver all of my mail to a transit folder, then invoke a script that scans this transit folder and kicks any mail it finds in there over to its destination.

This has some very attractive properties. Unconditional delivery to disk makes it very robust against losing mail – more robust, even, than procmail. Code that enumerates files and then moves them around is much harder to screw up catastrophically than code that is responsible for data that only exists in main memory. And thus I get to write the code that does the rule-checking and file-moving in a real programming language, rather than crappy procmail syntax.
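
To make the shape concrete, here is a sketch with invented folder names and rules. Cron runs mpop, which delivers everything into the transit Maildir, and then runs something like this immediately afterwards:

    #!/usr/bin/env perl
    # Sketch: post-delivery sorting. mpop has already written every new mail
    # to a transit Maildir; this scans it and renames each message into its
    # destination folder. Folder names and rules here are invented examples.
    use strict;
    use warnings;

    my $mailbase = "$ENV{HOME}/Maildir";
    my $transit  = "$mailbase/.transit";                 # assumed catch-all

    # Ordered rules: first match wins; anything unmatched goes to the inbox.
    my @rules = (
        [ qr/^List-Id: .*<p5p\.example\.org>/im => '.lists.p5p' ],
        [ qr/^From: .*voicebox\@example\.net/im => '.voicemail' ],
    );

    opendir my $dh, "$transit/new" or die "cannot open $transit/new: $!\n";
    for my $name ( grep { !/^\./ } readdir $dh ) {
        my $path = "$transit/new/$name";

        open my $fh, '<', $path or next;
        my $mail = do { local $/; <$fh> };               # slurp
        close $fh;

        my $dest = $mailbase;                            # default: the inbox
        for my $rule ( @rules ) {
            my ( $pattern, $folder ) = @$rule;
            if ( $mail =~ $pattern ) { $dest = "$mailbase/$folder"; last }
        }

        # rename within one filesystem is atomic; any mistake above just
        # leaves the message sitting safely in the transit folder.
        rename $path, "$dest/new/$name"
            or warn "could not move $path: $!\n";
    }
    closedir $dh;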

Reflection

For over a decade, I was so fixated on doing the mail processing during delivery that I was blind to the much simpler approach that was staring me right in the face. After all, every tool that I considered using is designed this way: Mail::Audit; Email::Filter; Courier’s maildrop; common Sieve implementations; etc. So I never even made the connection that another approach was possible. It seemed like the obvious natural and sole way of doing it.

Until I finally did realise it. And kicked myself for days. To think: all the automation I could have set up years ago (cf. voicemail extractor); all the mail I could’ve not lost (not very much, all told, but all of it avoidable). Just the friction of trying to use the misshapen language of an ill-fitting tool, for years on end.

Don’t pick the wrong problem to fight with. Life is better that way.