GPG Replacement Just Needs to Be “Good Enough” For Now

A few days ago, Moxie Marlinspike wrote something that got the InfoSec community into an open debate.  His contention is that GPG has failed both philosophically and technologically, accumulating 20 years of cruft along the way.  He essentially calls for a restart, and calls GPG’s small installation base a blessing in disguise because it makes starting from scratch easier.

Not surprisingly, this drew a lot of very strong responses: some in favor, others against, and many looking for clarification.  I understand his point, and I agree with him in parts (mostly the philosophical) but am hesitant about others (mostly the technical).  What follows is based on a couple of posts I made on Slashdot.

Crypto is Hard, Sometimes Too Hard

Moxie is encouraging good encryption, but calling for updates (GPG hasn’t significantly changed since the mid-’90s) and a better wrapper. GPG is still largely a tool by geeks, for geeks. I couldn’t get my parents to use GPG because they’d dismiss it as too hard.  Even the suggested minimum settings vary based on where you look and when they were posted.

Example: An RSA key size of 2048 bits is largely considered secure, but NIST recommends 3072 bits for anything that one would want to keep secure into the 2030s. People often still see their e-mail as their private papers and may be concerned about who can read them well past then. So do they use 3072 bits, or go with the random crypto weblog guy who says to always use 4096? And why can’t they create the 8192- or 16384-bit keys that other software over there claims to support?
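To illustrate how arbitrary that choice feels, here’s a minimal sketch using Python’s third-party cryptography package (my pick purely for illustration; GPG itself never shows you this layer): the key size is just a number you pass in, and nothing in the API tells you which number is right.

    # A minimal sketch: key size is just a parameter, and the library
    # won't tell you which value is the "right" one to pick.
    from cryptography.hazmat.primitives.asymmetric import rsa

    key_2048 = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # common today
    key_3072 = rsa.generate_private_key(public_exponent=65537, key_size=3072)  # NIST's 2030s advice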

And what hash to use? Plenty of sites still say MD5, but they were written years ago. Some have updated to SHA1, but others point out weaknesses there. OK, SHA2, then. But then there’s SHA256, which must be better, right? (I know SHA256 is a member of the SHA2 family, but those unfamiliar with crypto will not.)
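The same confusion is easy to reproduce in code. A quick sketch with Python’s standard hashlib module (again, only for illustration) shows all of these sitting side by side as equally available calls, with nothing to mark which are broken:

    import hashlib

    msg = b"the same message"
    print(hashlib.md5(msg).hexdigest())     # broken: collisions are practical
    print(hashlib.sha1(msg).hexdigest())    # weakened: avoid in new designs
    print(hashlib.sha256(msg).hexdigest())  # SHA256, a member of the SHA2 family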

Raising the Bar

But really, this isn’t important.  How many people can describe how an RSA key is generated, much less how a proper PRNG produces a suitably random number and then how AES/Blowfish/whatever encrypts the data? Does the average person need to know that? Not really. And even if they did, they don’t care.  They just want something to work and be secure.

So what’s the goal? With maybe a handful of exceptions, everyone does something that can compromise their security. HTTPS relies on a trust architecture that recent incidents (Superfish, PrivDog) remind us is extremely fragile in practice. And yet its use is being encouraged because it makes the job of the average surveillance tool more difficult. It’s very much letting The Other Guy(TM) handle security. It has flaws, but it raises the bar substantially.

Despite Superfish/PrivDog-style backdoors, browsers and other software help take up the slack by rejecting poor configurations, with each update adding additional checks.  Regular people don’t have to worry about it except for brief windows.  PGP and GPG were designed to reach near-perfect levels of encryption, but that bar is clearly too high for significant uptake. We should instead be looking for something that encourages end-to-end encryption that is good enough. We can build on it if the underlying structure is properly designed, and as people get more accustomed to crypto in their lives, they’ll be able to adjust to improvements.
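This is the kind of bar-raising I mean. As a minimal sketch, Python’s standard ssl module now refuses the worst configurations by default, with no user decisions required:

    import ssl

    # The defaults do the worrying: certificate validation is on, and
    # long-broken protocol versions are already refused.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # optionally refuse anything older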

Framework Proposal

Right now, you can let a CA provide your site’s TLS certificate (usually 2048-bit and SHA1), or, if you know what you’re doing, you can roll your own with better security. That’s what we need for end-to-end crypto. It can have flaws compared to a perfect GPG installation, but it needs to raise the bar, and be able to keep raising the bar.  HTTPS is widely used because people don’t have to think much about it.  Until GPG-style crypto becomes relatively automated, it won’t be embraced by more than a handful of people.
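For comparison, the roll-your-own path is already scriptable today. Here’s a minimal self-signed certificate sketch with the Python cryptography package (illustrative only, and deliberately choosing SHA256 over the SHA1 that CAs still commonly issue):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.org")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer and subject are the same
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
        .sign(key, hashes.SHA256())  # better than the SHA1 CAs still hand out
    )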

We need something with the flexibility of TLS web certificates (though I recognize the flaws of that exact model) for end-to-end crypto, too. Clients should have auto-update capabilities that add new algorithms as they are accepted and deprecate algorithms that are broken, without the user having to worry about it unless they want to.  We need common libraries or protocols that let new or existing clients safely connect to these services without building everything from scratch, thereby preserving and encouraging competition.  Keys should be automatically revoked and replaced when an algorithm fails, and automatically renewed when nearing expiration, much like the model proposed by Let’s Encrypt.  For the biggest challenge, we need mechanisms that provide safe storage for keys so they survive a catastrophic loss but can be deleted with near-absolute certainty if the user wishes; this would probably be among the last features added.
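To make that concrete, here is a rough sketch of the client-side logic I have in mind. Everything in it (the registry, the Key class, the maintain() helper, the 30-day window) is hypothetical, not an existing protocol:

    import datetime

    # Hypothetical algorithm registry, kept current by the client's
    # auto-update mechanism rather than by the user.
    ALGORITHMS = {
        "rsa-3072": "accepted",
        "rsa-2048": "accepted",
        "sha1-sig": "deprecated",  # flipped by an update when broken
    }

    class Key:
        def __init__(self, algorithm, expires):
            self.algorithm = algorithm
            self.expires = expires

    def maintain(key, now=None):
        """Revoke and replace on a broken algorithm; renew near expiry."""
        now = now or datetime.datetime.utcnow()
        if ALGORITHMS.get(key.algorithm) == "deprecated":
            return "revoke-and-replace"
        if key.expires - now < datetime.timedelta(days=30):
            return "renew"  # Let's Encrypt-style early renewal
        return "ok"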

The system must also have the flexibility for advanced users to make their own choices.  Browsers allow people to add and remove CAs or enable or disable crypto algorithms if they know where to look, and there’s no reason that model cannot work here.  Users should also be able to choose their storage mechanism, whether remote or local, and ultimately have the ability to disable automatic mechanisms such as revocation and renewal.
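The override layer can be as simple as a preferences structure that the automatic machinery consults first; again, the names here are purely illustrative:

    # Hypothetical per-user overrides, mirroring a browser's advanced settings.
    user_prefs = {
        "auto_renew": True,               # power users may turn these off...
        "auto_revoke_on_break": True,     # ...and accept the risk themselves
        "key_storage": "local",           # or "remote", at the user's choice
        "disabled_algorithms": ["sha1-sig"],  # akin to removing a CA by hand
    }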

These don’t lead to a perfect system. They lead to a good enough system with room to grow and improve. But I would argue (as I think Moxie does) that GPG is far from a perfect system because it’s too difficult to use.  When the majority of communications are relatively well-secured, it makes it far more difficult for a surveillance state to conduct its operations. When it becomes part of the underlying economy, it becomes far more difficult to retract.  Perfect security can still be a long-term goal, but we need more realistic goals to encourage uptake in the meantime.

2 Replies to “GPG Replacement Just Needs to Be “Good Enough” For Now”

    1. I haven’t had more than a cursory look at it, and hadn’t seen that PDF from December that you linked to. I’ll add that to the reading list.

      I’ve already spotted one philosophical issue in the name. By calling it Dark Mail, it’s not going to appeal to wide swaths of users. Names like Simple Mail Transfer Protocol, Domain Name System, File Transfer Protocol, and even Hypertext Transfer Protocol are neutral to people. By calling it Dark Mail, it suggests a level of impropriety or danger in its use, something already a problem with the various “dark nets” like Tor and I2P that are blamed for illicit sales of drugs and weapons and other criminal activity. A new, neutral name should be selected for it, even if the rest of it addresses my concerns.
