Trust and the NSA: They’re Not Mutually Exclusive


The National Security Agency has, for good reason, been front and center in the news for the last couple of months.  What the NSA is mostly known for is signals intelligence (intercepting someone else’s communications) and cryptography.  It was founded in 1952 out of the ineffectual Armed Forces Security Agency for that specific purpose, in fact.  That mission has led it to tapping communications lines, setting up vast antenna arrays, and putting analysts in frigid shacks on the sterns of destroyers pitching in the stormy North Sea, all dedicated to trying to get The Other Guy’s communications.  And when it does get them, it tries to crack the encryption used (if any), and it succeeds a lot.

In addition to that, the NSA has been tasked to ensure that communications for the United States government are secure.  It does this in a number of ways that include preventing leakage of the signals in the first place, but it’s most famous for its work in cryptography.  And if there’s one thing that they know, it’s that crypto is hard.

It knows that for one main reason, and that is its code-breaking section.  One of that section’s first duties, of course, is to break other nations’ codes.  But it also tries to break algorithms in and from the United States.  Any time the agency tasks someone to create or improve an encryption algorithm, another group that specializes in finding weaknesses in crypto algorithms is tasked to break it.  If they succeed, the algorithm gets sent back to be fixed if possible, or scrapped if not.  This is a good thing: if your friend can break your algorithm, there’s a good chance that your enemy can, too.

So take worldwide coverage and world-renowned crypto capabilities and combine them with the NSA’s mission, which has been eloquently stated: “The ability to understand the secret communications of our foreign adversaries while protecting our own communications–a capability in which the United States leads the world–gives our nation a unique advantage.”  In short, break theirs while protecting ours.  Part of protecting ours is ensuring that the encryption used, particularly by the federal government, is not breakable, while taking every available opportunity to break the encryption used by others.

Take this combination, and two questions naturally rise to the top.

  • How much do you trust the NSA?
  • How hard is it to avoid them if they’re looking for you?

It turns out that these are not easy questions to answer.  While there have been plenty of suspicions over the years about whether the NSA has looked only at foreign traffic, at least without a warrant, proof was hard to find save for the rare leak.  Even the documents released by Edward Snowden so far haven’t made the extent of surveillance completely clear, and that makes the questions even harder to answer.  We’ll look at the first of those questions today, and the second question in the next article.

Trust is a Grayscale, Not a Hard Line

The issue of trust is not a simple question.  If asked how much they trusted the NSA, millions of people now would say, “Not at all!”  But as Bruce Schneier wrote recently in his book Liars and Outliers, the question of trust is more subtle and complex.  For example, do I trust my neighbor not to break into my home and take my things?  To a degree, yes.  I wouldn’t stake my life on it, but neither do I watch her all the time or hire an armed guard service just in case she tries.  But do I trust my neighbor enough to lend her $5?  Not at all.  I have no idea how likely she is to pay it back, so I won’t loan her anything beyond a cup of sugar.  My neighbor isn’t the issue here, but a similar trust dichotomy exists–at least for me–regarding the NSA.

Trusting the Watcher Watching Me

Do I trust the NSA to spy on the traffic of terrorists and to take action to let other government branches know?  Absolutely.  It’s probably their biggest mission right now.  So on that point, I trust the NSA.  I also trust them to spy on, say, Russian, Chinese, and Syrian traffic, and probably a lot of other nations as well (Germany and Brazil come to mind based on allegations that are both new and old).  But then, I’ve also seen the downplayed reports of friendly nations getting caught spying on the United States, countries that are close friends like Taiwan or even allies like Israel and France.

But do I trust them not to spy on my traffic?  No, I don’t.  Nor have I since I first heard of Echelon sometime in the 1990s, when suddenly it looked like everyone’s traffic was being reviewed.

Echelon was the first time that I had heard of a government espionage program that made me uncomfortable for my own privacy and security.  The descriptions that I read back then had to do with loopholes where the UKUSA group (the UK, the US, Australia, New Zealand, and Canada, a tightly-knit group that is perhaps the closest set of allies in the world) would intercept traffic that was not of their own citizens and then distribute that information to the other countries’ agencies.  The idea was that since they weren’t spying on their own citizens, it was technically legal.  (Recent news has made me doubt the legal legitimacy of such a tactic, but that’s not to say that it didn’t/doesn’t happen.)

But there’s another question that comes up related to their core mission.

Trusting the Watcher Watching Over Me

Do I trust their validation of a given cryptographic algorithm?  Generally, yes, I do, especially when they don’t get directly involved in it.  I didn’t like the Clipper chip that had NSA approval in the 1990s, but that was because mandatory government private key escrow is bad in every scenario I can imagine (not to mention that SKIPJACK has issues).  But I do trust them when they say that a published encryption algorithm intended to protect sensitive US government communications is secure, because if they’re wrong–or lying–they will have failed in their primary mission, and if there’s one thing the intelligence community does not handle well, it’s mission failure.

Cryptography has been important for government, business, banking, religions, and individuals for about as long as messages have been passed.  But never before our time has it been so ingrained into society.  Yet efforts to break ciphers and their implementations are ongoing, by governments and by researchers; the latter often publish their results.  Published attacks are not usually the kind that completely break a cipher, but they may reveal weaknesses that trigger a move to something more secure.  Governments, of course, generally don’t publish new techniques, in order to preserve their advantage over those who believe a given cipher to be secure.

Creating an encryption algorithm that cannot be broken is one of the most difficult things to do.  It’s been said that it’s very easy for a person to create an algorithm he himself can’t break, but that says nothing about its actual security.  How many people have ideas that seem brilliant to them but are clearly flawed the moment other people see them?  History is littered with ciphers that seemed impenetrable when created but fell rapidly.  The Caesar (a simple substitution) and scytale (a simple transposition) ciphers have been around for millennia, but even in their day they could be cracked by a literate person.  The Vigenère cipher, first described in 1553, was widely considered unbreakable until a general attack was published in 1863 (though a few could privately crack it as far back as the decades after it was described).  One of the most famous ciphers, known as Enigma, protected Axis messages during World War II and was so effective that the effort to crack it at the UK’s Bletchley Park during wartime led directly to the modern computing revolution.
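To make the fragility of the classical ciphers concrete, here is a minimal Python sketch (the function name and messages are my own) of a Caesar cipher together with the exhaustive attack that breaks it:

```python
# A Caesar cipher shifts every letter by a fixed amount. With only 26
# possible shifts, "cracking" it is a matter of trying them all.
import string

ALPHABET = string.ascii_lowercase

def caesar(text: str, shift: int) -> str:
    """Shift each lowercase letter of `text` by `shift` positions."""
    rotated = ALPHABET[shift % 26:] + ALPHABET[:shift % 26]
    return text.translate(str.maketrans(ALPHABET, rotated))

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)                       # dwwdfn dw gdzq

# The exhaustive attack: decrypt under every possible shift and pick
# out the candidate that reads as language.
candidates = [caesar(ciphertext, -s) for s in range(26)]
assert "attack at dawn" in candidates
```

The Vigenère cipher resisted longer only because it uses several interleaved shifts; once Kasiski showed how to recover the key length, it reduced to a handful of Caesar-style problems like the one above.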

Each advance in encryption and code-breaking led to another advance, and another, and another.  Some discoveries have been made independently: differential cryptanalysis was first described publicly by Eli Biham and Adi Shamir about 25 years ago, but word later came out that IBM had known about it since at least 1974 (and had used it to strengthen the DES algorithm), while the NSA probably knew about it even earlier.  Others have been unique, such as the breaking of the aforementioned Enigma, which required not only new techniques but new technology to apply those techniques in a useful time frame.  That ongoing battle is a big part of what led NIST to publish a call for new algorithms for use by the US government.

An NSA-Approved Cipher: AES

The Advanced Encryption Standard, or AES, came out of a competition that ran from roughly 1997 to 2001, intended to find a suitable algorithm for US government use.  Security researchers submitted a number of competing algorithms, and after public analysis by many of the brightest minds in cryptography, which saw many entrants falter, the Rijndael algorithm was ultimately selected.  The NSA blessed it without modification, adding it to the Suite B list of algorithms approved for use with communications that are SECRET (128-bit AES) or TOP SECRET (256-bit AES).  While the NSA certainly has its own algorithms that are probably better (I say “probably” because few outside the agency have been able to review them), it’s telling that key communications up to and including TOP SECRET are handled using products that incorporate AES.  If the NSA had found significant weaknesses in AES, it’s unlikely that it would have publicly endorsed it at all, and that silence would have been roughly the equivalent of saying that they had broken it, or at least had some very severe reservations.

AES is built into every browser that ships today and is often used when connecting to a website over SSL/TLS.  And that’s a good thing, because the ciphers that came before it, Triple DES (or 3DES) and RC4, have some significant problems that are well-known in the security community.
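AES’s ubiquity is easy to see from your own machine.  Here’s a quick sketch using Python’s standard ssl module to list the AES-based cipher suites a default client context offers (the exact list depends on your local OpenSSL build):

```python
# List the TLS cipher suites a default client context is willing to
# negotiate, then filter for the AES-based ones.
import ssl

ctx = ssl.create_default_context()
suites = [cipher["name"] for cipher in ctx.get_ciphers()]
aes_suites = [name for name in suites if "AES" in name]

# On any modern OpenSSL this prints entries like TLS_AES_256_GCM_SHA384.
print(aes_suites[:5])
```

On a typical system, most of the suites offered use AES in one mode or another, which is why a broken AES would be catastrophic far beyond the US government.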

Theoretically (that is, if no flaws were ever found), an AES-256 key would take so long to crack via brute force that the universe as we know it will no longer be around.  For a computer that adheres to the Landauer limit and harnesses the equivalent of all the energy the sun currently produces (about 3.8×10²⁶ joules per second), it would take about 2.8×10²² years just to enumerate all the possible keys.  By that time, star formation will have ended, planets will have been consumed or flung into the void, and most galaxies will be small, cold remnants of their former selves, if they still exist at all, having either slowly come apart or been consumed by the central black hole.

And that doesn’t even get into cracking the encryption.  That just lists all the possible keys.  It’s safe to say that no one around then is likely to care much about what is encrypted now.
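The arithmetic behind that figure is easy to check.  A back-of-the-envelope sketch, under the simplifying assumptions of an ideal computer running at the room-temperature Landauer limit and consuming the sun’s entire power output:

```python
# Minimum energy (and time) just to count through all 2**256 AES-256
# keys on a thermodynamically ideal computer.
import math

K_BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K
T_ROOM = 300.0               # assumed operating temperature, K
SUN_OUTPUT = 3.8e26          # solar luminosity, J/s
SECONDS_PER_YEAR = 3.156e7

# Landauer limit: minimum energy to flip one bit at temperature T.
energy_per_flip = K_BOLTZMANN * T_ROOM * math.log(2)   # ~2.9e-21 J

# Energy to step a counter through every 2**256 key value.
total_energy = energy_per_flip * 2**256

years = total_energy / SUN_OUTPUT / SECONDS_PER_YEAR
print(f"~{years:.1e} years")   # on the order of 10**22 years
```

Cooling the computer toward the temperature of deep space shaves off a couple of orders of magnitude, which is the difference between absurd and still absurd.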

That’s not to say that AES is perfect.  Attacks have been published against some forms of it that reduce its effectiveness to small degrees.  But they require some fairly unusual circumstances (such as 200 million known plaintexts), and even then, blindly cracking a single key (let alone thousands) requires resources that even the NSA–if it has them–would need a very good reason to commit.  And that’s assuming they know of some far more significant weakness than has been published.

The Easy Answer: Encrypt Everything!  Er… Right?

So if breaking AES is (theoretically) so hard, why not just encrypt everything with it?

Well, first of all, encryption is hard to get right because there’s more to it than just the cipher itself.  There are all the implementations that have to be done right as well, plus all the software using the implementations.  Let’s focus on e-mail for a concrete example.  Here’s what must be secure for an encrypted message to make it from sender to recipient without anything useful being available to someone looking to capture the data:

  • The sender’s client writing the e-mail
  • The program encrypting the e-mail (which may be the e-mail application or an external program)
  • The operating system running the sender’s e-mail client
  • The transmission mechanism to the server
  • The transmission mechanism between servers (which may happen more than once if the message is relayed)
  • The transmission mechanism to the receiver’s e-mail client
  • The operating system running the receiver’s e-mail client
  • The program decrypting the e-mail (which again may be the e-mail application or an external program)
  • The receiver’s client reading the e-mail

This doesn’t even get into the very poor trust models commonly used between e-mail servers or that encryption software frequently lags behind the newest Windows releases.  

But it does show why it mostly doesn’t matter which cipher you’re using: pretty much every program ever written has security flaws that can be exploited.  At some point, the contents of the message have to exist as plaintext, or the message is useless.  They’re not yet encrypted while you’re writing the e-mail, and they’re decrypted again when the recipient is reading it.  If your computer has a vulnerability (and it does), it’s usually (but not always) much easier for an agent to sneak in a piece of malware, capture the message (if they don’t already have it) and the key used to encrypt it, capture the keystrokes or other input needed to decrypt it, and store everything for later analysis.

(This doesn’t get into things like TEMPEST gear that can read the electromagnetic emissions of an electronic device, displaying the target’s screen or other output on a monitor in a van for anyone to read.  There are systems designed to get around these problems, physically hardened against interception, with special designs that are formally validated to obscene levels–which also drives the prices insanely high.)

So in summary: AES is probably safe from attack, even if a few bits get chipped away from it.  Therefore, if you encrypt your data with an exceptionally good passphrase, it probably won’t get brute forced in your lifetime or the lifetime of the human race.  But they probably don’t need to brute force it, because there are simpler, more elegant ways of getting it.


In the next article, we’ll look at some non-technical issues associated with encryption and at how hard it actually is to hide from the NSA.  It turns out to be–much like trust–a far more complex issue than it first seems.
