FCC Posts Net Neutrality Report and Order

On 26 February 2015, the FCC adopted new network neutrality rules by a 3-2 vote.  At the time, as is normal for FCC rule adoption, the rules were secret, available only to the commissioners and a few aides.

Now, two weeks later, the rules have been published.  They include 305 pages of history, explanation, rulings, forbearance orders, constitutional considerations, and regulatory flexibility analysis; 87 pages of commission statements (including 80 pages of dissent, 64 from one commissioner); and a mere 8 pages of regulation.  Fewer than two pages of that involve the actual net neutrality rules; the rest covers definitions, requirements for filing pleadings and complaints, confidentiality of proprietary information, and procedures for requesting advisory opinions.

To emphasize this point, I’m putting the text of the new rules below.  It’s very simple, and very easy to read and understand.

Continue reading “FCC Posts Net Neutrality Report and Order”

GPG Replacement Just Needs to Be “Good Enough” For Now

A few days ago, Moxie Marlinspike wrote something that got the InfoSec community into an open debate.  His contention is that GPG has failed both philosophically and technologically, accumulating 20 years of cruft along the way.  He essentially calls for a restart, and he considers GPG’s small installed base a blessing in disguise because it makes starting from scratch easier.

This, not surprisingly, resulted in a lot of very strong responses, with some for, others against, and many looking for clarification.  I understand his point, and I agree with him in some parts (mostly the philosophical) but am hesitant on other parts (mostly the technical).  What follows is based on a couple of posts I made on Slashdot. Continue reading “GPG Replacement Just Needs to Be “Good Enough” For Now”

Lenovo completely undermines user-vendor trust

Looking for a computer? Thinking about a Lenovo?

I strongly advise that you reconsider your choice due to an issue that has just come to the general attention of the InfoSec community. A couple of months ago, Lenovo was caught allowing Superfish, one of the companies that provides adware for the consumer line of its computers, to preinstall its Visual Discovery program (commonly referred to simply as Superfish). This software installed an unrestricted root certificate authority (CA) into the Windows certificate store.

Before I get to the explanation, if you have a Lenovo system, please check to see if you have Superfish installed. If so, remove it. It will reportedly take this bad root CA with it.  But it will not restore trust in Lenovo.  Update 1: The certificate stays behind, and it uses the same private key on every installation, meaning that anyone who extracts the key from one compromised system can use it to attack any other affected system.  No trust left in Lenovo whatsoever.  Update 2: To see if you have the cert installed, go to https://www.canibesuperphished.com/.  If you don’t get a certificate warning, then you are vulnerable.

Back to the issue. It is almost impossible to overstate how bad this is. Lenovo essentially allowed flat-out attack software to be installed on a huge number of systems. With this root CA, the Superfish program replaced real certificates (like those for banks, shopping sites, health sites, and anything else protected by HTTPS) with its own certificates so it could see every piece of data that you sent or received. If you went to a site in a browser, it showed the perfectly normal(-looking), perfectly secure(-looking) green lettering or bar, even though Superfish could see everything that transpired.  This is a fundamental violation of the trust between purchaser and vendor.

That’s not hyperbole. This is attack software, even if its stated purpose (to allow comparison shopping) is benign. It works by using what’s called a man-in-the-middle attack, one of the holy grails of attack methods. Further, the certificate can be used to sign software, applets, or documents, allowing them to be recognized by Windows as safe. Anything can be run, and it will look perfectly legitimate.

That also means that anything that can subvert Superfish can completely subvert the system, and do so while you trust it.  It could point you to a site under an attacker’s control and convince you it was your bank.  It could ask you to install a software update and convince you that the update was issued by the software vendor.  It could see everything you do, everything that leaves and enters your system, and report it back to somewhere else with no alerts, because it would all appear completely legitimate.
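To see why Update 1 above is so alarming, here is a minimal sketch (in Python, using the third-party cryptography package) of what anyone holding a trusted root CA’s private key can do: mint a certificate for any hostname they like, which every machine trusting that CA will then accept. All names and keys below are invented for illustration; this is the general technique, not Superfish’s actual code.

# Sketch: why a leaked root-CA private key is catastrophic.  With the key,
# anyone can mint a certificate for any hostname, and every machine that
# trusts the CA will accept it.  All names and keys here are invented.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Stand-in for the CA key and certificate that shipped on every affected machine.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Adware CA")])
now = datetime.datetime.utcnow()
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Anyone holding ca_key can now issue a "valid" certificate for any site at all.
site_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
fake_bank = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example-bank.com")]))
    .issuer_name(ca_name)
    .public_key(site_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())   # signed by the trusted CA, so browsers accept it
)

print(fake_bank.subject.rfc4514_string(), "issued by", fake_bank.issuer.rfc4514_string())

Pair a certificate like that with control of the victim’s traffic and you have the man-in-the-middle attack described above.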

I understand that sometimes companies make mistakes. They even sometimes make security mistakes. Security is hard. But this is an unfathomably bad decision by a company that should know better, especially given the attention and fear generated by their purchase of IBM’s computer lines. I was not fond of them before, and now what little trust I had has been shattered.

Update 3: I should have included removal instructions. Here they are for Vista/7/8:

1. Open the Start Menu/Screen and type “certmgr.msc” to find the Certificate Manager. Click on it or press Enter to open it.
2. In the left pane, open the Trusted Root Certification Authorities folder.
3. In the right pane, open the Certificates folder.
4. Look for “Superfish, Inc.” in the list of certificates.
5. If it’s present, right-click on it and select Delete.
6. Click Yes to the prompt that appears.

At this point, the risk from this certificate has been removed.  If you would rather check from a script than click through the Certificate Manager, a rough sketch follows.
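This is a minimal sketch, assuming a Windows machine with Python 3 and the third-party cryptography package installed. It uses the standard library’s ssl.enum_certificates() to read what Windows reports as the system root store and flags anything with Superfish in the subject; treat it as a quick check rather than a replacement for the steps above.

# Sketch: scan the Windows system ROOT store for the Superfish CA.
# Assumes Windows, Python 3, and the third-party "cryptography" package.
import ssl
from cryptography import x509

found = []
for cert_der, encoding, _trust in ssl.enum_certificates("ROOT"):
    if encoding != "x509_asn":            # skip PKCS#7 entries
        continue
    cert = x509.load_der_x509_certificate(cert_der)
    subject = cert.subject.rfc4514_string()
    if "Superfish" in subject:
        found.append(subject)

if found:
    print("Superfish root CA still installed:", ", ".join(found))
else:
    print("No Superfish certificate found in the system ROOT store.")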

BadBIOS: Worst fears realized or just fearing the worst?

Last week, word spread about a piece of malware referred to as BadBIOS.  Reported by Dragos Ruiu, founder of the Pwn2Own contest and a respected member of the security community, it’s said to be able to communicate with other infected systems through the sound hardware, similar in some ways to a modem.

There are still a TON of questions about this. As far as I’ve read, few if any other people have seen the affected hardware, but the researcher himself is considered trustworthy. I’ve seen a lot of reports that get the information wrong, like one saying that BIOS was infecting BIOS via the sound capabilities, which is not (so far as I can tell) what is being claimed. What seems to be present is an incredibly resilient and persistent piece of malware that can communicate with other similarly infected systems via the sound card, and that apparently affects more than one operating system: Apple’s OS X and Windows, as one might expect, but also Linux and even OpenBSD, the latter of which is a very unusual target.

This is, in some ways, what many feared when Intel said it wanted to move from BIOS to EFI/UEFI.  Intel had some very good reasons for the change, as the limitations of BIOS were holding back general computing hardware advancement, but when you put what amounts to an operating system in the firmware with room to expand, there’s a good chance it’s going to be abused.  UEFI sits under everything, and while it’s not quite a virtual machine host (yet), it has many of the same capabilities: it can easily read what passes between hardware components, giving it the ability to alter data at many points.  Malware lodged there is also extremely difficult to pry out, as few if any malware detection mechanisms can look into the firmware.

Based on a recent (mediocre) book series I’ve been reading, the thought crossed my mind that it may have been secretly planted with one or more researchers specifically so they would find it, in order to derail some secret capability developed by a state-sponsored agency or group. That’s getting into conspiracy-theory territory, something I don’t tend to indulge in, but such things happen online more often than they happen in meatspace.

In any case, it’s still something I’m watching, and I’m sure there are researchers working to develop similar capabilities. It’s not something I worry about hitting my systems, because the complexities of pulling it off are enormous. Most computer hardware is built to handle very specific information, but speaker output still ends, and microphone input still begins, as an analog signal, and the quality of both varies significantly from one system to the next, even within the same model of hardware. I can see how data can be delivered via sound (we’ve done it for decades with modems), but aside from very carefully chosen targets, I have difficulty believing this could be used for anything widespread, especially since the initial infection needs a different entry point.
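To make the modem comparison concrete, here is a toy Python sketch of the basic idea: encode bits as two audible tones (simple frequency-shift keying) and write the result to a WAV file. The frequencies, bit rate, and payload are arbitrary illustration values of mine, not anything claimed about BadBIOS, which would presumably use near-ultrasonic tones and a far more robust scheme.

# Toy frequency-shift keying (FSK): '0' -> one tone, '1' -> another.
# All parameters are arbitrary; this only illustrates the modem idea.
import wave
import numpy as np

SAMPLE_RATE = 44_100                  # samples per second
BIT_DURATION = 0.1                    # seconds per bit (a leisurely 10 bits/s)
FREQ_ZERO, FREQ_ONE = 1_000, 2_000    # tone frequencies in Hz

def modulate(bits: str) -> np.ndarray:
    """Turn a string of '0'/'1' characters into an FSK waveform."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (FREQ_ONE if b == "1" else FREQ_ZERO) * t)
             for b in bits]
    return np.concatenate(tones)

signal = modulate("1011001110001011")            # a made-up 16-bit payload
samples = (signal * 32_767).astype(np.int16)     # scale to 16-bit PCM

with wave.open("fsk_demo.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)                          # 2 bytes = 16-bit samples
    out.setframerate(SAMPLE_RATE)
    out.writeframes(samples.tobytes())

Receiving is the hard part: a real implementation has to recover those tones from a noisy microphone, which is exactly where the variation between speakers and microphones bites.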

It’s an interesting piece of targeted malware (if real), but it’s not going to take over the world.

Extraordinary times and measures: How the NSA might justify its unjust actions

There’s a long history in the United States of backing the hero doing the wrong thing for the right reason.  We love movies like Dirty Harry, Lethal Weapon, Beverly Hills Cop, and Hard Boiled, where the good guy (usually a cop) finds no other way to get the bad guy than to break the law.  At the same time, some of the best villains are seen to have done the wrong thing for the right reason: Gen. Hummel in The Rock exemplifies this when he takes hostages to force the government to tell the truth about the deaths of Special Forces soldiers over the years.

While those are fantasy worlds, there’s also a long history of sympathy for those real people who break the law for what society (or parts of it) deem to be the right reason.  From those who resort to cannibalism to survive to those who refuse to disperse while in largely peaceful protest to a president who ignored separation of powers and ordered military trials of civilians, we look upon them with approval or at least forgiveness because we realize that sometimes extraordinary times require extraordinary measures.

But most of these approvals of real actions are in hindsight.  At the time of the action, they are often controversial, even unpopular.  But perhaps there’s another aspect to them that is often overlooked: they’re not happening to us.  When an action doesn’t directly affect a person, they’re less likely to take a strong negative view on it than when they see a real or potential impact in their own lives.

This happens when we hear about the reality of combat, especially if we know someone who has been in the fighting.  Even if we disagree with the war, we tend to give the benefit of the doubt to the individual because we want to trust that they did the right thing at the time, even if it was illegal or would usually be considered immoral or unethical.  But when we ourselves are caught in the crossfire, literally or figuratively, we tend to have a very different view.

And that’s what I think has caused the uproar over the NSA surveillance.  Don’t get me wrong–I have some serious issues with it, too–but when there was reason to believe that it was primarily happening to people in other countries or to potential terrorists in the United States, people didn’t get too worked up over it.

Now that the Snowden documents have revealed ever-increasing surveillance of many millions of Americans–perhaps nearly all of them–it’s suddenly hit home that the average person could come under suspicion for the simple act of making or taking a phone call, visiting a website, or chatting with a friend.  We start to worry that in connecting various dots, we could become a dot, and the known protections against this are nebulous at best.  We have only the government’s assurances, which rest in part on a court with little or no adversarial process.  And that’s not good enough.

It doesn’t help that for most people, the NSA is a faceless entity.  Most people don’t know anyone who works there, or if they do, they don’t realize it, as those who draw an NSA paycheck generally don’t advertise the fact.  When we can’t put a familiar face on an activity, the motives become questionable, even sinister, because we have no one to question.

I’ve known some who have worked for some of these agencies.  One shared trait is not talking about foreign affairs, usually for the same reason.  From the inside, those with a TOP SECRET/SCI clearance see things that change their view of the world.  I’ve been told by someone who would know that the average stay in the NSA’s counter-terrorism group is two years or less; after that, they burn out.  They see so much that the general public not only doesn’t get but doesn’t want to see that they can’t talk even about things not covered by their clearance.  It’s just too frustrating.

And maybe that’s led to scope creep.  The analysts and their bosses are, at least in their minds, dealing with extraordinary times that require extraordinary measures.  If we had just done this one other thing, maybe we would have caught the attack before it could do damage.

I imagine this happens fairly regularly.  Someone comes up with an idea, someone else expresses discomfort, it gets bounced around the lawyers and perhaps the White House, and then a rationale is provided.  I expect not everything is approved.  Some things are too complex, too expensive, too niche, or just too blatantly unconstitutional.  And sometimes there’s very strong push-back.  But someone, somewhere, comes up with a legal rationale, and those who are not steeped in the law tend to go along with it.  It becomes easy to justify: We’re not the legal experts, we need this capability, it will save lives.  Extraordinary times, extraordinary measures.  That’s what they tell themselves.

But in extraordinary times, it takes extraordinary people to stand up against the illegal and unconstitutional.  It’s critical that those protecting us remember what is being protected.  People are being protected, but so is the foundation on which the country was built.  That foundation has served for more than 200 years as an inspiration to people everywhere.  The personal rights enshrined in the United States Constitution have largely become the accepted way that things should be around the world.  When they’re set aside by stretched reasoning, even for extraordinary times,  it undermines the very foundation of our society.  Edward Snowden remembered that, and whatever his personal faults and mistakes, his actions have opened our eyes and caused an international discussion about how much is enough.

Yes, something might slip through.  Another Boston Marathon bombing may happen.  But even in its aftermath the country and–more importantly–its ideals survive.  There are times when the wrong thing is the right thing to do.  But it’s the exception, never the rule.  Extraordinary measures used every time become ordinary–and wrong.  And we must remember that, whether we are an average citizen, a police officer, a soldier, an intelligence analyst, or a president.

The NSA’s Attention Span: Widely Focused on the Narrow

When the power of a nation-state is directed at you, it brings resources that completely boggle the mind.  This applies even if it’s a minor power: Estonia, Hungary, and Cambodia all have their own capabilities, and while those are very small compared to some, your ability to hide from a country that makes you Priority One is limited.  They have seasoned pros who are in all likelihood a lot better than you are, and the allies they call in when they need help are even more dangerous to you.

But of all the agencies, the National Security Agency possesses perhaps the most impressive capability for finding information on the planet.  This comes largely from being funded at a level that completely dwarfs that of every other nation (the NSA’s actual budget is classified, but it is believed to have received at least $10 billion and perhaps as much as $20 billion in the 2012-13 intelligence community budget) and from having access to an array of locations and technologies that few if any other nations possess. Many of its listening posts (not counting temporary posts on ships, in aircraft, and set up in vehicles or shacks) are known even if exactly what each does is not, and their presence around the world shows the reach the NSA has through US allies.  Its technological edge includes supercomputers, interception methods, and hacking capabilities that render most defenses nearly moot.

The previous article discussed the difficulties associated with encryption, both in getting it right and in circumventing it by accessing the data via other means when it’s not encrypted.  In short, it requires some very careful planning to make sure that your implementation, from both a technical and an operational perspective, is as solid as it can be, and this is where most people fail.

This is not to say that encryption is useless.  Far from it.  If you’re trying to secure information from competitors, random attackers, or other enemies, it’s one of the best tools available.  Even if you’re doing something that a national agency doesn’t want you doing, it’s better to encrypt than to not, if possible and practical.  And there are ways to give even the most powerful adversary a headache.  But if you come under the scrutiny of the NSA, it becomes exceptionally difficult to effectively hide the contents of the message unless you take very specific precautions and you do it without failure every single time.

From this rises the second question from the last article: how do you avoid the NSA if they’re looking for you?  This turns out to be extraordinarily difficult, not only because of the NSA’s reach into the world’s communications but also because of the legal framework in which the agency operates.  We’ll start by looking at how far, and with what difficulty, the NSA can actually look.

Continue reading “The NSA’s Attention Span: Widely Focused on the Narrow”

Trust and the NSA: They’re Not Mutually Exclusive

The National Security Agency has, for good reason, been front and center in the news for the last couple of months.  What the NSA is mostly known for is signals intelligence (intercepting someone else’s communications) and cryptography.  It was founded in 1952 out of the ineffectual Armed Forces Security Agency for that specific purpose, in fact.  That mission has led it to tapping communications lines, setting up vast antenna arrays, and putting analysts in frigid shacks on the sterns of destroyers pitching in the stormy North Sea, all dedicated to trying to get The Other Guy’s communications.  And when it does get them, it tries to crack the encryption used (if any), and it succeeds a lot.

In addition to that, the NSA has been tasked with ensuring that communications for the United States government are secure.  It does this in a number of ways, including preventing leakage of the signals in the first place, but it’s most famous for its work in cryptography.  And if there’s one thing it knows, it’s that crypto is hard.

It knows that for one main reason, and that is its code-breaking section.  One of that section’s first duties, of course, is to break other nations’ codes.  But it also tries to break algorithms in and from the United States.  Any time the agency tasks someone with creating or improving an encryption algorithm, another group that specializes in finding weaknesses in crypto algorithms is tasked with breaking it.  If the breakers succeed, the algorithm gets sent back to be fixed if possible or scrapped if not.  This is a good thing: if your friend can break your algorithm, there’s a good chance that your enemy can, too.

So take worldwide coverage and world-renowned crypto capabilities and combine them with the NSA’s mission, which has been eloquently stated: "The ability to understand the secret communications of our foreign adversaries while protecting our own communications–a capability in which the United States leads the world–gives our nation a unique advantage."  In short, break theirs while protecting ours.  Part of protecting ours is ensuring that the encryption used, particularly by the federal government, is not breakable, while taking every available opportunity to break the encryption used by others.

Take this combination, and two questions naturally rise to the top.

  • How much do you trust the NSA?
  • How hard is it to avoid them if they’re looking for you?

It turns out that these are not easy questions to answer.  While there have long been suspicions about whether the NSA has looked at only foreign traffic over the years, at least without a warrant, it was hard to find proof save for the rare leak.  Even the information in the documents released so far by Edward Snowden hasn’t made the extent of the surveillance completely clear, and that makes the questions even harder to answer.  We’ll look at the first of those questions today, and the second question in the next article.

Continue reading “Trust and the NSA: They’re Not Mutually Exclusive”

How the Biggest Hack Ever Wasn’t

There’s been a lot in the news lately talking about the largest hack–I mean, the biggest attack–no, wait, that the Internet almost–

I can’t even come up with a summary of the reports, because most of the general reports have been exceptionally bad at explaining what happened.  Mostly, they have far overblown the technical prowess required and the effects on the Internet (even if a few servers were inaccessible for a little while).  So here’s my attempt (one among thousands) to explain what happened.

One of the major providers of data on spam sources is called Spamhaus.  They’re a good group of people, and I highly recommend that most companies use them as part of their spam solution.  (End users don’t really have a way of using them directly, so if you’re not running an IT department, don’t worry about it.)  Some reports call them “cyber vigilantes,” but the truth is that they basically build up lists of IP addresses that send spam or that shouldn’t be sending out large quantities of e-mail.  Their customers use these feeds to help determine when a message is likely from a spammer so it can be dropped early in the process.
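For the curious, these feeds are usually consumed as DNS blocklists: reverse the connecting IP’s octets, append a Spamhaus zone such as zen.spamhaus.org, and see whether the name resolves. Here is a rough Python sketch of that lookup using the conventional test addresses; real mail servers do this inside their filtering software, and queries through public DNS resolvers may be refused.

# Sketch of a DNSBL lookup, the mechanism behind Spamhaus feeds:
# reverse the IP's octets, append the zone, and see if the name resolves.
# By convention, 127.0.0.2 is always listed and 127.0.0.1 never is.
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any A record back means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN and similar mean "not listed"
        return False

print(is_listed("127.0.0.2"))   # True for the test address
print(is_listed("127.0.0.1"))   # False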

The only people who really think they’re vigilantes are the people whose addresses end up on their lists.  One of these groups was apparently associated with a company called CyberBunker, so named because they set up shop in an old NATO bunker.  They would do business with anyone unless it involved terrorism or child porn.  Spammers were perfectly welcome to set up shop.

When CyberBunker’s address space got listed by Spamhaus, someone decided to remedy this by knocking Spamhaus offline.  It was hit with a combined 300Gbps of traffic.  That’s 300,000Mbps.  Consider that the high-speed connections most people have at home are perhaps 10Mbps, or maybe 20Mbps if they’re gamers.  Even my own FiOS connection at 150Mbps is a mere 0.05% of that stream.  Spamhaus, of course, has better feeds, but even if it has a carrier-grade connection like an OC-48 and its 2.5Gbps capacity, the traffic it was hit by was still more than 120 times that capacity.
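If you want to check my arithmetic, here is the back-of-the-envelope version in a few lines of Python, taking an OC-48 as roughly 2.5Gbps:

# Back-of-the-envelope comparison of the Spamhaus flood to ordinary links.
# All figures are expressed in Mbps for easy comparison.
attack = 300_000        # the reported 300Gbps flood
home = 20               # a fast home connection
fios = 150              # my FiOS connection
oc48 = 2_500            # an OC-48, roughly 2.5Gbps

print(f"FiOS share of the flood:  {fios / attack:.2%}")        # about 0.05%
print(f"Flood vs. home download:  {attack / home:,.0f}x")      # about 15,000x
print(f"Flood vs. OC-48 capacity: {attack / oc48:.0f}x")       # about 120x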

How ever does one pull off a hack like this?  It’s almost impossible to fathom the power at the fingertips of these people!  They must own every system on the planet to do this!

Well, not quite.

Continue reading “How the Biggest Hack Ever Wasn’t”

Critical Java update released: To update or uninstall? That is the question…

Today, Oracle released an update for Java 7 that addresses a security flaw found a few days ago and currently being exploited.  Those who have Java installed and need it should update by going to www.java.com and installing the new version from there.

This is the fourth major security-fix release for Java 7 in the last five months.  This latest fix addresses a flaw that exists all the way back in Java 6 and possibly earlier.  This and other problems have led many security experts to call for Java to simply be removed from everything that you run.

It’s not that simple.

Continue reading “Critical Java update released: To update or uninstall? That is the question…”