I just sent this in to the FCC via email@example.com:
Anything that legally justifies a fast lane is unacceptable. When I pay my ISP for a certain amount of bandwidth, that places a burden on the ISP to provide acceptable service to me. If, over my connection, I’m streaming from Netflix or YouTube, downloading games from Steam or Good Old Games, or installing patches from Microsoft or Red Hat, I have paid for an expectation of acceptable connectivity to those services.
Several of the ISPs don’t seem to understand this. They seem to believe that their only requirement is to supply some rudimentary access, regardless of what the customer pays for bandwidth. While right now the focus is on Netflix and other single providers, what happens if the ISPs decide to go after CDNs like Akamai? Suddenly, huge swaths of the Internet are targeted. Hundreds of thousands, even millions of sites become subject to bandwidth limitations because they choose to use more efficient CDNs.
It is up to the FCC right now to reclassify the ISPs as common carriers under Title II. We are in the midst of an upheaval at least as big as the industrial revolution, and the ISPs are led by people who really don’t seem that different from Gordon Gekko in their pursuit of ever-higher profit margins. Don’t let the Internet devolve into haves and have-nots. Don’t let the ISPs even have a chance of picking favorites. That’s not what they’re for. They’re for delivering content at the speeds that I’ve paid for. If their costs for delivering that speed go up, that’s not the problem of Netflix or Google or Akamai. That’s between my ISP and me. Don’t let that change.
In the IT world, and especially in security, we write a lot of reports. We often get the technical information right but the presentation can be a little dry, which can limit the impact. The following began as suggestions for writing penetration test reports (roughly along the lines of the SANS SEC560 template), but they can apply to other reports as well.
I get a lot of people asking me how they can get involved in security. Some of them are IT pros who have been in their careers for many years, while others are new to IT, perhaps just starting out on the help desk. But they all want to get involved because security is the exciting place to be. It’s the hot place that isn’t going away, unlike the rest of IT that, it seems, management seeks to automate out of existence.
Well, they’re right. Or some of them are, at least. It’s currently the sector least likely to be automated out of existence, but that’s largely because it’s currently too complex to automate. I remember when a lot of IT was like that. We did much of our work by hand, as scripting was a luxury, especially on Windows. Security will come to that point, too, but it will probably be a little while. There are simply too many legacy systems around for it to be otherwise.
Anyway, here are some tips for getting involved in security. These are based on my own experiences coming up from informal desktop support through servers and then into security.
- Start thinking like security people. Security people by and large think…differently. The hacker ethos is there, and it’s not just about breaking into systems. It’s about changing things to get the desired outcome. This applies to offense, defense, and things that have nothing to do with either.
Here’s the hard part: If you don’t know how to do this, by all means, ask. We’re usually happy to explain how we approach our work. Have lunch with security people you know. Read papers, books, and weblogs. Watch videos from past conferences. Even better, attend conferences like DerbyCon and your local BSides, places that are welcoming to people who are new to the field.
Once there, ask to join a conversation. There’s a good chance you’ll be able to join, even if just to listen. Don’t pretend you’re better at something than you are, because you’ll be found out in about nine seconds and shunned. And they will remember you if they see you again, like across the table in an interview. Security is a much smaller field than people think.
- Integrate security into your daily work. If you work on the help desk, start asking yourself how the callers’ actions could cause security problems, taking notes about your thoughts and running them by your security staff (another reason to have lunch with them). If you’re further along, learn how to harden the systems you maintain. Don’t change anything without permission, of course, but read about others’ experiences, and realize that one size does not fit all. Just because a respected guide recommends wiping the page file on reboot doesn’t mean it’s a good idea for your environment. The more you do this, the more you start thinking like security, the better you’ll get on with them, and the better chance you have at joining them one day.
- Integrate security into your daily life. This isn’t just hardening your home systems. Learn to spot security issues as you go through life. I have some friends who think it’s sad and/or paranoid, but when I walk into a building, the first thing I do is start looking for ways to subvert the security in case of an emergency. This develops mental reflexes that are necessary in any security role, as the ability to spot something amiss and react to it is critical regardless which side you’re on.
- Set up a lab and tinker. Scrape together a system at home and install a free hypervisor like VMware ESXi, KVM, or Xen. Or get a copy of VMware Workstation (or Player if you can’t afford it) or VirtualBox and install it on your workstation. Download ISOs of older software like CentOS 5.0 and start looking up exploits against them. Once you find them, look for ways to mitigate them without patching, because patching is not always a solution for a number of reasons.
- Learn multiple operating systems. You’re going to be interacting with a lot of different gear from different times. If you’re most comfortable on Windows, start learning Linux. When you do, it’s best to dive in, spending at least a week using it as your sole operating system to force yourself to learn how it works. Then find other environments that you don’t know and learn how they work. You’re not necessarily going for mastery, but some familiarity with how they work goes a long way.
- Learn a scripting language. Even if you’re not a developer, you need to learn something about automation. You have two primary choices based on default installations: Python for Linux and PowerShell for Windows. A third option, primarily for Linux, is Ruby, which is in some ways easier and more compact (and Metasploit is written in it). Regardless, you need not be an expert (though it helps), but you should be able to read a script and describe its flow. Find an idea and start writing it yourself. You’ll likely do it badly, but if it’s yours, you’ll have more passion and drive to finish it, and that will help you learn.
- Keep your eyes open. Security opportunities won’t always be as obvious as position postings. Have lunch with security people. Volunteer to work on security projects (even if security people aren’t involved). Volunteer your time with non-profits: the smaller ones, especially, can use some help. Go to conferences (the point bears repeating). There’s value in who knows you as they might pass word of a new opportunity along.
- Don’t whine. Very few people got into security purely by luck. Many of those who did failed to get anywhere. Getting into security usually takes work. Getting ahead in security takes more work. What will irritate security people is when someone whines incessantly that they can’t do something but clearly haven’t put forth any real effort. Show you’ve put forth the effort and you stand a chance of getting in and/or getting ahead.
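To make the scripting tip above concrete, here’s the kind of small, self-contained tool worth writing for yourself. This is a sketch of my own invention, not from any particular course: the log format, sample lines, and IP addresses are all made up, but the shape (read lines, match a pattern, tally results) is typical of day-to-day security scripting.

```python
import re
from collections import Counter

# Hypothetical auth-log lines of the form:
#   "Failed password for root from 203.0.113.7 port 4242 ssh2"
FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def count_failures(lines):
    """Tally failed-login attempts by source IP from an iterable of log lines."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Made-up sample data standing in for /var/log/secure or similar.
sample = [
    "Failed password for root from 203.0.113.7 port 4242 ssh2",
    "Accepted password for alice from 198.51.100.2 port 5151 ssh2",
    "Failed password for admin from 203.0.113.7 port 4243 ssh2",
]
print(count_failures(sample))  # Counter({'203.0.113.7': 2})
```

If you can read a script like this and describe its flow, you’re most of the way to the level of scripting literacy I’m talking about; writing your own version against your own logs is even better.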
That’s what I usually tell people, though this is (amazingly) a much shorter version of the discussions I usually have. I’m happy to talk with anyone who wants to get into security. We still need all the help we can get.
I’ve been puzzling over why Wireshark seems to lock up when launching on Windows 8.1 and dumpcap.exe sits in the background even after Wireshark is forced to close. Some experiments from the command line showed that any time dumpcap.exe tries to use some aspect of its capture behavior (including just listing interfaces), it locks up. Various tools suggested that it was waiting for some external event to allow it to close.
I finally learned from an Ask Wireshark post that it was due to WinPcap not starting on demand. The solution is simple:
- Change HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NPF\Start to value 0x03 (SERVICE_DEMAND_START).
- Enjoy packet capturing goodness.
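For reference, the same change can be made from an elevated command prompt. This is a sketch based on the standard Windows service tooling (note that `sc` requires the space after `start=`); `NPF` is the WinPcap driver’s service name:

```batch
:: Set the NPF (WinPcap) driver to start on demand (Start = 0x3, SERVICE_DEMAND_START)
sc config npf start= demand

:: Equivalent direct registry edit:
reg add HKLM\SYSTEM\CurrentControlSet\Services\NPF /v Start /t REG_DWORD /d 3 /f
```

Either form does the same thing as the registry edit above; use whichever fits your workflow.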
I believe this is an issue with WinPcap and not Wireshark. There’s an alternate solution of running Wireshark in Windows 7 compatibility mode, but I try not to run things in compatibility mode unless there’s really no other way to do it.
Last week, word hit about a piece of malware referenced as BadBIOS. Reported by Dragos Ruiu, founder of the Pwn2Own contest and a respected member of the security community, it’s said to be able to communicate with other infected systems via the sound hardware, similar in some ways to a modem.
There are still a TON of questions about this. As far as I’ve read, few if any other people have examined the affected hardware, but the researcher himself is considered trustworthy. I’ve seen a lot of reports that get the information wrong, like one claiming that BIOS was infecting BIOS via the sound capabilities, which is not (so far as I can tell) what is being claimed. What seems to be present is an incredibly resilient and persistent piece of malware that can communicate with other similarly-infected systems via the sound card, and that apparently affects more than one operating system: it has reportedly infected Apple’s OS X and Windows, as one might expect, but also Linux and even OpenBSD, the last of which is a very unusual target.
This is, in some ways, what was feared by many when Intel said it wanted to move from BIOS to EFI/UEFI. Intel had some very good reasons for this, as the limitations of BIOS were interfering with general computing hardware advancement, but when you put what amounts to an operating system in the firmware with room to expand, there stands a good chance that it’s going to be abused. UEFI sits under everything, and while it’s not quite a virtual machine host (yet), it has many of those same capabilities: it can easily read what passes between hardware components, giving it the ability to alter data at many points. It also makes malware extremely difficult to pry out, as few if any malware detection mechanisms can look into the firmware.
Based on a recent (mediocre) book series I’ve been reading, the thought crossed my mind that it may have been secretly sent to one or more researchers so that they would find it specifically in order to derail some secret capability developed by a state-sponsored agency or group. That’s getting into conspiracy theory, something I don’t tend to do, but those happen online more than they happen in meatspace.
In any case, it’s still something I’m watching, and I’m sure there are researchers working to develop similar capabilities. It’s not something I worry about hitting my systems, because the complexities of doing so are enormous. Most computer hardware is built to handle very specific information, but audio still starts at an analog microphone and ends at an analog speaker, and the quality of both diverges significantly from one system to the next, even within the same model of hardware. I can see how data can be delivered via sound–we’ve done it for decades with modems–but aside from targets picked very carefully, I have difficulty believing that this could be used for something widespread, especially since the infection mechanism needs a different entry point.
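As an illustration of the modem comparison (and only that–nothing is known about BadBIOS’s actual mechanism), here’s a toy sketch of frequency-shift keying: each bit becomes a burst of one of two tones, and a receiver recovers the bit by checking which tone carries more energy. All of the parameters (sample rate, tone frequencies, burst length) are arbitrary choices for the example.

```python
import math

RATE = 8000          # samples per second (arbitrary)
BIT_SAMPLES = 400    # samples per bit (50 ms at this rate)
F0, F1 = 1000, 2000  # tone frequencies for bits 0 and 1 (Hz)

def encode(bits):
    """Render a bit string as a list of samples, one tone burst per bit."""
    samples = []
    for bit in bits:
        freq = F1 if bit == "1" else F0
        for n in range(BIT_SAMPLES):
            samples.append(math.sin(2 * math.pi * freq * n / RATE))
    return samples

def decode(samples):
    """Recover bits by correlating each burst against the two tones."""
    def energy(burst, freq):
        # Correlate with sine and cosine at freq to measure tone energy,
        # regardless of phase.
        s = sum(x * math.sin(2 * math.pi * freq * n / RATE)
                for n, x in enumerate(burst))
        c = sum(x * math.cos(2 * math.pi * freq * n / RATE)
                for n, x in enumerate(burst))
        return s * s + c * c

    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        burst = samples[i:i + BIT_SAMPLES]
        bits.append("1" if energy(burst, F1) > energy(burst, F0) else "0")
    return "".join(bits)

print(decode(encode("10110")))  # prints: 10110
```

Pushing real bits through real air is vastly harder than this round trip in memory–noise, echoes, and the hardware variance mentioned above all get in the way–which is exactly why I’m skeptical of it as a widespread mechanism.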
It’s an interesting piece of targeted malware (if real), but it’s not going to take over the world.
For anyone that happens to be struggling with getting Fedora 19 installed on the Asus 1015E, at least with BIOS revision 303, it appears that there’s something in the installer kernel (3.9) that doesn’t agree with the system. Fedora 20 (kernel version 3.11) does work, though since it’s currently in pre-Alpha state, you’re installing it at your own risk.
Other than that, it’s a great little $200 notebook.
On 10 September 2013, the US Ninth Circuit Court of Appeals ruled in Joffe v. Google that Google’s capture of payload data from unencrypted WiFi networks, performed while its specially-equipped cars gathered Street View images, was not exempt from the federal Wiretap Act. In some sense, this isn’t surprising, but the way that the decision was worded makes it appear very easy for anyone to accidentally become a wiretapper, and puts in danger those of us who perform captures for a living.
I ran into a problem a few weeks ago with my Linux system. After performing a kernel update and rebooting, I couldn’t remember the disk encryption password. I tried for an hour or more, running through all of the passwords I could think of, including with new combinations and possible miskeys, but nothing worked. Finally, I shut it off in frustration.
Last night, I figured I’d take another crack at it. After nearly 30 minutes, I finally stumbled across the right password, and it was something that I’d tried several times before, both last night and during the previous attempt, but apparently I’d managed to miskey it a few dozen times. Success!
Until I tried to log in.
Password for my account? Wasn’t happening. Couldn’t remember what it was. Worse, I couldn’t remember the root password, either. OK, I figured, I’ll just reboot into single-user mode and reset the password.
It wasn’t quite that simple.
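For anyone unfamiliar with it, the usual single-user recovery path looks roughly like this. This is a generic sketch, not the exact steps from my machine; details vary by distribution and bootloader, and the username is a placeholder:

```shell
# At the GRUB menu, press 'e' on the boot entry and append to the kernel line:
#   single            (or: init=/bin/bash for a bare root shell)
# Boot the edited entry, then from the resulting root shell:

mount -o remount,rw /      # root is often mounted read-only at this point
passwd                     # reset root's password
passwd myuser              # reset a user account (hypothetical username)
touch /.autorelabel        # on SELinux systems, force a relabel on next boot
reboot
```

On a plain install that’s all there is to it; it’s the variations (SELinux, encrypted volumes, bootloader passwords) that turn a five-minute fix into an evening.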
A couple of days ago, I was invited by Google to enable a new mobile Chrome feature. Thinking that perhaps this was the new QUIC protocol, I went ahead and accepted. What I got instead was an offer to run all cleartext traffic through Google’s proxy servers.
The feature is still in an extremely limited, invitation-only beta, but Google’s claims regarding improved performance are probably accurate. Sitting in the middle of the connection, the proxy can certainly compress traffic and convert images to a format better suited for a mobile device, particularly one with a low screen resolution, reducing the amount of data to be downloaded and thus improving network performance, especially over slower connections. Exceptions would be made for HTTPS traffic and anything coming from an Incognito session.
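As a rough illustration of the compression half of that claim, text-heavy resources like HTML shrink dramatically under gzip. The numbers here come from a synthetic, deliberately repetitive example of mine, not from Google’s proxy, but real markup behaves similarly:

```python
import gzip

# Synthetic, repetitive "page" standing in for typical HTML markup.
page = b"<div class='item'><a href='/link'>text</a></div>\n" * 200

compressed = gzip.compress(page)
print(len(page), len(compressed))
# Highly repetitive text compresses to a small fraction of its original size.
assert len(compressed) < len(page) // 10
```

Image recompression buys even more on a typical page, which is why a middlebox like this genuinely can make slow connections feel faster.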
But this is at a severe cost in privacy. Every single unencrypted connection in a normal browser session would run through Google’s servers, allowing not only possible interception of passwords and other sensitive data (remember that not all data is legally protected) but also the possibility of feeding otherwise hidden pages into Google’s index. Despite the potential (certainly not assured) speed advantages, I fear that Google will at least make this a prominent option for users to enable without understanding the risks. Most people will choose convenience (in this case speed) over security given the option.
This is one of those things that I’ve long warned against. I’m fine with home filters, but those are generally under the owner’s control. A proxy that you don’t control gives ultimate power to whoever does own the proxy. It could block the traffic for any (or no) reason, and the information that the user gets back about the block may or may not be accurate.
It also makes for a central point of monitoring that any government would love to have the opportunity to use. Even looking at things optimistically, I’m sure the FBI would love to tap it in criminal cases, but there are plenty of other countries (like India) that are trying to or have set up monitoring as a fact of life, and I doubt that those countries’ networks will be made exempt from this feature.
I can’t get excited over this at even the most basic level. Usually when I see a new Google feature, I see what they’re trying to do even if the implementation is a little iffy. However, in this case I really can’t see the net good to come from it.
There’s a long history in the United States of backing the hero doing the wrong thing for the right reason. We love movies like Dirty Harry, Lethal Weapon, Beverly Hills Cop, and Hard Boiled where the good guy (usually a cop) finds no other way to get the bad guy than to break the law. At the same time, some of the best villains are seen to have done the wrong thing for the right reason: Gen. Hummel in The Rock exemplifies this when he takes hostages to force the government to tell the truth about the deaths of Special Forces soldiers over the years.
While those are fantasy worlds, there’s also a long history of sympathy for those real people who break the law for what society (or parts of it) deem to be the right reason. From those who resort to cannibalism to survive to those who refuse to disperse while in largely peaceful protest to a president who ignored separation of powers and ordered military trials of civilians, we look upon them with approval or at least forgiveness because we realize that sometimes extraordinary times require extraordinary measures.
But most of these approvals of real actions are in hindsight. At the time of the action, they are often controversial, even unpopular. But perhaps there’s another aspect to them that is often overlooked: they’re not happening to us. When an action doesn’t directly affect a person, they’re less likely to take a strong negative view on it than when they see a real or potential impact in their own lives.
This happens when we hear about the reality of combat, especially if we know someone who has been in the fighting. Even if we disagree with the war, we tend to give the benefit of the doubt to the individual because we want to trust that they did the right thing at the time even if it was illegal or usually considered immoral or unethical. But when we specifically are caught in the cross-fire, literally or figuratively, we tend to have a very different view.
And that’s what I think has caused the uproar over the NSA surveillance. Don’t get me wrong–I have some serious issues with it, too–but when there was reason to believe that it was primarily happening to people in other countries or to potential terrorists in the United States, people didn’t get too worked up over it.
Now that the Snowden documents have revealed ever-increasing surveillance of many millions of Americans–perhaps nearly all of them–it’s suddenly hit home that the average person could come under suspicion for the simple act of making or taking a phone call, visiting a website, or chatting with a friend. We start to worry that in connecting various dots, we could become a dot, and the known protections against this are nebulous at best. We have only claims from the government which include a court that has little or no adversarial activity. And that’s not good enough.
It doesn’t help that for most people, the NSA is a faceless entity. Most people don’t know anyone who works there, or if they do they don’t realize it as those who draw an NSA paycheck generally don’t advertise it. When we can’t put a familiar face on an activity, the motives become questionable, even sinister, because we have no one to question.
I’ve known some who have worked for some of these agencies. One shared trait is not talking about foreign affairs, and usually for the same reason: from the inside, those with a TOP SECRET/SCI clearance see things that change their view of the world. I’ve been told by someone who would know that the average stay in the NSA’s counter-terrorism group is two years or less; after that, they burn out. They see so much that the general public not only doesn’t get to see but doesn’t want to see that they can’t talk even about things not covered by their clearance. It’s just too frustrating.
And maybe that’s led to scope creep. The analysts and their bosses are, at least in their minds, dealing with extraordinary times, and extraordinary times require extraordinary measures. If we had just done this one other thing, maybe we would have caught the attack before it could do damage.
I imagine this happens fairly regularly. Someone comes up with an idea, someone else expresses discomfort, it gets bounced around the lawyers and perhaps the White House, and then a rationale is provided. I expect not everything is approved. Some things are too complex, too expensive, too niche, or just too blatantly unconstitutional. And sometimes there’s very strong push-back. But someone, somewhere, comes up with a legal reasoning and those who are not steeped in the law tend to go with it. It becomes easy to justify: We’re not the legal experts, we need this capability, it will save lives. Extraordinary times, extraordinary measures. That’s what they tell themselves.
But in extraordinary times, it takes extraordinary people to stand up against the illegal and unconstitutional. It’s critical that those protecting us remember what is being protected. People are being protected, but so is the foundation on which the country was built. That foundation has served for more than 200 years as an inspiration to people everywhere. The personal rights enshrined in the United States Constitution have largely become the accepted way that things should be around the world. When they’re set aside by stretched reasoning, even for extraordinary times, it undermines the very foundation of our society. Edward Snowden remembered that, and whatever his personal faults and mistakes, his actions have opened our eyes and caused an international discussion about how much is enough.
Yes, something might slip through. Another Boston Marathon bombing may happen. But even in its aftermath the country and–more importantly–its ideals survive. There are times when the wrong thing is the right thing to do. But it’s the exception, never the rule. Extraordinary measures used every time become ordinary–and wrong. And we must remember that, whether we are an average citizen, a police officer, a soldier, an intelligence analyst, or a president.