Ten Laws of Security Administration

January 31, 2011

And the companion piece, the Ten Immutable Laws of Security Administration:

Law #1: Nobody believes anything bad can happen to them, until it does
Law #2: Security only works if the secure way also happens to be the easy way
Law #3: If you don’t keep up with security fixes, your network won’t be yours for long
Law #4: It doesn’t do much good to install security fixes on a computer that was never secured to begin with
Law #5: Eternal vigilance is the price of security
Law #6: There really is someone out there trying to guess your passwords
Law #7: The most secure network is a well-administered one
Law #8: The difficulty of defending a network is directly proportional to its complexity
Law #9: Security isn’t about risk avoidance; it’s about risk management
Law #10: Technology is not a panacea


Ten Laws of Security

January 31, 2011

The ten laws of security, from Microsoft’s TechNet site. Click on the link for full explanations, if you like.

Law #1: If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore
Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore
Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore
Law #4: If you allow a bad guy to upload programs to your website, it’s not your website anymore
Law #5: Weak passwords trump strong security
Law #6: A computer is only as secure as the administrator is trustworthy
Law #7: Encrypted data is only as secure as the decryption key
Law #8: An out-of-date virus scanner is only marginally better than no virus scanner at all
Law #9: Absolute anonymity isn’t practical, in real life or on the Web
Law #10: Technology is not a panacea


Definition Monday: Virtual Private Networks

January 31, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Virtual Private Networks.

Virtual Private Networks, commonly referred to as “VPNs” for the sake of brevity, are a common technology in today’s corporate networks. The concept behind them is simple – provide a way for two geographically distinct sites to appear to be on the same internal LAN, so that the users and computers on those networks can share resources. At the same time, of course, those resources need to be protected from the users of the Internet at large.

Back in the Dark Ages, when I started working in IT, connecting two sites to one another required either laying fresh cable or (more likely) leasing a connection from the local telecom monopoly. You would call up the corporate sales office for your RBOC and tell them that you needed a T-1 or T-3 or Frame Relay connection between two sites, and they would quote you an outrageous price. You would then pay it, because you had no other options, and would receive a slow-by-today’s-standards dedicated connection between the two locations. Your satellite office could now access internal resources in your main office.

Now, dedicated lines weren’t all bad, of course – they were reliable, for the most part, and secure. But they were very pricey, and not a geographically flexible technology; if you signed that contract, your satellite office wasn’t going to be moving for a long, long time.

VPNs give that same sort of functionality, but without the physical networking path. Essentially, a VPN is a cryptographically secure tunnel between two sites on the Internet. Traffic flows through, from one side to the other, securely wrapped in something like IPsec and unaware of the transition between networks. From the point of view of the end user’s applications, it is a transparent technology.
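To make the “secure tunnel” idea concrete, here is a toy sketch in Python of the encapsulation step. The packet format and key handling are invented for illustration – a real VPN uses a negotiated protocol such as IPsec ESP, with proper encryption as well as integrity protection:

```python
import hashlib
import hmac
import os

# Toy illustration of VPN-style encapsulation. Real VPNs use a
# negotiated protocol such as IPsec ESP; this sketch shows only the
# wrapping step, with an HMAC tag standing in for full protection.
SHARED_KEY = os.urandom(32)  # in practice, negotiated via something like IKE

def encapsulate(inner_packet: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Wrap an inner packet in an authenticated outer datagram."""
    tag = hmac.new(key, inner_packet, hashlib.sha256).digest()
    return tag + inner_packet  # outer format: [32-byte tag][payload]

def decapsulate(outer_packet: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Verify and unwrap a datagram at the far end of the tunnel."""
    tag, inner = outer_packet[:32], outer_packet[32:]
    expected = hmac.new(key, inner, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet failed integrity check")
    return inner
```

The important point is the shape of the operation: the inner packet travels intact inside an authenticated outer wrapper, and the applications on either end never see the wrapper at all.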

One implementation style of VPN is the client-server model, as seen above: this is when a remote worker needs to access corporate resources, and so he or she runs a VPN software client on a computer at a remote location to connect to the corporate network. This allows the client to have a network address on the corporate network; it’s essentially the same as just giving the client a very, very long Ethernet cable and plugging it in behind the corporate firewall. All traffic intended for the corporate network, symbolized by the green line above, passes through the firewall and is able to access the internal corporate network directly.

Another style is the site-to-site, or LAN-to-LAN, model. This is the replacement for the dedicated leased line model above; traffic is transparently routed through a tunnel between the edge of one network and the VPN concentrator on another network. Using the same green line as in the previous example, you see that the encrypted tunnel actually starts at the router that the client computer is connected to, rather than at the client computer itself. This would be used when setting up a remote or satellite office. And since the VPN is not tied to a physical wiring structure, as a leased line is, it can also be used to set up a temporary office “in the field”.

Which approach is correct? It depends on a lot of factors – how many people will be using the VPN, how permanent the satellite location is, and what sort of authentication and authorization schemes you have in place or need to implement. But independent of the details, the VPN technology as a whole is an excellent tool for a geographically dispersed business that needs to share computer resources among employees while protecting them from the Internet at large.


Bring-Your-Own-Hardware in the Enterprise

January 30, 2011

In case you haven’t noticed, we’re in the middle of a seismic shift in end-user computing. Gone are the days of desktop computers, chained to a piece of furniture and attached over a local network to a file server and a print spooler. The new watchwords are mobility and flexibility, as more and more workers are getting accustomed to tablet computers, smartphones, laptops, and the ability to access their data from anywhere with an Internet connection. More and more often, employees want to have this sort of experience at work as well as at home, even going so far as to use their personally-owned equipment on their employer’s network.

From the employee’s point of view, it just makes sense. Given the choice between a boring two-year-old HP or Dell desktop computer running the locked-down corporate image of Vista and a sleek new MacBook, most people are going to choose the latter. Especially if they’re already Macintosh enthusiasts.

From the employer’s point of view, it also makes sense – if your employees are happy, they’re going to be more productive. And if having exactly the computing environment they want, on their own dime, makes them happy, who would stand in the way? The company saves money from the technology budget, and the employees get to choose the tools they’re most comfortable with. It’s a win-win situation.

However, things can go south in a hurry. Little or no input over employee equipment means that it’s difficult to maintain a solid security posture. Clients need to be treated as hostile and anonymous until proven otherwise, a clear break from the tradition of trusted clients that have been vetted by IT. If you decide to move in this direction in your own company, here are a few principles and suggestions that you should keep in mind.

  • Have Appropriate Policies, And Publicize Them

Many of the things that I am going to suggest depend on writing clear, concise policies and educating your end users about them. When users are in charge of their own workstations, they need to understand the consequences of their actions. Central IT cannot provide the safety net that it has in the past when it no longer has any control over the clients on the network.

Policies need to be inclusive, rather than exclusive – that is, they should include requirements rather than restrictions. Also, they should be as technology-agnostic as possible. This makes them easier to keep current, and harder to find loopholes in.

“All mobile devices connecting to the Exchange environment must be ActiveSync compatible.” – Good.

“Mobile devices running Windows Mobile are forbidden on the corporate network.” – Bad. What happens when Microsoft changes the name of their mobile OS? And what if there’s another OS that you’re banning for the same reason?

  • Guard Network Access Jealously

If users are going to be showing up with their own devices and plugging them into your network, you need some way to know who owns what. At the very least, implement a registration system like Netreg so that you can track MAC addresses and who owns them. (I know that a MAC address is trivially spoofed, but it’s better than nothing.) A better solution would be to roll out 802.1X on both the wired and wireless networks, forcing authentication against a centralized RADIUS server at connection time. An ideal solution would be a full-blown Network Access Control (NAC) implementation, whether it’s something commercial like Cisco Clean Access or Bradford Campus Manager, or an open-source solution like PacketFence. A NAC system not only registers devices, but can also evaluate their security posture to allow or deny access to the network.

So, if you have a site license for an antivirus product, and you don’t want people connecting to your network without it, a NAC can make that happen. It may seem like an unnecessary investment, until the first time there’s a malware outbreak on your network and you have no way to isolate infected machines.
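The admission decision a NAC system makes can be sketched in a few lines of Python. The registry, the checks, and the thresholds here are all hypothetical – this is not any vendor’s actual API – but it captures the register-then-evaluate flow:

```python
# Hypothetical sketch of a NAC admission decision. The registry, the
# checks, and the thresholds are invented for illustration only.
REGISTERED_MACS = {
    "00:16:3e:aa:bb:cc": "jsmith",  # populated by a Netreg-style registration system
}

def admit(mac: str, has_antivirus: bool, av_definitions_age_days: int) -> bool:
    """Decide whether a client may join the production network."""
    if mac.lower() not in REGISTERED_MACS:
        return False  # unknown device: send to the quarantine VLAN
    if not has_antivirus or av_definitions_age_days > 7:
        return False  # missing or stale AV: send to the remediation VLAN
    return True  # registered and healthy: admit to the production VLAN
```

A real NAC agent gathers the posture data (AV status, patch level, and so on) from the client itself, but the decision logic has this same shape.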

Also, it is appropriate to treat every client as potentially infected or hostile. Use IDS/IPS systems to monitor traffic, use host-based firewalling to protect servers from clients that haven’t been whitelisted, use egress filtering and log flow data at your border.

  • Have A Proper Backup System

Imagine this scenario – you find out that one of your employees has been giving proprietary data to a competitor. This person works in sales, and has a tremendous amount of vital customer data in his possession. On his personally-owned laptop. Which you would have no legal right to access, at least not without lawyers getting involved.

Uh-oh.

Products like Microsoft Data Protection Manager and Apple Time Machine should be used to take regular, periodic backups of corporate data stored on personally owned computers. If data is the lifeblood of your business, and for most people it is, then there needs to be at least one copy of that data on a machine that’s owned by the business. This is one of those policies that I was talking about earlier.

  • Secure the Endpoints

Your company needs to have a policy governing encryption of sensitive data, regardless of who owns the hardware. Modern operating systems all come with encryption options – Apple’s FileVault, Microsoft’s BitLocker, the LUKS capability built into most Linux distributions. Aside from those, there is a wealth of third-party tools, like PGP Desktop or Utimaco, that can be installed and used. Anything carrying sensitive data needs to be properly secured, especially when that “anything” spends sixteen hours a day outside the office with its owner, unaccounted for. You don’t want to be the business on the front page of the local paper after someone in Accounting leaves his ThinkPad in a taxi.
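For a sense of what is happening under the hood of any of these tools, here is a deliberately simplified Python sketch that XORs data against a keystream derived with SHA-256 in counter mode. This is a toy construction for illustration only – for real data, use a vetted tool like FileVault, BitLocker, LUKS, or an established crypto library:

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    # Derive a pseudorandom keystream from the key by hashing the key
    # plus an incrementing counter. Toy construction for illustration;
    # real tools use a vetted cipher plus nonces and integrity checks.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data against the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because XOR is its own inverse, the same function both encrypts and decrypts; production tools layer key derivation from a passphrase, random nonces, and integrity protection on top of a proper cipher.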

This goes for mobile phones, as well. A system like BlackBerry Enterprise Server Express or Microsoft Exchange allows security policies to be pushed down to associated handsets. At the very least, handsets should lock after a short idle period and require a password to unlock, and employees should be required to report a lost handset immediately so that it can be remotely wiped.

  • Use Remote Desktop Capabilities

For truly sensitive information, it might be wise not to let it leave your corporate servers at all. Technologies like Citrix Access Gateway or Microsoft Remote Desktop can be used to allow access to a desktop shell without ever moving data across the link to the client machine. I would recommend using multi-factor authentication, with ID tokens or smart cards, to mitigate the risk of a compromised machine leaking authentication credentials to your terminal server environment.

  • Conclusion

First, the bad news: there’s no way to make a network of dissimilar, un-vetted devices completely secure. It is entirely possible for an end user to simply disregard policy and wreak tremendous havoc on your network before you’re able to stop him. If you’re able to stop him.

But here’s the good news: a network that is designed to actively deal with the threat of a rogue client is much more likely to withstand an internal attack than one designed around the traditional trusted-machines-behind-a-single-firewall model. Implementing these technical suggestions, along with dissemination of appropriate policy, could easily make a Bring-Your-Own-Hardware network more reliable and robust than its traditional counterparts, even with employees choosing their own gear.


Netflix Throughput Charts

January 28, 2011

As a provider of high-definition video streams, Netflix is in a unique position to determine the sustained bandwidth that is commonly available to the customers of broadband ISPs. And, because they are such generous souls, they’ve chosen to share that information. In graphical form, no less.

The winners? Charter, in the US, and Rogers in Canada.


Facebook Subpoenas

January 28, 2011

For a long time, public posts on the Internet have been admissible as evidence. But more and more often, private or restricted posts are being subpoenaed from sites like Facebook and MySpace for use in court.

From the article:

In the United States, postings on social networks are generally governed by the federal Stored Communications Act, which regulates how private information can be disseminated in non-criminal matters. The law has been interpreted to mean that the sites don’t have to hand over users’ personal data in response to a civil subpoena. Defense lawyers, though, have devised a strategy to work around this roadblock: They ask judges to order plaintiffs to sign consent forms granting defendants access to their private material. The defendants then attach these consent forms when they subpoena the sites. In these subpoenas, the plaintiffs are essentially authorising the sites to hand over printouts of the private portions of their pages to the defendants.

Long story short – if you’re going to claim a debilitating injury, you probably shouldn’t post photos of your rock-climbing trip a week later on Facebook. Even if they’re “private”, they’re not.


Huawei Cipher Weakness

January 28, 2011

According to this post on Bugtraq earlier this week, the Huawei HG520 and HG530 home WAPs have a weak generation scheme for the default encryption key – it can be generated from the device’s MAC address. And since the MAC address is broadcast in the clear in every frame, the encryption key can be derived by anyone passively eavesdropping on the traffic.

Just another example of why you should never, ever, use the vendor’s default password for anything. Even if it’s “secure” and “unique”. There has to be some way to generate it reliably during manufacturing, and that algorithm is rarely secure enough to rely upon.
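To see why this class of mistake is fatal, here is a hypothetical derivation sketched in Python – not Huawei’s actual algorithm, but the same shape of flaw: the key is a pure function of a value the device broadcasts in the clear.

```python
import hashlib

def default_key_from_mac(mac: str) -> str:
    # Hypothetical derivation for illustration -- NOT the actual Huawei
    # algorithm. The flaw it demonstrates is real, though: the default
    # key depends only on the MAC address, which every frame broadcasts.
    normalized = mac.lower().replace(":", "").replace("-", "")
    digest = hashlib.md5(normalized.encode("ascii")).hexdigest()
    return digest[:10]  # 10 hex digits, a 40-bit WEP-style key
```

Anyone who sniffs a single frame learns the MAC address, runs the same function, and has the key – no active attack required.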


Sourceforge Attack

January 27, 2011

Sourceforge.net, a hosting service for open source projects, has suffered a serious security breach. They are currently working to identify the source of the exploit and ensure the integrity of the remaining data in their environment. Some services, notably CVS, are still down as of this writing.


Facebook HTTPS

January 27, 2011

Facebook is adding HTTPS support spanning the entire web site. Previously, only the pages where authentication credentials were entered were encrypted; this meant that authentication cookies could be captured in plaintext, as with the Firesheep tool. This should put an end to that.

The capability can be activated by end users on the “Account Settings” page.


UNC Breach

January 27, 2011

An excellent writeup on the recent UNC-Chapel Hill security breach at Inside Higher Ed.

Here’s a quick synopsis: Dr. Bonnie Yankaskas, a professor of radiology at the university, was collecting mammography data for a study. The server holding the data, which included medical records and social security numbers, was breached by an unknown attacker and the data is considered to be potentially compromised.

The University wanted to fire her, but settled for demoting her to Assistant Professor and halving her pay.

Dr. Yankaskas’s argument is that she is an academic researcher, not a computer security expert – disciplining her for a security breach is unfair, because this is not her area of expertise or her responsibility. The school’s policy is that she should have appointed a “server caretaker” to monitor the firewall, install patches, and so on; the person she chose was a programmer with no training in security and no experience in server administration. She also ignored his requests for training over the years, and continually graded him as “excellent” in his administration of the server, despite the fact that he did not know what he was doing.

This is a typical tension in higher education – the faculty want to be free of the strictures of security and IT policy, because they feel it unfairly confines their research. IT, on the other hand, wants to be as strict as possible and keep everything in a nice, predictable box.