DNS Attack

March 30, 2012

Apparently the loosely organized hacking collective/meme known as Anonymous has announced that they will take out the Internet’s root DNS servers with a massive DDoS tomorrow.

How likely is it that they’ll succeed? Not very, for a whole host of reasons.


Dell Buys SonicWall

March 13, 2012

Apparently Dell has agreed to purchase SonicWall from the private equity group that has owned them for the last couple of years. This should be an interesting transition for current SonicWall customers; hopefully the support experience doesn’t degrade too terribly much.

IPv6 Deployment

July 28, 2011

According to this survey from Network World, most IT departments plan on having their webservers and other externally-facing resources available via Internet Protocol v6 in the next 24 months. A majority of respondents also plan to have their internal networks running either v6 or dual-stack within the same timeframe.

Do you have a plan? If not, I’d say you’re already way behind schedule.

Cisco VoIP Exploits

May 13, 2011

Once again, we see the results of telecom functionality moving into the networking space – the old-school telecom people just aren’t ready for the demands of properly securing an IP network. AusCERT has asserted that Cisco VoIP products, out of the box, can be vulnerable to attacks that turn them into listening bugs, allowing an attacker to eavesdrop on conversations, or that crash them entirely in a denial-of-service attack.

Running any service over an IP network means that you now have TWO sets of security problems to deal with. In much the same way that replacing “dumb” cell phones with smartphones added tremendous security headaches, so too does the transition from traditional PBX systems to a VoIP world.

Fallout in the Cloud

May 2, 2011

The recent Amazon cloud services outage has caused some consternation, especially among the customers who permanently lost data that they had entrusted to Amazon for safekeeping.

It is important to remember that one of the three pillars of information security is “availability”: that is, ensuring that your information environment is robust enough to survive catastrophic events and continue providing information resources to the people who need them. Clearly, simply handing over your business data to a third-party and then washing your hands of responsibility for it is not a valid practice.

ASUS Transformer

April 25, 2011

While this isn’t technically a security-related topic, I wanted to pass along a link to this review of the new Android-powered ASUS EEE Transformer tablet. It’s running Honeycomb, much like the Motorola Xoom, but with an optional keyboard dock to turn it into a traditional laptop form factor.

It might be a good time to make sure that your NAC systems and other network infrastructure are capable of handling Android devices – I’m sure this is only the first of many laptop/desktop systems running the OS. It’s not just for phones any more.

Android DHCP Issue

April 20, 2011

Having trouble with misbehaving DHCP clients on Android devices? You are not alone. Check out this entry over at the Google bug tracker.

One of the possible culprits is a DHCP lease timer that’s tied to the system clock; unfortunately, that clock stops advancing while the machine is asleep and simply jumps forward when it wakes, so the renewal request is never generated. Nice.
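To see why that matters, here is a minimal sketch (in Python, with illustrative names – not Android’s actual code) of a renewal check driven by a clock that freezes during sleep:

```python
# Sketch of the failure mode: the DHCP client schedules its renewal
# (T1, typically at 50% of the lease) against a clock that does not
# tick while the device sleeps. All names here are hypothetical.

def renewal_due(clock_now, lease_acquired_at, lease_seconds):
    """Return True once the T1 renewal point has been reached."""
    t1 = lease_acquired_at + lease_seconds * 0.5
    return clock_now >= t1

# Lease acquired at t=0 for 3600 seconds; renewal is due at t=1800.
# The device sleeps from t=100 onward, so the frozen clock still
# reads 100 at real time t=1800 and the check never fires...
print(renewal_due(100, 0, 3600))    # False -- clock frozen during sleep

# ...and by the time the clock jumps forward on wake, the lease
# has already expired and the device is left with a stale address.
print(renewal_due(5000, 0, 3600))   # True, but too late
```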

Definition Monday: IPv6

April 11, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: IPv6.

“I am a little embarrassed about that because I was the guy who decided that 32-bit was enough for the Internet experiment. My only defense is that that choice was made in 1977, and I thought it was an experiment. The problem is the experiment didn’t end, so here we are.”

–Vint Cerf on IPv4 address space exhaustion

The Internet, as it has existed for a long time, is about to undergo a massive change. We are out of addresses.

The current addressing scheme for much of the Internet, especially here in the United States, is IPv4 (that is, Internet Protocol version 4). It is the “dotted quad” notation that you’ve no doubt seen before – four eight-bit numbers, each from 0 to 255, separated by dots. Something like 127.0.0.1 for the loopback address, or 192.168.1.1 for your home broadband router.

Because these four numbers are eight bits apiece, there is a maximum of 32 bits of address space available (actually, less than that due to various reservations and technical details, but ignore that for now). Thirty-two bits means 2^32, or roughly 4.2 billion addresses. Considering the world population is nearly seven billion, and that we also need addresses for servers and other network infrastructure, we clearly just don’t have enough to go around.
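To illustrate, Python’s standard ipaddress module makes the relationship between the dotted quad and the underlying 32-bit number explicit:

```python
import ipaddress

# The loopback address as a dotted quad
addr = ipaddress.IPv4Address("127.0.0.1")

# Internally it is just a 32-bit integer: 127*2**24 + 0 + 0 + 1
print(int(addr))   # 2130706433

# The total IPv4 space: four octets of 8 bits each = 32 bits
print(2 ** 32)     # 4294967296 -- about 4.2 billion
```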

(I’ve noted on this blog in the past that exhaustion was imminent. Solutions like the Microsoft/Nortel buyout are, at best, extremely limited in the amount of time they buy.)

The solution to this, which has been available for years, is IPv6 (Internet Protocol version 6). Rather than being based around a 32-bit number for each node, IPv6 addresses are based around a 128-bit address. To the layman, this might look like a fourfold increase, but in fact it’s much more than that. Moving from 2^32 to 2^128 is an increase on the order of 2^96 times – that is, there are 7.92281625 × 10^28 more addresses available in the new scheme. The assumption is that most home user Internet connections will be issued a “slash 64”, or a 64-bit address space; this is enough to host the entire current Internet, squared.
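The arithmetic is easy to check for yourself:

```python
# Comparing the IPv4 and IPv6 address spaces
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

# The increase is a factor of 2^96, roughly 7.92 * 10^28
ratio = ipv6_space // ipv4_space
print(ratio == 2 ** 96)   # True
print(float(ratio))       # roughly 7.92e+28

# A /64 allocation leaves 64 bits of host space --
# the size of the entire current IPv4 Internet, squared
print(2 ** 64 == (2 ** 32) ** 2)   # True
```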


The benefits:

  • This should easily take care of any future scarcity issues. There are enough individual addresses in IPv6 to assign one to every atom on the surface of the Earth, with plenty left over. It’s a staggering number of addresses.
  • IPv6 has many technical tools built into it from the ground up – things like autoconfiguration and IPsec encryption – that were clumsily grafted on to the IPv4 world.
  • This will restore the end-to-end nature of the Internet, where nodes can directly contact one another without hacks like NAT and PAT. Your home broadband router will no longer require “reserved” addresses in conjunction with port forwarding and other messy workarounds – instead, everything on your home network will have a unique, Internet-accessible address.
  • New sites, especially in China and the developing world, will be deployed on IPv6. If you want to communicate in the future with web sites that don’t exist right now, you need IPv6 connectivity.
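Python’s ipaddress module can also confirm the size of a “slash 64” (using 2001:db8::/64, the prefix reserved for documentation examples):

```python
import ipaddress

# 2001:db8::/64 -- the IPv6 prefix reserved for documentation
net = ipaddress.ip_network("2001:db8::/64")

# 64 bits of host space: the current Internet's address count, squared
print(net.num_addresses == 2 ** 64)   # True
```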


The challenges:

  • Obviously, everything being directly connected to the Internet will require an increased emphasis on security. For too long, vendors have hawked NAT as a “firewall” solution, which it really isn’t – this will require some rethinking.
  • A lot of equipment will need to be replaced. Even now, in the year 2011, Cisco is selling Linksys-branded network equipment that is not IPv6 compliant. And it’s not just network equipment: everything on a network needs to be evaluated and possibly updated or replaced – firewalls, servers, SIEM systems, VPN concentrators, even simple appliances like NTP time sources.
  • The IPv6 way of doing many things is different; generally better, but different. There will be a significant learning curve, even for experienced network administrators.

All in all, the move to IPv6 will be a positive thing. But there’s a reason why the protocol has been available for a decade and we’re only now implementing it as a matter of necessity. It’s a huge, sprawling, complicated deployment, on the order of the Y2K fiasco, and it will require lots of careful thinking and analysis in your organization.

Additional Comodo Breach

March 30, 2011

In the wake of last week’s compromise at Comodo, which was used to issue fraudulent certificates, two more breaches have been announced.

Certification Authorities, or CAs, are at the top of the trust hierarchy for SSL connections. They are the people that verify that a certificate claiming to be from google.com is actually from Google. If a large CA is compromised, and certificates can be forged, the entire trust system built into SSL implementations begins to crumble. This is, to put it lightly, a Bad Thing.
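For illustration, Python’s standard ssl module shows this trust model in action: its default client context refuses any server certificate that doesn’t chain back to a trusted root CA.

```python
import ssl

# The default client-side context loads the system's trusted root CAs
# and requires every server certificate to chain back to one of them.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True -- unverifiable certs are rejected
print(ctx.check_hostname)                    # True -- the cert must match the hostname
```

The catch is that a forged certificate issued by a compromised CA passes both checks, because it really does chain back to a trusted root – which is exactly why a CA breach is so damaging.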

Definition Monday: Network Access Control

March 28, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Network Access Control.

In an extraordinarily high security environment, it’s possible that only devices personally vetted by the IT team could be connected to a network. Using strict procedures as well as technologies like 802.1x, multifactor authentication, and MAC address port locking, it would be possible to ensure that only a specific set of network devices would be able to pass data.


However, very, very few networks are run with that sort of tight security. In a typical enterprise environment, previously unknown and unvetted clients need to connect all the time. Salesmen or consultants visiting the office will need network resources to work. Student interns might bring their own laptops or palmtops, since few companies will actually issue computers to the unpaid. Employees might want to use an iPad or some other personally owned device in addition to their corporate computer.

So the question becomes this: how do we ensure the security of the network as a whole, including all of the information assets on it, while still being flexible enough to accommodate any random piece of hardware that someone brings in that happens to speak TCP/IP?

Network Access Control.

As the name implies, a Network Access Control (or NAC) system acts as a gatekeeper, controlling client access to network resources. Whether you’re looking at an Open Source system like Packetfence or a commercial product like Cisco Clean Access or Bradford Campus Manager, the methodology is more or less the same.

When a device is connected to the network, a message is sent to a central database server with the hardware address of the device; this is to determine whether this is something that has been used on the network in the past or if it is some entirely new visitor. If it is a new device, the system will generally ask for some user credentials to ensure that the person plugging in this item is an authorized user of the network. This is especially important in wireless environments, where clients cannot be assumed to be in a particular geographic area but may be out in a parking lot or on another floor of a shared building.

Once the credentials are authenticated, generally via a RADIUS or LDAP central directory server, the NAC system will evaluate the security posture of the device. This is usually done via a piece of software called an “agent”, which is downloaded to the client machine and executed to gather data. This agent will retrieve information like the patch level of the operating system, the presence or absence of items like antivirus software, the networking settings, and so on. Information retrieved from the agent is then relayed back to the NAC, which will use it to define network connection parameters for the new client.

For example, imagine a student intern brings in his home netbook to use on the company network. When he connects, he is prompted for his username and password; this establishes that the device is owned by an intern, not a full-time employee, so he may be placed on a VLAN for end users who don’t need access to database servers and other critical infrastructure. The agent then relays that the netbook has antivirus software installed, but the definition file is out of date; this information could be used to put the netbook into a “guest” VLAN with only Internet access, sealed off from company resources. It could even be used to put the device into a “remediation” VLAN that only has access to Windows Update, Symantec, and other web sites that would be useful for getting the machine up to snuff. Once it has been brought up to date, the agent will run again, report that the machine is now compliant, and the NAC will reallocate network resources accordingly.
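The decision logic in an example like this might be sketched as follows – the roles, VLAN names, and posture checks here are hypothetical, not taken from any particular NAC product:

```python
# Hypothetical sketch of a NAC policy decision: map a user's role and
# the agent's posture report to a VLAN assignment.

def assign_vlan(role, av_installed, av_current):
    """Pick a VLAN from the user's role and the device's posture."""
    if not av_installed or not av_current:
        # Missing or out-of-date antivirus: quarantine with access
        # only to update servers (Windows Update, the AV vendor, etc.)
        return "remediation"
    if role == "employee":
        return "corporate"
    # Interns, guests, consultants: Internet access only
    return "guest"

# The intern's netbook: AV installed but definitions out of date
print(assign_vlan("intern", True, False))   # remediation

# After the agent re-runs and sees the updated definitions
print(assign_vlan("intern", True, True))    # guest
```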

Obviously, the initial deployment of a NAC requires a lot of thought and planning. But with more and more employees wanting to just use their own equipment in the office, a Network Access Control system can save tremendous amounts of time for your IT staff by relieving them of the need to personally evaluate and update each new machine that someone wants to use at work.