Definition Monday: Steganography

April 25, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Steganography.

Steganography, a term derived from the Greek for “covered writing”, refers to techniques for hiding a covert message in an unsuspected object or communication. Historical examples abound – from the ancient Greeks tattooing messages on the scalps of trusted slaves to Boy Scouts using lemon juice as invisible ink to send hidden messages. In the modern parlance, it more often refers to “digital steganography”, the use of computers to embed messages into an innocuous file.

(A couple of vocabulary terms before we continue. “Stego” is a common abbreviation for steganography, both for the sake of brevity and because most spell checkers choke on the full word. The data that is being secretly conveyed is often called the “message” or the “payload”. The file that the message is hidden in is often called the “carrier”.)

It is important to note that there is a subtle difference between encryption and steganography. When two parties are communicating using an encrypted channel, there is still metadata available to an eavesdropper. For example, if you sent me an encrypted email, there would still be definitive proof that your email account was used to send some message to my email account. The purpose of stego, on the other hand, is to hide the fact that any message is being passed at all. If you upload an image with a hidden message embedded in it to your web gallery and then I download it, there is almost no way that anyone would correlate these events.

There are hundreds of different steganography applications available for all major operating systems – for the sake of example, I will look at OpenPuff. OpenPuff is a currently maintained Windows application designed to hide messages in a variety of different carrier types:

  • Images (BMP, JPG, PCX, PNG, TGA)
  • Audio (AIFF, MP3, NEXT/SUN, WAV)
  • Video (3GP, MP4, MPG, VOB)
  • Flash/Adobe (FLV, SWF, PDF)

What this means is that someone can hide up to a quarter-gigabyte of data inside something that appears to be a bitmap or video file, upload it to a common media sharing site like Facebook or Flickr, and have an accomplice download the file and extract the data. And unless your corporate defenses are set up to capture someone uploading data to a social media site – a filter that would no doubt be overwhelmed by false positives in most environments – you would be none the wiser. Especially since OpenPuff is available as a Portable App these days, so it doesn’t even require install rights on the client machine.
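To make the idea concrete, here is a minimal sketch of the classic least-significant-bit (LSB) technique in Python: each bit of the payload is hidden in the low bit of a pixel's red channel. This is an illustration of the general principle only – it is not how OpenPuff works – and it assumes the Pillow imaging library and a lossless carrier format such as PNG (a lossy format like JPEG would destroy the hidden bits on save).

    # Minimal LSB steganography sketch (illustrative only).
    from PIL import Image

    def embed(carrier_path, out_path, payload: bytes):
        img = Image.open(carrier_path).convert("RGB")
        # Explode the payload into individual bits, least significant first.
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        if len(bits) > img.width * img.height:
            raise ValueError("payload too large for this carrier")
        pixels = img.load()
        for idx, bit in enumerate(bits):
            x, y = idx % img.width, idx // img.width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | bit, g, b)   # hide one bit in the red channel
        img.save(out_path, "PNG")                   # must be lossless

    def extract(stego_path, length: int) -> bytes:
        img = Image.open(stego_path).convert("RGB")
        pixels = img.load()
        message = bytearray()
        for byte_idx in range(length):
            value = 0
            for bit_idx in range(8):
                idx = byte_idx * 8 + bit_idx
                x, y = idx % img.width, idx // img.width
                value |= (pixels[x, y][0] & 1) << bit_idx
            message.append(value)
        return bytes(message)

    # Example usage (paths and payload are hypothetical):
    #   embed("vacation.png", "vacation_stego.png", b"meet at midnight")
    #   extract("vacation_stego.png", 16)   -> b"meet at midnight"

To the casual observer – and to most automated tools – the output file is just another picture; only someone who knows to look at the low-order bits will find anything.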

Surprisingly, there have been very few cases of steganographic carriers spotted in the wild; lots of speculation about it as a threat, but very little proof. Then again, the point of the technology is to evade notice. Maybe it’s just really good at it.


Definition Monday: IPv6

April 11, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: IPv6.

“I am a little embarrassed about that because I was the guy who decided that 32-bit was enough for the Internet experiment. My only defense is that that choice was made in 1977, and I thought it was an experiment. The problem is the experiment didn’t end, so here we are.”

–Vint Cerf on IPv4 address space exhaustion

The Internet, as it has existed for a long time, is about to undergo a massive change. We are out of addresses.

The current addressing scheme for much of the Internet, especially here in the United States, is IPv4 (that is, Internet Protocol version 4). It is the “dotted quad” notation that you’ve no doubt seen before – four eight-bit numbers, each from 0 to 255, separated by dots. Something like 127.0.0.1 for the loopback address, or 192.168.1.1 for your home broadband router.

Because these four numbers are eight bits apiece, there is a maximum of 32 bits of address space available (actually, less than that due to various reservations and technical details, but ignore that for now). Thirty-two bits means 2^32, or roughly 4.3 billion addresses. Considering the world population is nearly seven billion, and that we also need addresses for servers and other network infrastructure, we clearly just don’t have enough to go around.

(I’ve written about IPv4 exhaustion on this blog before, when it was imminent. Solutions like Microsoft’s purchase of Nortel’s IPv4 address block buy, at best, a very limited amount of time.)

The solution to this, which has been available for years, is IPv6 (Internet Protocol version 6). Rather than being based around a 32-bit number for each node, IPv6 addresses are based around a 128-bit address. To the layman, this might look like a fourfold increase, but in fact, it’s much more than that. Moving from 2^32 to 2^128 is an increase on the order of 2^96 times – that is, there are 7.92281625 × 10^28 times as many addresses available in the new scheme. The assumption is that most home user Internet connections will be issued a “slash 64”, or a 64-bit address space; this is enough to host the entire current Internet, squared.
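If you want to check the scale of that jump yourself, it is plain arithmetic:

    # Back-of-the-envelope check of the numbers above.
    ipv4_total = 2 ** 32                      # the old address space: 4,294,967,296
    ipv6_total = 2 ** 128                     # the new one: about 3.4 x 10^38
    print(ipv6_total // ipv4_total)           # 2^96 = 79,228,162,514,264,337,593,543,950,336
    print(2 ** 64 == ipv4_total ** 2)         # True: a single /64 is the IPv4 Internet, squared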

Advantages:

  • This should easily take care of any future scarcity issues. There are enough individual addresses in IPv6 to assign one to every atom on the surface of the Earth, with most of the pool left over. It’s a staggering number of addresses.
  • IPv6 has many technical tools built into it from the ground up – things like autoconfiguration and IPsec encryption – that were clumsily grafted on to the IPv4 world.
  • This will restore the end-to-end nature of the Internet, where nodes can directly contact one another without hacks like NAT and PAT. Your home broadband router will no longer require “reserved” addresses in conjunction with port forwarding and other messy workarounds – instead, everything on your home network will have a unique, Internet-accessible address.
  • New sites, especially in China and the developing world, will be deployed on IPv6. If you want to communicate in the future with web sites that don’t exist right now, you need IPv6 connectivity.

Disadvantages:

  • Obviously, everything being directly connected to the Internet could require an increased emphasis on security. For too long, vendors have hawked NAT as a “firewall” solution, which it really isn’t – this will require some rethinking.
  • A lot of equipment will need to be replaced. Even now, in the year 2011, Cisco is selling Linksys branded network equipment that is not IPv6 compliant. And more than network equipment, everything on a network needs to be evaluated and possibly updated or replaced: firewalls, servers, SIEM systems, VPN concentrators, even simple appliances like NTP time sources.
  • The IPv6 way of doing many things is different; generally better, but different. There will be a significant learning curve, even for experienced network administrators.

All in all, the move to IPv6 will be a positive thing. But there’s a reason why the protocol has been available for a decade and we’re only now implementing it as a matter of necessity. It’s a huge, sprawling, complicated deployment, on the order of the Y2K fiasco, and it will require lots of careful thinking and analysis in your organization.


Definition Monday: Network Access Control

March 28, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Network Access Control.

In an extraordinarily high security environment, it’s possible that only devices personally vetted by the IT team could be connected to a network. Using strict procedures as well as technologies like 802.1x, multifactor authentication, and MAC address port locking, it would be possible to ensure that only a specific set of network devices would be able to pass data.

Maybe.

However, very, very few networks are run with that sort of tight security. In a typical enterprise environment, previously unknown and unvetted clients need to connect all the time. Salesmen or consultants visiting the office will need network resources to work. Student interns might bring their own laptops or palmtops, since few companies will actually issue computers to the unpaid. Employees might want to use an iPad or some other personally owned device in addition to their corporate computer.

So the question becomes this: how do we ensure the security of the network as a whole, including all of the information assets on it, while still being flexible enough to accommodate any random piece of hardware that someone brings in that happens to speak TCP/IP?

Network Access Control.

As the name implies, a Network Access Control (or NAC) system acts as a gatekeeper, controlling client access to network resources. Whether you’re looking at an open source system like PacketFence or a commercial product like Cisco Clean Access or Bradford Campus Manager, the methodology is more or less the same.

When a device is connected to the network, a message is sent to a central database server with the hardware address of the device; this is to determine whether this is something that has been used on the network in the past or if it is some entirely new visitor. If it is a new device, the system will generally ask for some user credentials to ensure that the person plugging in this item is an authorized user of the network. This is especially important in wireless environments, where clients cannot be assumed to be in a particular geographic area but may be out in a parking lot or on another floor of a shared building.

Once the credentials are authenticated, generally via a RADIUS or LDAP central directory server, the NAC system will evaluate the security posture of the device. This is usually done via a piece of software called an “agent”, which is downloaded to the client machine and executed to gather data. This agent will retrieve information like the patch level of the operating system, the presence or absence of items like antivirus software, the networking settings, and so on. Information retrieved from the agent is then relayed back to the NAC, which will use it to define network connection parameters for the new client.

For example, imagine a student intern brings in his home netbook to use on the company network. When he connects, he is prompted for his username and password; this establishes that the item is owned by an intern, not a full time employee, so he may be placed on a VLAN for end users who don’t need access to database servers and other critical infrastructure. The agent then relays that the netbook has antivirus software installed, but the definition file is out of date; this information could be used to put the netbook into a “guest” VLAN with only Internet access, sealed off from company resources. It could even be used to put the device into a “remediation” VLAN that only has access to Windows Update, Symantec, and other web sites that would be useful for getting the machine up to snuff. Once it has been brought up to date, the agent will run again, realize that it is fixed, and reallocate network resources accordingly.
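As a rough illustration, the decision the NAC makes after the agent reports back might look something like the sketch below. The role names, VLAN numbers, and posture fields are invented for the example – they are not taken from PacketFence or any other product – but the flow (authenticate, assess posture, assign network access) is the same idea described above.

    # Illustrative NAC posture-to-VLAN decision logic (hypothetical values).
    GUEST_VLAN, REMEDIATION_VLAN, INTERN_VLAN, STAFF_VLAN = 90, 99, 20, 10

    def assign_vlan(role: str, posture: dict) -> int:
        if not posture.get("antivirus_installed", False):
            return REMEDIATION_VLAN        # no AV at all: quarantine with update access only
        if not posture.get("definitions_current", False):
            return GUEST_VLAN              # AV present but stale: Internet-only access
        return INTERN_VLAN if role == "intern" else STAFF_VLAN

    # The intern's netbook from the example: AV installed, definitions out of date.
    print(assign_vlan("intern", {"antivirus_installed": True, "definitions_current": False}))  # 90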

Obviously, the initial deployment of a NAC requires a lot of thought and planning. But with more and more employees wanting to just use their own equipment in the office, a Network Access Control system can save tremendous amounts of time for your IT staff by relieving them of the need to personally evaluate and update each new machine that someone wants to use at work.


Definition Monday: Defense In Depth

March 21, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Defense In Depth.

There was a time, when the Internet was young and optimistic and not nearly so hostile as it is now, when the main defense of an Internet connected site was a single simple firewall at the network border.

Of course, this was also a time when the majority of users had a dumb terminal, if anything, that remained on their desk and the data that they worked with was on a DEC VAX or some other minicomputer maintained by the high priests of the IT department.

Those days are long, long gone.

Defense In Depth refers to an information security strategy where multiple redundant layers of defense are used to protect information assets. This strategy can mitigate technology failures, vendor-specific exploits, and multiple attack vectors that simply could not be handled by a single layer of defense.

For example, consider a Windows workstation in the controller’s office of your business. The network as a whole is probably protected by at least one firewall. There is likely a router in between this workstation and the Internet as well, which has its own abilities to accept or deny traffic. An Intrusion Detection System may be monitoring the traffic between the Internet and the computers in the controller’s department, watching for signs of attack or compromise. Finally, the machine itself is probably running a host-based firewall (either the Microsoft-supplied one that comes with Windows or a third-party installation), virus scanner, adware scanner, and so on. The thought is that if a threat manages to get past the border firewall, it still needs to get past the other measures in place before data can be compromised.

Another example – most companies run different virus detection packages on their mail server and their workstations, despite the fact that the licensing is often more expensive than just running one in both places. Why do this? Because if a virus can elude one of those packages but not the other, it will still be stopped. But an antivirus monoculture has no such built-in safeguards.

Two things to keep in mind when deploying a Defense In Depth strategy:

  • Consider mixing vendors, or at least mixing up product lines and operating systems among a single vendor’s offerings. Imagine that your office is an all-Cisco shop, from the firewall to the core routing to the wireless network. Now imagine that a new vulnerability is discovered, specific to Cisco embedded operating systems, that allows for traffic to be exfiltrated without tripping any sensors. You’re going to be a lot more vulnerable than someone who sprinkled in some snort boxes, Vyatta routers, or some other non-Cisco equipment when designing the network.
  • Employing proper Defense In Depth can be expensive, especially if you go with a multi-vendor approach. It means buying multiple products with overlapping functionality. It means juggling more physical hardware. It means justifying purchasing new equipment rather than repurposing old stuff to cover a functional hole. It’s an expense that can be difficult to justify because the return on investment is not clear to the layman – but it is vitally important to make the case.

In the modern information environment, where network borders are fuzzy, where corporate data is showing up on personally owned laptops and smartphones, where people might be using their work laptop to help with their kids’ homework, the old “hard outside and a chewy center” model of a single network firewall at the office just doesn’t cut it any more. Defense In Depth is an important concept to remember when implementing your policies and technologies.

 


Definition Monday: DDoS Attacks

March 7, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: DDoS Attacks.

A Distributed Denial of Service Attack, often referred to as a DDoS Attack or simply a DDoS, is a more advanced version of a classic Denial of Service (DoS) attack. To explain what it is and how it works, it might be helpful to look at an analogous situation using the telephone network.

Imagine that you have a single phone line for your business, which is used to answer customer questions and take orders. Now imagine that someone has decided to render that phone line useless by calling it over and over again, and then hanging up when the call is answered. The phone is always ringing, but there is nothing useful on the other end. And worse, legitimate customers cannot get through to talk to you. The customer service provided by the phone line is being denied. Hence, denial of service attack.

Something similar is done in the IP world. A publicly accessible web server that provides information and ordering capabilities to your customers can be attacked by a rogue computer. Using a variety of techniques – I won’t get into the nitty-gritty technical details here, both because there are many different attacks and because they require some in-depth networking knowledge to understand – that server can be flooded with traffic that appears to be legitimate but is not. That means that genuine customer traffic is squeezed out; an actual customer cannot place an order, because the web server is inaccessible to him.

A classic DoS like this is pretty easy to mitigate. With the phone example, you would contact your telephone service provider and tell them to block the number that keeps calling you. Similarly, with the IP networking example, you would contact your web hosting provider or, if you’re hosting your own server, your Internet service provider and tell them to block all traffic from the IP address that is flooding your web server. The nefarious traffic, in either case, is blocked before it gets to your phone or your web server, and customers can connect again.

A Distributed Denial of Service Attack, on the other hand, is not so easy to cut off. (Just ask the people at WordPress, who got smacked with one a few days ago.)  The “Distributed” part of the name is the important distinction; rather than coming from a single source, this traffic is coming from all directions.

To go back to the phone example, imagine that your business phone is ringing off the hook – but that each call is coming from a completely different area code and phone number. The phone company will have great difficulty blocking the calls, especially in light of the fact that there is no way to do so without risking a block of legitimate calls as well.

In the IP networking world, a DDoS means that the traffic is coming from multiple sources simultaneously. This makes it more difficult to block, as in the phone example, but more importantly it means that the hostile traffic is aggregated. Being attacked by a hundred compromised cable modem clients at 10 Mb/s each means that there is a 1 Gb/s flood of traffic hitting your web server. An average botnet – that is, a centrally controllable group of compromised machines, often used to launch an attack like this – numbers in the tens of thousands to hundreds of thousands of computers. That’s a lot of traffic.
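For those who like to check the math, the aggregation is simple multiplication; the per-host rate and botnet size below are just the illustrative figures from the paragraph above.

    per_host_mbps = 10                           # one compromised cable modem client
    print(100 * per_host_mbps / 1000)            # 100 hosts -> 1.0 Gb/s
    print(100_000 * per_host_mbps / 1_000_000)   # a 100,000-node botnet -> 1.0 Tb/s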

So how do you deal with a DDoS against a business resource? The first thing to do is make sure that you have some idea of the capabilities of your ISP or hosting provider – you should have, in writing, their policy on DDoS mitigation. There are steps that can be taken upstream to block this traffic, but the capability to do so varies by provider and is often closely correlated with price. You also want to make sure that your infrastructure is being properly monitored using an IDS or a SIEM or something similar, so that you are aware when a DDoS begins. And you need to have a backup plan for what happens if your web site or other Internet resource is unavailable for a short period. Maybe taking orders by phone isn’t completely antiquated after all.

 


Definition Monday: Exploits

February 28, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Exploits.

Many times, when reading security alerts on a mailing list like bugtraq, you will see the word “exploit”. What exactly is an exploit, and why is it important?

It’s important to remember that many of the security vulnerabilities that are discovered by third party researchers, particularly in the open source world, are theoretical. That is, a particular piece of code may appear to have a security hole because of the way that it is written, but that does not necessarily mean that the security hole can be taken advantage of by an adversary. If that piece of code is never exposed to the adversary, or if it has some other routine protecting it, or if the means of taking advantage of it cannot actually happen, then the hole remains theoretical.

For example, imagine a piece of software that controls an industrial metal router in a factory. It is entirely possible that the software requires an old version of Microsoft Windows, and that updating Windows will cause the metal router to stop working. That version of Windows may have a security vulnerability when exposed to a hostile network. But if the computer running the metal router is never attached to a network, then there is no way to take advantage of the vulnerability. It ceases to be a problem.

(I know that this sounds like some metaphysical tree-falling-in-the-forest stuff. Is a vulnerability that can’t be attacked still a vulnerability? I’ll leave that to the philosophers – I have enough on my plate worrying about the systems that are exposed to pontificate on those that aren’t.)

On the other hand, if a security vulnerability can be taken advantage of, and if it can be done in a reliable, repeatable fashion, then the code that attacks it is referred to as an “exploit”.

For example, take a look at this posting on the bugtraq list from two days ago. The poster has identified a Cross-Site Request Forgery (CSRF) vulnerability in a particular model of Linksys home router. In addition to discovering the flaw, he has also included exploit code in the form of an HTML snippet that takes advantage of the vulnerability – this can be used to add an administrative user to the Linksys router configuration, under certain conditions, without the user being aware of the addition. And since all of the traffic on a home or small business network passes through this router, it’s probably not a great place for your adversary to have administrative privileges.

So, to put it succinctly, an exploit is a piece of software or an explanation of how to take advantage of, or exploit, a security hole.

Other terms that you might run across:

Proof of Concept (PoC) Exploit – This is a crude exploit intended to demonstrate that a security vulnerability exists, but is not as reliable or as professionally produced as a normal exploit. You will often see these in environments like bugtraq, where the author doesn’t want to provide something that can be “weaponized” and used to attack systems but still wants to prove the existence of a bug.

Zero-Day Exploit – This is an exploit for a security vulnerability that the vendor has not yet released a patch for. An exploit for a newly discovered hole in Microsoft Windows 7, one that is still present even when all vendor patches are applied, would be a zero-day. Here is an example of a zero-day in the Cisco Secure Desktop product.

Metasploit – The Metasploit Framework is a penetration testing tool that provides a plugin architecture for running multiple exploits. Generally speaking, each exploit is its own little program; with Metasploit, they are all launchable from a common command shell. This is a boon for both penetration testers and computer criminals, both of whom make a business of taking advantage of security vulnerabilities.


Definition Monday: SIEM

February 21, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: SIEM Systems.

A SIEM, or Security Information and Event Management system, is a relatively new concept in information security. The idea was pioneered about a decade ago, and has been evolving rapidly ever since.

A SIEM performs two major functions:

Log Centralization

The first, and original, purpose of a SIEM is to serve as a single point of collection for activity logs from disparate systems on an enterprise network. Nearly everything is capable of producing logs in some standardized format: Windows servers, VPN concentrators, network firewalls, managed Ethernet switches, Unix hosts, IDS systems, even individual workstations. In a SIEM deployment, each of these network devices sends its generated logs to a single collection point so that they can be analyzed in one place.

The benefit to this is obvious, if only for troubleshooting purposes. Imagine a mid-sized network that has half a dozen DNS servers, four Active Directory domain controllers, two DHCP servers, redundant border routers, and two hundred wireless access points. Finding a particular wireless host and tracking its Internet activity would take hours or days if each of these devices had to be queried and analyzed separately. With the centralized logging of a SIEM, on the other hand, all the information is in one place and easily searchable, usually with an intuitive web interface. You can track a laptop from the time it is issued an address by the DHCP server to the moment it vanishes from the last access point.

Correlation Analysis

Additionally, a modern SIEM deployment will include a correlation software engine to mine through these disparate logs and alert the administrative staff to potential problems.

Imagine this example: your enterprise network has an LDAP-based single sign-on environment. This means that the same account credentials can be used to log in to any system on the network. Now imagine that someone is trying to gain access to an account with the username “admin”, assuming (perhaps rightly) that this account has elevated privileges and so it is a particularly tempting target. Your computers are set up with account lockout rules – logging in with the wrong password five times will lock the account.

The attacker knows this, so he tries four passwords for the “admin” account on a random assortment of hosts on your network. In an environment of any size, four incorrect logins are not going to raise red flags. But if the logs from these different hosts are all flowing into a SIEM system, the administrators should be quickly alerted by the correlation engine that someone is definitely trying to compromise the “admin” account.
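A toy version of that correlation rule might look like the sketch below: count failed logins per account across all hosts within a sliding time window, and alert when the total crosses a threshold even though no single host saw enough failures to lock the account. The event format, window, and threshold are invented for the example; real correlation engines are far more general.

    # Toy cross-host failed-login correlation.
    from collections import defaultdict

    WINDOW_SECONDS = 600
    THRESHOLD = 10

    def correlate(events):
        """events: iterable of (timestamp, host, account) failed-login records."""
        recent = defaultdict(list)            # account -> timestamps within the window
        alerts = []
        for ts, host, account in sorted(events):
            recent[account] = [t for t in recent[account] if ts - t <= WINDOW_SECONDS]
            recent[account].append(ts)
            if len(recent[account]) >= THRESHOLD:
                alerts.append((account, len(recent[account]), ts))
        return alerts

    # Twelve failures against "admin", spread across seven different hosts.
    print(correlate([(i * 30, f"host{i % 7}", "admin") for i in range(12)]))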

Advantages and Disadvantages

The advantage of a SIEM should be obvious – it allows administrative staff to view the current and past condition of a network with a stunning level of transparency and immediacy. Most popular SIEM products will interface with almost anything that speaks TCP/IP – and, generally speaking, writing new plugins to understand a foreign format is a straightforward task.

The main disadvantage of a SIEM is that it is a very complex product, and simply deploying it can be a major project unto itself. Each host needs to be configured to speak to the central console. The correlation engine needs to be carefully tuned to minimize false positives and, more importantly, to minimize false negatives. In a complex network, multiple listening hosts (often known as “probes”) may need to be deployed in order to have a clear view of all network traffic. And the hardware to run a project like this needs to be pretty powerful; this isn’t something that will run as a VMware guest alongside a dozen other virtual machines. You need power, memory, and disk to do this right.

But if those disadvantages aren’t too daunting, a SIEM is a fantastic tool for anyone who needs to manage a network with more than a few dozen hosts.


Definition Monday: Multifactor Authentication

February 14, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Multifactor Authentication.

Authentication is a key security concept in today’s networked environments, but it’s one that is commonly both misunderstood and underappreciated.

For a long time now, the most common type of authentication on computer systems has been the password or passphrase (These terms are essentially interchangeable, though “passphrase” generally refers to a longer string of characters). Examples abound – logging into your email account, logging into your workstations, even logging into this blog to leave a comment; in each of these cases, you need to enter a username and a passphrase to verify your identity. The thought is that the username of your account might be common knowledge, but the passphrase should be a secret that is known only to the appropriate user, and so knowledge of the passphrase is de facto proof that the user requesting access is indeed the user who was issued the account.

(Tangential comment: As I often say when giving basic security lectures: passwords are not just a cruel joke perpetrated by the IT staff on unsuspecting users to make their lives more difficult. They are a means of authentication, a means of proving that you are indeed the legitimate owner of an account. The authentication leads to authorization, the assignment of proper access rights and controls to your login session, as well as accounting, the recording of your successful authentication and any particularly interesting things you do while logged in. Collectively, these are known as the AAA services and are provided by protocols like RADIUS.)

These days, though, passphrases are no longer adequately secure for some environments. They can be compromised through brute force attacks, if poorly chosen. They can be harvested from plaintext database records if a web site is poorly engineered. They can be entered by users into a phishing web site, or on a computer running a keylogging daemon. Knowledge of a passphrase is no longer an ironclad proof of identity; we need something more.

In order to mitigate this, then, some services are beginning to use multifactor authentication, requiring more than just a single passphrase to allow authentication. These additional factors can be grouped into one of three categories:

  • Something You Know

The simplest factor, the passphrase, is an example of “Something You Know”. That is, the secret that the user is able to enter into the computer system is a partial proof of identity.

  • Something You Have

Another factor, “Something You Have”, refers to an object that is in the possession of the user attempting to log in. Something like an RSA SecurID token, for example, would count as “Something You Have”. The numbers that the SecurID generates cannot be replayed and cannot be predicted. Being able to enter the numbers into the login window is strong proof that the user possesses the device.

  • Something You Are

The final factor, “Something You Are”, is also known as biometrics. This encompasses fingerprint readers, retinal scanners, voiceprints, and other mechanisms that use part of the user’s anatomy as an authentication token.

Combining two or three different authentication techniques from these three broad categories is what constitutes “multifactor authentication”. Using only one of them is “single-factor authentication,” requiring two is “two-factor authentication”, and asking for something from each category is “three-factor authentication”.

Let’s take a look at this in a normal office environment. You probably have an ID card that is used with a magstripe reader or an RFID reader to open the door at work: this is single-factor, because it only requires Something You Have. Similarly, logging into your computer with a username and passphrase is also single-factor, because it only requires Something You Know. But if you log in to your Gmail account using Google’s new phone-based authentication system, you are using two-factor: the original passphrase is Something You Know, and the mobile phone is Something You Have. Similarly, if you have something like a fingerprint reader on your laptop and must enter a passphrase and swipe your finger to log in, that is also two-factor (Something You Know and Something You Are).
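As an aside, the codes produced by phone-based authenticators and hardware tokens are typically time-based one-time passwords (TOTP, standardized in RFC 6238). The sketch below is a minimal, generic implementation of that algorithm – it is not the code any particular vendor ships, and the shared secret shown is just a sample value.

    # Minimal TOTP (RFC 6238) generator.
    import base64, hashlib, hmac, struct, time

    def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(shared_secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)   # current 30-second step
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                                   # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # a six-digit code that changes every 30 seconds

Because the code depends on a secret stored only on the token (or phone) and on the current time, knowing a user’s passphrase alone is no longer enough to log in.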

A word of caution: clearly, multifactor authentication architectures can make authentication more reliable. While it is easy for a passphrase to be compromised, intentionally or not, it is much more difficult to steal someone’s passphrase AND their employee ID card or mobile phone or other physical token. But when planning to deploy a system like this, it is very important to ensure that it can recover from lost authentication tokens. If you’re using a phone system like the Google example, what happens when a user loses his or her phone? Can your fingerprint reader handle a situation where a user has a cut on his or her fingertip? It is important to think through the failure scenarios as thoroughly as the successful ones.


Definition Monday: Intrusion Detection Systems

February 7, 2011

Welcome to Definition Monday, where we define and explain a common technology or security concept for the benefit of our less experienced readers. This week: Intrusion Detection Systems.

An Intrusion Detection System, often referred to with the abbreviated “IDS”, is exactly what it sounds like. It is a piece of hardware or software that listens to data changes or traffic in a particular environment, watching for suspicious or exploitative trends. Think of it like the high-tech version of a motion detector light on a house; it passively monitors the environment until something triggers it, and then performs a specified task. Just like the motion detector will turn on the light, the IDS will log the problem, generate an SMS text message to an administrator, or email an affected user.

Broadly speaking, there are two common classes of IDS – Network-based IDS systems (NIDS) and Host-based IDS systems (HIDS).

Network-based IDS (NIDS)

A NIDS system passively monitors the traffic in an environment, watching for certain patterns that match a defined set of signatures or policies. When something matches a signature, an alert is generated – the action that occurs then is configurable by the administrator.
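To illustrate nothing more than the idea of signature matching, here is a toy detector that scans packet payloads for known byte patterns and prints an alert on a hit. The signatures and addresses are invented for the example; real NIDS rules also match on protocols, ports, flow state, and much more.

    # Toy signature-based payload inspection (illustrative only).
    SIGNATURES = {
        b"/etc/passwd": "possible directory traversal attempt",
        b"' OR '1'='1": "possible SQL injection attempt",
    }

    def inspect(payload: bytes, src: str, dst: str):
        for pattern, description in SIGNATURES.items():
            if pattern in payload:
                print(f"ALERT {src} -> {dst}: {description}")

    inspect(b"GET /../../etc/passwd HTTP/1.0", "203.0.113.5", "192.0.2.10")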

The most common NIDS in use these days is probably Snort, an open-source solution written by Marty Roesch and maintained by his company, Sourcefire. Snort is capable of acting as either a passive eavesdropper or as an active in-line part of the network topology. In this diagram, the lefthand example is a passive deployment, the right is in-line.

As you can see in the example on the left, the computer running snort is connected to the firewall – the firewall would be configured with a “mirror” or “spanning” port that would essentially copy all of the incoming and outgoing traffic to a particular interface for the snort software to monitor. This way, any suspicious traffic passing the border of the network would be subject to examination.

In the example on the right, the traffic is passing directly through the snort machine, using two Ethernet interfaces. This is an excellent solution for environments where a mirror port is unavailable, such as a branch office using low-end networking equipment that can’t provide the additional interface.

(It is important to note that a NIDS should be carefully placed within the network topology for maximum effectiveness. If two of the client machines in these diagrams are passing suspicious traffic between them, the snort machine will not notice; it only sees traffic destined for the Internet. It is always possible, of course, to run multiple NIDS systems and tie all of the alerts into one console for processing so as to eliminate these blind spots.)

Because of its large install base, rules for detecting new threats are constantly being produced and published for free usage on sites like Emerging Threats. If you want to be alerted when a host on your network is connecting to a known botnet controller, for example, the up-to-the-minute rules for this can be downloaded from ET. The same goes for signatures of new worms and viruses, command-and-control traffic, and more.

So a NIDS is an excellent tool for detecting when a host on your network has been compromised or is otherwise producing suspicious traffic. But what about exploits that don’t cause traffic generation? If someone compromises your e-commerce server, for example, and installs a rootkit and starts modifying the code used to generate web pages, your NIDS will be none the wiser. For more careful monitoring of individual high-priority hosts, you would use a HIDS.

Host-based IDS (HIDS)

While a NIDS watches the traffic on a network segment, a HIDS watches the activities of a particular host. A common open-source HIDS is OSSEC, named as a contraction of Open Source Security.

OSSEC will monitor the Windows Registry, the filesystem of the computer, generated logs, and more, looking for suspicious behavior. As with a NIDS, an alert will be generated by any suspicious activity on the host and the results of the alert can be set by the administrator. If a process is attempting to modify the documents on your main web server, for example, OSSEC can kill that process, lock out the account that launched it, and send an email to the system administrator’s cell phone. It’s a remarkably flexible and impressive system.
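A stripped-down illustration of the file-integrity portion of that monitoring: hash the watched files, store a baseline, and compare on the next run. This is a sketch of the general technique, not OSSEC’s implementation, and the watch list below is hypothetical.

    # Toy file-integrity monitoring, the core idea behind HIDS filesystem checks.
    import hashlib, os

    def snapshot(paths):
        """Hash every monitored file that exists and return {path: sha256 hex digest}."""
        digests = {}
        for path in paths:
            if os.path.isfile(path):
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def compare(baseline, current):
        """Report files that appeared, changed, or disappeared since the baseline."""
        for path, digest in current.items():
            if path not in baseline:
                print("NEW FILE:", path)
            elif baseline[path] != digest:
                print("MODIFIED:", path)
        for path in baseline:
            if path not in current:
                print("DELETED:", path)

    monitored = ["/etc/hosts", "/etc/passwd"]   # hypothetical watch list
    baseline = snapshot(monitored)              # store this somewhere tamper-resistant
    # ... some time later ...
    compare(baseline, snapshot(monitored))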

Much like a NIDS, the placement of HIDS software needs to be carefully planned. You don’t want to receive an alert every time a file is accessed on a file server, for example; your administrator will be overwhelmed, and will simply stop reading alerts altogether. The system has to be carefully configured and the monitored behaviors pruned so as to eliminate false alarms and ensure that true security issues are noticed and alerted on properly.


Ten Laws of Security Administration

January 31, 2011

And the companion piece, the Ten Immutable Laws of Security Administration:

Law #1: Nobody believes anything bad can happen to them, until it does
Law #2: Security only works if the secure way also happens to be the easy way
Law #3: If you don’t keep up with security fixes, your network won’t be yours for long
Law #4: It doesn’t do much good to install security fixes on a computer that was never secured to begin with
Law #5: Eternal vigilance is the price of security
Law #6: There really is someone out there trying to guess your passwords
Law #7: The most secure network is a well-administered one
Law #8: The difficulty of defending a network is directly proportional to its complexity
Law #9: Security isn’t about risk avoidance; it’s about risk management
Law #10: Technology is not a panacea