Understanding Network Access
Contributed by: Mirage Networks, Inc.
In Part 1 of this White Paper Series, we examined what has driven the development of security technology, and how that has created the situation in which we find ourselves today. The goal of this four-part series is to answer the two key questions IT security professionals find themselves asking with increasing frequency and urgency:
How do I control the access to my corporate networking resources? - and -
How do I ensure that the resources that are allowed on my network aren't creating a security risk?
We also introduced the three core elements of a successful NAC implementation: pre-admission, post-admission, and quarantine and remediation. In Part 2, we focus on the first of these elements.
As you may recall from Part 1, the three main NAC standards (Cisco NAC, Microsoft NAP, TCG TNC) concern themselves with pre-admission (a.k.a. on-entry) NAC. Pre-admission checks are critical, but assuming that clean devices cannot become infected or hacked once ON the network could well be considered myopic. Post-admission infection is addressed later in this document.
For a NAC solution to be effective, it must deliver two essential pre-admission capabilities. First, it must be able to identify a new device connecting to the network. Second, it must be able to test the endpoint for adherence to security policy and restrict access for those devices that do not meet defined entry criteria. Together, these capabilities should provide data that can be used to compare a device's current security state against established security policy criteria, to determine how much or how little access that device is allowed.
1. Endpoint detection: software approaches
NAC solutions take a significant variety of approaches to identifying devices and controlling their access to the network.
A common approach to identifying and controlling network access for a user on a device is through the 802.1x standard for network authentication.
This standard defines a communications protocol between three elements: the supplicant, the authenticator, and an authentication, authorization, and accounting (AAA) server. The supplicant is software deployed on a user's workstation; the authenticator is implemented on a network device such as a router, switch, or wireless access point. The authenticator requests information from the supplicant and passes that information to the AAA server.
802.1x uses the Extensible Authentication Protocol (EAP) to communicate authentication information between the supplicant and the AAA server through the authenticator; NAC solutions typically also use EAP to carry additional host integrity data.
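The division of labor among the three elements can be sketched in a simplified model. This is an illustrative sketch only: the class names and `evaluate` method are hypothetical, and a real 802.1x deployment exchanges EAPOL frames and RADIUS messages rather than Python method calls.

```python
# Sketch of the 802.1x roles: the authenticator never evaluates
# credentials itself; it relays EAP information between supplicant
# and AAA server, and enforces the server's verdict on its port.
# All names here are illustrative, not a real EAP implementation.

class AAAServer:
    def __init__(self, known_users):
        self.known_users = known_users  # identity -> credential

    def evaluate(self, identity, credential):
        # Returns "accept" or "reject"; a real server speaks RADIUS/EAP.
        if self.known_users.get(identity) == credential:
            return "accept"
        return "reject"

class Authenticator:
    """Switch/AP port: relays EAP data and opens only on 'accept'."""
    def __init__(self, aaa):
        self.aaa = aaa
        self.port_open = False

    def handle_supplicant(self, identity, credential):
        verdict = self.aaa.evaluate(identity, credential)
        self.port_open = (verdict == "accept")
        return verdict

aaa = AAAServer({"alice": "s3cret"})
port = Authenticator(aaa)
print(port.handle_supplicant("alice", "s3cret"))   # accept
print(port.handle_supplicant("mallory", "guess"))  # reject
```

The key design point the sketch illustrates is that the authenticator is a pure relay and enforcement point: all policy decisions live on the AAA server.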
802.1x: Pros and Cons
On the plus side, 802.1x detects entry to the network before the endpoint has an IP address, and thus before it can cause havoc on the network.
The drawbacks are tied to manageability. To be effective, the 802.1x approach requires both management of every endpoint device and integration with a third party: all network devices must be configured for 802.1x, and they must also be integrated with the authentication server. Additionally, this approach often requires an upgrade of legacy networking equipment and OSs. When you consider the reality of what these management requirements will demand in terms of time, in addition to the cost and time required to upgrade the infrastructure, it is clear why adoption of 802.1x has been slow.
Dynamic Host Configuration Protocol (DHCP)
Another pre-admission approach for identifying endpoints and restricting their network access is through integration with a DHCP server, very commonly a DHCP Quarantine Enforcement Server (QES). In this approach, the device accesses the network and sends out a DHCP request for IP address assignment. DHCP solutions usually require a host agent - software on the device itself - that serves as what is known as the Policy Decision Point (PDP). In NAC solutions using this approach, there is usually also an in-line DHCP appliance that deploys between the organization's DHCP server and the network.
When a device requests an IP address, the DHCP NAC appliance operates on the "better safe than sorry" assumption and assigns an address that sends the requestor to a restricted, quarantine VLAN. If the device has the DHCP agent, the DHCP server queries the agent to verify that the device is in compliance with policy. If it is, the device is assigned a replacement IP address, giving it access to the appropriate network VLAN. However, should the device either have no agent, or an out-of-compliance agent, it can be blocked from entry entirely, be kept in the quarantine VLAN, or be required to download an agent so that the unmanaged device can be tested.
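The quarantine-first lease logic described above can be sketched as follows. The function name, address ranges, and VLAN labels are all hypothetical; a real QES operates inside the DHCP protocol exchange, not as a standalone function.

```python
# Sketch of the "better safe than sorry" DHCP NAC flow: every new
# requestor conceptually lands in the quarantine VLAN; only devices
# whose agent reports policy compliance get a production address.
# Address pools and names are illustrative.

QUARANTINE_POOL = iter(f"10.99.0.{i}" for i in range(10, 250))
PRODUCTION_POOL = iter(f"10.1.0.{i}" for i in range(10, 250))

def assign_address(has_agent, agent_compliant):
    """Return (ip, vlan) for a DHCP request, quarantine-first."""
    ip = next(QUARANTINE_POOL)           # initial lease: quarantine VLAN
    if has_agent and agent_compliant:
        ip = next(PRODUCTION_POOL)       # replacement lease: production VLAN
        return ip, "production"
    # No agent, or non-compliant agent: stay quarantined for testing
    return ip, "quarantine"

print(assign_address(True, True))    # production address
print(assign_address(False, False))  # quarantine address
```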
DHCP: pros and cons
DHCP is often a good choice for NAC: the majority of valid company users rely on DHCP for IP address assignment, so it serves as a good hook for recognizing new devices. Another advantage of this approach is that it works whether or not local network devices support the 802.1x standard, and it is generally easier to deploy than an 802.1x implementation.
There are several drawbacks to this approach. As with 802.1x, requiring that devices be equipped with agents, and that those agents be kept up to date, is problematic in and of itself. Further, most of the agents are restricted to Windows clients.
But in the shadows loom two greater dangers. First is the issue of static IPs, which easily bypass DHCP security. This could be a user assigning his or her device a static IP address - a tactic employed both by cyber criminals and by valid users wanting to avoid the entry check - as well as devices with static IPs, like servers and printers. Second, by using broad networks for quarantining high-risk devices, this approach creates a "VLAN of Death" - a virtual leper colony in which devices ultimately end up infecting and re-infecting each other.
Authentication Server Integration
A third pre-admission approach to identifying and controlling network entry is through integration with the user authentication process; the approach Microsoft has taken with its NAP initiative is an example.
There are two types of solutions taking this approach. In the first, a replacement or proxy for authentication is implemented in-line with the actual AAA server; in the second, the in-place authentication server is upgraded with software that can recognize both a host's authentication and integrity credentials (Microsoft NAP is an example of the latter). In either instance, endpoint integrity determines what network and system resources the endpoint may access.
Typically, these solutions have to integrate with a Policy Enforcement Point (PEP) for mitigation, using 802.1x or a secure DHCP QES or VPN QES for network restriction. These systems rely more on the server infrastructure than the networking infrastructure for triggering a host integrity check.
Authentication: Pros and Cons
This approach is useful for restricting access to network servers and services. However, it does have one critical flaw that leads to a dangerous irony: unmanaged devices don't initiate authentication, which means no checks will be run on those devices. By definition, unmanaged devices are higher-risk devices, so the devices that most need a check are precisely the devices that won't get one.
2. Endpoint detection: appliance approaches
Network security appliances are an alternative means of detecting an endpoint's entry to the network and initiating policy checks. Their deployment model is what differentiates them: in-line vs. out-of-band.
In-line appliances either replace the edge switch or sit directly behind the access layer switch. This allows them to see traffic, detect devices, and initiate policy checks. Alternately, they can be deployed in-line with either the DHCP server or the authentication server, functioning in a manner similar to the DHCP and authentication integrations discussed above. This similarity is the reason the latter deployment model is less desirable, as it makes the security easy to bypass.
In-line appliances: Pros and Cons
The advantage of an in-line NAC appliance is that it sees all endpoints coming onto the network and can control their traffic effectively, because all traffic flows through the appliance. The drawback of this approach is that it essentially replaces the access or distribution layer switches, which can be a costly and time-consuming process in terms of network upgrades and potential re-architecture.
A final approach is the use of a dedicated security appliance that is deployed out-of-band. To deliver optimal results, the appliance must be capable of maintaining a real-time IP address map and be able to detect when a device needs to be tested and controlled. Here, too, there are varying approaches. Some use an alternative PEP, such as the DHCP or authentication servers, or 802.1x as described above. The best take the “less is more” approach, eschewing third-party PEPs because of the increased cost and latency they introduce. The Mirage Networks NAC solution is an example of this approach: it maintains an active IP map by monitoring traffic at both Layer 2 and Layer 3 of the network, and uses ARP management to isolate endpoints from the rest of the network for testing.
Out-of-band appliances: Pros and Cons
The positive aspect of this approach is that a device can be isolated from the network without any ability to bypass NAC checks, as is possible with the DHCP and authentication hooks, and without the costly network replacement or reconfiguration we see with 802.1x and in-line solutions.
Device Testing for Risk Profiling
Once a device has been detected, it can be tested. Testing can take many different forms, from simple authentication and services checks to deep scans of the registry and applications to determine whether the device's operating system (OS) and security-relevant applications are fully up to date and adhere to network security policy.
It should be noted that entry scans are better at providing risk information than infection information. In other words, pre-admission NAC testing can determine whether a device is out of compliance with security policy or is vulnerable to threats, but it won't provide a great deal of information about an actual infection. This is why post-admission monitoring - something we'll discuss in detail in Part 3 of this series - is so critical.
Most NAC solutions have some element of testing endpoints for policy adherence as a requirement for admission to the network. Policy testing identifies endpoints that fail to meet criteria established by the IT or security administrator. These tests can include checks for services or applications (e.g., peer-to-peer (P2P) software or Web servers), OS patch levels, AV signature levels, and processes to identify unmanaged and/or unauthenticated devices or end users.
An elemental endpoint check looks at running processes and available services that may be either restricted, like P2P applications, or required, like AV software and other security applications. Some of these checks can be run externally, from the network, examining the network ports and services that are available on the endpoint. This technique can be used to identify Web, FTP, and/or P2P servers, and many other applications that have a networking component. The presence of some services can also indicate the absence of certain security software, such as a personal firewall. If an endpoint accepts connections on services - like TCP port 21 for FTP connections - that a correctly configured corporate personal firewall would restrict, then one can assume that the personal firewall is either not running or improperly configured.
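The agentless check described above can be sketched with a simple TCP probe. This is a minimal illustration, not a production scanner: the port-to-service mapping and the policy interpretation are assumptions for the example.

```python
# Hedged sketch of an agentless service check run from the network:
# probe a handful of TCP ports on an endpoint. An open port that
# policy says the host firewall should block (e.g. 21/FTP here)
# suggests the firewall is absent or misconfigured.
import socket

RESTRICTED_PORTS = {21: "ftp", 80: "http", 6881: "p2p"}  # example policy

def check_restricted_ports(host, timeout=1.0):
    """Return the list of restricted ports found open on the endpoint."""
    open_ports = []
    for port in RESTRICTED_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# A non-empty result is a policy-violation signal, not proof of infection.
violations = check_restricted_ports("127.0.0.1")
print("policy violation" if violations else "no restricted services found")
```

Note that, as the surrounding text says, this tells you about risk posture, not about actual infection.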
Some application-level checks - for example, checking for services that have no networking component - require software deployed on the endpoints for accurate endpoint examination. In this case, the endpoint security software must interact with a network-based policy server, which serves as the PDP, to ensure that the results reported by the endpoint agent match the policy criteria established by the policy server.
OS patch-level vs. vulnerability checks
There are two approaches to OS patch-level checks and the subsequent vulnerability-state assessment of an endpoint. In the first, endpoint devices entering the network are tested against a minimum patch level. Those that don't meet this minimum patch level are either sent to quarantine or are denied access to the network. The advantages of this approach: it is a very objective measure of the requirements for entry to the network, and it can eliminate devices that may be vulnerable to a certain type of attack. There are drawbacks, however. First, in network environments that involve minimal IT oversight and allow the use of a wide variety of devices, it can often be difficult, and sometimes virtually impossible, to require the very latest patch level for every endpoint that enters the network. If highly restrictive policies are enforced, they can often result in a high volume of Help Desk calls and an unhappy user population. Second, to really check for OS patch levels without opening up endpoints to dangerous queries, this approach requires endpoint agents that come with their own management overhead and headaches.
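The minimum-patch-level gate in this first approach amounts to a version comparison against a policy floor. The sketch below is illustrative; the version strings and the `admission_decision` function are hypothetical, and a real agent would report far richer patch data.

```python
# Illustrative minimum-patch-level admission gate: compare an
# endpoint's reported patch level against a policy minimum using
# tuple comparison. Version values here are made up for the example.

MIN_PATCH = (5, 1, 2600)   # policy minimum, e.g. "5.1.2600"

def parse_version(s):
    return tuple(int(part) for part in s.split("."))

def admission_decision(reported_patch_level):
    """'admit' if at or above the policy minimum, else 'quarantine'."""
    if parse_version(reported_patch_level) >= MIN_PATCH:
        return "admit"
    return "quarantine"

print(admission_decision("5.1.2600"))  # admit
print(admission_decision("5.0.2195"))  # quarantine
```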
A second approach is to test the endpoint for vulnerability to attacks from the network. This approach examines what OS patching is intended to prevent, i.e., vulnerable services that can be exploited by threats. An advantage of this approach is that an endpoint can be checked from the network, without agents, for vulnerabilities to self-propagating threats. A drawback of this approach is that it doesn't identify systems that need patching during the entry process. It is possible for a system to be protected against a threat but still not have the OS patch fixing a particular vulnerability; this could be the case if a personal firewall were in place, for example. In many instances, the choice of one or both of these techniques is a philosophical decision based on risk tolerance and the amount of management overhead the administrator is willing to accept.
Antivirus signature level
Another common check by NAC products is for the antivirus (AV) signature level. The concept behind this approach is that devices that are up to date on AV signatures are unlikely to be infected with a threat. An endpoint agent interacting with a network policy server enables these checks. The advantage of this type of check is, obviously, that up-to-date AV software is good protection against threats. But there are several "gotchas" here, if this is the only policy check for admission being performed (as is often the case with many Cisco NAC deployments, in which the endpoint agent is AV software from a major AV vendor). First is defining the minimum signature level for entry: if this is defined as the most recent, then systems will be routinely blocked during entry as they are required to update their AV software. This can lead to a very unsatisfactory user experience. Second, even with updated AV, devices may still be infected with a threat that will go undetected until post-admission, as a full system scan usually takes tens of minutes and is an unacceptable delay for entry to the network. Finally, AV signatures, by their nature, are always released after an outbreak. This means that there is a period of time during which the network is vulnerable to attacks from the latest threat. The upshot: AV signature checks play an important role in admission criteria, but usually aren't sufficient to ensure that infected devices don't make it into the network.
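One way around the "most recent signature" gotcha described above is to enforce a maximum signature age rather than exact currency. The sketch below illustrates that policy choice; the seven-day threshold and the dates are assumptions made up for the example.

```python
# Sketch of an AV-signature-age admission check. Rather than
# requiring the very latest signature (which blocks users mid-update),
# policy allows a maximum signature age. Threshold is illustrative.
from datetime import date, timedelta

MAX_SIGNATURE_AGE = timedelta(days=7)  # example policy threshold

def av_signature_ok(signature_date, today):
    """True if the endpoint's AV signatures are recent enough to admit."""
    return (today - signature_date) <= MAX_SIGNATURE_AGE

today = date(2007, 3, 15)  # hypothetical date of the entry check
print(av_signature_ok(date(2007, 3, 12), today))  # True: 3 days old
print(av_signature_ok(date(2007, 2, 1), today))   # False: too stale
```

As the text notes, even a passing check here only bounds risk; it cannot prove the device is clean.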
Managed vs. unmanaged devices
A final policy check for an endpoint entering the network is determining its status as a managed and authenticated device, as opposed to one that has not been seen before or approved to be on the network. This check is really all about controlling access to the network for machines that are not company-approved or managed by the IT staff. This approach can be used to identify and test contractor machines, home machines, and the many handheld devices that may make it onto a corporate wireless network. Many times, this is implemented to ensure that any new machine entering the network has undergone certain checks, which may take longer and be more rigorous than standard checks, before it is allowed on the network. In this case, the machine may be sent to a quarantine area of the network, where these tests and registration processes can be performed before network admission is allowed.
Threat checking
Another type of test that can be run on network admission is threat checking. Unlike policy checks, which give an indication of the risk profile of an endpoint, these tests look for actual threats.
There are two basic approaches for testing for endpoint-originating threats. The endpoint can either be scanned for threats on entry to the network, or it can be monitored from the network to identify threats coming from the endpoint.
Testing for threats on endpoints usually involves a network-based AV scan that occurs in the context of a Web session. These tests are usually pared down to minimize user latency, and look for the most recent and most common threats. This approach can be helpful in identifying and removing actual threats on endpoints on entry to the network. The reason this approach is generally not used, especially in the LAN, is that the less time the scan takes, the less effective it is, and the more time it takes, the more unacceptable it is to end users as a component of the admission process. This approach also suffers from the fact that AV signatures are developed and deployed after a new threat has already reared its head, and so cannot catch early, or 'day zero,' threats.
A second approach to threat profiling is examining traffic originating from an endpoint, to identify and stop threats entering the network from the endpoint itself. This can include both signature-based and behavioral-based detection through analysis of the traffic originating from the endpoint. The advantage of this approach is that it is transparent to the end user and usually monitors the traffic from the endpoint both on admission and post-admission to the network. The drawback of this approach is that, just like AV solutions, signature-based solutions need to be updated frequently, and there is a risk of false positives in many systems that monitor behavioral anomalies. It is also important to look at the speed of detection, and the speed of mitigation on detection of a threat, since many threats spread in a matter of seconds.
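A classic behavioral heuristic of this kind is connection fan-out: a worm-like endpoint contacts many distinct hosts in a short window. The sketch below shows the idea; the threshold and window values are illustrative assumptions, and real systems tune them carefully to manage the false-positive risk mentioned above.

```python
# Minimal behavioral-detection sketch: flag an endpoint that opens
# connections to many distinct destinations in a short time window,
# the classic footprint of a scanning/self-propagating threat.
# Thresholds are illustrative, not tuned values.
from collections import defaultdict

FANOUT_THRESHOLD = 20   # distinct destinations allowed per window
WINDOW_SECONDS = 10

def detect_scanners(events):
    """events: iterable of (timestamp, src_ip, dst_ip).
    Returns the set of source IPs exceeding the fan-out threshold."""
    windows = defaultdict(set)  # (src, window index) -> distinct dsts
    flagged = set()
    for ts, src, dst in events:
        key = (src, int(ts // WINDOW_SECONDS))
        windows[key].add(dst)
        if len(windows[key]) > FANOUT_THRESHOLD:
            flagged.add(src)
    return flagged

# A worm-like host touching 30 hosts in one window gets flagged.
events = [(i * 0.1, "10.0.0.5", f"10.0.1.{i}") for i in range(30)]
print(detect_scanners(events))  # {'10.0.0.5'}
```

Because detection is per-window and incremental, this style of check can run continuously, which is what allows it to cover both admission and post-admission, as the text notes.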
Summary and Next Steps
The approaches to pre-admission NAC are almost as varied as the threats that those solutions are designed to nullify. IT security pros must examine carefully which “pros” they are willing to trade for which “cons” to find the solution ideal for their networks. Of course, pre-admission is only one piece of the puzzle. For a robust NAC solution, post-admission NAC and quarantine and remediation capabilities are also necessary.
Appendix: Comparison Table: Endpoint Detection Methodologies
In Part 3, we will focus on these remaining two elements of a successful NAC implementation, and in Part 4 we will discuss key considerations you should keep in mind when deciding on what NAC strategy will work for your organization.