Creating a Virtual Perimeter With Halo NetSec

Move a server outside of the firewall and you potentially expose it to additional risk. With this in mind, it should be no surprise that many studies have identified the lack of perimeter security as one of the biggest inhibitors to public cloud adoption. But what if moving to the cloud was no longer synonymous with the loss of a perimeter? What if you could build an equivalent barrier of protection around all of your servers regardless of location, all through a single interface? In this blog post I’ll explain how to do exactly that using Halo NetSec.

Identifying Requirements

Figure 1 shows a classic internal network, with DMZ servers deployed as IaaS instances. Let’s assume we want to be able to burst these servers out to multiple public cloud providers as required. Let’s further assume that we wish to maintain an equivalent level of perimeter security, and minimize the amount of administration required to perform this task.

Let’s start by identifying the risk mitigation techniques being provided by the firewall to the DMZ. This will give us a list of requirements that must be met once we start trying to build equivalent security in a public cloud.

The firewall protects the DMZ by:

1. Limit Internet exposure to only those services we wish to have publicly accessible
— In this example we’ll assume HTTP and HTTPS

2. Restrict outbound access from the DMZ to patch servers only
— This prevents an attacker from transferring a rootkit to the server if they find an exploit

3. Restrict access to administrative services
— In this example we’ll assume SSH is used to administer the servers

4. Provide a second level of authentication if administrators are working remotely
— That is, additional authentication beyond the OS logon

5. Maintain an audit trail of who is administering the servers
— Segregation of duties requires that the administrators cannot modify this data

6. Provide a central interface for controlling all traffic to and from all servers, regardless of location
— Simplified security leads to fewer human errors

So in order to build equivalent security in a public cloud, we need to match each of these six risk mitigation techniques. Failing to do so potentially increases the risk exposure of our servers.

Background Info If You Need It

For the purpose of this blog entry, I will assume that you are already familiar with using Halo. If not, I highly recommend you check out the Getting Started With Halo video, as well as the Halo Firewall Overview. Each of these videos is five minutes or less and will get you up to speed quickly on the basics. I’ll use this as a jumping-off point for how we are going to meet each of our six requirements.

Meeting Requirements 1 and 2

Meeting our first two requirements is simply a matter of correctly building our Halo firewall policy. This is shown in Figure 2. We have restricted inbound access to HTTP and HTTPS, the two services offered by each Web server. This is the only inbound access we wish to permit from unauthenticated sources on the Internet. For outbound traffic, we’ve restricted access to HTTP and HTTPS as well, which are used to communicate with the patching servers. Note that in the outbound rules we have also limited the destination to the patching servers themselves. This way, if an attacker finds an exploit, they can’t leverage outbound HTTP or HTTPS to download their toolkit, which dramatically limits the damage they can cause.
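As a quick illustration of the default-deny behavior described above, here is a minimal sketch of the Figure 2 policy. This is not Halo’s actual rule syntax; the rule structure and the patch server addresses are hypothetical, chosen only to show why an attacker’s outbound download attempt fails:

```python
# Illustrative model of the Figure 2 policy -- NOT Halo's actual rule syntax.
# PATCH_SERVERS and the rule dictionaries are hypothetical examples.

PATCH_SERVERS = {"10.0.5.10", "10.0.5.11"}  # assumed patch-server addresses

OUTBOUND = [
    {"proto": "tcp", "port": 80,  "dest": PATCH_SERVERS},   # HTTP to patch servers
    {"proto": "tcp", "port": 443, "dest": PATCH_SERVERS},   # HTTPS to patch servers
]

def outbound_allowed(dest_ip, port):
    """Default-deny: only HTTP/HTTPS to the patch servers is permitted."""
    return any(r["port"] == port and dest_ip in r["dest"] for r in OUTBOUND)

# An attacker who compromises the server cannot fetch a toolkit from an
# arbitrary Internet host, but patching still works:
assert not outbound_allowed("203.0.113.9", 80)   # arbitrary host: blocked
assert outbound_allowed("10.0.5.10", 443)        # patch server: allowed
```

Because the destination set is part of the rule, simply matching the protocol and port is not enough to get traffic out.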

Note that the above firewall policy may look a bit different from what you are used to: inbound rules do not define a destination, and outbound rules do not define a source. Because the policy is applied directly to each Web server you wish to protect, specifying these endpoints is not required. For inbound traffic, the Web server is assumed to be the target; for outbound traffic, it is assumed to be the source. This helps to minimize the complexity of the rules.

This structure also keeps our firewall rules flexible and portable. For example, if a target IP were specified in the inbound rules, what would happen if the server changed IP addresses or was moved to another cloud? We would have to manually update the firewall rules as part of the change. Since Halo can detect the local IP address and adapt accordingly, the same firewall policy can be leveraged regardless of virtual location: wherever you initialize the server, the same policy is applied. This permits us to meet our first two requirements in both private and public environments.
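To make the “resolve the endpoint at apply time, not authoring time” idea concrete, here is a small sketch. The IP-discovery trick is a common standard-library idiom, not Halo’s internal mechanism, and `render_inbound_rule` is a hypothetical helper:

```python
import socket

def local_ip():
    """Best-effort local IP discovery (a common stdlib idiom, not Halo's
    mechanism). A UDP connect() sends no packets; it just selects the
    interface the OS would route through."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"   # fall back when no route is available
    finally:
        s.close()

def render_inbound_rule(rule):
    """The same abstract rule renders correctly wherever the server boots,
    because the destination is filled in when the policy is applied."""
    return f"allow {rule['proto']}/{rule['port']} from any to {local_ip()}"

print(render_inbound_rule({"proto": "tcp", "port": 443}))
```

Move the instance to another cloud and the rendered rule changes, but the authored policy does not.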

Meeting Requirements 3 and 4

Our next two requirements revolve around restricting administrator access. Specifically, we need to ensure that only authenticated administrators can attempt to gain access to the protected server. To meet this requirement, we’ll leverage Halo’s GhostPorts feature. GhostPorts provides a second level of authentication via a one-time password device. If you are not familiar with GhostPorts, check out the GhostPorts Overview video. This three-minute video will get you up to speed quickly.

Next we’ll need to modify our firewall policy to permit access for our authenticated Administrators. This is shown in Figure 3. Note there are two new rules in the inbound policy. These rules permit access to the SSH port for each of the specified administrators.

One of the great things about Halo GhostPorts is that the port is invisible until the user authenticates. For example, if I were to port-scan one of the Web servers, it would appear that only the HTTP and HTTPS ports are open and listening. It is not until after I properly authenticate via GhostPorts that my source IP address (and only my source IP address) can see that SSH is open as well. Note that GhostPorts simply makes the SSH port visible; I still need to authenticate with the SSH server via whatever mechanism has been implemented (logon name and password, public/private keys, etc.). So with the addition of these two rules, we successfully meet requirements 3 and 4.
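The “port is invisible until you authenticate” behavior can be sketched as a dynamic allow-list keyed by authenticated source IP. This is only a conceptual model of the GhostPorts idea, not CloudPassage’s implementation; the session TTL and function names are assumptions:

```python
import time

# Conceptual sketch of GhostPorts-style port hiding -- NOT the real
# implementation. SESSION_TTL and all names here are hypothetical.

SESSION_TTL = 3600          # seconds the opened port stays visible (assumed)
authenticated = {}          # source_ip -> session expiry timestamp

def ghostports_login(source_ip, otp_ok, now=None):
    """Record a successful one-time-password authentication for an IP."""
    if otp_ok:
        authenticated[source_ip] = (now or time.time()) + SESSION_TTL

def ssh_visible(source_ip, now=None):
    """SSH (port 22) is reachable only from IPs with a live session;
    everyone else sees the port as closed."""
    return authenticated.get(source_ip, 0) > (now or time.time())

ghostports_login("198.51.100.7", otp_ok=True)
assert ssh_visible("198.51.100.7")        # admin's IP now sees port 22
assert not ssh_visible("203.0.113.50")    # any other scanner still sees nothing
```

Note that passing the one-time-password step only reveals the port; the SSH daemon’s own authentication still stands between the administrator and a shell.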

Audit Trail of Administrators – Requirement 5

Our fifth requirement states that all administration sessions to the servers must be logged, and logged in such a way that the administrators cannot tamper with the data. This is a serious concern, as anyone with high-level server access can usually alter the authentication audit trail. For example, let’s assume that I do not want anyone to know that I was logged into the server at 3:00 AM today. Usually this would be a simple matter of logging back into the server and deleting all log entries associated with that session. If we wish to maintain proper segregation of duties, we need to ensure that administrators cannot modify the audit trail.

Luckily, Halo can help out here as well. If you have a separate auditing group, its members can be configured to receive email notifications whenever an administrator authenticates via GhostPorts. An example is shown in Figure 4. Note that the timestamp is referenced to UTC.

Halo also keeps a running log of security events as they occur. This can be accessed by logging into your Halo account, selecting “Servers” from the main menu, and then “Security Events History” from the drop-down menu. An example is shown in Figure 5. Note that I can even filter the output so I see only the events I’m currently interested in.
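The filtering shown in Figure 5 is conceptually just a predicate over the event feed. As a tiny sketch (the field names and event types here are hypothetical, not Halo’s actual schema):

```python
# Hypothetical event records standing in for the Figure 5 feed --
# field names and event types are illustrative, not Halo's schema.
events = [
    {"time": "2012-05-01T03:00:12Z", "type": "ghostports_login", "user": "alice"},
    {"time": "2012-05-01T03:05:44Z", "type": "firewall_modified", "user": "bob"},
    {"time": "2012-05-01T04:11:02Z", "type": "ghostports_login", "user": "bob"},
]

# Show only the authentication events, the way the portal filter narrows
# the history view to what you currently care about.
logins = [e for e in events if e["type"] == "ghostports_login"]
print([e["user"] for e in logins])   # -> ['alice', 'bob']
```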

Since all of the security event data is stored on the Halo grid, administrators do not have the ability to delete any of the entries. This provides an audit trail that is far more reliable than local host log entries. With that, we’ve successfully met requirement number 5.

Simplified Implementation – Requirement 6

Our final requirement was for a unified interface that can manage both inbound and outbound traffic flow, regardless of where the server is located. In other words, we want a single pane of glass for firewall rule management regardless of whether the server is currently located on public cloud vendor “A”, public cloud vendor “B”, or on the private IaaS environment.

Note that as we created each of the above firewall rules, we never had to identify the server’s current location. Our firewall policies were location independent, which means we can provide consistency across all possible environments. Move the server from one location to another, and the same firewall policy is enforced. Via a single interface, we were able to successfully manage firewall policies regardless of where the VMs are being executed, now or in the future.

There you have it: all six requirements met, without inhibiting bursting or server migrations. By leveraging Halo NetSec, we were able to define a logical perimeter that spans multiple cloud offerings. By the way, Halo Pro provides identical functionality but adds further risk mitigation by verifying the configuration of each server, verifying patch levels, and monitoring the system for intrusions. So if you need to secure your servers in public space beyond what a simple perimeter can provide, you have a path forward here as well.

Have any questions? Please feel free to post in the comments section!

