PCI Firewall Requirements in the Cloud

When it comes to meeting PCI-DSS within a public IaaS cloud, arguably one of the most difficult requirements to meet is the firewall section. The current requirements were written at a time when all servers were located on-premise. So how do you “cloudify” the requirements in such a way that will permit you to maintain a compliant environment? In this post we’ll explore how you can leverage Halo firewall to meet these goals.

First, a quick caveat. PCI-DSS compliance is determined by having an external QSA auditor review your environment. While the PCI-DSS requirements are pretty specific, some portion of their interpretation is open to the opinion of the auditor. For example, I’ve spoken with QSAs who claim compliance in a public IaaS environment is impossible. I’ve spoken to others who claim it is entirely possible, provided the appropriate compensating controls have been put into place. Luckily, the PCI council will be releasing guidelines for public cloud in the near future, which will help clarify the issue.

Sections 1.1 through 1.3 of PCI-DSS spell out the deployment rules for firewalls. In short, they require that systems hosting cardholder data are not exposed to the Internet at large. These sections further require that systems subject to a high level of risk are segregated from other systems. Finally, a solid documentation trail must be created that identifies which firewall rules have been deployed and why, as well as who signed off on deploying those rules.

If you are familiar with public cloud management, the first requirement probably caught your attention. Most VMs within an IaaS public cloud are deployed with their management ports exposed to the Internet. Requirement 1.3.3 states that servers hosting cardholder data should not permit direct connections with Internet-based hosts. This can put you between a rock and a hard place. While you could leverage firewall rules to limit management access to connections originating from the corporate LAN, this makes the LAN a single point of failure for VM management. It also limits your options if you have a remote workforce.
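As a rough sketch, the “lock management down to the corporate LAN” approach described above might look like the following host firewall policy, expressed here in iptables-restore format. The corporate network range (203.0.113.0/24) and the open HTTPS port are illustrative assumptions, not part of any specific deployment:

```
# Illustrative iptables-restore fragment: default-deny inbound policy
# with SSH management access limited to a corporate LAN.
# 203.0.113.0/24 is a placeholder CIDR -- substitute your own range.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback traffic and replies to established sessions
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Management (SSH) permitted only from the corporate network
-A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
# Application traffic (HTTPS in this example) permitted from anywhere
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Note that this is exactly the trade-off the paragraph above describes: the policy satisfies the “no direct Internet connections” requirement, but if the corporate network (or its VPN) goes down, so does your ability to manage the VM.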

This is where Halo GhostPorts can be a big help. By leveraging YubiKey or SMS authentication, you can block all access to public servers until the user completes both levels of authentication. In fact, you can easily leverage Halo firewall to build a virtual perimeter around all of your publicly hosted servers.

There is one additional section of PCI-DSS that is worth investigating. Requirement 1.4 mandates that personal firewalls be used “on any mobile and/or employee-owned computers with direct access to the Internet”. While this section was specifically written to address laptop security, I’ve heard convincing arguments that it could be applied to any system running outside of the corporate LAN. Because Halo lets you take control of the server’s built-in firewall, it can also help you ease the concerns of a QSA who calls out this item.

While PCI in the cloud is still a work in progress, the majority opinion is that it is achievable provided the appropriate compensating controls have been put in place. CloudPassage Halo has been specifically designed to give servers running in public clouds risk mitigation similar to that of their on-premise counterparts. Since Halo is hypervisor- and provider-agnostic, it can provide a single point of security management regardless of where your VMs are located.
