Requirement 11.5 of the PCI-DSS standard states that file integrity monitoring tools should be used to alert personnel to unauthorized file changes. While this can be a daunting task in a standalone server environment, the deployment challenges can quickly become insurmountable in an elastic cloud environment. What happens when I clone a server dozens or hundreds of times? Do I need to manage file integrity on a server-by-server basis? What about temporary servers with a short duty cycle? Clearly, attempting to apply Gen2 file integrity management to a Gen3 environment can quickly become an administration nightmare.
File Integrity Monitoring – The Basics
File Integrity Monitoring (FIM) is the process of identifying when a file has been modified. This can be more difficult to determine than it may appear at first glance. For example, we can't simply look at the file's modification date, as that timestamp can be spoofed by an attacker. While we could save a copy of the original file and make frequent comparisons against the version on the system, this would introduce both storage and processing problems. To solve these issues, the accepted method of identifying file changes is via a hash algorithm or function.
A hash algorithm is simply a math formula that takes input data of variable length (like a block of text or a binary file) and outputs a fixed-length string of characters. A proper hash algorithm will, for all practical purposes, generate a unique hash value for each possible input data set. If two different sets of input can generate the same hash, we refer to this as "a collision in the hash space", and any algorithm with known collisions should be avoided when monitoring file integrity. By generating a different hash value for each unique data input, we have a method of easily detecting even minute changes in a file.
For example, here’s a SHA-1 hash of a random 100MB file I have stored on my system:
Now here’s a hash of the same file, with a single space character added to it:
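The same experiment can be sketched with Python's standard hashlib module. The file contents below are a small placeholder rather than an actual 100MB file, but the behavior is identical:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the 40-character hexadecimal SHA-1 digest of the input."""
    return hashlib.sha1(data).hexdigest()

# Placeholder contents standing in for the monitored file.
original = b"contents of some monitored binary"
modified = original + b" "  # the same data with a single space appended

print(sha1_hex(original))
print(sha1_hex(modified))
```

Running this prints two completely different 40-character digests, even though the inputs differ by a single byte.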
There are three points here worth noting:
- The hash changed drastically even though the change was minor
- We don’t need to save the 100MB file to check for later changes; we only need to save the 40-character hash
- While the hash tells us the file has changed, there is no way to deduce what changed
FIM Issues in the Cloud
One of the problems with FIM in an IaaS cloud is that we not only want to ensure that files on a specific server have not changed; we also want to ensure that files on a clone remain identical to the gold standard master. Since the clones are copies of the master, any variation could be cause for concern. Unfortunately, Gen2 FIM tools are designed to perform comparisons on the local server only. They have no way to validate hashes across multiple servers, whether two or two hundred. This can be a real problem as you attempt to scale up in the cloud while maintaining PCI compliance.
Halo FIM solves the problem of file integrity monitoring in the cloud by letting you detect changes not only on a single system, but across multiple systems at the same time. An example is shown in Figure 1. Here I’ve created a file integrity policy that monitors for changes in critical binaries. I’ve then created a “baseline” from my gold standard master. A baseline is simply a set of hashes taken from a server that I want all of my other servers to be compared against. This ensures that I can identify both file changes and variations from the master.
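Under the hood, this kind of baseline comparison amounts to hashing the monitored files on the gold standard master and diffing those hashes against each server's. This is not Halo's actual implementation, just a minimal sketch of the idea in Python; the function names and directory layout are illustrative:

```python
import hashlib
from pathlib import Path

def baseline_hashes(root: str) -> dict[str, str]:
    """Hash every file under root, keyed by path relative to root.

    On the gold standard master, this dictionary IS the baseline.
    """
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha1(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*"))
        if p.is_file()
    }

def compare_to_baseline(baseline: dict, server: dict) -> dict:
    """Report how a server's hashes deviate from the baseline."""
    return {
        "changed": [f for f in baseline if f in server and server[f] != baseline[f]],
        "missing": [f for f in baseline if f not in server],
        "added":   [f for f in server if f not in baseline],
    }
```

Any non-empty entry in the comparison result is a deviation from the master and would be grounds for an alert.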
Once the policy is created, I simply apply it to the appropriate group of servers:
Once complete, if a server is added to the group of financial servers but was not built from the gold standard master, Halo will alert me to the discrepancy, even if no files on that server have changed locally.
So while performing FIM in an IaaS cloud environment brings with it new management challenges, Halo has been specifically architected to meet those challenges.