In the last few posts I’ve been talking about the cloud’s impact on various security disciplines. In this post I’ll talk about auditing, which is arguably the discipline impacted the most. In fact, public cloud auditing has turned into a bit of a quagmire, so it is worth understanding what is really going on.
The purpose of a security audit is to validate the processes and controls a system uses to mitigate risk. In other words, an audit looks at the checks and balances we have put in place to maintain a proper security posture. Typically, an audit compares the system against a known baseline. This may be a similar system in a known-secure state, or some accepted standard or specification.
So I’ll again ask the question, can we simply continue to use legacy auditing as-is in our Gen3 environment? This time, the answer is “not even close” 😉
The first problem is that there are no universal standards or guidelines you can use to evaluate a cloud provider’s security. The Cloud Security Alliance has published some good information, but nothing that could be perceived as an auditable standard. To fill the gap, some providers have fallen back on SAS 70, which is now transitioning to SSAE 16. In fact, some individuals are touting SAS 70/SSAE 16 as the way to validate a provider’s security.
Here’s the problem with that statement. Both SAS 70 and SSAE 16 were created by the Auditing Standards Board of the American Institute of Certified Public Accountants (AICPA). The standards are intended for auditing financial records, not computer security. So we have a bit of the old “octagonal peg in a round hole” problem.
The other issue with SAS 70 and SSAE 16 is that they do not define the actual controls; they simply verify that the controls the provider has put in place have been implemented properly. For example, let’s say a provider defines a control: “All administrators will use a fixed three-character password that cannot be changed.” So long as the provider can validate enforcement of a three-character password, and show that administrators have no way to change it, both SAS 70 and SSAE 16 would consider this control verified. The core issue is that SAS 70 and SSAE 16 do not define specific security guidelines the way PCI-DSS does. They simply validate whatever guidelines the provider has chosen to put in place.
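To make that concrete, here’s a minimal sketch (hypothetical names, not any real audit tooling) of the SAS 70/SSAE 16 logic: the check only asks whether the provider enforces the control exactly as stated, never whether the control itself is any good.

```python
def control_is_implemented(stated_policy: dict, observed_config: dict) -> bool:
    """Verify the provider enforces exactly the control it declared.

    Note what this does NOT do: judge whether the declared control
    is sound. A terrible policy that is faithfully enforced "passes".
    """
    return all(observed_config.get(key) == value
               for key, value in stated_policy.items())


# The (awful) control from the example above: a fixed three-character
# password that administrators cannot change.
stated = {"password_length": 3, "password_changeable": False}
observed = {"password_length": 3, "password_changeable": False}

print(control_is_implemented(stated, observed))  # True: control "verified"
```

The audit passes because implementation matches declaration, which is exactly the gap the paragraph above describes: nothing in the check encodes a security baseline.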
This means SAS 70 and SSAE 16 add little value unless the provider publishes which controls have actually been put in place and audited. To date, I’m unaware of any providers willing to do this. In the original audit definition I stated that a security audit checks a system against a known baseline. Since SAS 70 and SSAE 16 do not define a security baseline, I’m not sure we can consider them true computer security auditing methodologies.
So what about other specifications? For example, I mentioned that PCI-DSS defines a true auditable baseline. If a provider claims PCI-DSS compliance, and we leverage their service, do we automatically inherit full PCI-DSS compliance as well? If only it were that easy. 😉
For example consider Verizon’s PCI-DSS compliant offering. The PCI-DSS specification defines controls from the physical facility all the way up to the application layer. What Verizon is really saying is that the layers under their control have been PCI-DSS certified. As a tenant of their service, you will need to have the layers under your control certified as well in order to be considered fully PCI-DSS compliant.
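The shared-responsibility point can be sketched in a few lines. The layer names below are illustrative assumptions, not Verizon’s actual certification scope: overall compliance requires every layer to pass audit, whether the provider or the tenant controls it.

```python
# Layers the provider controls (certified in their audit) versus layers
# the tenant controls (still the tenant's responsibility to certify).
# Layer names and values here are hypothetical, for illustration only.
PROVIDER_LAYERS = {"facility": True, "network": True, "hypervisor": True}
TENANT_LAYERS = {"guest_os": False, "application": False}  # not yet audited


def fully_compliant(*layer_groups: dict) -> bool:
    """Overall compliance holds only if every layer in every group is certified."""
    return all(certified
               for group in layer_groups
               for certified in group.values())


print(fully_compliant(PROVIDER_LAYERS, TENANT_LAYERS))  # False until tenant layers pass
```

In other words, the provider’s certification is necessary but not sufficient; your layers have to pass on their own.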
Even if you are not interested in full PCI-DSS compliance, knowing that a provider has been audited to this level can be extremely beneficial. Unlike SAS 70 and SSAE 16, which provide no real security baseline, PCI-DSS certification helps you better evaluate the provider’s security posture.
Needless to say, auditing in Gen3 is still a bit disjointed. Expect this to settle out a bit over the next few years.