You know the drill: you’re the person responsible for information security, and the business comes to you on a Friday afternoon to tell you their super-important project needs to ‘go live’ over the weekend. Being the amenable, business-aligned CISO that you are, you want to help, but your hands are tied. Almost reflexively, you respond:
‘Have the endpoints, servers and networks been penetration tested?’
There’s a rhetorical tone to your voice. Unfortunately, you’re greeted with one of two faces: confusion or anger. Neither is good. The business representative sees security as a blocker, the ‘department of no’. You and your team are left despondent and sympathetic. This scenario was all too common during my time in the end-user space. It’s not a competition, but if it were, no one would win.
Penetration Testing 101
So, what is penetration testing and why do we have it? I like to think of it as an assurance mechanism, a vital tool to aid risk management. It’s a way to surface otherwise undiscovered vulnerabilities in your IT systems, networks and applications. I chose my words carefully: a pen test isn’t there to ‘fix’ or remove all vulnerabilities. There’s a business/security trade-off conversation here that could consume another 1,300 words. Let’s just agree that pen testing is about understanding the security posture of our environment. It allows an organisation to make balanced risk decisions, assuming qualified security professionals are on hand to receive the results and contextualise them for a business audience.
I have experienced first-hand two significant paradigm shifts which have impacted our ability to provide important, cost-effective security assurance: Cloud Computing and Agile Development.
Don’t go chasing waterfalls
Rightly or wrongly, penetration testing is perceived to be expensive, both in fees and in the resources it consumes. Is the expenditure worth it? Are we identifying risk and providing value? Traditionally, testing is performed at fixed, relatively rigid project milestones. My opening paragraph highlighted a common challenge with this model: reactive, retrospective engagement. Point-in-time testing is disconnected from the project lifecycle; security is begrudgingly engaged ‘because we have to’. However much everyone outside IT loathed this framework, it worked, sort of: projects had clear phases and actions were delivered synchronously. Our world has changed. This model doesn’t work anymore.
Let’s get Agile!
‘Don’t worry; we’re adopting an Agile lifecycle’
…a phrase that polarises the room in a project meeting. The PM and development team see Agile and Lean methodologies as a way of slicing through unnecessary bureaucracy and organisational project baggage. The security team hears Agile as a euphemism for ‘wing it’: a nebulous, unmeasurable series of parallel activities.
Agile is a development methodology, not a project methodology, an often overlooked but salient security consideration. Agile work is organised into short, time-boxed sprints, frequently with multiple workstreams running in parallel. Agile allows an organisation to ‘fail fast’ and to work without prescriptive, often burdensome, requirements. It is not a licence to throw the project management rulebook out of the window. As I said, Agile aids development cycles but requires a seismic shift in how we provide security assurance.
Organisations have adopted Agile methodologies with demonstrable success, yet as a security function we are still trying to shoe-horn in our waterfall testing model. Square pegs in round holes help no one. If security assurance is to succeed in an Agile world, security must be part of the sprint planning process. Our people need to be embedded in a cross-functional development workforce, providing on-hand security consultancy. Telling an Agile team it’s going to take five days for a pen test, followed by three days to discuss and remediate findings, simply isn’t going to cut it.

A good place to start is to move security assurance ‘left in the lifecycle’. The earlier security testing can be performed, the quicker (and cheaper) remediation becomes. For application testing, look to deploy automated code scanning capabilities that can be embedded into the development planning processes. If code can be checked in to your repositories and scanned periodically (ideally daily) for vulnerabilities, developers are given suitable timeframes for remediation and retesting. I’ve used this approach to incentivise and empower developers to fix their own code, but with safeguards to avoid a ‘poacher turned gamekeeper’ situation.
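To make the ‘scan on check-in’ idea concrete, here is a minimal sketch of what such automation might look like as a CI pipeline (GitHub Actions syntax is assumed for illustration, and Bandit stands in for whichever approved SAST tool your organisation uses; the workflow name and artifact names are hypothetical):

```yaml
# Hypothetical CI workflow: scan code on every check-in and nightly,
# so findings land within the sprint cadence rather than at go-live.
name: sast-scan
on:
  push:
  schedule:
    - cron: '0 2 * * *'   # nightly scan at 02:00
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run static analysis
        # Bandit is an illustrative choice for a Python codebase;
        # substitute your organisation's approved scanner.
        run: |
          pip install bandit
          bandit -r . -f json -o findings.json || true
      - name: Publish findings for the team
        uses: actions/upload-artifact@v4
        with:
          name: sast-findings
          path: findings.json
```

The point is less the specific tool than the cadence: developers see results daily, inside the sprint, rather than as a five-day blocker at the end.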
I talked about trade-offs earlier, and these come into play regarding infrastructure testing in a DevOps / Agile world. DevOps teams need infrastructure that is available instantaneously. As security teams, we need to support this requirement or we will quickly be bypassed. Organisations should ensure that pre-built, automated system configurations exist which can be deployed quickly and consistently, and have those builds reviewed and tested in advance. Platforms such as Chef and Puppet are great for this, and tutorials are readily available online.
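As a sketch of what a pre-reviewed, automated build might look like, here is a hedged Puppet manifest fragment (the class name, file paths and specific settings are illustrative assumptions, not a complete hardening benchmark):

```puppet
# Illustrative pre-approved baseline class. Review and test this build
# once, in advance, then deploy it consistently on demand.
class hardened_baseline {

  # Disable a legacy service flagged in the security review
  service { 'telnet':
    ensure => stopped,
    enable => false,
  }

  # Enforce reviewed SSH settings from a version-controlled template
  file { '/etc/ssh/sshd_config':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0600',
    source => 'puppet:///modules/hardened_baseline/sshd_config',
  }

  # Keep the audit daemon running so deployments stay observable
  service { 'auditd':
    ensure => running,
    enable => true,
  }
}
```

Because the manifest itself is the artefact that gets reviewed and tested, every instance deployed from it inherits that assurance, which is what lets security keep pace with instantaneous provisioning.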
Infrastructure Pen Testing in the Cloud
Cloud computing brings with it a plethora of business benefits: reduced capital expenditure, ease of management and lower operational overheads to name but a few. Cloud does, however, require organisations to evaluate some of their IT processes. Security assurance is certainly one of these.
As organisations shift to the cloud, a change is often also needed concerning trust. As data are moved ‘off-prem’ and under the control of a Cloud Service Provider (CSP), there is often a need to ‘trust but verify’ with the CSP; paradoxical, perhaps, in a world some say operates best with zero trust. This is certainly true if your cloud model of choice is Software as a Service, where end-to-end infrastructure and application testing simply isn’t possible.
For years, on-prem pen tests have been scoped to test each layer of an n-tier architecture, at a time and date to suit the customer, using a blend of misuse cases and exploit scenarios. In a world of shared resources and multi-tenancy, infrastructure testing like this is more complicated. Let’s start with the question of ownership. Generally speaking, the tester’s customer is a tenant: they’re paying for a service. The underlying infrastructure, the servers, cables and flashing lights, all belongs to the service provider. Has the provider been consulted on the test? Are they apprised of the need to whitelist testing IP addresses, and of the inevitable alerts beaconing from IPS and firewall devices across their enterprise?
There are some myths surrounding cloud testing. There’s a view that cloud and black-box are intrinsically linked terms. Some feel that cloud adoption means less governance or security. I challenge this assertion, but we do need to think differently. If we’re trusting our CSPs, trust needs to be built and periodically re-evaluated. I recommend establishing security requirements for all your environments, on-premise or otherwise; the location of the data should not change the requirement. It’s imperative that organisations understand the classification of the data they’re sending to the cloud and ensure that controls exist commensurate with that data. If you’re selecting infrastructure in the cloud, make sure the environment has readily available security capabilities to mitigate the threats posed by malicious and accidental threat actors.
Where infrastructure testing isn’t possible, look for mitigation. Ask your cloud provider about their IT operational processes. Are they ISO 27001 certified? Can they provide you with information about their legal and regulatory position? If a vendor takes a ‘take it or leave it’ stance, I’d advise you to look for another provider!