DevSecOps: How Deloitte is embedding security seamlessly through CI/CD on AWS Cloud for its customers

BrandPost By Amol Dabholkar & Nikhil Agarwal
Nov 15, 2021
Cloud Security

A frictionless security posture in a fast-changing Continuous Integration / Delivery model


As businesses move to a continuous integration and delivery model of software development, automation of the software development lifecycle becomes a primary requirement to successfully implement CI/CD.

This directly impacts the security of the code: security controls in the software development lifecycle inherently cause friction, slowing business teams and keeping them from realizing the full benefits and speed of an unrestricted CI/CD pipeline.

However, security controls and checks are a must: churning out deployable code at speed without checks and balances increases the risk of vulnerabilities with potentially high impact in production environments.

To enhance their approaches to cyber and other risks, forward-thinking organizations are embedding security, privacy, policy, and controls into their DevOps culture, processes, and tools. As the DevSecOps trend gains momentum, more companies will likely make threat modelling, risk assessment, and security-task automation foundational components of product development initiatives, from ideation to development to deployment to operations.

DevSecOps fundamentally transforms cyber and risk management from compliance-based activities, typically undertaken late in the development lifecycle, to essential framing mindsets across the product journey. Moreover, DevSecOps codifies policies and best practices into tools and underlying platforms, enabling security to become a shared responsibility of the entire IT organization. In this article we share insights on how Deloitte is embedding security seamlessly through CI/CD on AWS for its customers, along with related best practices.

It is not always practical to do away with manual security testing completely, as the tools are not perfect. The objective is to minimize manual interventions and maximize security automation while ensuring the risk of insecure software remains manageable.

This article describes this thought process based on our experience in assisting organizations dealing with these challenges.


The challenge of enforcing Security in a CI/CD model

We have noted that security teams face real challenges in testing the security of applications in a CI/CD world. Security teams are typically far smaller than the application development and solution architecture teams, and so cannot keep up with manually verifying the application teams' work at the pace of CI/CD.

Security teams also cannot afford to be present in every application development or scrum meeting, and so may miss the decisions and changes made in those meetings.

Finally, the security team shares responsibility, and carries primary accountability, for security issues that slip past all the checks into production, which explains its hesitance to approve releases to production without fully understanding and testing the security controls in the changes.

In many cases, even when automated security controls are present in the DevSecOps pipelines, security teams rely heavily on manual ethical hacking and penetration testing as insurance against the risk of a security vulnerability reaching production.

Manual penetration testing: One of the final barriers to security automation in CI/CD

Manual penetration testing is often the last security check to be performed on an application change before it is promoted to production. Traditionally, in a waterfall approach to secure SDLC, manual penetration testing plays an important part in ensuring that any changes made are secure against a simulated attack that a real adversary could carry out against the application.

Penetration testing/ethical hacking is one of the final security checks before the application can be certified as ready for production, and the test results are one of the inputs release managers use when deciding whether to go ahead with the release.

This activity usually requires skilled personnel with the experience and knowledge to use the tools and methodologies at their disposal to deliver an effective penetration test that covers all the required security test cases.

If security issues are identified during penetration testing, then depending on the severity of the issue, these may need to be fixed before the changes can be deployed in production. This is especially true for critical/high severity findings. If the issue needs to be fixed before going live, it means the change must go back all the way to the development stage, via the system integration testing (SIT) and user acceptance testing (UAT) stages, before it can again be subjected to penetration testing for closure.

If the issues are of medium severity, a common practice is to allow the changes to go into production after agreement with the security team, on the condition that the issue be fixed within an agreed timeframe. Finally, if the issue is of low severity, the business/product owner and the security team may decide to accept the risk without fixing the problem, after performing a cost-benefit analysis.
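
As a simple illustration of this severity-based decision flow, a release gate might encode the rules along the following lines. This is a minimal sketch: the severity labels and the remediation window are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch of a severity-based release gate for penetration test findings.
# Severity labels and the remediation window are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str  # "critical", "high", "medium", "low"

def release_decision(findings: list[Finding]) -> str:
    severities = {f.severity for f in findings}
    if severities & {"critical", "high"}:
        # Critical/high issues must be fixed before go-live: the change goes
        # back through development, SIT and UAT, then is retested.
        return "BLOCK: fix and retest before production"
    if "medium" in severities:
        # Medium issues may be released with an agreed remediation deadline.
        return "ALLOW WITH CONDITIONS: fix within the agreed timeframe (e.g. 30 days)"
    if "low" in severities:
        # Low issues may be risk-accepted after a cost-benefit analysis.
        return "ALLOW: risk accepted by the business/product owner and security"
    return "ALLOW: no findings"

if __name__ == "__main__":
    print(release_decision([Finding("Verbose error message", "low"),
                            Finding("SQL injection in search", "high")]))
```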

Manual Security testing and DevSecOps

With its emphasis on automation, DevOps focuses on keeping manual processes to a minimum. A manual penetration test at the very end of the pipeline becomes a bottleneck for the entire pipeline. Delays can be caused by one or more factors, such as:

  1. Test data, test setups and environment stability
  2. On-boarding a specialized tester who was not involved in the application design and therefore needs to be coached on the application flows
  3. Testers who lack the full context of the application changes being tested end up applying the full suite of security test cases against the application; many of these test cases are not relevant, increasing the time and duration of the test
  4. Issues that need to be fixed must go back through the development cycle, and must be analysed, fixed and promoted back to the testing environment for validation, further delaying deployment to production

As a counterpoint, it remains critical to perform security testing, since the extremely rapid changes that CI/CD enables increase the risk of issues being promoted to production in the absence of penetration tests.

The main reason for a manual penetration test is to ensure that application changes can withstand the typical attack payloads that a real adversary would use, and that the application responds to and handles the attack in a way that does not degrade the security of the system.

This is typically done via a standardized set of attack payloads and methodologies (e.g. the OWASP Top 10) customized for the application changes. Depending on the software and platform stack the application runs on, penetration testing is also extended to include attacks designed specifically for that environment.
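
To make the idea of standardized attack payloads concrete, the sketch below replays a few common probes against an application endpoint and checks that they are handled safely. The endpoint, parameter name and "safe handling" heuristics are hypothetical placeholders, and a real penetration test goes far beyond such checks.

```python
# Minimal sketch: replaying a few standardized attack payloads against an
# application endpoint and checking that they are handled safely.
# The URL, parameter name and safety heuristics are illustrative assumptions.

import requests

TARGET = "https://app.example.com/search"   # hypothetical endpoint
PAYLOADS = [
    "' OR '1'='1",                          # classic SQL injection probe
    "<script>alert(1)</script>",            # reflected XSS probe
    "../../../../etc/passwd",               # path traversal probe
]

def is_handled_safely(resp: requests.Response, payload: str) -> bool:
    # Heuristics only: no server error and no verbatim reflection of the payload.
    return resp.status_code < 500 and payload not in resp.text

def run_payload_checks() -> None:
    for payload in PAYLOADS:
        resp = requests.get(TARGET, params={"q": payload}, timeout=10)
        status = "OK" if is_handled_safely(resp, payload) else "REVIEW"
        print(f"{status}: {payload!r} -> HTTP {resp.status_code}")

if __name__ == "__main__":
    run_payload_checks()
```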

With the ‘shift left’ best practice, there is increasing focus on automating security testing and moving it to earlier stages of the CI/CD pipeline. However, a fundamental principle must be adhered to from a security best practices perspective: the effectiveness of security testing should not be degraded by pushing security to the left in order to minimize or eliminate manual penetration testing.

We therefore have two mutually contradictory requirements:

  1. In order to avoid bottlenecks in DevOps and realize its potential benefits, ‘manual’ penetration testing needs to be minimized
  2. In order to ensure the security of the system, the risks introduced by changes and the rapid pace of integration and deployment need to be correctly identified by comprehensive security testing

Building a strong security foundation

We find that a change in the approach to security testing is a fundamental enabler for automating the ‘manual’ part of penetration testing and removing the bottlenecks to speed without compromising security.

1. Start with a framework

With the movement to cloud, and with DevSecOps controls and technologies embedded natively in cloud platforms, there is good scope for achieving this objective in a cloud-based DevSecOps pipeline with some fundamental changes in the approach to testing.

To give the security team assurance that all the security controls and processes are embedded correctly, a DevSecOps framework that covers all aspects of security from a people, process, technology and governance perspective is highly desirable.

We find it useful to start with a clear articulation of the major risks that are relevant to the organisation and that need to be mitigated by security controls, for example:

Stages: Plan and Code → Build → Test → Release → Operate

Risks to be mitigated across these stages include:

  • Hardcoded credentials
  • 3rd party open source libraries
  • Compile time security issues
  • Unauthorised deployments to production
  • Security code bugs
  • Insecure configurations
  • Hardcoded passwords in scripts
  • Insecurely patched systems
  • Malicious backdoors by insiders
  • End of life / unsupported software
  • Runtime security bugs
  • Insufficient monitoring and feedback of security incidents

Using this risk table, frameworks can be created for the organisation in terms of people, process and technology.
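
One lightweight way to keep such a risk table actionable is to capture it as data that framework and pipeline tooling can read. The sketch below encodes the stages and risks listed above and maps each risk to a candidate automated control; the control mapping is an illustrative assumption, not a prescribed part of the framework.

```python
# Sketch: capturing the risk table as data so that framework and pipeline
# tooling can reason about it. The risk-to-control mapping is an illustrative
# assumption for an organisation to adapt.

PIPELINE_STAGES = ["Plan and Code", "Build", "Test", "Release", "Operate"]

RISK_TO_CANDIDATE_CONTROL = {
    "Hardcoded credentials": "secrets scanning on commit",
    "3rd party open source libraries": "software composition analysis (SCA)",
    "Compile time security issues": "SAST in the build stage",
    "Unauthorised deployments to production": "approval gates and least-privilege deploy roles",
    "Security code bugs": "SAST plus peer code review",
    "Insecure configurations": "infrastructure-as-code and configuration scanning",
    "Hardcoded passwords in scripts": "secrets scanning of scripts and pipelines",
    "Insecurely patched systems": "automated patch and image baseline checks",
    "Malicious backdoors by insiders": "mandatory peer review and signed commits",
    "End of life / unsupported software": "dependency and platform inventory checks",
    "Runtime security bugs": "DAST/IAST and RASP in test and operate stages",
    "Insufficient monitoring and feedback of security incidents": "centralised logging and alerting",
}

def coverage_report(implemented_controls: set[str]) -> None:
    """Print which risks still lack an implemented candidate control."""
    for risk, control in RISK_TO_CANDIDATE_CONTROL.items():
        status = "covered" if control in implemented_controls else "GAP"
        print(f"{status:8} {risk} -> {control}")

if __name__ == "__main__":
    coverage_report({"SAST in the build stage", "software composition analysis (SCA)"})
```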

An example of Deloitte’s DevSecOps framework is shown below. We have used this with clients to give them industry best practices as a base on which to build their software pipelines and processes.

[Figure: Deloitte’s DevSecOps framework]

The framework covers all the stages of DevSecOps and has 21 security capabilities with 70+ controls overall. Manual security testing is highlighted in the red box.

2. Map the framework to your organization-specific pipeline

A pipeline built on AWS using the AWS DevSecOps services is shown below:

[Figure: DevSecOps pipeline built with AWS services]

By integrating the AWS DevSecOps services with security controls from various enterprise third-party products, the pipeline above achieves the technology capability to embed the security controls corresponding to the framework requirements.
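
As one illustration of how findings from these integrated tools can be surfaced automatically, the sketch below pulls active high-severity findings with boto3, assuming they are aggregated in AWS Security Hub; the choice of Security Hub and the filters used are assumptions for illustration rather than a fixed element of the pipeline above.

```python
# Sketch: surfacing high-severity findings aggregated in AWS Security Hub so
# the security team has automated visibility into what pipeline tooling has
# reported. Using Security Hub as the aggregation point is an assumption.

import boto3

def high_severity_findings(max_results: int = 50) -> list[dict]:
    securityhub = boto3.client("securityhub")
    response = securityhub.get_findings(
        Filters={
            "SeverityLabel": [
                {"Value": "CRITICAL", "Comparison": "EQUALS"},
                {"Value": "HIGH", "Comparison": "EQUALS"},
            ],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=max_results,
    )
    return response["Findings"]

if __name__ == "__main__":
    for finding in high_severity_findings():
        print(finding["Severity"]["Label"], "-", finding["Title"])
```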

The challenge now is to use the pipeline and the tools in a manner which minimises the manual aspects of security testing and removes the main bottlenecks to speed and delivery.

Different approaches: Security testing the Security controls

1. Onboarding applications to the security controls

A common recommendation to reduce dependence on manual testing is the ‘shift left’ approach to security testing.

There are multiple points in the DevSecOps pipeline where security testing can be performed. However, the key point in the following discussion is that these should not be isolated or independent test cases, but part of a well-defined and co-ordinated strategy that covers all aspects of security testing.

  • Threat modelling in the design phase: Detailed threat models enable security test cases to be generated that are highly targeted and specific to the threat scenarios facing the application. This narrows the focus to only the relevant test cases.

  • SAST: Static code scanning during the development phase ensures security bugs in the code are detected and resolved. The requirement here is for security engineers to fine-tune the rules so that false positives are minimised without generating false negatives (a simple illustration of such a rule follows this list).

  • DAST and IAST: Dynamic and interactive automated security testing comes close to automating a manual penetration test, with the application run against a DAST tool in the test environment after it has been built. This is an evolving area and suffers from issues such as false positives and technical constraints, for example providing OTPs for post-MFA test cases, or account lockouts during automated testing that render the rest of the scan ineffective without manual intervention.

  • RASP: Runtime application self-protection shields the application against real attacks by detecting them in real time, enabling preventative action to be taken.
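
As a simple illustration of the kind of rule that sits behind static scanning (referenced in the SAST point above), the sketch below flags likely hardcoded credentials with pattern checks. Real SAST and secrets-scanning tools use far richer analysis, and these patterns are assumptions that would need tuning to balance false positives and false negatives.

```python
# Sketch: a simple static check for likely hardcoded credentials, illustrating
# the kind of rule a SAST/secrets-scanning stage applies. The patterns are
# illustrative and would need tuning to keep false positives down without
# introducing false negatives.

import re
import sys
from pathlib import Path

SUSPECT_PATTERNS = [
    re.compile(r"""(password|passwd|secret|api[_-]?key)\s*=\s*['"][^'"]{4,}['"]""", re.IGNORECASE),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, line text) for every suspicious line in a file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        if any(p.search(line) for p in SUSPECT_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    findings = [(f, h) for f in Path(sys.argv[1]).rglob("*.py") for h in scan_file(f)]
    for path, (lineno, text) in findings:
        print(f"{path}:{lineno}: possible hardcoded credential: {text}")
    sys.exit(1 if findings else 0)  # non-zero exit fails the pipeline stage
```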

2. Testing the controls

This is a fundamental shift in approach: changing the focus from manually testing the security of applications to ensuring that the automated security testing tools are working effectively.

One of the issues with the automated testing tools mentioned above is that if they are in place but not operating effectively, the security team has little confidence that they are catching all the security issues correctly.

In this scenario, the security team lays out detailed test cases and abuse cases that the security tools must pass, giving the team assurance that real-world security issues would be handled as expected.

For example, the security team may create a project that purposely introduces, in a controlled manner, insecure code containing OWASP Top 10 vulnerabilities or vulnerable third-party libraries, and check whether the security tools at the various stages of the pipeline are able to catch these issues.

If any tool misses an issue, its rules need to be fine-tuned so that the issue is caught on retest.

This gives the security team the assurance that the tools are operating correctly and are able to catch the security vulnerabilities.
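
One way to express these "test the tools" checks is as an automated suite that plants deliberately vulnerable "canary" code and asserts that the expected findings appear in the scanner output. The sketch below assumes each tool exports its results as a JSON report listing rule identifiers; the report format, path and rule names are illustrative assumptions.

```python
# Sketch: verifying that the pipeline's security tools catch deliberately
# introduced ("canary") vulnerabilities. Assumes scan results are exported as
# a JSON report of findings with rule identifiers; the report path and
# expected rule IDs are illustrative assumptions.

import json
from pathlib import Path

# Canary issues planted in the test project, and the rule each tool should raise.
EXPECTED_DETECTIONS = {
    "sql-injection": "canary/sql_injection.py",
    "hardcoded-credential": "canary/hardcoded_secret.py",
    "vulnerable-dependency": "canary/requirements.txt",
}

def detected_rules(report_path: Path) -> set[str]:
    report = json.loads(report_path.read_text())
    return {finding["rule_id"] for finding in report.get("findings", [])}

def missed_detections(report_path: Path) -> list[str]:
    found = detected_rules(report_path)
    return [rule for rule in EXPECTED_DETECTIONS if rule not in found]

if __name__ == "__main__":
    missed = missed_detections(Path("scan-report.json"))
    if missed:
        print("Tools missed canary issues; rules need tuning:", ", ".join(missed))
    else:
        print("All canary vulnerabilities were detected.")
```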

3. Minimizing the manual testing components

There are several decision points where key inputs have to be considered when deciding the scope and applicability of manual versus automated security checking.

With the above approach, the security team can have assurance that the application is onboarded to the tools correctly and that security issues are being caught as effectively as possible.

The security team can then forgo some or most (depending on the tools being used) of the manual security testing and verification in favour of these automated tools.

To do this, there needs to be a clear report, visible to the security team during the build and testing phases, that gives them a detailed view of:

  1. The number and type of source code security vulnerabilities
  2. The number and type of vulnerable third party libraries
  3. The number and type of dynamic application testing issues

With these three important pieces of information, the security team is in a much better position to decide on the course of action (a simple consolidated view is sketched after the list below):

  1. Allow the release of the build to happen, or
  2. Send it back to fix the security issues discovered by the tools, or
  3. Ask for a targeted manual ethical hack against a specific scenario not covered by the tools
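
A minimal sketch of such a consolidated view is shown below, assuming the counts come from the SAST, software composition analysis (third-party libraries) and DAST stages respectively; the thresholds behind each recommendation are assumptions for the security team to calibrate.

```python
# Sketch: consolidating the three pieces of information the security team needs
# (SAST, third-party library and DAST findings) into a single release view.
# The thresholds behind each recommendation are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ToolSummary:
    source: str              # e.g. "SAST", "SCA", "DAST"
    high_or_critical: int
    medium: int
    low: int

def recommend(summaries: list[ToolSummary], uncovered_scenarios: list[str]) -> str:
    if any(s.high_or_critical for s in summaries):
        return "Send back: fix the high/critical issues discovered by the tools"
    if uncovered_scenarios:
        return "Targeted manual ethical hack for: " + ", ".join(uncovered_scenarios)
    return "Allow the release of the build"

if __name__ == "__main__":
    report = [
        ToolSummary("SAST", high_or_critical=0, medium=2, low=5),
        ToolSummary("SCA", high_or_critical=0, medium=1, low=3),
        ToolSummary("DAST", high_or_critical=0, medium=0, low=1),
    ]
    print(recommend(report, uncovered_scenarios=["post-MFA session handling"]))
```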

Conclusion

Security can act as a source of friction against business and technology requirements because it imposes controls on an idea, be it a business functionality or a technology-based solution.

With the movement to CI/CD and DevOps, delivering software continuously and at speed has become a necessary condition for remaining competitive in a world fast adapting to this paradigm.

Unless security is seamlessly integrated into the CI/CD pipeline in an automated manner, with the correct people and process controls, manual security checks will continue to be a bottleneck to business objectives.

This security automation can be effectively achieved by:

  • Integrating the necessary security automation checks and tools in the CI/CD pipeline
  • Onboarding the applications properly on these tools (i.e. with minimal false positives)
  • Giving the security team control to dictate test cases that give them assurance that the security tools operate effectively (i.e. with minimal false negatives)
  • Giving the security team full visibility of the output of the tools so they can decide on the next steps