Note: A modified version of this article first appeared in DZone.
I was recently browsing through the 20 latest DevSecOps reference architectures for 2018 and was struck by two things: each shares a number of common elements, and the PowerPoint skills of the different authors vary wildly.
Even more so than PowerPoint skills, I was struck by the fact that, just as DevOps fundamentally changed our methods for producing applications, DevSecOps has the same potential to transform the way we secure applications.
This encouraged me to attempt to set down my own DevSecOps philosophy and reference architecture.
My DevSecOps philosophy is based entirely upon the CI/CD pipeline. There are three fundamental components that we must secure:
- The code and application as it moves through the pipeline,
- The tools that comprise the pipeline, and
- The environment in which the pipeline resides.
There is a bit of a problem here. Most security professionals don’t code. Likewise, the majority of software developers know very little about security. How do we most effectively bring these two camps into alignment?
Part of the solution is to move towards automated testing. Automated testing focuses on tooling, and architecting tooling is relatively easy. The second approach, which is more difficult, more painful, and more rewarding, is to change the way we think about security: to stop testing the way we currently do, and to completely re-evaluate and restructure our security testing methodology. It is difficult and painful because we, particularly in security, have been following the same patterns for several decades, and change is hard. Several peers have admitted to me that they "just don’t get it."
However, those same people understand that our current security practices have repeatedly and demonstrably failed to protect critical digital assets, as evidenced by the daily drumbeat of successful data breaches and cyberattacks. The "Shift Left/Test Everywhere" mindset is potentially rewarding because it represents a profound change in our approach to securing digital assets, and it is clear that we need to change, given the lack of success of traditional approaches.
Let’s start with a generic development pipeline: code goes in on the left and an application exits on the right.
Application security testing has almost always favored the right side, or deploy side, of the development pipeline. Anyone who uses software knows that this approach leads to buggy software. Security defects are expensive, and eliminating those defects is also expensive, particularly when they are discovered at the end of (that is, on the right side of) the deployment cycle.
The concept of "shifting security to the left" in the development pipeline derives from the notion of "Test Early, Test Often." And what is the outcome of all of this testing? Perfect code? Applications that are free from vulnerabilities? Hardly…
Software and security testing do not guarantee that software will be bug-free; rather, test results provide an estimate of software quality to stakeholders at multiple levels.
A Proposed Solution
In almost every DevOps effort I’ve been part of, the initial excitement of "becoming agile" has led to almost instantaneous sprawl. It’s a bit like springtime: GitHub repos; EC2, GCE, and Azure VM instances; containers; new tools; and orchestrators pop up like daffodils.
There is usually some method to all of this but, in keeping with historical tendencies, security concerns are usually left in the dust. There may even be a mandate to use many of the AWS/Azure/Google native monitoring and security services, but in order to move quickly we accept that it’s okay to leave our private key sitting on a bastion host so that lazy developers and administrators don’t have to learn how to chain SSH connections.
The biggest impediment to ubiquitous testing isn’t necessarily tooling or processes; it’s changing how people think about their work. According to the Forrester State of Agile (2017) report, behavioral change is the biggest roadblock to getting this right.
Agile and Lean require new roles, new types of jobs, new ways of working, and increased collaboration and communication. Managers must wean themselves from years of directly managing large teams in favor of becoming servant-leaders. Behavior change is the top impediment to organizational change, cited by more than half of the respondents.
How do we foster behavioral change? It’s difficult — particularly in large organizations, and it is complex enough that we’ll set this aside for the moment and focus on something that we can change across most organizations — testing strategies and tooling.
Where in the pipeline do we test? The cartoon below is one way to visualize it.
If we are clever, we can take advantage of tools, particularly for static security testing, that were not specifically designed as security analysis tools. For example, even something as simple as a unit test can be used to prevent an unsafe coding practice that would eventually introduce a vulnerability into the application.
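To make that concrete, here is a minimal sketch of a plain unit test acting as a security gate. The `safe_join` helper and the paths are invented for illustration; the point is that an ordinary test fails the build the moment the helper starts accepting path-traversal input.

```python
import os
import unittest

def safe_join(base, user_path):
    """Join a user-supplied path onto a base directory, refusing traversal.

    Hypothetical helper invented for this example (POSIX paths assumed).
    """
    candidate = os.path.normpath(os.path.join(base, user_path))
    if not candidate.startswith(os.path.abspath(base) + os.sep):
        raise ValueError("path traversal attempt: %r" % user_path)
    return candidate

class SafeJoinTest(unittest.TestCase):
    # A unit test doubling as a security gate: the build fails if
    # safe_join ever starts accepting traversal sequences.
    def test_rejects_traversal(self):
        with self.assertRaises(ValueError):
            safe_join("/srv/app/uploads", "../../etc/passwd")

    def test_accepts_normal_path(self):
        self.assertEqual(
            safe_join("/srv/app/uploads", "avatars/me.png"),
            "/srv/app/uploads/avatars/me.png")
```

Wired into the pipeline (for example, `python -m unittest` on every commit), a failing test here blocks the merge just like any functional regression.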
Below is a diagram that shows points in the pipeline where security analysis can be performed. The black boxes are points where security testing is predominant. The gray boxes indicate other test points, which may or may not have any security testing impact.
Where do we start?
Security Analysis and Test Plan
It starts with an initial security analysis and subsequent test plan that describes how, when and where testing will be performed, where test results will be archived, and who will be responsible for applying the results back into the pipeline. Both the analysis and the plan itself should reside in a repo like GitHub, and should be subject to the same SCM policies as any other code. This will ensure that everyone involved can both contribute to and understand the security test plan and their individual responsibilities within this plan.
Both Docker and Ansible are great solutions for providing developers with a secure, reusable workspace. Let’s face it: every developer, tester, and IT person has hacked together their perfect workspace, with favorite programs, console commands, macros, shortcuts, colors, fonts, and so on. Get with your developers and use their input to create a unified environment that can be spun up anywhere. Secure that environment and then, depending upon the programming languages in-house, add the appropriate testing tools.
For Eclipse or IntelliJ IDEA, you could install SonarLint, PMD, FindBugs, and JUnit. If you use Visual Studio or Visual Studio Code, install ReSharper, SonarLint, CodeRush, FxCop, and NUnit. For Atom: SonarLint, RSpec, Brakeman, and Mocha. There is no perfect mix of security tools; the examples above are just that, examples. The tools each organization uses will depend upon factors such as programming language(s), organizational size and complexity, and workplace attitudes. Regardless of the IDE tools used, ensure that all testing results are logged, aggregated, and displayed in real time for everyone to monitor.
While you are designing the ultimate development environment, consider adding linters, git controls, and some mechanism to protect secrets such as passwords, API keys, etc.
Generically, lint or a linter is any tool that flags suspicious usage in software written in any computer language. Linting is an automated process that performs static code analysis (without running the code) to find potential errors and enforce coding standards. Linting is usually quite fast, and it allows you to enforce not just coding standards but also security standards, documentation, and error handling.
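As a sketch of the idea (not a real linter, just an illustration of static analysis), the following walks a Python module’s abstract syntax tree, without ever running the code, and flags two patterns with security implications:

```python
import ast

def lint_source(source):
    """Toy linter: flag eval()/exec() calls and bare `except:` clauses."""
    findings = []
    tree = ast.parse(source)  # parse only; the code is never executed
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("eval", "exec")):
            findings.append(
                (node.lineno, "avoid %s(): code injection risk" % node.func.id))
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, "bare except: hides errors"))
    return sorted(findings)

sample = """\
data = eval(user_input)
try:
    risky()
except:
    pass
"""
for lineno, message in lint_source(sample):
    print("line %d: %s" % (lineno, message))
```

A real linter registers hundreds of such rules, but the mechanism is the same: fast, automated pattern checks applied to every commit.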
Prevent confidential strings like passwords and API keys from being committed to GitHub with tools like git-secrets. And while you’re thinking about repo protection, why not add Black Duck Hub (BDH) to ensure that you do not incorporate known vulnerabilities when you import Open Source code into your project? FOSSology is an Open Source alternative for license compliance.
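The core of what git-secrets does can be sketched in a few lines: scan the text about to be committed against credential-shaped patterns and refuse the commit on a match. The two patterns and the staged snippet below are illustrative only (the key shown is AWS’s published documentation example, not a real credential).

```python
import re

# A tiny subset of the patterns a real deployment would register.
SECRET_PATTERNS = [
    ("aws-access-key-id", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("hardcoded-password", re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I)),
]

def scan_for_secrets(text):
    """Return (line number, rule name) for every credential-shaped match."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

staged = (
    'ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"\n'
    'password = "hunter2"\n'
    'region = "us-east-1"\n'
)
assert scan_for_secrets(staged) == [
    (1, "aws-access-key-id"),
    (2, "hardcoded-password"),
]
```

In a pre-commit hook, any non-empty result would abort the commit before the secret ever reaches the repo.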
Protecting secrets within your CI/CD environment could easily be a topic on its own, but since I’m attempting to summarize a wide swath of tools in a short space, I’ll mention the one I’m most familiar with: HashiCorp Vault. Other solutions exist, however, and AWS has just released AWS Secrets Manager, a dedicated secrets solution that goes beyond what you could once accomplish with Parameter Store.
Static Application Security Testing (SAST)
SAST tools are a set of technologies designed to analyze uncompiled source code, bytecode, and binaries for coding and design conditions that are indicative of security vulnerabilities. The goal of SAST is to identify vulnerabilities in your source code before you deploy to production. SAST solutions are often programming-language-specific and analyze an application from the "inside out" in a non-running state.
SAST techniques can detect buffer overruns and underruns, pointer errors, memory leaks, timing anomalies (such as race conditions, deadlocks, and livelocks), dead or unused source code segments, null-pointer dereferences, divide-by-zero errors, double-frees, use-after-frees, frees of non-heap memory, and other common programming mistakes.
Many language-specific tools exist. Common language-independent SAST tools include SonarQube, Sentry, HP Fortify SCA, Coverity, Checkmarx, and Veracode. Your goal is not to test with every available tool but to develop a suite of tools that aligns with your organizational attitudes, requirements, and budget.
Gartner’s Magic Quadrant for Application Security Testing ranks the major SAST and DAST vendors.
In addition to the SAST tools mentioned above, we don’t want to leave out code coverage tools. Code coverage is a measurement of how many lines, statements, or blocks of your code are exercised by your suite of automated tests. It is an essential metric for understanding the quality of your testing efforts, because it shows you how much of your application is not covered by automated tests and is therefore vulnerable to undetected defects. Try to ensure that all of your code is tested, although in practice 70%-80% coverage is often considered sufficient.
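To show what the metric actually measures, here is a toy line-coverage tracer. Real tools such as coverage.py or JaCoCo are far more robust; `classify` is a made-up function whose negative branch the sample call never exercises, so coverage comes in below 100%.

```python
import dis
import sys

def measure_coverage(func, *args):
    """Toy line coverage: percentage of func's lines executed by one call."""
    code = func.__code__
    # Executable line numbers, recovered from the compiled bytecode.
    executable = {ln for _, ln in dis.findlinestarts(code) if ln is not None}
    hit = set()

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            hit.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return 100.0 * len(hit & executable) / len(executable)

def classify(n):
    if n < 0:
        return "negative"  # never exercised by the call below
    return "non-negative"

pct = measure_coverage(classify, 5)
print("coverage: %.0f%%" % pct)  # below 100%: the n < 0 branch is missed
```

Aggregated across a whole test suite instead of a single call, this is exactly the number coverage dashboards report.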
Dynamic Application Security Testing (DAST)
DAST technologies are designed to detect conditions indicative of a security vulnerability in an application in its running state. Most DAST solutions test only the exposed HTTP, HTML, REST, and SOAP interfaces of Web-enabled applications; however, some solutions are designed specifically for non-Web protocols and data malformation, such as remote procedure calls and callbacks.
DAST techniques can simulate and detect problems associated with authentication, authorization, client-side attacks, inappropriate command execution, SQL injection, erroneous information disclosure, and weaknesses in interfaces and API endpoints.
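The classic flaw a DAST tool probes for can be shown end to end with an in-memory SQLite database (the table, user, and password here are invented): the scanner submits a tautology payload and observes whether the login check is bypassed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, pw):
    # Splices user input straight into SQL: the flaw DAST tools hunt for.
    query = "SELECT COUNT(*) FROM users WHERE name = '%s' AND pw = '%s'" % (name, pw)
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, pw):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND pw = ?"
    return conn.execute(query, (name, pw)).fetchone()[0] > 0

payload = "' OR '1'='1"          # tautology payload a scanner might send
assert login_vulnerable("alice", payload)      # authentication bypassed
assert not login_safe("alice", payload)        # parameterized query holds
```

A black-box scanner has no view of the source; it infers the vulnerability purely from the application’s response to the payload, which is what makes DAST complementary to SAST.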
Popular DAST tools include nmap, Metasploit/Rapid7, Burp Suite, Nikto2, Arachni, OWASP ZAP, SQLMap, Lynis, and Acunetix. Some of the best DAST tools, unlike the majority of premium commercial SAST tools, are Open Source.
Both SAST and DAST toolsets require orchestration in order to automate testing. The most common CI/CD orchestration tool today is Jenkins. Jenkins is relatively simple to install, configure, and use, and it is powerful in its own right; it really shines when we add in the thousand or so commercial and community-developed plugins. Plugins exist for many of the tools mentioned in this article, and likely the most difficult decision you’ll make when using Jenkins is which plugins provide the most value. Jenkins is often configured to run a build immediately after every commit, which guarantees that the full test suite runs whenever application code is updated.
Penetration Testing and Vulnerability Scanning
Most of the DAST tools mentioned above can also be used for penetration testing and vulnerability scanning. With the understanding that there is some overlap, this last category of tools takes a more general or holistic approach to security scanning.
For example, Burp, Nikto2, and Arachni are very specifically web or HTTP scanners, not dedicated SQL injection tools. SQLMap, on the other hand, does not scan for general web vulnerabilities, but excels at detecting and exploiting SQL injection. None of the DAST tools mentioned (with the exception of Metasploit/Rapid7 and, possibly, nmap) scan for RPC vulnerabilities, yet a quick search of the Common Vulnerabilities and Exposures (CVE) database lists 338 RPC vulnerabilities.
The principal penetration testing and vulnerability scanning tools that will be run prior to release include Tenable Nessus, Qualys, Retina CS from BeyondTrust, and Rapid7 Nexpose. All of these tools can scan for vulnerabilities and, optionally, test whether any found vulnerabilities are exploitable. They provide comprehensive visibility into the security of applications through frequent, fast, and automated scanning, and they can be configured to focus on particular classes of vulnerabilities so that developers can prioritize remediation efforts. Finally, their reporting functions provide useful documentation for compliance and audit.
Security of the Pipeline
In this article we have focused on testing the security of one or more applications as they transit the pipeline; that is, security in the pipeline. While it’s important to test a pipelined application, it is even more important to test the pipeline and its environment continuously as well. As I mentioned above, lazy developers and admins will often sacrifice security for expediency.
Given the velocity of effort and the number of individuals involved in creating and maintaining a DevOps infrastructure, particularly in organizations that are just beginning their journey down the Ways, malicious code can be injected into the CI/CD environment. If an adversary can access and gain control over even a part of your CI/CD infrastructure, then none of your testing results can be trusted. Perhaps the most fabled example of this is the thought experiment described in "The DevOps Handbook":
“Where would an adversary or even a disgruntled developer place that malicious code?
A good place to hide that code is in unit tests because no one looks at them, and because they’re run every time someone commits code to the repo.”
The same tools you use to pen-test the application and scan it for vulnerabilities should also be used to scan the pipeline tools themselves (such as a Jenkins cluster) and the environment in which the pipeline resides (AWS, for example).
In addition, the use and integration of monitoring tools such as Nagios, intrusion detection systems like Snort, configuration management tools such as Chef and Puppet, and logging tools like Graphite, the ELK stack, or even a security information and event management (SIEM) solution like Splunk, should be highly prioritized. Security telemetry should be evaluated continuously and should be made available to everyone on the team. Security incidents should be remediated immediately and the system reconfigured so that any particular security incident occurs only once.
To my mind, agile development is all about mindset, the pipeline, and tools. In this article, I’ve focused on tooling because, frankly, it’s easier than trying to understand how to effect institutional change.
I do know that, at some point, some organizations come to believe that rapid development works when team members are encouraged to go fast and break things, and that in order to beat their competitors the organization needs to out-imagine them. Not every organization can succeed at this, and unfortunately the solution is not formulaic.
The mantra of DevSecOps is test everywhere, at every stage of the pipeline. Test both static, uncompiled code and dynamic, running applications. Test and monitor both the pipeline tools and the environment as well. Security testing is not separate from development; it is woven into development.
Moreover, in DevSecOps, it is imperative that information security architects integrate security into DevOps workflows at multiple points in the pipeline and in a collaborative way that is largely transparent to developers, while preserving the teamwork and speed of an agile development environment. Best of luck.