The application security landscape is dominated by tools loosely referred to as code scanners, a category that includes static application security testing (SAST). Twenty years ago, these tools represented a leap forward in an organization's ability to understand the security of its applications. Today, however, ever-increasing development agility and application complexity reveal fatal weaknesses in these tools' usability, scalability, and effectiveness. For scanning to become an effective tool for vulnerability remediation, organizations need a tool that developers will actually use and that is purpose-built for modern applications and continuous integration/continuous deployment (CI/CD) pipelines.
As a complement to the Contrast Application Security Platform, Contrast Scan offers a pipeline-native approach for scanning applications for vulnerabilities early in the software development life cycle (SDLC). It combines demand-driven static analysis, risk-based policies, and a product design that enables developers to find and fix issues as they code. As a result, Contrast Scan harmonizes the objectives of security and development teams while improving the quality of code that’s ultimately sent to production.
Historically, organizations relied on SAST to help analyze the security of their software. Even with considerable changes in the software development landscape, significant academic advances in static code analysis research, and increasing maturity of application security programs, SAST offerings have not fundamentally advanced since the early 2000s.
When evaluating different application security testing (AST) tools for purchase in the past, many customers chose tools that produce more results (quantity) rather than more accurate results (quality). There are multiple reasons for this, the main one being the particular software development paradigms of the day, which predated the wide adoption of Agile and DevOps.
When it comes to accuracy, traditional SAST solutions achieve a mere 26%.1
In this case, customer behavior drove vendor design. Security vendors raced to create tools that yielded more and more results, regardless of their validity. Tools went from being merely noisy to producing volumes of results that are practically impossible to triage. Multiple studies show that while these traditional scanning tools warn users of a huge number of possible threats, the output is so consistently riddled with false positives that results are largely ignored inside organizations. As a result, few of these threats ultimately get addressed in development.2,3,4 While the tools create a lot of work, they deliver little value.
Fixing a vulnerability gets more expensive as the development process gets further from where the error was introduced.5
As application demand increased over time and developers adopted methodologies to ship code more frequently, application security was forced to play catch-up. Vendors created plugins and integrations to support developers in their repository or in their pipeline, without addressing the problem of cataclysmically imprecise vulnerability testing. This sea change only magnified SAST’s existing problems. Instead of having a few days to triage results once a quarter or once a year, developers needed to scan applications every day—sometimes multiple times a day—and often without the involvement of security experts.
All of this tallies up to a huge time expenditure. The vast majority of organizations (73%) report that each security alert they receive consumes an hour or more of application security time.6 Further, 72% of organizations indicate that true vulnerabilities consume 6+ hours of application security team time; 68% say that true vulnerabilities consume 10+ hours of development team time.7
More than half (55%) of developers admit to sometimes skipping security scans to meet deadlines.8
Traditional SAST solutions bury true vulnerabilities in a noisy sea of false positives. Only about one-quarter of organizations report being capable of reviewing all alerts in scan reports and returning guidance for remediation to the development team.9
One vendor's data showed that its legacy SAST solution finds an average of 1,700 results in a selection of open-source libraries10—with no good feedback on the relative importance or irrelevance of those results. Although public data is not available for all traditional scanning tools, most experienced practitioners will confirm that results like these are typical. Even assuming the tools find all the true positives among their total results, their poor accuracy implies a minimum of 99% noise (accurate but irrelevant findings plus false positives).
Contrast customers report an average of 21 vulnerabilities per application—down from 26 in 2020.11 In another survey, the majority (79%) of organizations reported that their average application in development has 20 or more actual vulnerabilities.12
Effective static analysis for today's landscape must contend with some unfortunate truths: branching, reflection, inversion of control, dependency injection, dynamic code, and inheritance all pose significant obstacles to delivering complete analysis within time frames that customers find acceptable.
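To see why constructs like reflection and dependency injection are so hard for scanners, consider a minimal Java sketch; the class and method names here are hypothetical, and the JDBC URL assumes an in-memory H2 database on the classpath. Because the call target is resolved at runtime, a scanner that relies on a statically constructed call graph must either over-approximate (producing false positives) or give up (missing the flaw):

```java
import java.lang.reflect.Method;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical example: the call target is resolved at runtime via
// reflection, so a purely static scanner cannot reliably connect the
// user-controlled input (the "source") to the SQL execution (the "sink").
public class ReflectiveLookup {

    // Tainted data flows in here, e.g., from an HTTP parameter (the source).
    public void handle(String userInput) throws Exception {
        // The target method is looked up at runtime. Static call-graph
        // construction must either assume every method could be called
        // (flagging too much) or ignore the edge (missing the flaw).
        Method target = Dao.class.getMethod("findByName", String.class);
        target.invoke(new Dao(), userInput);
    }
}

class Dao {
    public void findByName(String name) throws Exception {
        // Assumes an H2 in-memory database for illustration.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement stmt = conn.createStatement()) {
            // The sink: tainted input concatenated into a SQL query.
            stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }
    }
}
```

Dependency injection and inversion-of-control frameworks create the same kind of runtime indirection at scale, which is why naive whole-program analysis either explodes in cost or loses precision.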
In 2014, Contrast Security pioneered using code instrumentation as a breakthrough technology to analyze the security of applications. This approach avoids the weaknesses inherent in code scanning tools to provide accurate and actionable information. Since then, hundreds of large enterprises around the globe have adopted this technology through the Contrast Application Security Platform—with outstanding results.
But despite the availability of more accurate tools like Contrast Assess interactive application security testing (IAST), legacy code scanning technology is still widely used across the industry. Often, this is because existing processes are designed around scanning, and these kinds of established norms can be very hard to change.
As the leader in using instrumentation for security, Contrast continues to push the boundaries of application security research and has developed a new breakthrough technology for scanning complex and distributed applications within today's rigorous development systems. This new approach ("demand-driven static analysis") is designed to act as a companion to Contrast's existing platform technologies and to provide unprecedented ease of use and speed with superior, accurate, and actionable results.
Contrast Scan (SAST) uses demand-driven scanning technology to allow development teams to achieve high-quality results while using a familiar "scanning style" methodology. This allows customers to achieve dramatically better security outcomes than when using traditional static analysis, without having to change their existing processes.
Modern security needs to be reoriented toward the real risk posed by the application, via a tool that can identify a small subset of critical vulnerabilities with as little noise from false positives as possible. Another consideration is that every type of analysis has limits. If discovery isn't possible, then development deserves a clean "go-to-the-next-step" signal—rather than a pile of results that gives everyone involved just enough evidence to support conflicting go/no-go decisions and drive organizational conflict.
For example, consider two tests that doctors use to check heart health: a blood pressure test and an exercise stress test. Almost anyone can measure blood pressure; it can be done in 10 seconds and the equipment is ubiquitously available. One can easily discern if a pressure is much too high or much too low and get visibility into broader heart disease risk factors. But physicians do not look at these numbers too closely because the results have limited value with respect to comprehensive cardiac evaluation. It’s just an easy (but important) first step in the diagnostic process.
To get a more complete and accurate picture of heart health, doctors attach sensors and place the patient on a treadmill for an exercise stress test. Doctors use those sensors to get a much deeper view of the heart in operation and to identify weaknesses (e.g., arrhythmia, cardiovascular disease). The first few minutes of the stress test are enormously informative. Even though some patients can only make it halfway through the 15-minute test (and therefore have imperfect test coverage), the amount and quality of the data observed still allows the doctor to perform a much deeper analysis and detect more issues than blood pressure numbers alone. If someone has great blood pressure numbers and shows no warning signs during a stress test—even one cut short by a work emergency—we can be extremely confident they are not walking around in a dangerously diseased state.
While Contrast Assess is our "stress test" that provides authoritative results, Contrast Scan is built on the vision of the initial blood pressure test. Our approach to modern scanning was designed to bring customers great security results with low noise, high speed, and high confidence. To accomplish this outcome, the test must be fast and accurate, and Contrast Scan combines a number of critical capabilities to achieve it:
To make static analysis scale to cover large code bases, traditional whole-program algorithms compromise on precision—causing large numbers of false positives as a side effect.15
Demand-driven analysis uses lightweight pre-analyses to quickly zero in on the parts of the code that are security-relevant, and then uses maximally precise analyses to determine the security state of that code.16 The high level of precision (full context-, flow-, and field-sensitivity) not only reduces false positives to a minimum but also helps the analysis itself focus on just the code that really matters.17
As a result, code paths that are probably irrelevant to the security state are minimally inspected, which also reduces analysis times by several orders of magnitude. Theoretical and empirical studies show that demand-driven analysis boosts both analysis precision and speed to levels that traditional, whole-program analyses cannot possibly accomplish.18
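The following is a deliberately simplified Java sketch of the demand-driven idea, using a toy data-flow graph; the real analyses in the cited research add context, flow, and field sensitivity and operate on far richer program representations. The key point is that the query starts at the sink and explores only the backward-reachable slice of the program:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy illustration of demand-driven analysis: instead of propagating facts
// through the whole program, we start at each security-relevant sink and
// walk the data-flow graph backward, touching only the slice of code that
// could possibly reach it.
public class DemandDrivenTaintSketch {

    // Reversed data-flow edges: for each node, the nodes that flow into it.
    private final Map<String, List<String>> flowsInto = new HashMap<>();
    private final Set<String> sources = new HashSet<>();

    public void addFlow(String from, String to) {
        flowsInto.computeIfAbsent(to, k -> new ArrayList<>()).add(from);
    }

    public void markSource(String node) { sources.add(node); }

    // Demand-driven query: "can any untrusted source reach this sink?"
    // Only nodes backward-reachable from the sink are ever visited.
    public boolean sinkReachableFromSource(String sink) {
        Deque<String> worklist = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        worklist.push(sink);
        while (!worklist.isEmpty()) {
            String node = worklist.pop();
            if (!visited.add(node)) continue;        // already explored
            if (sources.contains(node)) return true; // source reaches sink
            worklist.addAll(flowsInto.getOrDefault(node, List.of()));
        }
        return false; // the relevant slice contains no untrusted source
    }

    public static void main(String[] args) {
        DemandDrivenTaintSketch analysis = new DemandDrivenTaintSketch();
        analysis.markSource("request.getParameter");
        analysis.addFlow("request.getParameter", "userInput");
        analysis.addFlow("userInput", "sqlQuery");
        analysis.addFlow("sqlQuery", "statement.executeQuery"); // the sink
        System.out.println(
            analysis.sinkReachableFromSource("statement.executeQuery")); // true
    }
}
```

Because everything outside the backward-reachable slice is never visited, the cost of each query scales with the size of the relevant slice rather than with the whole program.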
How a product is designed has a big influence on how it is used, shaping how users think about the results. At a design level, Contrast Scan delivers a drastically different user experience. An important example is how we categorize the scanner's findings into groups. Contrast results fall into one of three categories: vulnerabilities, warnings, and leads.
The formation of these three buckets introduces a fundamentally new way to view SAST results, automatically triaging findings according to their next-step actions within the modern development environment. For instance, the discovery of a vulnerability should stop a build. The discovery of a warning should give a developer pause, but ultimately should not break the build or demand immediate triage. The discovery of a lead matters only to experts who must chase down its derivation in an application security assessment setting—it is not a finding to act on in a pipeline.
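As an illustration of how these categories might map onto pipeline behavior, here is a hypothetical Java sketch of a build gate; the enum, record, and gating logic are illustrative assumptions, not an actual Contrast API:

```java
import java.util.List;

// Hypothetical sketch of a CI/CD build gate over the three result buckets.
public class PipelineGate {

    enum Category { VULNERABILITY, WARNING, LEAD }

    record Finding(Category category, String description) {}

    // Returns a process exit code: nonzero breaks the build.
    static int evaluate(List<Finding> findings) {
        boolean failBuild = false;
        for (Finding f : findings) {
            switch (f.category()) {
                case VULNERABILITY -> {
                    System.err.println("BLOCK: " + f.description());
                    failBuild = true;   // a vulnerability stops the build
                }
                case WARNING ->         // pause, but don't break the build
                    System.out.println("REVIEW: " + f.description());
                case LEAD ->            // for expert assessment only
                    System.out.println("NOTE: " + f.description());
            }
        }
        return failBuild ? 1 : 0;
    }

    public static void main(String[] args) {
        int exitCode = evaluate(List.of(
            new Finding(Category.VULNERABILITY, "SQL injection in OrderDao"),
            new Finding(Category.WARNING, "Weak hash algorithm in legacy module"),
            new Finding(Category.LEAD, "Unvalidated input reaches logger")));
        System.exit(exitCode);
    }
}
```

Only vulnerabilities return a nonzero exit code, so the pipeline halts for findings that demand action while warnings and leads remain visible without blocking delivery.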
Although we believe customers get the best combination of coverage and correctness by combining IAST and SAST on the Contrast Application Security Platform, it is still worthwhile to compare the efficacy of Contrast Scan with legacy SAST alternatives.
Contrast Scan gives developers fast and accurate security feedback at the pace of innovation without the disruptions of legacy solutions. However, when viewed as part of the overall Contrast Application Security Platform, Contrast Scan offers customers even greater benefits, and customers should consider which deployment is most appropriate for their environments.
In the real world, some small percentage of vulnerabilities will escape detection no matter what one does—whether through bad luck, human error, or the sheer complexity of vulnerabilities. Visibility and protection in production are an additional requirement for modern applications and application programming interfaces (APIs). The Contrast Application Security Platform therefore also includes Contrast Protect to deliver this critical ability—using the same technology as Contrast Assess but reorienting it toward the performant protection of applications.
The addition of Contrast Scan to the platform strengthens our mission to secure software across all stages of the SDLC. This comprehensive suite of capabilities was purpose-built for modern, distributed applications—offering embedded, continuous testing and protection that reduce application security risk.
1. “OWASP Benchmark v1.1, 1.2,” OWASP, accessed August 20, 2021.
2. Foteini Cheirdari and George Karabatis, “Analyzing False Positive Source Code Vulnerabilities Using Static Analysis Tools,” IEEE Xplore, January 24, 2019.
3. Zachary P. Reynolds, et al., “Identifying and Documenting False Positive Patterns Generated by Static Code Analysis Tools,” IEEE Xplore, July 3, 2017.
4. Joonyoung Park, et al., “Battles with false positives in static analysis of JavaScript web applications in the wild,” Association for Computing Machinery, May 14, 2016.
5. Jeff Williams, “How To Start Decluttering Application Security,” Forbes, January 27, 2021.
6. “The State of DevSecOps Report,” Contrast Security, November 2020.
7. “2021 State of Application Security in Financial Services Report,” Contrast Security, May 2021.
8. “The State of DevSecOps Report,” Contrast Security, November 2020.
9. “2021 State of Application Security in Financial Services Report,” Contrast Security, May 2021.
10. “Micro Focus Fortify Static Code Analyzer: Performance Guide,” Micro Focus, November 2018.
11. “2021 Application Security Observability Report,” Contrast Security, 2021.
12. “The State of DevSecOps Report,” Contrast Security, November 2020.
13. “2021 State of Application Security in Financial Services Report,” Contrast Security, May 2021.
14. Paul Anderson, “Human Factors in Evaluating Static Analysis Tools,” GrammaTech, March 27, 2017.
15. Brittany Johnson, et al., “Why Don’t Software Developers Use Static Analysis Tools to Find Bugs?,” North Carolina State University, May 2013.
16. Swati Jaiswal, et al., “Bidirectionality in flow-sensitive demand-driven analysis,” Science of Computer Programming, May 1, 2020.
17. Johannes Späth, et al., “Context-, flow-, and field-sensitive data-flow analysis using synchronized pushdown systems,” Proceedings of the ACM on Programming Languages (Vol. 3), January 2019.
18. Yuanbo Li, et al., “Fast graph simplification for interleaved Dyck-reachability,” Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation, June 11, 2020.
19. Francesc Mateo Tudela, et al., “On Combining Static, Dynamic and Interactive Analysis Security Testing Tools to Improve OWASP Top Ten Security Vulnerability Detection in Web Applications,” Applied Sciences/MDPI, December 20, 2020.
Schedule a demo and see how to eliminate your application-layer blind spots.