
100% Accuracy

Contrast Scores High Marks Running OWASP Benchmark

Startling Results Reveal Significant Static and Dynamic Weaknesses

Contrast ran the Benchmark, and the results were dramatic. The top commercial Static Application Security Testing (SAST) products had an accuracy score of 32%, and the worst scored 17%. For Dynamic Application Security Testing (DAST) products, the results were just as startling, with the top product scoring 17% and the worst 1%. 


Benchmark Accuracy Results

The Benchmark is a free and open application security test suite with 2,740 security challenges. 1,415 of those are real vulnerabilities that tools need to detect, and 1,325 are decoys, or false positives: they look like vulnerabilities to a naïve tool, but are not.


SAST & DAST Leave Businesses Vulnerable

For over a decade, businesses have been relying on SAST and DAST products to try to secure their applications and check off compliance requirements. The 2015 OWASP Benchmark Project, sponsored in part by the US Department of Homeland Security (DHS), shows that existing SAST and DAST solutions are leaving businesses vulnerable to attack. 

Contrast Assess (IAST) Approach Tops the Benchmark

Interactive Application Security Testing (IAST) solutions like Contrast Assess integrate into a running application to assess security with the full operational context. As clearly demonstrated by the OWASP Benchmark, this approach is not only many times more accurate, but is faster and easier to deploy as well.

Rethink and Redo Application Security

Anyone can use the OWASP Benchmark Project to evaluate the pros and cons of current solutions. Contrast Assess is a natural choice to augment or replace existing SAST and DAST solutions. Ask your current application security vendor for their benchmark results, and contact Contrast Security to learn more about ours.

Contrast Datasheet


How Vendors Stack Up: Interpreting OWASP Benchmark Scores

Make sure you ask application security vendors for their OWASP Benchmark “Accuracy Score.” Vendors may be tempted to quote only their “True Positive Score,” but that tells just half the story. The OWASP Benchmark Accuracy Score combines True Positives and False Positives to measure true product accuracy.

How the Benchmark Score Is Calculated

The OWASP Benchmark calculates the overall accuracy score for a product by subtracting its False Positive Rate (FPR) from its True Positive Rate (TPR). That balances finding vulnerabilities against being right about them. A perfect accuracy score of 100% occurs when the TPR for a product is 100% and the FPR is 0%.

For example, picture an application with multiple vulnerabilities and three different application security testing products. (1) The first product does nothing. It finds no vulnerabilities in the application and generates no false alarms, so its TPR is 0% and its FPR is 0%, and it scores 0% on the benchmark. (2) The second product reports that every line of code, or web page, contains a vulnerability. Its TPR is 100% because it finds every vulnerability, but its FPR is also 100%, so it scores 0% as well. (3) The third product has a TPR and FPR that are equal, which means the product is effectively guessing. That product would also score 0%.
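The scoring formula and the three hypothetical products above can be sketched in a few lines. This is an illustrative sketch of the arithmetic, not the Benchmark's own scoring code; the product names are made up for the example.

```python
def benchmark_score(tpr: float, fpr: float) -> float:
    """OWASP Benchmark accuracy score: TPR minus FPR, as a percentage."""
    return tpr - fpr

# The three hypothetical products from the example (TPR%, FPR%):
products = {
    "finds nothing":    (0.0, 0.0),      # no findings, no false alarms
    "flags everything": (100.0, 100.0),  # every line reported vulnerable
    "guessing":         (60.0, 60.0),    # any product with TPR == FPR
}

for name, (tpr, fpr) in products.items():
    print(f"{name}: {benchmark_score(tpr, fpr):.0f}%")
# All three score 0%, while a perfect product (TPR 100%, FPR 0%) scores 100%.
```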

In the case of Contrast Assess, its TPR was 100% and its FPR was 0% on the latest OWASP Benchmark. Subtracting the FPR from the TPR yields a score of 100%.

Open Source and Vendor Neutral

The Benchmark Project adheres to the OWASP principle of being free and open. Anyone can download and use the Project resources, as well as review and contribute to the Project. The primary Benchmark resource is an application with currently just under 3,000 test cases across 11 different vulnerability categories. The test cases include real vulnerabilities as well as scenarios that look like vulnerabilities, but aren’t, to check for false positives. In addition to the test application, the Benchmark Project includes a tool that normalizes the output of the application security product under test and calculates an accuracy score.
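The scoring step the paragraph above describes, deriving TPR and FPR from raw test-case results and combining them into an accuracy score, can be sketched as follows. The suite totals (1,415 real vulnerabilities, 1,325 decoys) come from the Benchmark itself; the per-tool counts in the usage example are hypothetical, not results for any real product.

```python
# Test-suite totals from the OWASP Benchmark described in this article.
REAL_VULNS = 1415  # test cases a tool should flag
DECOYS = 1325      # false-positive decoys a tool should NOT flag

def score(true_positives: int, false_positives: int) -> tuple[float, float, float]:
    """Return (TPR, FPR, accuracy score), each as a percentage.

    true_positives:  real vulnerabilities the tool flagged
    false_positives: decoys the tool flagged anyway
    """
    tpr = 100.0 * true_positives / REAL_VULNS
    fpr = 100.0 * false_positives / DECOYS
    return tpr, fpr, tpr - fpr

# Hypothetical tool: flags 700 real vulnerabilities and 265 decoys.
tpr, fpr, acc = score(700, 265)
print(f"TPR {tpr:.1f}%, FPR {fpr:.1f}%, accuracy score {acc:.1f}%")
```

A tool that flags everything scores `score(1415, 1325)`, i.e. TPR 100%, FPR 100%, accuracy 0%, matching the worked examples earlier in the article.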