SECURITY INFLUENCERS BLOG

Security influencers provide real-world insight and “in-the-trenches” experiences on topics ranging from application security to DevOps and risk management


Manual Application Vulnerability Management Delays Innovation While Increasing Business Risk

Traditional approaches to application security (AppSec), such as legacy static application security testing (SAST) and dynamic application security testing (DAST), lack visibility across an application’s attack surface. Because they analyze lines of code by brute force or look for code vulnerabilities that match a predetermined signature list, legacy SAST and DAST approaches miss real vulnerabilities (false negatives) while incurring high volumes of false positives. In addition, the lack of visibility into an application’s actual data flows means developers must expend significant time searching for vulnerabilities and verifying that they were fixed. Simply put, modern software development demands a new approach to AppSec.

When Competing Forces Collide: Speed and Security

Development and security teams are frequently at odds. On one side, developers focus on code commits, release dates, and timelines. Their performance metrics are based upon getting a product out the door—on schedule and on budget. On the other side, security teams are most concerned with preventing cybersecurity risks to the organization and maintaining compliance with applicable regulations.

Many organizations struggle to bridge the gap between these opposing positions. 68% of organizations have a mandate from the CEO that nothing should slow down development. As a result, developers are under increasing pressure to shorten their release cycles and commit more code faster. Indeed, 52% of companies admit to cutting back on security measures to meet business deadlines.

The conflict between security and development teams creates significant security risks for an organization. If security impedes development, security testing may be performed in a haphazard manner. In some cases, developers may be under so much time pressure that they will skip application security testing altogether.

This poses serious risk. It starts with the fact that many applications contain serious vulnerabilities: upwards of 35%, based on recent Contrast Labs research. This has not gone unnoticed; cyber criminals have upped the ante, and the data bears this out. Over the past year, 43% of data breaches were tied to application vulnerabilities per the latest Verizon Data Breach Investigations Report.

A successful AppSec program is contingent on effective vulnerability management, but legacy AppSec approaches fail in this regard. Following is a quick look at some of the reasons why.

43% of data breaches are the result of an application vulnerability per the latest Verizon Data Breach Investigations Report

Vulnerability Identification with Brute Force Using Legacy SAST and DAST

SAST and DAST are two very different approaches to application security testing. Legacy SAST takes a signature-driven “white-box” approach that analyzes source code, whereas DAST employs “black-box” testing that sends HTTP requests containing attacks and then checks responses to determine whether the attack worked.

Challenges with SAST Testing

The challenge for SAST tools is that their constructed threat model is simply a guess at what vulnerabilities may exist within the application. Static scanners focus on lines of code and attempt to piece together how the application runs and how data flows through it. But it is impossible for them to trace the complex maze of program execution, state management, validation, encoding, and other programming idioms. The result is a list of vulnerabilities that are never exercised at runtime: false positives.
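As an illustration (hypothetical code, not drawn from any particular scanner’s ruleset), consider a flow that naive source-to-sink string analysis would flag as SQL injection, even though runtime validation makes the flagged path unreachable:

```python
# Illustrative false positive: a static rule that pattern-matches string
# concatenation into a SQL statement may flag fetch(), but the allowlist
# check below means `column` can only ever be one of two fixed literals
# at runtime -- a fact static analysis often cannot prove.
ALLOWED_COLUMNS = {"name", "email"}

def fetch(column: str, user_id: int) -> str:
    if column not in ALLOWED_COLUMNS:  # validation the scanner can't reason about
        raise ValueError("bad column")
    # Flagged by naive concatenation rules, yet safe given the check above.
    return "SELECT %s FROM users WHERE id = %d" % (column, user_id)

query = fetch("email", 7)
```

A developer handed this finding must still read the code, confirm the validation, and dismiss the alert manually, which is exactly the triage cost described below.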

When it comes to API security, the challenges are even greater. Static scanning is trained to look for standard “source” methods and to trace data through the program. However, APIs use custom methods to read a JSON or XML document from the body of the HTTP request, parse it, and pass the data into the API. With every framework performing these tasks differently, it is impossible for a static tool to analyze the data flow, which results in false negatives. Open-source frameworks and libraries are just as problematic: they contain custom library functions that only custom rules can detect, and because static scanners lack visibility into that library code, they are unaware of the risk.
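A minimal sketch of the blind spot described above (the helper name is invented for illustration): a framework-specific function reads attacker-controlled JSON from the request body, so a scanner that only recognizes standard source methods never marks the data as tainted.

```python
import json

# Hypothetical framework helper -- not a standard "source" method, so a
# signature-based static scanner may not treat its return value as
# attacker-controlled, producing a false negative.
def read_payload(raw_body: bytes) -> dict:
    return json.loads(raw_body)

def build_query(raw_body: bytes) -> str:
    data = read_payload(raw_body)
    # Tainted data flows into a SQL string; tools that only trace from
    # well-known sources (e.g. standard request parameters) miss this.
    return "SELECT * FROM users WHERE name = '%s'" % data["name"]

query = build_query(b'{"name": "alice"}')
```

The same code pattern, with `request.args` as the source, would likely be flagged; only the nonstandard entry point hides it.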

52% of companies admit to cutting back on security measures to meet business deadlines.

Challenges with DAST Testing

DAST employs a “black-box” testing model that sends HTTP requests containing attacks and checks the responses to determine whether the attack worked. While the responses are sometimes definitive, the evidence is often unclear as to whether exploitation succeeded or the application simply broke. API security is even more difficult, as a DAST tool has no way to know how to generate well-formed requests. Additionally, because many applications use custom, nonstandard protocols and data structures for their APIs, it is exceptionally difficult to automatically supply the right data to invoke an API correctly. This translates into numerous false positives as well as false negatives.
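The probe-and-guess loop above can be sketched as follows. This is a toy illustration, with a stub function standing in for a real HTTP endpoint; the payload and error heuristic are assumptions, not any product’s logic.

```python
# Black-box probing sketch: send an attack payload, then infer success
# from the response. The inference is only a heuristic.
ATTACK = "' OR '1'='1"

def fake_app(params: dict):
    # Stub for a vulnerable endpoint: an injected quote triggers a
    # database error page, as a real vulnerable app might return.
    if "'" in params.get("q", ""):
        return 500, 'SQL syntax error near "\'"'
    return 200, "ok"

def probe(app) -> bool:
    status, body = app({"q": ATTACK})
    # Ambiguous evidence: an error response *suggests* injection, but
    # the application may simply have broken -- the uncertainty the
    # text describes, and a source of false positives.
    return status >= 500 or "error" in body.lower()

suspected = probe(fake_app)
```

Note that `probe` reports a suspicion, not a confirmed exploit; a human still has to investigate.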

43% of organizations cite API security as a serious concern

False Positives and Negatives Waste Valuable Time

False positives and false negatives are very different problems, but both waste an organization’s time, especially the development team’s. Each false positive that a static or dynamic scanning solution generates must be manually investigated and ruled out by developers, delaying code commits and slowing release cycles.

Diagnosing false positives can consume an immense amount of time. Research shows that around one-quarter of all security alerts are false positives, and per one report, each takes an estimated 164 minutes to triage and resolve, even when the application is still in early development. These quickly tally into substantial time expenditures that slow code commits and development cycles.
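Plugging in the figures quoted above makes the scale concrete (the alert volume is a hypothetical example; the rate and per-alert time come from the research cited):

```python
alerts = 1000          # hypothetical annual alert volume for one team
fp_rate = 0.25         # ~one-quarter of alerts are false positives
minutes_per_fp = 164   # estimated triage time per false positive

wasted_hours = alerts * fp_rate * minutes_per_fp / 60
# 1000 * 0.25 * 164 / 60 -> roughly 683 developer-hours lost to noise
```

At that rate, a quarter of the alert stream consumes months of a developer’s year before a single real vulnerability is fixed.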

The impact of false negatives, by contrast, comes later, when a vulnerability discovered in production code forces costly patch development and incident response to address breaches caused by its exploitation. Almost half of organizations have had an incident caused by an unpatched vulnerability.

The costs of fixing these vulnerabilities can be substantial. Indeed, when vulnerabilities must be fixed after an application goes into production, the cost can be 100x more than if the vulnerability was fixed in early development. And with vulnerabilities endemic to many applications (26.7 vulnerabilities per application on average), the cost can be substantial for an organization with numerous applications in development.

Half of organizations indicate they have had a security incident caused by an unpatched vulnerability.

Measuring the Risks of Vulnerability Remediation Verification

While legacy SAST and DAST can help to identify vulnerabilities in code, this is only part of vulnerability management. Once a vulnerability has been identified, it needs to be remediated and retested to confirm that the fix works and that the changes have not introduced a new vulnerability or impacted application functionality.

In many cases, vulnerability remediation and remediation re-testing are manual processes. Once developers have run an application security test, they are presented with a list of vulnerabilities to remediate. Manually verifying remediation consumes valuable developer time and is one of the causes of friction between development and security teams. Beyond straining that relationship, manual remediation verification can introduce risk to an organization: almost half of security professionals report that they struggle to get developers to make vulnerability remediation a priority.

When vulnerabilities are missed and code is released into production, organizations are put at risk. Successful exploitation of vulnerabilities can harm organizations in multiple ways. A compromised application can result in serious data exfiltration with myriad implications. Exposure of personally identifiable information (PII), for example, can incur significant fines and penalties under the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Payment Card Industry Data Security Standard (PCI DSS), among others. Each of these also carries mandatory disclosure requirements that can substantially harm an organization’s brand.

Of course, a data breach is not the only possible result of a successful exploitation of an application vulnerability. In some instances, bad actors seek to disrupt operations, shut them down, or even turn them to nefarious ends. Consider the buffer overflow vulnerability in the messaging application WhatsApp, which was weaponized by the NSO Group, an organization specializing in exploit development. The resulting exploit was sold to governments for extralegal monitoring of dissidents and other persons of interest.

In other instances, an exploited vulnerability can generate significant losses in operational productivity and even revenue. The magnitude of these types of attacks becomes even greater when the target is an application used to manage operational technology (OT). The impact here can extend to public health and safety.

Vulnerability Management Demands an Inside-Out AppSec Approach

Vulnerability management is a serious undertaking when it comes to AppSec. Legacy tools take an outside-in approach that lacks the accuracy and velocity modern software requires. Security certainly cannot be compromised to satisfy these requirements. But at the same time, business acceleration is a mandate from the C-suite and board of directors. This puts security teams and developers at an impasse.

A paradigm shift in AppSec is required, one that takes an inside-out approach to securing applications—in development and in production. Using instrumentation to embed AppSec within software automates vulnerability identification as well as remediation verification. This eliminates both false positives and false negatives, unleashing developers to focus on the outcomes on which they are measured, while empowering security teams to demonstrate applications are free of vulnerabilities and protected.
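The inside-out idea can be sketched in miniature: instrument a sensitive sink so that taint is checked at runtime, where the real data flow is visible rather than guessed at. Every name below is illustrative, not any vendor’s API, and a real agent would instrument bytecode rather than use decorators.

```python
import functools

# Toy taint tracking: record values that came from untrusted input.
TAINTED: set[str] = set()

def taint(value: str) -> str:
    TAINTED.add(value)
    return value

findings: list[tuple[str, str, str]] = []

def instrument_sink(fn):
    """Wrap a sensitive sink so tainted data reaching it is reported."""
    @functools.wraps(fn)
    def wrapper(query: str):
        # The "agent" sees the actual value at the actual sink -- no
        # guessing about data flow, so the report is a confirmed flow,
        # not a static-analysis hypothesis.
        if any(t in query for t in TAINTED):
            findings.append(("sql-injection", fn.__name__, query))
        return fn(query)
    return wrapper

@instrument_sink
def execute(query: str) -> str:
    return "rows"  # stand-in for a real database call

user_input = taint("' OR 1=1 --")          # e.g. from an HTTP request body
execute("SELECT * FROM t WHERE x = '%s'" % user_input)
```

Because the check fires only when tainted data actually reaches the sink during execution, this style of analysis avoids flagging unreachable paths, which is the basis for the accuracy claim above.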

To learn more about the challenges of vulnerabilities, download a copy of our eBook, “How Manual Application Vulnerability Management Delays Innovation and Increases Business Risk.”

Tim Freestone, Vice President of Corporate Marketing


Tim leads the Corporate Marketing organization at Contrast, which includes Creative Services, Operations, Field, Channel, Growth, Communications, PR, and Customer Marketing across North America, EMEA, and APJ. Before Contrast, Tim led a high-performing team at Fortinet charged with brand, content, and demand-gen/growth marketing. Prior to Fortinet, Tim built out demand-gen operations for NetApp in the Americas and then globally. He cut his teeth in the tech space in New York, where he was a founding partner in a technology marketing services agency that grew to over 50 employees with no external investment.
