
It’s time to replace our broken AppSec tools with something that actually works: Runtime Security

Tell us straight, Santa: Where did these old-school Application Security (AppSec) tools come from? Did you get the Security Specialist Elves to cobble them together from toadstool scrapings and cobwebs? 

Given the way things are going, we’re pretty sure developers didn’t design traditional AppSec tools. Developers, that is, being the people who have to use the tools’ findings to fix security issues (or, as the case may be, who waste their time chasing non-issues, aka false positives). And that’s not to mention the missed false negatives, which leave security leadership with a false sense of security.

When we say traditional tools, we’re talking about these lumps of coal floating in the AppSec acronym soup:

  • Static Application Security Testing (SAST)
  • Dynamic Application Security Testing (DAST)
  • Web Application Firewalls (WAFs)
  • Static Software Composition Analysis (SCA)

There are so many tools, and the complexity ratchets straight up if you have to install and manage all four of those coal lumps (or even a fifth, if you need something like Application Security Posture Management [ASPM], Application Security Orchestration and Correlation [ASOC], etc.). It’s created a fog of disjointed security strategies, delays, friction and added cost that’s overwhelming the teams that are trying to reduce security risk.

 Let’s face it: Today’s AppSec approach is fundamentally broken. It’s simply too hard to cobble together robust security with this mishmash of cruddy tools. 

The specific gripes: 

The Island of Misfit AppSec Tools

SAST tools are near-sighted: SAST tools look at source code, so they can’t see how the fully assembled application actually works. They don’t see how the custom code interacts with the runtime platform, app/application programming interface (API) server, libraries, frameworks and code from other repos. Since most vulnerabilities span both custom code and all these components, these tools make a lot of mistakes.

DAST is skin-deep: DAST tools, on the other hand, do actually analyze the fully assembled running application. But since they work from the “outside-in,” they can only detect vulnerabilities that they can actually exploit and detect in the HTTP response. Possibly the worst part: DAST tools can only scan the “front door,” not all the back-end interfaces and connections in modern applications. Bottom line? DAST takes a lot of work and misses a lot.

Static SCA tools love to cry wolf: Static Software Composition Analysis (SCA) tools suffer from limitations similar to SAST’s, in that they narrowly look at one aspect of the application — i.e., the dependencies — in isolation from what’s happening when the application runs. In other words, they can’t determine how libraries are actually used by the application. Granted, a handful of the top SCA tools do, in fact, run “reachability analysis”: For some of their supported languages, they analyze whether the application ever actually touches the vulnerable code. If not, the vulnerability found in that dependency is harmless. That helps. But even taking reachability analysis into account, the vast majority of SCA findings still wind up being unexploitable false alarms.
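The core idea behind reachability analysis can be sketched as a simple graph search: Is there any call path from the application’s entry point to the vulnerable library function? Here’s a minimal, hypothetical sketch; all the function names and the call graph are made up for illustration.

```python
# Toy sketch of "reachability analysis": does any call path from the app's
# entry point reach the vulnerable library function? All names are made up.
from collections import deque

call_graph = {                      # caller -> callees (hypothetical app)
    "app.main": ["app.handle_request", "libalpha.parse"],
    "app.handle_request": ["libbeta.render"],
    "libalpha.parse": [],
    "libbeta.render": [],
    "libbeta.unsafe_eval": [],      # the CVE'd function, never called
}

def reachable(entry: str, target: str) -> bool:
    """Breadth-first search over the call graph from entry to target."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(call_graph.get(fn, []))
    return False

print(reachable("app.main", "libbeta.render"))       # True: actually used
print(reachable("app.main", "libbeta.unsafe_eval"))  # False: vuln unreachable
```

If `libbeta.unsafe_eval` is the function named in an advisory but nothing in the app ever calls it, a reachability-aware tool can downgrade that finding — which is exactly why the remaining (static-only) majority of alerts are so noisy.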

WAFs are stuck looking over their shoulder: WAFs have long been criticized for being difficult to manage, for breaking applications and APIs, for missing real attacks, and for being bypassable. The root cause of the issue is that they only look at HTTP traffic for pre-determined patterns that represent previously seen attacks. There’s no way for these tools to capture the patterns of all possible attacks.

Over the past few weeks, we’ve been itemizing the sorry state of affairs that’s been wrought by these traditional, pockmarked tools. Here’s a recap:

Alarm fatigue 

You’ve undoubtedly heard it umpteen times before: Security teams are being deafened by a constant bombardment of alerts, overwhelmed by a flood of alarms streaming in from a hodgepodge of siloed tools. In fact, recent research shows that 59% of surveyed IT decision makers receive more than 500 cloud security alerts per day.

It would be one thing if all these alerts warranted attention because they were accurate. Sadly, far too often, they’re anything but. In fact, far too many security alerts are low-priority or flat-out bogus. 

Research has found that more than two-fifths — 43% — of organizations have a false-positive rate of 40%. False alarms are far more than simply annoying. In fact, false positives profoundly affect the overall economics of an AppSec program. 

As Contrast Chief Technology Officer and Co-Founder Jeff Williams has explained, figuring out whether a tool-reported vulnerability is true or not can take anywhere from 10 minutes (if you're really good) to many hours. “If you're resource constrained — and just about every company is — then you simply can't investigate every single vulnerability that your tools report,” he says.

Here’s an example: Say you run a static or dynamic scanner on an application and it generates 400 possible vulnerabilities. There will absolutely be truly critical issues in the flood of alerts, and those vulnerabilities do warrant swift attention. The thing is, they’re needles obscured in a haystack of irrelevance. In fact, some 65% of organizations say only 10% of alerts are actually critical.

That means that out of the 400 possible vulnerabilities you found with the static or dynamic scanner, only 40 are true vulnerabilities. …

... but you couldn’t know which were true positives and which were false positives, so you had to spend time carving through that pile of vulnerabilities to figure out what was real and what was fiction. “Imagine that I only have time to go through 100 of these ‘possibles,’” Jeff suggests. “Even if I'm really fast, it's going to take me 10 minutes to investigate each of these. This adds up to over eight days to do all 400. But I only have two days, so I'll just analyze 25%. That means I'll confirm 10 true positives and miss the other 30 real vulnerabilities in my application.”
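Jeff’s back-of-the-envelope numbers can be written out as a quick calculation. All the figures here are the illustrative ones from the scenario above, not benchmarks:

```python
# Back-of-the-envelope triage math from the scenario above.
# All figures are the illustrative numbers quoted in the text.
reported = 400          # "possible" vulnerabilities from a static/dynamic scan
true_rate = 0.10        # ~10% of alerts are actually critical
minutes_per_alert = 10  # best-case triage time per finding

true_positives = round(reported * true_rate)            # 40 real issues in the pile
hours_to_triage_all = reported * minutes_per_alert / 60 # ~67 hours: over 8 workdays

triaged = 100                                   # all you have time for (~25%)
confirmed = round(triaged * true_rate)          # 10 true positives confirmed
missed = true_positives - confirmed             # 30 real vulnerabilities missed
print(confirmed, missed)
```

Run the numbers any way you like: As long as 90% of findings are noise and triage is manual, most of the real vulnerabilities never get looked at.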

The results of all these false positives:

  • Few true positives get remediated,
  • Remediation takes a long time, and
  • You get stuck with ever-growing backlogs.

Read it and weep. 

Really bad AppSec math

On average, organizations take nearly two months to remediate critical vulnerabilities. A 60-day mean time to respond/remediate (MTTR) means that you’ve got a huge backlog. This is broken math, and it just doesn’t make for a healthy AppSec environment. 
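The link between MTTR and backlog size follows from Little’s Law: steady-state backlog equals arrival rate times average time in the system. Here’s a rough sketch; the daily arrival rate is a made-up illustration, not a measured figure:

```python
# Little's Law: steady-state backlog = arrival rate * average time in system.
# The daily arrival rate below is a made-up illustration, not a benchmark.
new_findings_per_day = 20   # hypothetical: findings entering the queue daily
mttr_days = 60              # ~two-month mean time to remediate, per the text

steady_state_backlog = new_findings_per_day * mttr_days
print(steady_state_backlog)  # 1200 findings sitting in the queue at any time
```

Halve the MTTR and the steady-state backlog halves too, which is why remediation speed, not just detection, drives AppSec health.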

An MTTR like that means you’ve got a mega backlog, which means you need security experts to prioritize what to fix. Teams can’t discern what’s a real issue versus a false positive until those experts dig down and figure it out.

Read it and screech.

The flaming dumpster fires of AppSec budgets 

It makes absolutely no sense: According to Forrester’s 2022 Security Survey, 63% of security decision-makers planned to pump up their firm’s AppSec budget in 2023, despite the economic downturn. The more often those firms had been breached, the more likely they were to pour additional fuel onto the fire. … as in, they’re spending even more money on the technology that should have protected them from those breaches in the first place.  

Why are people increasing AppSec spend? The answer is simple: Because their tools aren’t working. Where is this money getting wasted, you may ask? On the Island of Misfit AppSec Tools, on the lousy processes and sky-high backlogs they pump out, and on the incessant bleating of false alarms emitted by those tools.

Read it and freak.

Never mind trying to fight off advanced cyber threats with conventional AppSec tools. As it is, they can’t even keep up with old-school threats like the OWASP Top 10: Traditional tools have inferior efficiency and effectiveness against even well-understood threats. 

But be of good cheer: There’s an end in sight to the broken AppSec promise. It’s called Runtime Security. 

Unleashing the Runtime Security revolution

Runtime Security is a radical new way to fix what’s gone wrong with AppSec.  It’s an operational paradigm that encompasses the entire application life cycle, providing critical insights and protection, all the way from development on to deployment. 

The Runtime approach unifies various capabilities — including Interactive Application Security Testing (IAST), Runtime Application Self-Protection (RASP), Runtime Software Composition Analysis (R-SCA) and Observability — under a single technological umbrella.

These interrelated features, powered by Runtime Security technology, form one, cohesive framework that enables continuous monitoring and proactive defense across all stages of software development, making it an ideal fit for DevOps practices and continuous integration/continuous deployment (CI/CD) pipelines. 

Runtime forms a versatile, efficient security model that matches agile methodologies and the fast-paced nature of modern software development. Here’s a bit more detail: 

Lifting the hood on Runtime Security

In a nutshell, Runtime Security uses sophisticated instrumentation to reshape existing security processes so they’re more proactive and insightful. Runtime is one, single technology, but it includes a suite of capabilities that redefine how we approach AppSec:

  • Runtime Application Security Testing (R-AST): R-AST primarily focuses on detecting vulnerabilities in custom code through comprehensive code analysis. But it doesn’t differentiate between third-party and first-party code: It puts the security of both code types under the microscope. While it can identify vulnerabilities in libraries, the majority are found within the custom code itself. R-AST operates in real time during development, promptly alerting teams to these vulnerabilities as they arise during coding.

  • Runtime SCA: Runtime SCA begins by identifying the libraries used in your application, creating a Software Bill of Materials (SBOM). R-SCA provides a more accurate picture than the one you get from static SCA tools, which only examine dependency manifests in the source code repository. Static SCA tools miss the boat on two fronts: They often report on libraries that the running application never even uses, and they miss dependencies injected by the runtime environment — something that’s common in container-based development. R-SCA addresses these gaps, ensuring that its SBOM is both comprehensive and relevant, without the inaccuracies of static SCA. Like static SCA, R-SCA then cross-references this more accurate SBOM against a database of publicly reported vulnerabilities to see whether any affect the particular version(s) of your application’s dependencies. R-SCA reports only those as vulnerabilities.

  • RASP: RASP is a proactive feature that forms your first line of defense against vulnerabilities in production. It offers a view of potential threats — including identifying threat actors, their methods and their targets — all at the code level. Its speed and precision in detecting and mitigating threats are unparalleled.

  • Runtime Application Security Observability: Imagine having a live security blueprint for your distributed applications. This feature, known as security observability, is a way to monitor all application and API activities in multiple directions, including attack surface, defenses, dangerous methods and outbound calls to API endpoints, system interactions, database connections and file system interactions. Observability painlessly, automatically answers the questions that would otherwise force security teams to lose weekends researching a security scenario.
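The runtime-SCA idea described above — report only vulnerabilities in libraries the application actually loads — can be sketched as a simple cross-reference. All library names, versions and advisory IDs below are placeholders, not real data:

```python
# Hypothetical sketch of runtime SCA: cross-reference the libraries observed
# loading at runtime (the runtime SBOM) against a vulnerability database,
# and report only the matches. All data here is made up for illustration.
runtime_sbom = {            # libraries the running app actually loaded
    "libalpha": "1.2.3",
    "libbeta": "4.0.1",
}
vuln_db = {                 # (library, affected version) -> advisory id
    ("libalpha", "1.2.3"): "EXAMPLE-0001",
    ("libgamma", "2.0.0"): "EXAMPLE-0002",  # in the manifest, never loaded
}

findings = {
    lib: advisory
    for (lib, version), advisory in vuln_db.items()
    if runtime_sbom.get(lib) == version   # only libraries observed at runtime
}
print(findings)  # {'libalpha': 'EXAMPLE-0001'} -- libgamma is never reported
```

Because `libgamma` is declared in a manifest but never loaded by the running application, a static SCA tool would flag it while a runtime approach stays quiet — that’s the false-alarm reduction in miniature.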

That’s just a taste of what Runtime Security can do. 

Are you ready to get off the Island of Misfit AppSec Tools? Are you good and fed up with false positives/negatives? Done with tearing your hair out over broken AppSec math, sick and tired of burning your AppSec budget on Fool’s Gold? 

If so, then join us on Tuesday, Dec. 12, at 11 am PST/2 pm EST, when you can tune in to a webinar with Forrester Research and Contrast Security. Get ready for the revolution: Learn how to use Runtime Security to remake AppSec and strengthen your apps/APIs — from the inside out.

Register Now

Lisa Vaas, Senior Content Marketing Manager, Contrast Security

Lisa Vaas is a content machine, having spent years churning out reporting and analysis on information security and other flavors of technology. She’s now keeping the content engines revved to help keep secure code flowing at Contrast Security.