Accuracy in AppSec Is Critical to Reducing False Positives

According to a new report from the Neustar International Security Council (NISC), more than one-quarter of the security alerts fielded within organizations are false positives. Based on a survey of senior security professionals across five European countries and the U.S., the report highlights the need for more advanced and accurate security solutions to relieve alert-weary cybersecurity teams overwhelmed by massive alert volumes.

Alert Fatigue and Its Causes

Following are some of the key highlights from the report:

More than 41% of organizations experience over 10,000 alerts a day, yet many of those alerts are not critical. Teams need to be able to quickly differentiate between low-fidelity alerts that clutter security analysts’ dashboards and those that pinpoint genuinely malicious activity. This expanding volume of low-fidelity alerts has become a source of “noise” that consumes valuable time for everyone from developers to the security operations center (SOC). Thousands of hours can be wasted annually confirming whether an alert is legitimate or a false positive.

While security tools may trigger alert notifications, this doesn’t mean the activity is malicious. Security configuration errors, inaccuracies in legacy detection tools, and improperly applied security control algorithms can all contribute to false-positive rates. Other contributing factors include:

  • Lack of context in the alert generation process
  • Inability to consolidate and classify alerts

Another reason for the deluge of alerts is that many companies deploy multiple security controls that fail to correlate event data. Disparate events may go unlinked because the tools security analysts rely on operate in separate silos with little consolidation. Log management and security information and event management (SIEM) systems can correlate events across separate products, yet they require significant customization to report events accurately.
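
To make that correlation gap concrete, here is a minimal sketch, in Python, of the kind of cross-tool linking a SIEM rule has to encode by hand. The tool names, event fields, and five-minute window are illustrative assumptions, not any particular product’s rule language.

```python
# Minimal sketch of cross-tool correlation: link a failed-login event from one
# tool with an outbound-connection event from another when both involve the
# same host within a short time window. All names and fields are illustrative.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

events = [
    {"tool": "endpoint_agent", "type": "failed_login", "host": "srv-01",
     "time": datetime(2020, 1, 14, 9, 2)},
    {"tool": "firewall", "type": "outbound_connection", "host": "srv-01",
     "time": datetime(2020, 1, 14, 9, 4)},
]

def correlate(events):
    """Yield pairs of events from different tools on the same host within WINDOW."""
    for i, first in enumerate(events):
        for second in events[i + 1:]:
            if (first["host"] == second["host"]
                    and first["tool"] != second["tool"]
                    and abs(second["time"] - first["time"]) <= WINDOW):
                yield first, second

for a, b in correlate(events):
    print(f"Correlated on {a['host']}: {a['type']} ({a['tool']}) + {b['type']} ({b['tool']})")
```

In practice, this logic lives in vendor-specific rule syntax for each pair of products, which is exactly the customization burden described above.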

Tools like these often require a security analyst to confirm the accuracy of each alert, namely whether it is a legitimate alert or a false positive. While these types of solutions can coordinate and aggregate data to analyze alerts, they do not address the challenges posed by high rates of false positives.

Further complicating matters are intrusion detection and prevention systems (IDS/IPS) that cannot accurately aggregate multiple alerts. For instance, if an internal system attempts but fails to connect to an external IP address 50 times, most tools will generate 50 separate failed-connection alerts rather than recognizing the activity as one repeated action.
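
A rough sketch of the aggregation step these tools are missing might look like the following. The alert fields are illustrative assumptions rather than any specific IDS/IPS format.

```python
# Minimal sketch of alert aggregation: collapse repeated failed-connection
# alerts that share a type, source, and destination into one summarized alert.
from collections import Counter

raw_alerts = [
    {"type": "failed_connection", "src": "10.0.0.12", "dst": "203.0.113.7"}
    for _ in range(50)
]

def aggregate(alerts):
    """Group identical alerts and report each group once with a count."""
    counts = Counter((a["type"], a["src"], a["dst"]) for a in alerts)
    return [
        {"type": t, "src": s, "dst": d, "count": n}
        for (t, s, d), n in counts.items()
    ]

for alert in aggregate(raw_alerts):
    # One alert with count=50 instead of 50 separate alerts
    print(alert)
```

Collapsing the 50 identical alerts into a single summarized alert with a count is what lets an analyst treat them as one repeated action.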

Security Alert Overload Introduces Risk and Inefficiencies

Investigating and validating a single alert can require a multitude of tools just to decide whether it should be escalated. According to a report by CRITICALSTART, incident responders spend an average of 2.5 to 5 hours each day investigating alerts.

Unable to cope with the endless stream of alerts, security teams tune specific alert features to reduce volume. But this often ratchets up risk, as they may elect to ignore certain categories of alerts or turn off high-volume alert features altogether.

As a result, one of the challenges development teams face in managing alert fatigue in application security (AppSec) is striking the right balance between liberal controls that can flood systems with alerts and more stringent alert criteria that can leave teams exposed to false negatives.

While false positives may be annoying and burden teams with additional triage work, false negatives tend to be more dangerous: application functionality that is tested gets erroneously flagged as “passing” when in reality it contains one or more vulnerabilities. For AppSec teams, the objective is the ability to detect valid threats through quality alerts, supported by the context and evidence needed to investigate them accurately and continuously.

Reducing Alert Fatigue with Instrumented AppSec

Fortunately, technologies like instrumentation help automate security testing to reduce false positives and false negatives.

Instrumentation is the ability to record and measure information within an application without changing the application itself. Current “flavors” of security instrumentation include the following technologies:

  • Software Composition Analysis (SCA). SCA inventories and assesses all open-source libraries in an application.
  • Runtime Application Self-Protection (RASP). RASP monitors threats and attacks while preventing vulnerabilities from being exploited.
  • Interactive Application Security Testing (IAST). IAST monitors applications for novel vulnerabilities in custom code and libraries.

By instrumenting an application with passive sensors, teams gain greater access to information about the application and its execution, delivering unprecedented speed and accuracy in identifying vulnerabilities. This approach to modern AppSec produces the intelligence and evidence necessary to detect vulnerabilities with virtually no false positives and no false negatives.
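
Conceptually, a passive sensor can be as simple as wrapping an existing function at runtime so that each call is observed while the application’s own source stays untouched. The sketch below uses a hypothetical run_query function and a plain in-memory event list to illustrate the idea only; it is not how any vendor’s agent is implemented.

```python
# Minimal sketch of the instrumentation idea: record calls to an existing
# function by wrapping it at runtime, without editing the application's source.
import functools

security_events = []  # where the sensor records what it observes

def sensor(func):
    """Wrap a function so each call is recorded before running unchanged."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        security_events.append({"call": func.__name__, "args": args, "kwargs": kwargs})
        return func(*args, **kwargs)  # original behavior is untouched
    return wrapper

# --- pretend this is application code we did not modify ---
def run_query(sql):
    return f"executed: {sql}"

# Attach the sensor at runtime instead of changing the application source.
run_query = sensor(run_query)

print(run_query("SELECT * FROM users WHERE id = 1"))
print(security_events)  # the sensor saw the call and its arguments
```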

At the end of the day, your security tools need to give you fewer, but more significant, alerts that contain the right intelligence to best inform your security and development teams. With technologies that use instrumentation, like SCA, IAST, and RASP, you can achieve high accuracy thanks to visibility into an application and its runtime environment as code loads into memory, providing enhanced security logging for analytics.

Patrick Spencer

Patrick Spencer (Ph.D.) leads the content marketing and PR/Communications team at Contrast. He has nearly a decade and a half of experience in various senior marketing roles within the cybersecurity sector and is the recipient of numerous corporate and industry awards. After leaving the corporate world to start his own agency several years ago, Patrick joined Fortinet to lead content marketing and research. His many duties included serving as the editor in chief for The CISO Collective. Patrick’s roots in cybersecurity go back to Symantec, where he spent nearly a decade in senior marketing roles of increasing scope and responsibility. While at Symantec, he served as the editor in chief for CIO Digest, an award-winning digital and print publication containing strategies and insights for the technology executive. In addition to these roles, Patrick has also served in various senior- and executive-level marketing capacities at several SaaS-based marketing companies.