
AI's speed paradox: Security is being left behind

AI-powered code generation is moving so fast that security defenses can't keep up, leaving new vulnerabilities in its wake. That velocity is outstripping traditional security measures and demands immediate, radical changes to how organizations manage risk.

Securing the rush: Why runtime protection is now critical for AI-generated code

While AI offers significant potential to enhance coding efficiency, the resulting increase in development speed and release velocity introduces new Application Security (AppSec) challenges that demand a shift in how organizations approach risk management.

Organizations simply cannot rely on AI to consistently produce secure code. Even where AI improves code security in some respects, the sheer speed of deployment often offsets those gains, potentially leading to a net increase in risk. AI-generated code may still contain vulnerabilities, because LLMs can inadvertently introduce flaws or misinterpret security best practices. And the pace of deployment makes it harder for traditional security measures to keep up, widening the window of exposure for attackers, who can often exploit a vulnerability within three days. In stark contrast, software companies can take up to 63 days to patch one.
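
To make that risk concrete, here is a minimal, hypothetical illustration (in Python, using the standard sqlite3 module; the function names are invented) of the kind of flaw an LLM can introduce when it misapplies security best practices. The first version builds a SQL query by string interpolation, which is injectable; the second binds the value as a parameter.

```python
import sqlite3

# Hypothetical example: the kind of subtle flaw an LLM can introduce.
# The insecure version interpolates user input directly into SQL,
# allowing injection (e.g., username = "'; DROP TABLE users; --").
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# The safer version passes the value as a bound parameter, so the
# database driver never treats the input as SQL syntax.
def get_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```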

Traditional AppSec approaches, primarily focused on pre-production vulnerability scanning such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), were already inadequate before the proliferation of AI-generated code. That is clear from the growing vulnerability backlog: As of late March and early April 2025, the National Vulnerability Database (NVD) had a backlog of about 25,060 unprocessed Common Vulnerabilities and Exposures (CVEs) awaiting analysis, up from around 17,000 in August 2024. The NVD says its processing rate is unchanged year over year, but submissions increased 32% in 2024, a volume the prior processing rate simply cannot absorb. The backlog is still growing and will continue to grow, and pre-production code scanning tools are of little help.

Why traditional security can't keep up

These tools often struggle to identify runtime-specific vulnerabilities or complex interactions that emerge only in production environments. Furthermore, they frequently lack the crucial context needed to prioritize vulnerabilities effectively — information like whether a vulnerability is accessible or actively being attacked — leading to alert fatigue and wasted effort for security teams. This is a critical gap given that up to 76% of apps have security flaws.
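
As a rough sketch of why that runtime context matters, consider the hypothetical triage logic below. The Finding fields and the example data are invented for illustration: findings on routes that are not reachable in production are deprioritized, and those seeing real attack traffic rise to the top, which is precisely the information pre-production scanners cannot provide.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields; real tools expose richer metadata.
    cve_id: str
    route: str               # endpoint where the flaw lives
    reachable_in_prod: bool  # is the route actually exposed at runtime?
    attack_events_7d: int    # observed attack traffic against it

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Surface findings that are reachable and under active attack first."""
    actionable = [f for f in findings if f.reachable_in_prod]
    # Sort by observed attack volume so actively targeted flaws rise to the top.
    return sorted(actionable, key=lambda f: f.attack_events_7d, reverse=True)

findings = [
    Finding("CVE-2025-0001", "/admin/export", reachable_in_prod=False, attack_events_7d=0),
    Finding("CVE-2025-0002", "/api/login", reachable_in_prod=True, attack_events_7d=42),
    Finding("CVE-2025-0003", "/api/search", reachable_in_prod=True, attack_events_7d=3),
]
for f in prioritize(findings):
    print(f.cve_id, f.route, f.attack_events_7d)
```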

This combination of factors — the limitations of existing security tools and the increased coding speed driven by AI — creates a significant gap in visibility and protection, leaving organizations exposed to attacks on their web applications and Application Programming Interfaces (APIs). A recent joint Cybersecurity Information Sheet from the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the Federal Bureau of Investigation (FBI), and international partners — AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems — highlights the critical role of data security in AI systems, outlining potential risks arising from data security and integrity issues across all phases of the AI life cycle. While this guidance addresses important data-related risks like data supply chain issues, maliciously modified data and data drift, securing the code itself — especially AI-generated code — against attacks at runtime is equally paramount.

Organizations urgently need a new approach to AppSec that provides continuous security in production environments where attacks actually occur. They need to empower Security Operations Center (SOC) teams to detect and respond to application attacks in real time, minimizing dwell time and potential damage.

Contrast ADR: Empowering your SOC against AI threats

Contrast Application Detection and Response (ADR) is uniquely positioned to address these critical challenges. It provides deep runtime visibility into applications, regardless of whether the code was written by a human or generated by AI.

Contrast ADR empowers SOC teams to:

  • Detect attacks on vulnerabilities introduced by AI-generated code. By operating at runtime, it sees exactly how the application behaves with real-world data and user interactions.
  • Detect active attacks targeting applications (including those built with AI-generated code) at the earliest stages, whether or not the underlying vulnerability is known. It identifies and blocks malicious activity targeting AI-generated code at runtime, instantly deploying compensating controls to prevent exploitation, and even protects against zero-days originating from AI-assisted development flaws. (A simplified sketch of this runtime-detection pattern follows the list.)
  • Focus on real threats, not theoretical risk, by identifying vulnerabilities that are actively being exploited, improving SOC efficiency. This context-driven prioritization reduces alert fatigue by giving SOC teams actionable intelligence on exploitable vulnerabilities within the application context.
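
For readers who want a feel for the underlying idea, the sketch below shows the general runtime-protection pattern in highly simplified form: instrument a sensitive operation, observe the real values flowing into it, and block inputs that look like an attack. This is a generic illustration only, not Contrast ADR's implementation, and the pattern matching shown here is far cruder than what a real runtime defense does.

```python
import re
from functools import wraps

# Generic, hypothetical illustration of the runtime-protection pattern:
# wrap a sensitive sink, inspect real inputs as they arrive, and block
# obvious attack payloads. Real products use far deeper instrumentation.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bDROP\b)", re.IGNORECASE)

class AttackBlocked(Exception):
    pass

def guard_sql_sink(func):
    @wraps(func)
    def wrapper(user_input: str, *args, **kwargs):
        if SQLI_PATTERN.search(user_input):
            # In a real system this event would also be reported to the SOC.
            raise AttackBlocked(f"Blocked suspicious input: {user_input!r}")
        return func(user_input, *args, **kwargs)
    return wrapper

@guard_sql_sink
def run_lookup(username: str):
    # Stand-in for the application's database call.
    return f"looked up {username}"

print(run_lookup("alice"))        # benign input passes through
# run_lookup("' OR 1=1 --")       # would raise AttackBlocked
```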

By providing this level of runtime visibility and protection, Contrast ADR empowers your SOC to respond effectively to attacks on applications and APIs, regardless of the code’s origin, and equips them with precise, code-level context to rapidly inform developers and streamline remediation.

In this era of ultra-fast, AI-powered development, securing your applications is no longer optional. Runtime protection with Contrast ADR is not just an upgrade; it is a necessity. Without it, you are leaving your systems exposed to rapidly evolving AI-driven threats.

Book a demo

Contrast Marketing
