Video

The AI-Driven Shift to Runtime AppSec

How ADR and runtime security are transforming AppSec beyond shift-left


In this special episode of the Resilient Cyber Show, host Chris Hughes sits down with two of the most respected leaders in modern application security: Jeff Williams, Contrast Security Founder & inventor of the OWASP Top 10, and Naomi Buckwalter, who leads Product Security for Contrast. Together, they unpack the rapid evolution of Application Detection & Response (ADR) and explain why traditional AppSec approaches like shift-left, SAST, and WAFs are no longer enough to keep up with today’s real-world threats.

From AI-accelerated exploit development to the dramatic rise in application-layer attacks, Jeff and Naomi explain what organizations must do to gain true runtime visibility, cut through false positives, empower SOC teams and focus on the small percentage of vulnerabilities that genuinely matter. If you’re responsible for securing modern applications, understanding runtime security is essential.

What to expect in this 35-minute conversation:

  • Why traditional AppSec tools and shift-left strategies create noise, backlog, and blind spots

  • How ADR delivers real runtime context to pinpoint the attacks and vulnerabilities that actually matter

  • What SOC teams gain from true application-layer visibility and high-fidelity signals

  • How AI is accelerating both software development and exploit creation—and what defenders must do in response

  • Practical guidance for reducing false positives, prioritizing real risk, and strengthening collaboration across Dev, Sec, and Ops

 

About the speakers

Chris Hughes

Chris brings nearly 20 years of IT and cybersecurity experience to his role as president and CEO of Aquia. As a United States Air Force veteran and former civil servant in the U.S. Navy and the General Services Administration’s FedRAMP program, Chris is passionate about making a lasting impact on his country and our global community at large.

In addition to his public service, Chris spent several years as a consultant within the private sector and currently serves as an adjunct professor for cybersecurity master’s programs at Capitol Technology University and the University of Maryland Global Campus. Chris participates in industry working groups, such as the Cloud Security Alliance’s Incident Response and SaaS Security Working Group, and serves as the Membership Chair for Cloud Security Alliance D.C. He is the co-host of the Resilient Cyber Podcast.

Chris holds various industry certifications, such as the CISSP and CCSP from ISC2, as well as both the Amazon Web Services (AWS) and Azure security certifications and the Cloud Security Alliance's Certificate of Cloud Auditing Knowledge (CCAK). He holds a B.S. in Information Systems, an M.S. in Cybersecurity, and an MBA. He regularly consults with IT and cybersecurity leaders from various industries to assist their organizations with their digital transformation journeys, while keeping security a core component of that transformation.

Chris is co-author of the books, “Software Transparency: Supply Chain Security in an Era of a Software-Driven Society,” and “Effective Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem,” — both published by Wiley. He has also contributed many other thought leadership pieces on software supply chain security and has presented on the topic at a variety of industry conferences. 

Jeff Williams

Jeff brings more than 20 years of security leadership experience as Founder and Chief Technology Officer of Contrast. In 2002, Jeff co-founded and became Chief Executive Officer of Aspect Security, a successful and innovative consulting company focused on application security. Jeff is also a founder and major contributor to OWASP, where he served as the Chair of the OWASP Board for eight years and created the OWASP Top 10, OWASP Enterprise Security API, OWASP Application Security Verification Standard, XSS Prevention Cheat Sheet, and many other widely adopted free and open projects. Jeff has a Bachelor’s from the University of Virginia, a Master’s from George Mason, and a law degree from Georgetown.

Naomi Buckwalter

Naomi Buckwalter, CISSP, CISM, is the Senior Director of Product Security for Contrast Security and author of the LinkedIn course "Training today for tomorrow's solutions - Building the Next Generation of Cybersecurity Professionals." She is also the founder and Executive Director of the Cybersecurity Gatebreakers Foundation, a nonprofit dedicated to closing the demand gap in cybersecurity hiring. She has over 20 years of experience in IT and security and has held roles in software engineering, security architecture, security engineering, and security executive leadership. As a cybersecurity career adviser and mentor for people around the world, her passion is helping people, particularly women, get into cybersecurity. Naomi has two master's degrees from Villanova University and a Bachelor of Engineering from Stevens Institute of Technology.

Full video transcript

Chris: 
Thank you for joining the Resilient Cyber Show. My name is Chris Hughes, and today I'm joined by Jeff Williams and Naomi Buckwalter from Contrast. How are you both doing?

Jeff:

Great. Hi, Chris.

Naomi: 

Hey, what's up?

Chris:

I'm excited to have you on. We had a bit of a false start a couple of weeks back with some technical difficulties. You know, it happens to even the best of us in cybersecurity sometimes, but we are here now.

Jeff:

Just good security, that's all.

Chris:

Yeah, yeah. Good endpoint security and good security in general. So I've known each of you for several years, honestly, and been connected for a long time with each of you. But for folks that don't know Jeff and Naomi and Contrast, can you tell us a bit about yourselves and the team, maybe starting with you, Naomi?

Naomi:

Hey guys. I'm Naomi Buckwalter. I'm the Senior Director of Product Security at Contrast. I've been at Contrast for three years, but I've been a fan for much, much longer. I was actually a customer back around 2019, and I've been a follower of Contrast since you guys got started as a company. So I've always been a fan. I've done AppSec since my start in IT. Actually, I was a developer for quite a number of years, moved into security, and it's been security ever since. I've done security in all different domains, but AppSec's always been my favorite.

Chris: 

Awesome. How about you, Jeff?

Jeff:

Yeah, I guess I started hacking in high school, so like the mid-eighties, and I didn't intend to get into software or security, but it came back around. I started working on a high-assurance project for the government and really got into security then. And then the internet happened. I started doing a lot of security consulting, helped to start OWASP, and did a lot of consulting for really big companies, including Naomi's former company. And then I had this epiphany about runtime security. So we started Contrast, and here we are.

Chris:

Yeah, the rest is history. I was kind of laughing on mute there because anytime we talk about AppSec, like, you know, your name is kind of synonymous with it. You've been around since I think the phrase has existed.

And you've seen the many evolutions of application security, and, as you talked about, now the emphasis around runtime, and we've seen this rise of the phrase application detection and response, or ADR, as people are calling it. Some are hearing this and I'm sure they're thinking, is shift left no longer a priority? Why do you think ADR is catching on and runtime is becoming such an emphasis in the community, and where did shift left go wrong?

Jeff: 

Yeah, there have been a lot of different iterations of AppSec. You know, there was like a pen test phase and then there was a bunch of scanning phases and a static phase and a maturity model phase. And then there was DevSecOps and then ShiftLeft. And I don't know, I feel like a lot of people have tried to take shortcuts with AppSec and, you know, try to do it on the cheap and just, you know, barely get by.

And the reason that runtime security works and is different than those other things is it has real runtime context. So ADR, application detection and response, puts sensors in production to watch how applications are actually behaving from inside the apps themselves. And when you get that kind of real context, you can focus on the real attacks, the real vulnerabilities that are not just reachable, but actually being reached in production. And I think that's where people need to start. We talk about a risk-first approach, but if you're just running static tools and dealing with the highs and criticals, you really don't have any idea what's real. That could be completely wrong, because you don't have enough context to tell whether they're actually high and critical, or that all your lows and mediums aren't high and critical.

Chris:

Yeah. You and I actually had a deep dive on this about shift left. I can't remember if it was live or in written form in an article, but I feel like shift left intuitively makes sense. Then as you start digging into it, even some of the studies that get cited in terms of the costs and cost reductions of fixing things earlier in the SDLC are debatable or questionable at best. And then, as you said, there's been a pivot of focus to exploitability or reachability and things of that nature. But again, that's in a static context. It's not in the runtime, production environment that's tied to real threats that are being executed. I think that's contributing to the rise of this phrase.

Jeff:

That's probably the biggest problem with shift left: it seems like it ought to work. Sure, and the earth is the center of the universe, and stuff like that. It's easy to believe if you don't really understand what's going on. And I've just seen company after company put big amounts of dollars behind shift-left projects, and then they just end up generating a huge backlog of vulnerabilities that nobody triages, and they don't know which apps they belong to in production, and it just doesn't work.

Chris:

Yeah. Yeah. Agreed. And it's kind of become us in security beating developers over the head with lengthy spreadsheets of findings, you know, before they can go to production. And then we wonder why they hate us and don't want to work with us and don't want to collaborate with us. Well, there is a study from Harvard and, I think, an open source foundation that said developers find security a "soul-withering chore," is what they called it. So it doesn't sound like they necessarily love security.

I was going to ask you, Naomi, as a practitioner, a user, and now someone who focuses on product at Contrast, where have you seen shift left go off the rails or become problematic or challenging for organizations?

Naomi:

No one's ever said they straight up hate us. But yeah, I mean, the undertone is there, I think. And I could see that. Where have we gone wrong? Well, it's all about context, like Jeff mentioned.

If your tools don't know what the heck is actually happening under the hood, of course they're going to pop a message or an alert every time and say, this is a problem. And it's the same on the practitioner side. Sometimes I don't see all the layers of controls that go around something. So even if I say there might be a broken access control somewhere in the code, someone else might say, but we have this other control going on and other layers happening, and I'm like, okay, I didn't know about that. Right? So there's a lot of context missing in a lot of the conversations that you have with developers, and the tools are missing the context too.

It's really all about seeing the bigger picture. So when we say where Shift Left has failed, it really is just a communication issue.

It's like the tools don't have the context. The developers don't have the context of what security is looking for. Security doesn't have the context to what developers know. And there's always this passing of the baton thinking someone else will know what to do. Meanwhile, the baton gets dropped left and right. And gosh, everyone's just running in circles.

Jeff: 

I think there's this real danger that we're going to fall into the same trap again with AI. People think AI is going to solve the problem, and there are companies now launching AI-based application security products, but it still comes down to context. And if all you can see is the source code, you don't have any more context. Ultimately, it comes down to: what data do you have that's real?

If you get the right data, then maybe AI could make some contributions, but I don't think it's just going to automatically fix the false positive problem. In fact, I think probably the opposite.

Chris:

I'm always reading reports from vendors and the industry and research and things like that. And there was one recently on AI and AppSec, and it seems like there's the Spider-Man meme where developers are pointing at security, security is pointing at developers, and everyone's pointing at the vendor over who's actually responsible for this AI-generated code.

So I do fear that we're kind of heading in the same direction. And you both talked about context and tools and tech stacks. And I think part of that is we've seen the rise of application exploits as a primary attack vector, if you look at M-Trends or the DBIR, the leading reports. But historically, when we look at SecOps or the SOC, for example, they don't have application-level context or visibility.

It's more endpoint and infrastructure centric, that kind of thing. And when we look at ADR, for example, what do you think the implications for the future of the SOC and SIEM and so on look like? When it comes to ADR and getting that application-level context, how's that going to change things?

Naomi:

Well, how are we seeing ADR change the modern SOC? So SOC teams typically don't even care about the application. I know that sounds harsh, but when they get application-layer traffic, they're like, this looks legitimate. So even the things they put in the web application firewalls, you know, all the different rules that say we're going to block X attack and Y attack.

I will tell you this: when I was in the SOC, I put in a WAF and I was deeply terrified of blocking legitimate traffic. So what did I do on the WAF? I let everything through except for, like, the most obvious attacks, right?

So even people with a security mindset like I had at that point, I was still deathly afraid of breaking anything in the business. And what we're seeing with the SOC today is more of the same, but now, with ADR, they can get more insight. So they could be more granular in their rules if they wanted to.

With ADR, they can say, yes, this is definitely an issue, we need to block this. Or, hey, we actually see this attack happening, let's block this. So they actually have more insight and more true positives to work on. They have more confidence in the rules. And as a previous security operations person, I can say I wish I had something like this. Because instead of just relying on a WAF doing a thing, now with ADR you've got that insight. You've got that context with the application, you know it's a problem, go ahead and block it. We're okay. Everything's good.

Jeff:

Yeah, I think that's right. I think the SOC can easily handle application security issues. The things that you do to respond to an application security incident are not that different from what you do to respond to incidents at other levels. You apply a patch, you block an IP address, you turn on a rule. I mean, it's not that complicated.

AppSec historically has just done a terrible job of communicating with the SOC. We don't use the right language. We don't call things incidents; we call them vulnerabilities. We talk about risk rating and not, is this exploitable? Do we give them a runbook? No, we tell them how to remediate. It's just the wrong language that we use.

I've spent the last year and a half or two years working on that, talking to SOCs, trying to figure out how to communicate with them, tell them when there's an incident and what they need to do to solve it. And if you give the SOC really good information about a real attack that's obviously not a false positive, they're perfectly happy to deal with that. And they can. It's just that we burned a lot of bridges by giving them WAFs and saying, hey, you're on your own.

I've talked to hundreds of organizations over the past few years about how they use their WAF. And the answer in almost all of them is it's there.

It's probably not blocking. It may not even really have an effective rule set. And the data that it does collect goes into their SIEM and nobody looks at it. I mean, it's just pure compliance at this point, which is disappointing, because application incidents are serious. Your apps are the closest thing to your data. And so we need the SOC to engage, and it's our job to bring them into the fold. You know, DevSecOps, like a third of it is Ops, and nobody talks about that. But we've got to bridge that gulf, because that's where the action is. That's where it's fun for a CISO.

Chris:

Yeah. So funny you said the DevSecOps piece, because as you said, I was thinking, for a long time it was like DevSecOps, but the Sec is silent, was kind of a joke people would make. But when it came to Ops, runtime, production visibility in terms of application security, that piece was missing from the conversation. And I want to ask you about the WAF you mentioned there. Some may look at the WAF and say, isn't this good enough? We're protecting the application layer. But you all have talked about this, you've researched it, and as you said, it's kind of a compliance-driven requirement to some extent. Do you have a WAF? Yeah, of course we have one. Okay. But is that good enough? And as you're talking about, maybe it helps with DDoS or some other things, but attackers have figured out ways to bypass it, or teams aren't fully implementing it, things of that nature. What does the future of the WAF look like in this ADR paradigm? Maybe start with you on that one, Jeff.

Jeff:

Yeah. So we did a study, and we wanted to know, is the WAF actually able to stop application-layer attacks? And so first we tested both WAFs and EDR products, by the way, because people say, I've got a WAF, I've got an EDR, I don't need to worry about apps, I'll catch it.

And the answer is they're not going to.

And in fact, if you run an attack directly against an operating system, then something in the infrastructure is probably going to catch it. The EDR is probably going to see it and go, that's ransomware, I'm going to stop it.

But if you run that attack through the application, then EDR doesn't see it. It's like money laundering. The attack went through the app, and now the app is trusted. So the operating system trusts it and you can do whatever you want. So we showed pretty effectively that, for a broad range of attacks, it's invisible to the WAF and it's invisible to EDR when you go through the application layer.

Chris:

Yeah, I was gonna ask you, Naomi, we talked about the SOC and how we kind of burned some political capital or trust with the SOC and SecOps folks in the past. From the Contrast perspective, the product perspective, it's a bit of a cultural shift, getting the SOC and those folks engaged in application-level incidents. What does that look like as you guys are going out there engaging with customers, working with the product team on your end, getting the right telemetry to them? Is it folks using Contrast directly? Is it Contrast funneling some of that telemetry to the SOC or the SIEM? What does that look like as you go out and implement it?

Naomi:

Yeah, I actually went to the Splunk Conf this year over in Boston, and I spoke to a number of folks who really didn't know anything about their application tech stack. They didn't know anything about it. They just know network traffic and where that goes, right? So when I showed them a demo of ADR, their eyes just lit up. You could see...

like the heavens opening up and the angels singing, and you could just see, wow, I had no idea. This is so cool. This happens on my application services? This is great. And so when we showed them this, they wanted more. So the way that we move forward here is to show the SOC what they're missing. It's like, hey, it's not just network traffic. This actually goes somewhere. This does something on your applications. Here's what it is. And then they say, show me this rule, or show me this. What happens here?

And then you start answering these questions, and then more of the heavens open up and more of the angels start singing. And they say, this is what we've been missing this whole entire time. And in these conversations, it's not just protecting your applications. It's also protecting the availability of your servers too. Boom. I want to keep the uptime running at five nines, or whatever it is these days. I want that. Okay, well, let's talk about that. Get them talking about what they care about and then tack on the security, because everyone cares about security. At least they say they do.

In general, they're like, we want the business to make money. We want availability. We want all these things. And by the way, we want to protect our data and our business.

Jeff:

It's actually, from a cultural perspective, exactly the same problem as building trust with developers. We can't give them a huge list of nonsense and then expect them to love AppSec. And so at Contrast, we're trying to focus on the 5% or less of issues that are real.

And it's both in production and in development. And in fact, it starts in production, where you see, hey, this vulnerability that we found, let's add context to it. Let's enrich that finding so that we know exactly what's going on. Is it attached to a critical application? Has it been discovered by attackers? Is it currently being exploited by attackers? All the things that make it real in production.

If we go to development and operations with a vulnerability that's critical, under attack in production, and, you know, other factors that we can fill in, they'll take it seriously. They love that. But when we go to them with, I don't know, a log injection problem that's pretty much unexploitable in any environment, and even if you did exploit it, it wouldn't cause any harm, that's just distracting everybody and making folks feel like you're wasting their time. Nobody wants that. So we've got to fix that paradigm.

Running a static tool in early development, shifted way left, generating a ton of false positives and then making developers fix all those things that aren't real, is not going to work.

It's going to undermine your culture and you'll end up with much worse security. I would much rather people started with the list of stuff in production that's on fire and fixed those problems. And then, how far down that list can you go? That's the way we should work. Not, you know, build a giant pile and randomly choose stuff on the left to fix.

Chris:

Yeah. I feel like to some extent we're admiring the problem, chasing this utopia-type scenario where we're going to fix everything from the onset. But the systems are already running in production. There's already problems there. There are already threats being conducted against these applications. We need to start there. And as you point out, all the questions that you were listing really get down to: does this matter to me?

Should I care about this? That's what you're getting down to there. And I think, you know, we hear all the time about the cognitive overload of people in the SOC or SecOps, or AppSec practitioners being exponentially outnumbered by developers. You're getting down to: what should I really care about? What really matters? And that's what we're getting to.

Jeff:

I don't think people are going to give up running static analysis tools early in development. And I think that's fine, if developers want to run those and make their code a little better up front. But I feel like the really important feedback loop is: get context from production, real attacks, real vulnerabilities, and then bring that all the way left and make that feedback loop really fast. If we can do that and developers can fix the real problems fast, we'll be in a way better situation than taking, you know, the industry average, I don't know what it is, like 190 days to fix a vulnerability. Those timelines are crazy. It's a massive window of exposure.

So I want to change the formula. Let's fix the 5% of things that are really critical and do those really fast. And for developers, if you have best practices and guardrails and all that to try to minimize those vulnerabilities early, that's fine. But you're really never going to get anywhere close to everything.

Chris:

Yeah, I feel like the rise of ADR and this emphasis on runtime is kind of a mea culpa from the community, or an acknowledgement that we're not going to fix everything before it goes to production. We need to accept that, but we need to observe what is happening in production, and then, as you said, correct those issues and get back to the team: how can we address things that are truly being exploited or attempted to be attacked, and so on. And you actually talked about something I wanted to bring up. You all have a great report called Software Under Siege, and you talk about the disadvantage that defenders are fighting with. For example, the report talks about five days from vulnerability disclosure to exploit development.

48 minutes from initial exploit to lateral movement. I've even seen some using AI now to develop exploits literally within, I think, minutes or hours of a CVE being published, for trivial amounts, a few dollars, for example, per exploit.

You know, how does ADR equip defenders to kind of mitigate and, you know, bolster their advantage when it comes to that challenge? I'd be curious, you know, what you all think about that when we see this kind of asymmetry between attackers and defenders. Maybe start with you on that one, Naomi. 

Naomi:

With ADR, of course, you get protection for classes of vulnerabilities, not just specific CVEs. So yeah, you're right that attackers are using AI to go through old GitHub repos and ask, what kind of vulnerabilities exist in this code? And then they just go to the most popular packages and start exploiting things. Yes, that's happening. But ADR is right there in production, running next to your code, saying, does this look like a type of vulnerability that I protect against? And if it is? Okay, great, block. So when we're talking about specific CVEs, it almost doesn't matter who's creating them or where they're coming from. It's the same kind of attack against running code that we've always had. It doesn't matter if it was written faster using AI or found with AI. It's still an exploit. It's still a vulnerability. And ADR is there to protect against it. Boom, it's right there.

Jeff:

I think the trick is protecting against attacks in the right place. So just so everybody understands when ADR detects an attack, it's not like a WAF.

It's not just looking at the HTTP request and going, there's some funny characters in there. So it's probably an attack and I'll block it. That's how you get false positives. And that's how you break applications.

ADR watches that request as it gets processed by the code. It sees a bunch of untrusted data come in and it watches it flow through the application. And if any of that untrusted data makes it into a SQL query and changes the meaning of that SQL query, that's when ADR flags it and says, hey, that's an attack because that should never happen. Like no user data should ever change the meaning of your queries.

And so we're not intervening at the perimeter. We're intervening right before that query goes to the database. We verify that this thing was not tampered with, and if it was, we block it. So it's a completely different story in terms of false positives than what you get with a WAF.
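To make that idea concrete, here is a minimal sketch of the concept Jeff describes, not Contrast's actual implementation: treat user input as untrusted, and flag a request only when that input changes the token structure of the SQL query it lands in. The class name, method names, and the token-count heuristic are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the runtime idea described above: untrusted input that
// stays inside a single SQL literal is harmless; input that adds or removes
// tokens has changed the meaning of the query and is treated as a viable attack.
public class RuntimeSqlCheck {

    // Tokenize a SQL string roughly: quoted literals count as one token,
    // words and individual symbols count as one token each.
    static List<String> tokenize(String sql) {
        List<String> tokens = new ArrayList<>();
        Matcher m = Pattern.compile("'(?:[^']|'')*'|\\w+|[^\\s\\w]").matcher(sql);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    // Build the query with the untrusted value and with a harmless placeholder.
    // If the token structures differ, the value escaped its literal and altered
    // the meaning of the query.
    static boolean changesQueryStructure(String queryTemplate, String untrusted) {
        List<String> withInput = tokenize(queryTemplate.replace("{}", untrusted));
        List<String> baseline  = tokenize(queryTemplate.replace("{}", "x"));
        return withInput.size() != baseline.size();
    }

    public static void main(String[] args) {
        String template = "SELECT * FROM users WHERE name = '{}'";

        // Ordinary input stays a single literal: not flagged.
        System.out.println(changesQueryStructure(template, "alice"));       // false

        // Classic injection adds OR '1'='1', so extra tokens appear: flagged.
        System.out.println(changesQueryStructure(template, "' OR '1'='1")); // true
    }
}
```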

Naomi:

And I think, Jeff, you'd want to be clear that it's not just SQL that ADR protects against. That's a great point. I mean, yeah, it's expression language injection, XXE, file traversal, command injection.

OS injection, yeah. It's really good at injection kinds of problems, because it watches the data as it flows through the application, and injection problems are exactly the ones that static tools are really bad at, because of the data flows.

Chris:

Yeah. As I said, I mentioned that Software Under Siege report. You all provide some excellent insights there in terms of the probes and viable attacks that you're observing, and it goes into those attack types that you guys just talked about there.

Again, a plug for that report, it's really excellent in terms of real-world insights observed across running applications.

Jeff:

I think differentiating between probes and what we call real attacks, or viable attacks, is really important. More than 99% of the attacks are just probes. They never connect with the vulnerability that they're trying to exploit. So, for the most part, you don't have to worry about them. They're not doing any damage. We want you to focus on the viable attacks, the ones that actually reach the type of vulnerability that they were targeting. That's what I mean when I say runtime techniques can focus on the small percent of things that really matters. That's what I'm talking about.

Naomi:

And as a practitioner, I would say the probes aren't useless. I actually do like looking at them. You can get a sense of what the attackers are trying to do. So if you see an attacker, and by the way, it makes it really easy to just block them.

You can watch them for a little bit and see what they're doing. It's hilarious. They're trying a whole bunch of different things like PHP attacks. You don't have PHP here, right? But it's hilarious.

So they'll try different things and you'll see the evolution of what they're trying to do. And it's fascinating to see what attacker will try.

They'll try unsafe deserialization for one or they'll try OS command injection for another. And they'll just keep trying different things.

A little bit here, a little bit there, trying to go under the radar. But we see all the probes. You see all your network traffic categorized by the type of attack that they're trying to do, and it makes it so much easier to respond in case there is an issue underlying under the hood. If there is an exploitable vulnerability, ADR is there to help.

Chris:

Yeah, I think it's really excellent that you both framed it like that, because on one hand those probes are kind of a signal, right? It's threat-informed defense. You can see what they're trying to do and get a sense of what the attackers are attempting. But the viable attacks are what I really need to focus on right now, versus what I should be thinking about for the future, the things they're trying at the moment that may not be viable. So that's a really great way to discern between the two.

And I wanted to ask you, we brought up AI, of course, no conversation right now would be complete without a mention of AI.

But when you look at the landscape around AppSec, and I feel like in security more broadly, security is typically a laggard, a late adopter: cloud, SaaS, mobile, you name it. We're always very risk averse, and we tend to be late to adoption.

But AI seems to be a little bit different, where I feel like people are a little more willing to lean in and at least experiment and explore. For example, there have been some promising things that Jeff talked about, where people are trying to combine SAST and LLMs to maybe drive down false positives, or people on the SOC and SecOps side are using LLMs to produce more comprehensive investigations and reports of incidents, things of that nature. When you look at the landscape of AppSec and AI, where and how do you see it playing a role as we move forward?

Both from your individual perspectives as well as from the Contrast side. Either of you can jump in first on that one.

Jeff:

My take is that AI is only going to be as good as the underlying data that you give it. So I'll just come back to that point I mentioned earlier.

What we do is we have sensors across your application landscape, APIs, web apps, the infrastructure and so on, and we collect all that telemetry and we build something we call the Contrast graph. It's really a model of your application, your entire application layer. It's a context-rich graph, and when we have that, we can expose AI to it and get some really interesting output. So we use it to generate fixes for vulnerabilities right in the repo so that developers don't have to futz around with it. We have all this context, which allows us to generate really good fixes. We know all the libraries that you're using in your application.

We know what connections are made. We know what other defenses are in place on that route. We even know how you did defenses in other parts of the application so that you can follow the same pattern instead of just applying some generic fix. So I think that's an interesting piece of the puzzle.

Naomi:

I would say I am concerned about one thing when it comes to the future of AppSec and AI. I'm hearing a lot, just reading articles, that companies are using AI to make decisions for their business, right? We know this. But even within their products: an insurance company, for example, and they've probably done this, will use AI to decide who gets benefits, how much the premiums cost, who gets denied, all these things.

But imagine that at scale. Now you've got AI in all these different types of products, all these different companies. And now AI, trained on this weird data, and where are those models even coming from, is making decisions as if the product had made them. So instead of writing business logic in the product, where a coder will actually say, if this, then that, do this thing, AI is now doing those things. It's probably going to be harder for AppSec people, and operations people in general, to know if that's right.

It'll be like, if this is normal user behavior, if this is normal application behavior, go ahead and accept it. Otherwise, AI is going off the rails a little bit, and how do we actually stop this thing? So I think in the future, AI and AppSec are going to have a bit of a change in relationship. It's, how do we make sure the application doesn't go off the rails because of AI?

Chris:

Yeah, it's an interesting problem, because if you bring it back to software development and cybersecurity, we're in the age of zero trust, quote unquote zero trust.

Studies and research are showing that developers are just implicitly trusting AI-generated code, for example, without any validation or rigor. And that can happen in AppSec too: when it comes to determining whether something's a false positive, or whether a finding needs to be investigated, or maybe blocking or allowing something, those same challenges and risks are inherent when it comes to us using AI.

I think that's important to Jeff's point: the quality of the data is what's going to inform those decisions, whether semi- or fully autonomous activities. So we've run the gamut of everything from shift left to traditional SOC and SIEM tooling, gaps around runtime visibility and context, and the importance of ADR. I'm curious, from both of you, any closing thoughts on ADR and the future of AppSec, and words you want to leave practitioners with as we wrap up? Maybe start with you on this one, Naomi.

Naomi:

Well, let's see. I mean, AI is here to stay. We all know this. And I don't think we should be scared of it, though. I think we should embrace it.

And as an AppSec person, if you're not next to your developers learning about what they're doing and how they're using AI, then you're going to be missing a huge piece of the puzzle.

So get your head out of the sand, start studying, keep up to date with the latest and greatest with DevOps.

Jeff:

Yeah, that's a good point. I kind of think of AppSec as always floating up the application stack. In the early days it was kind of raw HTTP-level stuff, and as frameworks evolved and new technologies came out, AppSec keeps kind of floating up as the stuff below it gets more standardized. I wish we'd done better at stamping out SQL injection or XSS. But we do have a lot of work to do to get our heads around AI, and I'm proud of what OWASP is doing in that area.

The OWASP LLM Top 10 is really good. And there's a number of other projects there that are making some really cool advances.

But the big thing I wanted to mention, you know, after our conversation was that I think a lot of people are going to be irritated at this message that you can't secure everything.

I often talk to CISOs and others that are really clutching tightly to the idea of stamping out all the vulnerabilities. And it's hard to give up, because they're like, if we just worked harder, if our developers just did their job. I hear stuff like this, which is absurd, by the way. But I hear this idea that if we just tried harder, we wouldn't be pushing all these vulnerabilities into production. And that's just not reality. There is no try harder. There's no maturity model that's going to let you process your way out of this. You're going to have to deal with vulnerabilities in production.

And so I'm encouraging folks to embrace that and step down from this perfect vulnerability-management kind of approach to things. If you're like, well, we solved all the highs and criticals, you're not in a good place. And it's going to be tough for organizations that have shifted it all the way left that way. I don't mean shift left like into the repo; I mean they just do it in development and they're counting on that, and their only defense in production is a WAF. That's not a good place to be. We've been trying that approach for my whole career, a long 25 years, at least in AppSec, and it hasn't worked, and it's not going to work. So what is it going to take?

to get you to see that building a mountain of vulnerability backlog is not the outcome that we were shooting for. What is it gonna take to say, hey, look, the rest of the world, everything like supermarkets and banks and everything, they don't try for 100 % security upfront. They put a lot of investment into monitoring and response. And we've got to balance AppSec out.

We just have to.

Chris:

Yeah, I agree with you. I wrote a piece called the cybersecurity delusion dilemma, where we think that cybersecurity is the center of the universe, and we fail to realize that the business has competing interests: speed to market, competitive differentiation, meeting user demand and requests, revenue generation, et cetera. There are so many competing priorities alongside cybersecurity. And of course, secure by design, shift left, these are all, not novel, but noble concepts that we should be trying to do.

But there's also a pragmatic acceptance that we are not going to stop all the things. We are not going to catch all the things before production. We need to focus on what is in production, what can be exploited, what's actually being attacked. And this is a much more pragmatic approach. Not that we shouldn't do those things, again, defense in depth and all of that is still relevant, but it's just a much more pragmatic approach to cybersecurity, in my opinion. And we can keep shooting for perfect, to stop everything before production.

Or we can acknowledge that we haven't been able to do that. And now we need to focus on what's actually running and what's going to be exploited.

Jeff:

Well, I think we've got to acknowledge that irritating everybody else in the organization is not the path to success. You can't irritate your way secure. I used to say you can't hack your way secure, but I think "you can't irritate your way secure" is a lot more fitting for a lot of organizations.

Chris:

Yeah. Ironically, when we think about DevSecOps, it was framed as breaking down silos, right, between Dev and Sec and so on. But all it did was bolster those silos, because people got so frustrated with us throwing these massive spreadsheets at them and saying you've got to fix all the things. It just bolstered those silos rather than breaking them down.

So I appreciate you both coming on, Naomi and Jeff. As I said, I've been connected with both of you for quite some time. You always have some great perspectives, and I really appreciate the insights that Contrast is sharing. Contrastsecurity.com, check them out. They're championing this concept of ADR, and I think it is going to be the future of AppSec.

So thank you both for jumping on.

Naomi:

Thanks, Chris. Take care.

Jeff:

Bye, everyone.

You can't stop what you can't see

Schedule a demo and see how to eliminate your application-layer blind spots.

Book a demo