
Generative AI: Less alert fatigue, more code sloppiness

Contrast CISO David Lindner: Generative AI could revolutionize application development. But before you get all misty-eyed, please do remember that it’s trained on the world’s code base and may well regurgitate whatever “oops!” it ingests. 

Are your devices exploding with cybersecurity alerts? Do you have more open browser tabs than you can count? Incidents popping up constantly? 

Are your synapses short-circuiting because your resource-starved team just can’t keep up, your systems don’t always play nice with each other and you know it’s just inevitable that you’re going to miss something important? 

If you answered “yes” (and of course you did, because what cybersecurity teams aren’t frazzled by alert fatigue?), then you’re probably tingling with delight over the prospect of generative artificial intelligence (AI) swooping in to help. 

Contrast CISO David Lindner is in the same boat. That’s why he’s both excited by the prospect of generative AI helping out with security … and leery about the vulnerabilities it could reproduce in code automatically churned out for exhausted software engineers. 

Please, just make the alerts shut up

When it comes to the “exciting” side of the coin, Lindner called out Microsoft’s newly released Security Copilot AI service in a recent CISO Insight column. “[Microsoft Security Copilot’s release] is exciting for all security teams, especially those who are dealing with alert fatigue and constrained resources,” he said. 

Copilot is just one of many generative AI security tools and services that have been introduced since ChatGPT was launched as a prototype on Nov. 30, 2022. As Microsoft describes it, the service will let users pose natural-language questions and commands such as “What are all the incidents in my enterprise?” or “Give me information on this code snippet,” and it can also pull in information about incidents or alerts from your other security tools. 

But when it comes to alert fatigue and constrained resources, there's the flip side of the coin, Lindner said: “We can never have enough alerts to learn what's going on in our environments, on our laptops, in our systems, whatever. But at the same time we're always in continual alert fatigue. It's like, ‘Oh, add this alert,’ ‘Oh, add this alert,’ and then also, ‘Holy sh*t, I've got a lot of alerts!’”

Checking out all those alerts has entailed a ton of manual work, and that means that inevitably, you’ll miss things. That’s why security information and event management (SIEM) was invented: The systems combine security information management and security event management to provide real-time analysis of security alerts generated by applications and network hardware. 

Still, “SIEMs aren't perfect,” Lindner said. “And you know they're not going to integrate with all your tools or understand your environment,” especially in a software-as-a-service (SaaS)-first shop such as Contrast’s. “If you can bring in a powerful … tool like Microsoft Security Copilot that can help alleviate some of those pains, at least in my eyes, I can see how that would function,” he said. 

Microsoft promises that Copilot will enable teams to “defend at machine speed”; to synthesize data from multiple sources into clear, actionable insights; and to allow them to respond to incidents “in minutes instead of hours or days.”

Don’t let your security principles slip

The appeal is easy to see. "Generative AI is exciting for software developers, and it should be,” Lindner said. “The speed and efficiency at which code is delivered should increase dramatically.”

Of course, there’s always a “but.” In this case, Lindner asks us all to take a breath before plunging in, to ensure that we don’t suck in tainted code: “Please don’t assume the generated code is vulnerability-free, and continue to follow your Application Security [AppSec] practices to hopefully deliver vulnerability-free code to production,” he suggested.
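To make that warning concrete, here’s a hypothetical sketch (not from the article or the Stanford study) of the kind of flaw an assistant trained on the world’s code base can cheerfully reproduce: SQL built by string concatenation. An AppSec review or scanner should flag the first function and insist on the parameterized version.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: the query is assembled by string concatenation, so input
    # like "x' OR '1'='1" rewrites the query's logic (SQL injection).
    cur = conn.execute("SELECT id FROM users WHERE name = '" + name + "'")
    return cur.fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats `name` strictly as data.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# Tiny in-memory database to demonstrate the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- injection dumps every row
print(len(find_user_safe(conn, payload)))    # 0 -- no user literally named that
```

Both functions “work” on friendly input, which is exactly why generated code like the first one slips through when nobody reviews it.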

Don’t get cocky

It’s a real concern: According to a November 2022 study by researchers affiliated with Stanford University, “Do Users Write More Insecure Code with AI Assistants?” (TL;DR: Yes), software engineers who use code-generating AI are more likely to develop applications with security vulnerabilities. The study, which focused on OpenAI's codex-davinci-002 model, also found that participants had more misplaced confidence in the security of their code than those who didn’t use an AI assistant. 

Conversely, the less participants trusted the AI and the more they engaged with the language and formatting of their prompts, the more secure their code was. “We find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities,” wrote Neil Perry, a PhD candidate at Stanford and the lead co-author on the study. 

Mind you, there have been other, serious issues with generative AI besides code security. Lindner pointed to the horror show that law professor Jonathan Turley recently endured when a fellow lawyer asked ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on it. 

“The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a [fake] March 2018 article [purportedly] published in The Washington Post as the source of the information,” The Washington Post reported. “The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.”

OpenAI responded with a statement saying that “we strive to be as transparent as possible that [ChatGPT] may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

Code errors, factual errors: Clearly, there are snakes hiding in this fun sandbox we’re all gleefully cavorting in. Godspeed to OpenAI et al. when it comes to making progress to sidestep serious pitfalls like that. In the meantime, Lindner asks that we all keep in mind that, though generative AI will bring speed and efficiency, “We can't forget the security principles that we've worked so hard to bake into the way we do development, or we're going to be way behind, because I can tell you right now that the way generative AI is working, you have to be, like, uber specific to get it to the point where you'd be comfortable with it. 

“If all you do is say, ‘Hey, create me a Java Spring application named XYZ,’ that's what it's gonna do, and it's gonna go out and grab the intelligent data it has. It's not gonna think, ‘Security,’ right? It's not going to generate it based off best practices or certain internal requirements that you may have [with regard to] how you're creating code.

“I want to make sure that we don't forget that piece. [It’s not just] ‘Go, create me a website,’ or ‘Go, create this feature,’ right? ChatGPT is great, but we need to think about … the implications as these AI tools get more autonomous.”


Lisa Vaas, Senior Content Marketing Manager, Contrast Security

Lisa Vaas is a content machine, having spent years churning out reporting and analysis on information security and other flavors of technology. She’s now keeping the content engines revved to help keep secure code flowing at Contrast Security.