‘Assurance’ isn’t clearing the murky waters of software transparency

Just what, exactly, is “assurance”?

If you’re a veteran of computer security, you know that the term got thrown around a lot in the 80s and 90s — back when National Institute of Standards and Technology (NIST) Fellow Dr. Ron Ross was working at the National Security Agency. 

The term — or, rather, some watered-down version of it — is in play right now, big time, as the government has pushed to move the software industry out of the murky waters that have led to, for example, vulnerable code libraries getting sucked into software left and right. 

Ross dropped by the Code Patrol podcast to talk with Contrast Security CTO Jeff Williams about assurance and how it ties into our current lack of software security transparency and the government's attempts to improve the software market.

As Ross explains, back in the day, the first major computer security evaluation methodology was the Trusted Computer System Evaluation Criteria (TCSEC) — better known as the Orange Book (PDF) — put out by the Department of Defense (DoD). The methodology covered security policy enforced by a computer system; accountability, including user identification, authentication and auditing; documentation; and assurance, as in:

a computer system’s mechanisms to independently evaluate how all the other mechanisms were designed and built, what kind of processes were used, and what might improve their quality. 

So, how’s that going? 

Not so great

Fast forward to today, and “we've lost, at least, maybe two generations of people who understand what the term assurance means,” Ross guesstimates. 

“As it was, it was defined, [and then] it just kind of disappeared,” he says. “But today, the simple version of assurance that [NIST defines] is [that] it's just the grounds for justified confidence that a claim that a vendor or somebody may make, or a set of claims [that] has been or will be achieved.”

How, exactly, do we clear the murky waters if all we have is the magic fairy dust of “claims”?

The fact that assurance has been stripped down to that simplified definition sits at the heart of the prevailing lack of transparency into today’s secure coding practices — a lack of transparency that the Feds have targeted as the government moves to require letters of attestation from software providers about their secure coding practices, along with Software Bills of Materials (SBOMs). Those requirements will, with any luck, grant visibility by telling software consumers what ingredients went into the software soup they’re buying: say, 25% reasonably secure code, 25% insect parts and 50% vulnerability-plagued libraries like Log4j.
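To make that soup-label idea concrete: an SBOM is just a machine-readable inventory of what went into a piece of software, typically in a format such as CycloneDX or SPDX. As a rough illustration only (the example document, the hard-coded known-vulnerable table and the flag_vulnerable_components helper are all invented for this sketch, not anything the attestation rules prescribe), here is how a buyer might scan a CycloneDX-style component list in Python for an ingredient like Log4j:

import json

# A toy "known bad" table for the example. A real check would query a
# vulnerability database rather than hard-code entries like this.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"),  # a Log4Shell-era release
}

def flag_vulnerable_components(sbom_json: str) -> list[str]:
    """Return SBOM components that match the known-bad table."""
    sbom = json.loads(sbom_json)
    flagged = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            flagged.append(f"{key[0]} {key[1]}")
    return flagged

# Minimal CycloneDX-style document, trimmed to the fields the sketch uses.
EXAMPLE_SBOM = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.15.2"}
  ]
}
"""

print(flag_vulnerable_components(EXAMPLE_SBOM))  # ['log4j-core 2.14.1']

The point isn’t the code itself; it’s that none of this is possible unless the vendor hands over the ingredient list in the first place.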

The black box behind the UI

Ross and Williams — both cybersecurity veterans with a passion for the craft — share the same view of the common lack of transparency, or what Ross calls the black box — as in, what’s in this thing?

“The black box is a metaphor I use for anything where you have a user interface and you actively engage in that application or that product, but you don't really know a lot about … what's inside,” Ross says. “It could be a smartphone, a tablet, an application, operating system, almost any component within a system, a complex system. There's a lot of complexity, a lot of moving parts.” 

Both he and Williams believe that that lack of transparency can have terrible side effects if we don't understand how those products, components and systems are built. “They could fail at times that could not just be inconvenient, but could cause a loss of life, like a medical device or a braking system in an automobile that fails because those computers are really driving everything in the new world of physical convergence, where everything is now coming together driven by the software and the firmware that, in essence, powers those computers,” Ross says. 

Trying to make security decisions from below the water line

Williams sees the modern tie-in to “assurance” as being that senior security leaders need to make risk-based decisions. Every system in the federal government has to be authorized to operate. “The real problem I see is, ‘What kind of information are senior leaders using to make those decisions?’” Williams asks. “There’s a real lack of transparency. … I use the metaphor ‘above the water line, below the water line’ to characterize, ‘what [are] things organizations can do to implement a good cybersecurity program?’”

You can practice cyber hygiene, as in, cybersecurity best practices, Williams explains. “The software, the hardware, the systems, the firmware, all the things that come together, that’s all above the water line,” he says. “That’s the world of engineering, and that's where assurance becomes critically important, and that's where you can do things to affect systems that are more trustworthy.”

But what’s the risk that the system wasn’t built correctly to begin with? “That's totally different than risk management,” Williams stresses. “[You’re still] concerned about now, ‘What's in the black box? How does it work? How much can I trust it?’ The set of claims the vendor makes: that's the part where information needs to be increased. So the authorizing officials or any senior leader can make an informed judgment, a credible decision on whether to choose that product or go to a different product and what that really means to make that enterprise more secure.”

You can’t make these decisions unless you have the whole argument in front of you, all in one place — the argument that says, “here are the top-level claims I'm making.” Williams thinks those claims should be about threats, as in, does a software provider claim to have addressed the threats? For each threat, there should be a set of controls. For each control, there must be a set of evidence. 
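That chain of claims, controls and evidence is easier to see as a data structure. The sketch below is our illustration of the shape of the argument, not a schema Williams or NIST prescribes; the class names, the is_supported check and the example threat, controls and evidence are all invented for the example:

from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g., a test result, scan report or audit finding

@dataclass
class Control:
    name: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Claim:
    threat: str  # the threat the provider claims to have addressed
    controls: list[Control] = field(default_factory=list)

    def is_supported(self) -> bool:
        """The claim holds up only if every control carries some evidence."""
        return bool(self.controls) and all(c.evidence for c in self.controls)

# One top-level claim, traced from threat to controls to evidence.
claim = Claim(
    threat="Injection attacks against the public API",
    controls=[
        Control(
            name="Parameterized queries enforced in the data layer",
            evidence=[Evidence("Static analysis report, build #1042")],
        ),
        Control(
            name="Input validation on all request handlers",
            evidence=[],  # no evidence yet, so the argument is incomplete
        ),
    ],
)

print(claim.is_supported())  # False: one control has nothing behind it

Laid out that way, a gap anywhere in the chain is immediately visible, which is exactly the traceability Williams says we have lost.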

That’s a logical argument. But that’s not what we see in the world today. “What I see in the world is very disconnected arguments,” Williams says. Since the days of the Orange Book, we’ve lost the notion of traceability, he explains.

“Especially at the higher levels of the Orange Book … there's traceability from the threat all the way through requirements, high-level specs, and down into implementation and test results,” Williams says. “You could see a clear line of sight. But we've completely lost that.”

For more on assurance, trustworthiness and what these experts see in this developing space, have a listen to the podcast.

Got an idea for a podcast on secure coding? Got somebody you’d like to hear from? Please do drop us a line at podcastideas@contrastsecurity.com. We’d love to hear from you. Also, please do remember to like, share and comment on the episode.  

Listen Now

Lisa Vaas, Senior Content Marketing Manager, Contrast Security

Lisa Vaas is a content machine, having spent years churning out reporting and analysis on information security and other flavors of technology. She’s now keeping the content engines revved to help keep secure code flowing at Contrast Security.