Today we're talking with Michele Chubirka. Michele has over 15 years of experience in the design, engineering, and architecture of enterprise application and network security solutions, including the maintenance and administration of multiple vendor technologies. She's a security architect, freelance writer, blogger, podcast host, industry analyst, and just an overall expert.
In this interview, Michele explains the meaning of a term she coined, "Security Stoogecraft," and why she feels it is the best way to describe today's security landscape. Drawing on Greek mythology, Michele and I also talk about the idea of the "Cassandra complex" and how it explains many companies' ineffective approach to security. Michele shares her expertise on communication within a corporation and how a lack of true communication is what is holding the security industry back.
The following is a brief excerpt of our interview.
Jeff Williams: Now, in modern environments, there's a lot of data collection going on, S.I.E.M. solutions, and so on - I noticed a presentation on your LinkedIn profile called "Thin Slicing a Black Swan," and you talked about this sort of cloud of information we have and how it's not really giving us the intelligence that we need. Share a little bit about what you're getting at there.
Michele Chubirka: So, what I was trying to do with that - I actually worked with a guy, Ron Reck, who was my research partner. He's an expert in Semantic Web technology. I had read "Blink" by Malcolm Gladwell and then I read "Gut Feelings" by Gerd Gigerenzer, and the whole idea of smart heuristics and fast thinking really impressed me. And I wondered why we were not trying to replicate that in our security systems. Because it seems like we have this gluttony for data. We're trying to get as much information as possible, not realizing that we're just overwhelmed with it. The quote that I came across from somebody at an R.S.A. conference once, during a panel discussion, was, "We don't call it big data. We call it big garbage." Because unless you've got a whole team of data scientists, most organizations do not know what to do with it.
They cannot handle large, large amounts of data. And you hear this - this is the dark side of the whole big data trend, right? And so I asked myself, "Is there a way to thin slice this?" You know, when you go into an emergency room and you complain of chest pain, they are not going to go through a big data kind of exercise to figure out what's wrong. They're not giving you a half-hour interview about your childhood history, your eating habits, or your exercise habits. They're not spending a lot of time gathering all of that. They thin slice it. They find out, "Okay, was there an anomaly on the E.K.G.? Okay, send them to cardiac care." So why can't we do something similar in security? That was the question I had. I don't know if I came up with any answers, but what I see now, a couple of years after I did that, is that people are starting to look at anomalies, or are focused on anomalies. They're not so much focused on blacklisting and trying to solve an open-world problem with a closed-world solution, which...
Let me explain that. With traditional blacklisting technology, if you don't hit that signature, then it's okay - but it isn't really okay, is it? You and I both know that just because I don't have a signature for it doesn't mean that it's good traffic. So the opposite approach is, "Let me look for anomalies." I know what the behavior should look like, and now there are technologies out there developing baselines and then looking for something that deviates from them. They're using graph theory and N.L.P. to thin slice through a lot of the noise, and they're looking at it at a very high level. Because, to be honest, do I need to know exactly what kind of attack this is? Okay, maybe if I'm, you know, doing the after-action report and I'm determining the extent of the breach. But in real time, what I need to know is two things: good, bad. That's it. That's all I need to know.
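The baseline-then-deviation idea Michele describes can be sketched in a few lines. This is a minimal illustration only (a simple z-score check over hypothetical requests-per-minute counts), not any particular product's algorithm:

```python
import statistics

def build_baseline(samples):
    """Summarize "normal" behavior as the mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical history: requests per minute observed for one host.
history = [98, 102, 100, 97, 103, 99, 101, 100]
baseline = build_baseline(history)

print(is_anomalous(101, baseline))   # within the normal range
print(is_anomalous(450, baseline))   # far outside the baseline
```

Note the output is exactly the "good, bad" verdict Michele asks for: the detector doesn't try to classify the attack, only to say whether the behavior deviates from the baseline.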
Jeff Williams: Yeah, it's interesting. I mean, I've spent the last few years doing work on real-time detection of application security vulnerabilities, and I think it actually does change everything culturally - things work differently when you can detect problems in real time, as opposed to detecting them later, where you have this sort of after-the-fact analysis that's time consuming, and you track risk and so on. But when you're detecting it in real time, the process is different. The culture around it is different. I actually think it works much, much better.
Michele Chubirka: The trick is getting it in real time, which... It's hard. That's a hard thing to get, and it's also a culture shift. And I think... I mean, I see it coming now. I see people coming in now who are developing new products based on that ability. They're using, you know, graph theory, they're looking for relationships, and they can do some prediction. But it means asking: this attachment that most organizations and security professionals have to every single piece of data - can we let some of that go?
Jeff Williams: Yeah, it's really hard to get security groups to let go of some information, you know. I just demoed a tool at a conference in Boston where part of the workflow is just, you know, clear out those vulnerabilities and run the analysis again. It happens in real time, so you don't have to cherish every single piece of security data like it's precious gold that you just refined from raw ore. It's a different way of thinking.
Michele Chubirka: It's about context, right? In the past, there was minimal attention to context. It's been about managing tools. And I think, if we're going to get better at this, and if we're going to, you know, get off the hamster wheel that we've been running on for the last 10 years, we're going to have to start thinking in terms of context and how we can be more effective. And that means throwing stuff away. I mean, a network engineer came to me recently - he had put in some next-gen firewall for a proof of concept - and he said, "There's I.R.C. on the network." I'm like, "Yeah. Find me something I care about." You know. Sure. I mean, look, I.R.C. can be used for exfiltration. Are they doing that now? Probably not, you know. Are the really smart ones doing that? Probably not. I have to worry--
Jeff Williams: When we do application code review, oftentimes the tools flag the use of "System.out.println" in an application, because it's not the best way of logging data. But it's such a minor security thing that it doesn't really matter. And so it just always fires. That's actually become our term for a vulnerability that really doesn't matter: a "System.out.println."
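The triage problem Jeff describes - a noisy, low-value finding like a debug print landing in the same queue as real issues - can be illustrated with a toy scanner. The rule names, patterns, and severity levels below are invented for the example; real tools use far richer analysis:

```python
import re

# Hypothetical ruleset: one finding worth chasing, one "System.out.println."
RULES = [
    ("hardcoded-password", re.compile(r'password\s*=\s*"'), "high"),
    ("debug-print", re.compile(r'System\.out\.println'), "info"),
]

def scan(source):
    """Return (rule, severity) for each matching rule, highest severity first."""
    order = {"high": 0, "medium": 1, "info": 2}
    hits = [(name, sev) for name, pattern, sev in RULES if pattern.search(source)]
    return sorted(hits, key=lambda h: order[h[1]])

code = 'String password = "hunter2";\nSystem.out.println("logged in");'
for name, severity in scan(code):
    print(severity, name)
```

The point of the sorting step is Michele's next one: you can't chase it all, so the queue has to surface the findings that matter and let the "info" noise sink to the bottom.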
Michele Chubirka: Yeah, we need to start prioritizing who we're going to chase and what we're going to chase, because we can't chase it all. And the idea of a Maginot Line of security - that idea is gone. I mean, you're going to be popped to some degree. You're going to have, you know, malware in your network; you're going to have all kinds of security issues on your network. But you have to prioritize, and you have to put in seat belts and airbags at different points.
Jeff Williams: So, where do you think we end up in, you know, say five to ten years? Where is security technology going to be on some of the problems we've discussed - things that we can get in front of?
Michele Chubirka: One of the things that bothers me a lot - and I actually did a podcast on this - is that I said I'd rather be Leonardo than Oppenheimer. There's still a lot of emphasis on breaking in the security industry. The researchers spend a lot of time on that - and they're smart guys, don't get me wrong - but I'm just not interested. You know, they always tell you when you're working as a business person, "You shouldn't just point out the problems. You're supposed to point out solutions." I would love to see the next generation of security groups and security researchers, instead of just saying, "Oh, I found this exploit, this exploit, this, you know, this problem." Okay, it's open source software - can you tell me how I should write it? Did you come up with the solution as well?
Jeff Williams: It's interesting. I've got to say, culturally, there's almost no reward for people to build better security defenses. I tried to focus on that and wrote the Enterprise Security A.P.I. library to try to demonstrate how to do things right. And really, I mean, that got some use, but, you know, the things that people get congratulated for, that they get kudos for - it's breaking stuff.
Michele Chubirka: Breaking is where the attention is, right? They get attention for it - some of it negative attention from the media - but I don't really think that makes many friends, or that it's a great business model. I mean, we're going to get to a point - and we actually are there. You know, Neil Gershenfeld, the father of the fabrication movement, said, "The digital revolution is over. We won." So what's the next step? It's compute as raw material. I go back to my other point where I said, you know, as technologists, we love technology, but we need to get over that to some degree. It's just a hammer. And a hammer, you know, it's cool if it builds a house for Habitat for Humanity.
But if you take it and hit somebody over the head with it, that's not a cool thing. So we need to get over it. We're at the point of pervasive computing, you know, that model of ubiquitous computing where it's so prevalent that it has become transparent. When we come in, you know, from that Island of Misfit Toys, and we tell people, "Oh no, but you need this security control. No, no, no. You can't do that. You can't do that. You can't do that" - okay, then tell us how to fix it. We can't just write the libraries. We need to be communicators, we need to be business people, we need to understand how things work, we need to embed ourselves in processes, we need to align ourselves and create affinities so that people want to do it.
To listen to the rest of my interview with Michele, click here.