
Interview: Andrew Hay of OpenDNS


In this interview, Jeff Williams interviews Andrew Hay of OpenDNS. They discuss bad credential management and the recent eBay breach, thinking with the mind of an attacker, firewalls, security in the cloud, and bringing security into fast-moving Agile and DevOps software development life cycles (SDLC).


Jeff: I'm Jeff Williams, CTO of Contrast Security, and I want to thank you for joining us for another episode on our Security Influencer's channel. Our goal is to provide a series of brief and, we hope, highly informative interviews with and for security professionals. Our theme in 2014 is "Discussions on the Implications of Continuous."

Today, we're talking with Andrew Hay from OpenDNS. Andrew is a Senior Security Research Lead & Evangelist at OpenDNS, where he leads the company's research efforts. He's also a prolific blogger and speaker, and was previously an analyst at 451 Research. Andrew, thank you for joining us today.

Andrew: Thanks for having me.

Jeff: All right, let's get started. So, being from OpenDNS, tell me what's going on. What are the top DNS-based attacks these days?

Andrew: So, though not DNS-specific, malware, bots, and spammers are continuing to utilize domain generation algorithms, or DGAs, to stand up tens, hundreds, or even thousands of randomly generated domains at a time. The main reason to do this is that they're really trying to direct unsuspecting users through a variety of hosts, whether by sending spam e-mail with unscrupulous links masked as valid links, or by sending tweets. There have been a lot of DGAs popping up with random Twitter accounts in the hopes of getting someone to interact with their site.
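To make the DGA mechanics concrete, here is a toy sketch (not any real malware family's algorithm; the seed string and date-based scheme are assumptions for illustration). The key property is that the domain list is deterministic: a bot and its operator can both derive today's candidate domains from a shared seed and the date, so the operator only has to register one of them to rendezvous with the botnet.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5, tld: str = ".com"):
    """Derive a deterministic list of pseudo-random domains from a seed and a date.

    Both sides of the channel can compute the same list independently,
    which is what makes DGA domains hard to blocklist in advance.
    """
    state = f"{seed}-{day.isoformat()}".encode()
    domains = []
    for i in range(count):
        # Hash the shared state plus a counter; use the first 12 hex
        # characters as the (random-looking) domain label.
        digest = hashlib.sha256(state + str(i).encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

print(generate_domains("examplebotnet", date(2014, 5, 21)))
```

Defenders can exploit the same determinism: if the seed is recovered from a malware sample, tomorrow's domains can be precomputed and sinkholed.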


Beyond DNS attacks, one of the big concerns I have is data and just general information leakage on the whole. That's definitely one thing I'm seeing a lot of these days.

And just this morning, eBay announced a breach in which, over the past couple of months, data was exfiltrated containing e-mail addresses, names, passwords, and physical mailing addresses. But then they continued in their announcement to say, "But no personal information was leaked." I would consider all of the aforementioned things personal information.

Jeff: Well, I'm just learning about this breach, but it wouldn't surprise me if we see the typical pattern. A company says it's not that bad, and then it turns out it was worse. They're going to say it was sophisticated hacking techniques, only possible by elite hackers or state-sponsored organizations. And then it'll turn out that it really wasn't that sophisticated. Hopefully not, but it's starting to look bad.

Andrew: Well, they've come out and said specifically that the attackers were able to get hold of eBay administrator credentials and gain unauthorized access to the systems. So, it wasn't through anything other than bad credential management that led to this whole breach.

Jeff: Yeah. So getting back to DNS attacks, because I'm interested; the Internet has a particularly interesting reliance on the domain name system. From an application security point of view, we hear about domain spoofing and that kind of attack. What can companies do to protect their apps against that kind of problem?

Andrew: Well, I think the main thing is that they have to be very clearly aware of what their assets are doing, and are capable of doing, when connected to the Internet. By "assets," I mean any system comprised of hardware and software, or just software hosted somewhere else, like a SaaS or PaaS application.

But you need to know how you can interact with that system and how attackers might interact with it. And obviously, we're at a bit of a disconnect between the motivations of the attackers and the motivations of the defenders. We want to make sure that everything is operational, working 24/7, and available to customers in a secure and safe way. But the attackers just want to get in through X, Y, or Z mechanism to get at what they want. And we may not be defending, from an application security perspective or just a defense standpoint in general, against that specific targeted attack.

Jeff: So how do firewalls play into all this? Do they play a role in defending sort of the new, modern enterprise?

Andrew: I do. I started as a networking and firewall guy way back when, after my dial-up tech support days, and I think there's always going to be a place for the firewall at the network edge. That being said, the network edge is no longer the choke point for all of the organization's Internet traffic. In fact, the network perimeter is eroding. You've probably heard or seen that pronouncement from numerous companies, but it is true. Users are on the move; they're working remotely, working on mobile platforms that may or may not be company-owned; most times, they're not company-owned. And nobody really wants to backhaul their traffic through a corporate network via some clunky site-to-site or client-side VPN that's going to route all their traffic out to the Internet.

They just want to be able to connect safely and securely wherever they are and on whatever platform they're using. Whether it's their flashy new Android tablet or a clunky old laptop that work gave them.

Jeff: I'm glad you mentioned mobile. Because it's interesting to me...organizations that allow mobile devices. And then, they're moving some of their infrastructure over to the Cloud. It seems like you could get pretty quickly to an organization that really doesn't have internal IT. They've got mobile applications pushed out via app stores accessing their applications running in a cloud-based environment.

Do those organizations lose a critical amount of control over their IT? How can they deal with that?

Andrew: I think if they look at their different options... With every new iteration of technology, security has come along late. Security came very late in the mainframe game; when we went to hosted servers, security caught up a little bit faster; when we went to virtualization, we iteratively got better. With Cloud, there was a gap like all the others, where security wasn't a concern at first. We were dazzled by the cost savings and the ease of deployment of Cloud services, and then it was the kind of thing like, "Well, we'll just figure out security later."

A lot of organizations didn't really think about what would happen if they took those servers that were usually held within their data center and put them out on the public Internet thinking, "Okay, well my Cloud provider, they'll protect me." But Cloud providers aren't really in that business.

Jeff: So are we just doomed? And I mean this in the broadest sense. You mentioned it, and I lived through those days, right? We had mainframes, and eventually we got mainframe security. Then we got Internet-hosted servers, we actually went through a PC age, and eventually we got to PCs that have a little bit better security.

We went through the same thing on the Internet; everyone adopted it and then we got security. And now, we're in this new era of moving to mobile and Cloud. And we're sort of somewhere in the process of getting towards security.

Are we always doomed to play catch-up? Is that just the way security has to be?

Andrew: I don't believe so. I think there's always going to be a place for security, just like there's always going to be a place for old folks' homes. The nature of the human body is to age, and at some point we need additional care. Technology is the same. Knowledge of the technology gets broader, and the attack surface of the technology, and the methods of attacking it, also grow. So generally, we do have to play catch-up. I've never known anyone who has proactively put their parents or grandparents in an old folks' home just in case they won't be able to take care of themselves.

It's usually a mitigating control. Like, "Okay, my grandmother put the electric kettle on the stove. Probably we should have a discussion about this." And that's generally what happens with application security, and security in general: people aren't being proactive, mainly because they don't know the threats, or they may not have budgeted for the security side of things.

I would like to think that it's not going to be catch-up all the time. There are certain steps you can take to be more proactive. It's really about user education: "This is why we need to be proactive. This is why we need to prevent these issues from happening." We need predictive security so that we can block these things before they impact you. And it's more of a frame of mind.

Jeff: I started writing code in the mid-'80s. And sometimes, I feel like I need to be in the old folks home. Especially with new development life cycles like Agile and DevOps. They're doing things that we really didn't imagine back in those days. Continuous integration, continuous deployment. It's moving really quickly.

And I guess what I'm hoping is that there's an opportunity there for security to become continuous as well. We can do security, really, as part of the process of building code.

There are a lot of folks out there who say, "You've got to do security during the SDLC." But really, what they mean is: take these old, monolithic security activities, like a security architecture review or a security code review, and shove them into a fast-moving DevOps life cycle. And it's really kind of incompatible.

I'm wondering, how can we get security to be itself more agile and more dynamic?

Andrew: So just to your point, nobody goes and builds a car and then says "You know what? We should probably now go and put airbags in for safety." Because that would be ridiculous. You'd have to rip the entire dash apart and reform a lot of the components in order to satisfy that.

And it will, in all likelihood, not be as efficient as if it had been put into the original design. I think that security needs to be a key component of not only development, but the operationalization of code and applications and hardware. It can't be an afterthought, because that's when we get caught on our heels. And we're very reactive by nature as humans. But if we take the care to think about security as part of the software development life cycle, and even the DevOps push to get everything out in very short sprints in an Agile or Scrum methodology, we need to factor security into the mix. Until we do that, we're going to be caught on our heels.

Jeff: Yeah, I think that's right. We've got to get out of this reactive mode and really become part of the engineering process.

One thing I noticed in your Dark Reading survey that you did about security monitoring, I thought it was interesting. No one approach really stood out. And I was a little surprised by that. I thought, actually, that what we'd see is a lot more use of real-time sensors.

So I'm wondering your opinion: what happens with monitoring? Organizations get better and better sensors, gather lots and lots more data, and start gathering application-layer data. How do enterprises deal with that?

Andrew: Well, I know, from experience and from talking with colleagues, a lot of organizations are hiring or planning to hire data scientists. And these folks understand machine learning, big data analytics, mathematical algorithms. The hope is that the organization can build their own data repository without having to shell out hundreds of thousands of dollars for SIEM or log management products and associated consulting fees to tune the system to their environment.

What they want to do is use the data scientists to bubble those important issues up to the top of the pile by interacting more efficiently with the data they're collecting. Whether it's centralized syslog, for example, or the incorporation of data from other APIs and third-party products that can be used to enrich the data in ways a particular SIEM vendor might not be able to.
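As a minimal sketch of that enrichment idea (the feed contents, regexes, and field names here are illustrative assumptions, not any particular SIEM's API), a home-grown pipeline might parse BSD-style syslog lines and tag any IP addresses found in a third-party threat-intelligence lookup:

```python
import re

# Hypothetical threat-intel feed: IP -> verdict.
# In practice this would come from a third-party API, not a literal dict.
THREAT_FEED = {"203.0.113.7": "known-DGA-C2", "198.51.100.9": "spam-source"}

# Matches a BSD-style syslog prefix: "May 21 10:15:32 hostname message..."
SYSLOG_RE = re.compile(r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<msg>.*)$")
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def enrich(line: str) -> dict:
    """Parse a syslog line and attach verdicts for any IPs present in the feed."""
    m = SYSLOG_RE.match(line)
    if not m:
        # Unparseable lines are kept raw rather than dropped.
        return {"raw": line, "verdicts": []}
    event = m.groupdict()
    event["verdicts"] = [
        {"ip": ip, "verdict": THREAT_FEED[ip]}
        for ip in IP_RE.findall(event["msg"])
        if ip in THREAT_FEED
    ]
    return event

print(enrich("May 21 10:15:32 gw01 DENY src=203.0.113.7 dst=10.0.0.5"))
```

The design point is the one Andrew describes: the enrichment step bubbles the few events with a threat-intel hit to the top of the pile, instead of asking an analyst to read raw syslog.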

You've got to remember that a lot of these SIEM and monitoring products, they started from the network side of things. So the whole network investigation was really the impetus for the SIEM ecosystem. And no one has really started as, like an application security SIEM vendor or like a code-based SIEM vendor. It's, "We're going to focus on the network and then we'll move into other areas." So if that's not your line of business or you don't care about that, then commercial products might not be a fit for your organization. You may be better off developing it in-house.

Jeff: So is that the way we get out of this reactive security approach? Do we start playing Moneyball and base everything on real-time monitors and responding to everything really quickly?

Andrew: I think that's probably one aspect of it. We also need to shift to more of a risk management style of handling mitigations and technical controls. We can't just buy the flashy new box because the vendor tells us that it's going to solve world hunger and cure everything that ails us. We need to really think of "How is this going to make our security better?" And "Is the cost something that we can really...can we swallow this over the next three years and feel good about our risk posture as a result?"

I think more people need to be put into this process. We need to put more security into the development side and the IT operations side of things. And that, in turn, will bring us closer to the proactive end of the big, huge grey area that everyone seems to be playing in.

Jeff: The conversation continues online. We'd enjoy hearing your thoughts on today's discussion and your ideas for additional security topics you'd like to hear about. The whole series is available at www.ContrastSecurity.com. Andrew, thank you so much for participating today. I really enjoyed the conversation.

Andrew: Thanks for having me. It was great.

Jeff Williams, Co-Founder, Chief Technology Officer


Jeff brings more than 20 years of security leadership experience as co-founder and Chief Technology Officer of Contrast Security. He recently authored the DZone DevSecOps, IAST, and RASP refcards and speaks frequently at conferences including JavaOne (Java Rockstar), BlackHat, QCon, RSA, OWASP, Velocity, and PivotalOne. Jeff is also a founder and major contributor to OWASP, where he served as Global Chairman for 9 years, and created the OWASP Top 10, OWASP Enterprise Security API, OWASP Application Security Verification Standard, XSS Prevention Cheat Sheet, and many more popular open source projects. Jeff has a BA from Virginia, an MA from George Mason, and a JD from Georgetown.