Businesses are looking to tools to improve productivity, and no surprise: business apps are no longer stand-alone and isolated. They live in the cloud and integrate with other tools and data. Integrations and plug-ins with other apps and third-party libraries improve the dissemination of data and are integral to building corporate collaboration and fostering camaraderie on remote teams. Those PDF files, Word docs, and contact details you sent to a colleague can all be stored and indexed within your messaging app. Handy, right? But if that communication happens in the cloud, it is also open and accessible to malicious actors.
Jeff Williams, Contrast Security CTO and Co-Founder, takes a look into the security of messaging apps and shares his insights on several hot topics.
Team messaging apps, like Slack, are quite popular these days, but how secure and reliable are these services?
Jeff Williams Response – People tend to think that these messaging apps are harmless, because chat messages are generally not that sensitive and there isn’t a lot of intellectual property there. However, the highly connected, always-on nature of chat programs makes them among the easiest software to attack. Probably the biggest question is whether you should run your own chat server (requires management, but no remote access) or use one in the cloud (more scalable, but anyone can attack it). There are two schools of thought here: One argues that running your own server puts you in control and limits attackers to people on your internal network. The other argues that there really is no such thing as an “internal” network anymore and that cloud providers are better at running secure servers, since they are “more motivated” to be secure.
Making headlines lately are the frequent outages many messaging apps have experienced. Does this introduce security issues? And what's the downside of lax security in these business-messaging apps?
Jeff Williams Response – An outage is the least of the potential problems with messaging apps. Imagine an attacker finds a vulnerability in Slack, Hipchat, or a Jabber client. If they can take over one user’s client, they can probably access their contacts and attack them all. It’s the perfect platform for spreading a worm. The fact that they’re now Internet-scale makes them a very attractive target. Once a client is compromised, the attacker could easily have full control of the victim’s computer, laptop, tablet, phone, or whatever else they are using.
What security certifications do these messaging apps need?
Jeff Williams Response – Ideally, they would at least be tested for security vulnerabilities by a reputable firm. Currently, Slack has a bug bounty program set up, and they seem to pay some decent bounties for their vulnerabilities. That encourages security researchers to examine their code. But there’s no replacement for real security testing against a reasonable threat model and a structured test plan. There are no certifications that would apply to an application like this, although the new CyberUL might be a possibility in the future.
What security and encryption features should enterprises seek out?
Jeff Williams Response – Most importantly, I would look for strong authentication features, such as SSO and multi-factor authentication. Encryption is nice, but I recommend a strong policy against posting any sensitive information or intellectual property in chat. Access control features might be important to you if you need private groups and the like, but be extremely careful with “integrations” – plugins that enable special features in chat programs. Historically these have proven to have high rates of vulnerabilities. Also be very careful about how you use “tokens” that enable other programs to post messages. These tokens are sensitive and often grant full access to the chat functionality.
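Jeff's warning about tokens can be made concrete. The sketch below builds a request to Slack's documented `chat.postMessage` Web API endpoint; the helper function and environment-variable name are illustrative assumptions, not part of any official SDK. The key hygiene point: the token is read from the environment rather than hard-coded, because these tokens often act as full-access credentials to the workspace.

```python
import json
import os
import urllib.request

def build_post_request(token: str, channel: str, text: str) -> urllib.request.Request:
    """Build (but don't send) a Slack chat.postMessage request.

    The token is passed in rather than embedded here -- treat it like a
    password: never commit it to source control or paste it into chat.
    """
    body = json.dumps({"channel": channel, "text": text}).encode("utf-8")
    return urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # the token grants posting rights
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )

# Read the token from the environment (SLACK_BOT_TOKEN is a conventional,
# illustrative name) so it never appears in the codebase itself.
token = os.environ.get("SLACK_BOT_TOKEN", "")
req = build_post_request(token, "#general", "Build finished.")
# urllib.request.urlopen(req)  # uncomment to actually send the message
```

Even with this hygiene, the scope problem Jeff describes remains: if the token leaks, an attacker can post (or often read) as your integration, so prefer tokens scoped to the narrowest permissions the integration needs.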
Jeff Williams Response – But assurance is what you really want to focus on here. You want a chat program that isn’t going to be attacked and used to instantly compromise all endpoints in your network. If you can’t get direct evidence that security testing was performed, at least try to find out what type of security tools and processes are in place. They should have standards, tools, and training for the team building the application. And if they’re doing it, they should have evidence that it’s really happening, like threat models, security requirements, test results, and vulnerability disclosures. If they don’t, they’re not actually doing it.