In this interview, Erik Costlow reveals some of the ways that today's hackers are using mobile apps to steal information not just from business, but also directly from mobile device users themselves. Erik also shares with us how device security should never be taken for granted by developers.
Noel: When I was learning a little more about one of the sessions you give, "Software Security Goes Mobile," you mentioned something that definitely struck me because I hadn't really thought of it before—"the ability for apps to listen to each other." How is something like that addressed by developers?
Erik: Within mobile platforms, there are means for applications to communicate with each other. In Android, these are “broadcast intents.” This is a fine means of communication, but it raises questions like, “when I broadcast my intent, what information am I including?” and “when I receive responses, where do they come from?” These questions can be answered, but there are potential issues, like an application broadcasting location information too widely, which would allow other apps without the location permission to obtain the user’s actual location. Applications leak this information by asking questions like “when will the bus come?” and then accepting any answer that comes back. Some malware apps actively listen and respond to popular applications’ broadcasts because there’s a high probability that a device will have them installed.
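The risk Erik describes can be sketched outside Android with a minimal broadcast bus (this is illustrative plain Java, not the Android `Intent` API): any receiver that registers sees every message, so a malicious listener learns the location just by subscribing. On Android, the usual mitigations are an explicit intent targeted at one package or a permission-protected broadcast.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch (not the Android API): a broadcast bus in which every
// registered receiver sees every message, mimicking how an implicit
// Android broadcast can be observed by any app that registers for it.
public class BroadcastSketch {
    static final List<Consumer<String>> receivers = new ArrayList<>();

    static void register(Consumer<String> receiver) {
        receivers.add(receiver);
    }

    static void broadcast(String payload) {
        // No sender-side restriction: every receiver, trusted or not,
        // gets the full payload.
        for (Consumer<String> r : receivers) {
            r.accept(payload);
        }
    }

    public static void main(String[] args) {
        List<String> busApp = new ArrayList<>();
        List<String> malware = new ArrayList<>();
        register(busApp::add);
        register(malware::add); // a malicious app only needs to register to listen

        broadcast("lat=51.50,lon=-0.12"); // location leaks to every listener
        System.out.println("malware saw: " + malware);
    }
}
```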
Noel: What are some other common programming errors that developers can make when not focusing on the right areas of mobile app security? What makes these errors so common, and so easy to make?
Erik: Programming is a hard discipline, and like anything, it’s possible to make mistakes. Security is an interesting niche because often an application will work fine, yet if you know certain techniques, you can make it do more than it should, such as granting system access, stealing data, or hijacking other users. What makes these errors common is that most tests verify “does this work,” rather than “if somebody who knows how to break things comes along, what can they do?” Often it’s a challenge of communication, in that security is a non-functional requirement, so there isn’t a full expectation of what to do. For example, I’ve seen many apps that write your plaintext password to logfiles, and when I pointed it out, the developers simply hadn’t thought about why it’s a bad idea to write everyone’s passwords down in a place that lots of people can access.
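The plaintext-password-in-logs mistake can be mitigated by scrubbing credential fields before a message ever reaches a log file. The helper below is a hypothetical sketch (not from the interview, and real deployments would hook this into their logging framework rather than call it by hand):

```java
import java.util.regex.Pattern;

// Illustrative sketch: scrub obvious credential fields from a message
// before it reaches any log file, so secrets never land on disk.
public class LogScrubber {
    // Matches key=value or "key": "value" pairs for common secret field names.
    private static final Pattern SECRET = Pattern.compile(
        "(?i)(password|passwd|secret|token)(\"?\\s*[:=]\\s*\"?)([^\"&\\s,]+)");

    static String scrub(String message) {
        // Keep the field name and separator, replace only the value.
        return SECRET.matcher(message).replaceAll("$1$2****");
    }

    public static void main(String[] args) {
        System.out.println(scrub("login attempt user=alice password=hunter2"));
        // prints: login attempt user=alice password=****
    }
}
```

A deny-list like this is a backstop, not a cure: the stronger fix is to never pass the credential to the logging call in the first place.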
There are other errors like cross-site request forgery, where web applications are written to respond to requests, so if someone visits my site while still logged in to yours, I can make them submit requests to yours. Financial organizations were quick to react to this when those requests were “transfer money,” but it’s still pretty common elsewhere. The reason it’s so common is that the apps work correctly; they just do a little more for people who know how to break things.
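The standard defense against cross-site request forgery is a synchronizer token: the server issues a random token alongside the session and rejects any state-changing request that doesn’t echo it back, since a forging site cannot read the token. A minimal sketch (class and method names here are illustrative, not from any specific framework):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch of the synchronizer-token defense against CSRF: a random,
// unguessable token is tied to the session and must accompany every
// state-changing request.
public class CsrfToken {
    private static final SecureRandom RNG = new SecureRandom();

    // Issue a fresh 256-bit token when the session is created.
    static String issue() {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Constant-time comparison so timing doesn't leak how many bytes matched.
    static boolean isValid(String sessionToken, String submittedToken) {
        if (sessionToken == null || submittedToken == null) return false;
        return MessageDigest.isEqual(sessionToken.getBytes(), submittedToken.getBytes());
    }

    public static void main(String[] args) {
        String token = issue();                       // stored server-side with the session
        System.out.println(isValid(token, token));    // legitimate form post: true
        System.out.println(isValid(token, "forged")); // cross-site forgery: false
    }
}
```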
Noel: I recently got the chance to speak with Genefa Murphy, who is also at HP and works with the UX of mobile apps, and we were discussing how things like security and performance have become "givens" in the eye of many users, and that they simply want to be wowed with their apps' appearance and its bells and whistles. Is this a dangerous way of thought, to just assume that "the security is going to be there—why wouldn't it be?"
Erik: Society has a reasonable expectation of trust. For example, no one attacked me when I walked down the street earlier. When I had lunch, the chef didn’t poison me. I don’t normally worry about either of those risks. Companies that provide services to others should be taking reasonable steps to ensure that their services are secure, a practice that is commonly called “due diligence.” When companies don’t act responsibly, they quickly lose the trust of their customers. For companies looking to provide products or services, most everyone wants customers to trust them. Making security a priority is really just a way of retaining the trust that you build in your brand.
Security isn’t usually the first thing shown to customers. For example with my talk, I hope that people are interested in mobile application security. I’ll also promise that I won’t kidnap or kill any attendees, but that wouldn’t make a very good description of my talk.
Noel: The abstract for your session mentions the "knowledge gap" between hackers and developers. Many believe that the bad guys are always one step ahead of the good guys, and that this knowledge gap has always existed. What's the best way for developers to combat this, so they're not always playing catch-up, waiting for the next new attack before they know how to prevent it?
Erik: In many cases, the bad guys are ahead. In other cases, not so much. It’s often harder to defend something than attack it because when you attack, you only need to find one means of breach, but the defender needs to get everything right.
For predicting activities, there are a lot of techniques. First, the market is now providing opportunities for researchers to get paid for using knowledge to protect others. That’s helped manage what would have been significant 0-day attacks. Second, the best thing that individual developers can do is to learn good software architecture and design skills. Things that are well built tend to be easier to work with so that maybe a vulnerability is never introduced, or if a new attack is discovered, it’s clear to see where the appropriate patch should go.
There’s also very interesting work going on in regards to event detection and machine learning, which is outside a developer’s responsibility but relevant for companies’ board members. HP, for example, has done very interesting work combining products: ArcSight has a way of gathering lots of messages and logs from different sources, but the problem is that it often doesn’t know what certain files are. Suppose, for example, that someone is sending classified documents outside the company. The Autonomy IDOL engine can read the documents to understand what each one is and make a determination that, if the probability that something is classified is over .6 or so, we should stop the event and report every step that got us here.
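The decision rule Erik describes reduces to a simple threshold gate over a classifier’s score. The sketch below is generic, not the ArcSight or Autonomy IDOL API; the stand-in classifier and its scores are invented for illustration.

```java
// Generic sketch of threshold-based event blocking, not the ArcSight or
// Autonomy IDOL API: a classifier scores each outbound document, and any
// event scoring above the threshold is stopped for review.
public class EventGate {
    static final double THRESHOLD = 0.6;

    // Stand-in classifier with fixed scores. A real system would derive
    // this probability from the document's actual content.
    static double classifiedProbability(String document) {
        return document.contains("design-spec") ? 0.92 : 0.05;
    }

    static boolean shouldBlock(String document) {
        return classifiedProbability(document) > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldBlock("design-spec.pdf")); // true  -> stop and audit
        System.out.println(shouldBlock("lunch-menu.txt"));  // false -> allow
    }
}
```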
Product manager for HP’s Enterprise Security group, Erik Costlow is responsible for product strategy, working closely with customers as well as development, sales, and marketing teams. He has contributed to industry best practices including OpenSAMM. Previously, Erik worked as a software security consultant for Fortify Software (acquired by HP). His projects there included designing and leading a security static analysis project at a large financial services firm, designing a project plan to guide developers of externally-facing applications across three continents, and preparing for a 2013 implementation of twenty key application security controls affecting 15,000 developers globally, across seven functional lines of business.