A report released by McAfee has put a number on the size of the financial problem from web application abuse - in this case it’s called ‘Web 2.0 breaches’. And that number is $1.1 billion. Over 60% of respondents reported losses of $2 million from their business. Now that is a significant problem. The interesting fact is that 79% of respondents have increased firewall protection since introducing Web applications into their business. It looks like all that firewall protection is missing $2 million worth of abuse. How big does this financial impact need to be before business people start questioning the nature of security around web applications? And the final interesting part is that only 40% of businesses had budget allocated to securing Web 2.0 applications. This is not yet fully understood as a business problem, and its scale may be much larger. It’s not often that you get to wonder whether $1.1 billion in losses is just the tip of the iceberg.
We all have an idea of which sites we think are safe: Twitter, Facebook, LinkedIn – they’re safe. We expect to be able to visit them without risking infection. Of course, if you visit porn or piracy sites, you get what you get. These kinds of incidents redraw those lines. They make users re-think which sites they trust.
This week’s incident was definitely a wake-up call, especially for the really large online properties like Twitter. And it’s a big responsibility – if Twitter becomes a hub for malware, that’s going to hurt a lot of people. They obviously know their stuff, and have implemented filters against obvious XSS attacks. But of course, those filters are going to fail – you have thousands of smart people out there probing your code as much as they please. Once they figure out how the filters work, they can craft an attack around them. That’s what happened here.
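To see why blacklist-style filters are so fragile, here is a minimal sketch (not Twitter’s actual filter, which isn’t public) of a filter that strips `<script>` tags in a single pass – and a crafted payload where removing the inner tag simply reassembles the outer one:

```python
import re

def naive_filter(text):
    # Strip <script> and </script> tags in one pass -- a simplified
    # stand-in for a blacklist filter, not any real site's code
    return re.sub(r'</?script>', '', text, flags=re.IGNORECASE)

# Crafted payload: deleting the embedded tags reconstructs a live one
payload = '<scr<script>ipt>alert(1)</scr</script>ipt>'
filtered = naive_filter(payload)
print(filtered)  # '<script>alert(1)</script>' -- the filter is defeated
```

This is exactly the cat-and-mouse dynamic described above: once attackers learn the filter strips in one pass, they shape input so the stripping itself produces the attack. Output encoding at render time, rather than input blacklisting, is the more robust defense.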
It points to the fact that Twitter and other large online properties need better approaches to preventing this kind of abuse. That vulnerability was hard to find – someone had to probe that input over and over again to figure out precisely what it would do under very specific circumstances. They may even have written a program to do it for them. Could Twitter get an alert on that kind of behavior, and manage it?
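That kind of repeated probing has a recognizable shape: many malformed or filter-tripping requests from one client in a short window. A minimal sketch of how a site could alert on it (the window and threshold values are illustrative assumptions, not tuned figures):

```python
from collections import defaultdict, deque
import time

class ProbeDetector:
    """Flag a client that sends many suspicious requests in a sliding window."""

    def __init__(self, window=60.0, threshold=20):
        self.window = window        # seconds -- assumed value for illustration
        self.threshold = threshold  # suspicious requests before alerting
        self.events = defaultdict(deque)

    def record_suspicious(self, client_ip, now=None):
        """Record one filtered/malformed request; return True if an alert fires."""
        now = now if now is not None else time.time()
        q = self.events[client_ip]
        q.append(now)
        # Drop events that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

det = ProbeDetector(window=60, threshold=5)
alerts = [det.record_suspicious("10.0.0.1", now=t) for t in range(5)]
print(alerts)  # first four False, fifth True
```

An alert like this won’t stop a single clever request, but it can surface the days of trial-and-error that precede one.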
The Twitter attack this week could have been a lot worse. A more destructive payload could have caused serious problems for regular users worldwide. That is why it’s imperative that companies look at ways to protect web applications from hackers who are increasingly trying to find holes.
It’s the latest in a wave of automated SQL injection attacks that compromise Web site databases and inject a hidden iframe into Web pages – the iframe loads malware from a third party domain, compromising the sites’ users.
Secure development experts cluck at attacks like this – SQL injection takes advantage of insufficient input validation in application code, a well understood developer error. It can be stamped out over time, but it’s still a big problem today. These attackers likely found a SQL injection vulnerability in a commonly used piece of server software, tailored a single-request attack for it, and sucker-punched millions of sites at once to see which ones would fall. It’s hard to defend against that.
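The “well understood developer error” and its fix are worth making concrete. Below is a minimal sketch using Python’s sqlite3 and a made-up `users` table: the first query concatenates user input into SQL (vulnerable), the second passes it as a bound parameter, which the driver treats strictly as data:

```python
import sqlite3

# Toy in-memory database for illustration only
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized: the input can never change the query's structure
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

injection = "x' OR '1'='1"
print(find_user_unsafe(injection))  # every row comes back -- injection worked
print(find_user_safe(injection))    # [] -- treated as a literal (nonexistent) name
```

Parameterized queries are why the error “can be stamped out over time” – but only in code that teams actually get to rewrite, which is what makes mass attacks against common server software so effective.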
It’s also interesting that input filtering at the gateway couldn’t really block this. The SQL was heavily encoded. Signature-based firewalls can’t reliably block this kind of malicious request without blocking many valid requests as well.
So what can be done? Lots. First, a gateway can be a lot more sophisticated about how it checks application input. A broad signature match may not be enough evidence to block a request, but it is a clear indicator that the request is suspicious. Delaying the request and performing additional analysis probably makes sense. It may slow down a small set of valid (but unusual) requests, but it will do a much better job of identifying an injection attack reliably.
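A sketch of that two-stage idea, with made-up pattern lists (real rule sets would be far larger): a cheap broad match flags the request as suspicious, then the gateway spends extra effort peeling off URL-encoding and re-checking against stricter patterns before deciding:

```python
import re
from urllib.parse import unquote

# Hypothetical pattern lists for illustration -- not a production rule set
BROAD = re.compile(r"(%27|%3D|declare|cast|exec)", re.IGNORECASE)
STRICT = re.compile(r"(union\s+select|declare\s+@|cast\s*\(|exec\s*\()",
                    re.IGNORECASE)

def inspect(request_body):
    """Return 'allow', or 'block' after deeper analysis of a suspicious request."""
    if not BROAD.search(request_body):
        return "allow"              # cheap first pass: nothing suspicious
    decoded = request_body
    for _ in range(3):              # peel off layers of URL-encoding
        nxt = unquote(decoded)
        if nxt == decoded:
            break
        decoded = nxt
    # Only block when the decoded form matches a strict injection pattern
    return "block" if STRICT.search(decoded) else "allow"

print(inspect("id=42"))                                  # allow
print(inspect("id=1%3BDECLARE%20%40s%20varchar(4000)"))  # block
```

The point is the shape of the decision, not these particular regexes: the broad match buys suspicion cheaply, and the expensive decoding and analysis is spent only on the small fraction of traffic that earns it.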
Second, SQL injection isn’t really the end, just the means. If you can’t always prevent a SQL injection attack, you can detect that one has taken place and respond to it quickly. The sudden existence of an iframe linking to an unknown domain in your pages is something you want to know about right away. You also probably want to strip it out until you can learn more. A gateway that looks at HTTP responses in the right context can provide that visibility.
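A minimal sketch of that response-side check, assuming the site maintains a whitelist of domains it legitimately embeds (the domains and regex here are illustrative; a production gateway would use a real HTML parser):

```python
import re
from urllib.parse import urlparse

# Assumed whitelist of domains this site legitimately embeds
TRUSTED = {"example.com", "cdn.example.com"}

IFRAME_SRC = re.compile(r'<iframe[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def scan_response(html):
    """Return iframe sources that point at domains outside the whitelist."""
    rogue = []
    for src in IFRAME_SRC.findall(html):
        host = urlparse(src).hostname or ""
        if host and host not in TRUSTED:
            rogue.append(src)
    return rogue

page = ('<html><iframe src="http://evil.example.net/m.js" '
        'width="0" height="0"></iframe></html>')
print(scan_response(page))  # ['http://evil.example.net/m.js']
```

A gateway running a check like this on outbound pages can both raise the alert and strip the rogue frame while the injection itself is still being investigated.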
Kelly Jackson Higgins from Dark Reading wrote an interesting article titled “Accepting The Inevitability Of Attack”.
The crucial idea that security involves three different components – prevention, detection and response – is important for understanding next-generation security, and particularly Web application abuse. Traditional security methods have focused primarily on prevention: implementing secure development lifecycles, pre- and post-development code scans, and blocking traffic using Web Application Firewalls. But what of detection and response? Detecting a malicious user of your web application in real time, before the damage is done, is more valuable to many of today’s online companies. And how valuable is a response to that malicious user that protects the business and makes sure a future attack doesn’t affect a normal user? It’s not just companies that are affected by Web abuse – normal paying users of the site are also affected by poor performance of Web applications.