Content Security Policy in Flask and Django part 1

What is CSP?

Content Security Policy is a defensive mechanism implemented in all major browsers that lets us whitelist the resources we want to allow on our website. If we do not explicitly allow a given domain, or do not whitelist inline scripts, the browser will simply refuse to load or execute them.

The entire mechanism is very straightforward: add the Content-Security-Policy header, enumerate the directives and sources you would like to allow, and you are done. Example directives are:

  • script-src - sets the policy for inline and external JavaScript,
  • style-src - the same for CSS,
  • default-src - a fallback applied to every resource type that has not been assigned a more specific policy (a few directives, such as base-uri, never fall back to it),
  • report-uri - the URI that violation reports should be sent to. The browser posts a JSON summary of each violation, which you can process later.

Of course, this list is much longer, and policies for base URIs, objects, images and so on can be set as well. Also keep in mind that once a policy for, say, CSS is found in style-src, the browser will not additionally match it against default-src; these values are not inherited. And, once again: when a directive is missing, most of them are given the default-src value.
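To make the header format concrete, below is a minimal sketch of sending such a policy from Flask. The directive values and the /csp-report endpoint are only illustrative assumptions; part two of the article covers framework integration in detail.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # An example policy: everything restricted to our own origin,
    # violations reported to a (hypothetical) /csp-report endpoint.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self'; "
        "style-src 'self'; "
        "report-uri /csp-report"
    )
    return response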

Content Security Policy - sources

When it comes to sources, the following options are available (among others):

  • self - allow the same origin only (scheme and host matter, so https://example.com and http://www.example.com are two different origins),
  • none - allow nothing,
  • a host, with or without a scheme or path (e.g. example.com, example.com/res/main.js, https://example.com),
  • unsafe-inline - allow inline scripts and styles,
  • nonce-<value> - a fresh “nonce” value generated on each request and attached to every element you want to allow. This requires strong cryptographic tokens,
  • <hash-algorithm>-<base64-value> - compute the SHA-256, SHA-384 or SHA-512 hash of each element you want to allow and encode it with base64.

Note that when using only the “self” source, inline scripts will still be blocked, since they are the most common way of injecting JavaScript code into a website. The unsafe-inline source may of course be used, but it is preferable to go with nonces or hashes.
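The hash-based source can be computed offline. Here is a quick sketch in Python; the inline script body is only an example, and the hash must be taken over the exact bytes placed between the script tags:

import base64
import hashlib

# Exact text between <script> and </script>, whitespace included.
inline_script = "console.log('CSP is configured correctly')"

digest = hashlib.sha256(inline_script.encode("utf-8")).digest()
print("'sha256-{}'".format(base64.b64encode(digest).decode()))
# The printed token can be pasted directly into the script-src directive.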

CSP also allows reporting and testing the policies we write. If you have a big project with lots of distributed resources and are afraid of unintentionally blocking access to some of them, making your website unusable, you may want to set the Content-Security-Policy-Report-Only header instead: violations are then only reported, not enforced.
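Whichever header you choose, something has to receive those reports. A hypothetical Flask endpoint collecting them (the /csp-report path and the logging are assumptions, matching the report-uri used in the earlier sketch) could look like this:

import logging

from flask import request

@app.route("/csp-report", methods=["POST"])
def csp_report():
    # Browsers send the violation report as JSON with the
    # application/csp-report content type, hence force=True.
    report = request.get_json(force=True, silent=True) or {}
    logging.warning("CSP violation: %s", report)
    return "", 204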

Attack scenario: exploiting XSS vulnerabilities in a web application

In our case, we have a website that allows users some customization. They can, for instance, adjust the page's look and behaviour by choosing which CSS and JavaScript files they would like to load. Of course, as responsible entrepreneurs, we limit this choice to options coming from trusted sources (like the ones created by our own developers) or use other techniques to provide safety. Otherwise we would risk, among other things, an injected keylogger stealing the data users type, or leaked session cookies allowing an attacker to log in as another person (even though no actual password ever leaks).

Additionally, one inline script is used that logs a message to the console. Its only purpose is to verify that we have defined our policies correctly: if it gets blocked, we know we have a configuration issue. Right now it works smoothly:

Choices are strictly limited and any attempt at tampering with them is stopped. If a user tries to type “<script>alert(1)</script>”, nothing will happen, since all HTML special characters are escaped and the resulting HTML will look like this:

<body class=&lt;script&gt;alert(1)&lt;/script&gt;>

<p id="main-paragraph">

   This site shows different possibilities regarding css styling. Type in one of the following options to change background:

</p>

That is, until we notice the lack of quotation marks around our input, which leads us to an idea: what will happen if we type “class1 onclick=alert(2)”? Bingo!
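The problem can be reproduced in isolation. The snippet below is a hypothetical reconstruction of the vulnerable template: autoescaping is on, yet because the attribute value is not quoted, a payload containing no HTML special characters slips through untouched:

from jinja2 import Environment

env = Environment(autoescape=True)
# The class attribute is rendered without surrounding quotes - that is the bug.
template = env.from_string("<body class={{ css_class }}>")

print(template.render(css_class="class1 onclick=alert(2)"))
# -> <body class=class1 onclick=alert(2)>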

As a side note: if we wanted to insert a string literal into this field (for example, in order to connect to an external server and upload stolen data), its quotes would also get escaped, leaving a potential attacker nowhere near causing any actual harm. But there are two magical functions: eval() and String.fromCharCode(). If we tried to display a “You’ve been pwned!” message directly, we would fail:

But if we encoded the malicious payload with two Python lines of code:

In [1]: s = '''alert(\"You've been pwned!\")'''

In [2]: print(','.join((str(ord(c)) for c in s)))
97,108,101,114,116,40,34,89,111,117,39,118,101,32,98,101,101,110,32,112,119,110,101,100,33,34,41

and inserted these numbers encapsulated in eval(String.fromCharCode()) instead:

we would end up being able to accomplish our task.
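Putting it all together, the whole value typed into the text field can be generated with a few more lines of Python (a sketch assuming the same unquoted-attribute injection point as before):

payload = '''alert("You've been pwned!")'''
char_codes = ','.join(str(ord(c)) for c in payload)

# The string submitted in the css_class field; the alert fires when the element is clicked.
injected_value = "class1 onclick=eval(String.fromCharCode({}))".format(char_codes)
print(injected_value)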

Let’s take a look at the dropdown, which allows us to load a JavaScript file that changes the look of the first paragraph:

This HTML code will be returned by the server when we type “class2” into the text field and select option “2” from the dropdown. As can be seen, a relative URL to the script is rendered:

<form method="POST" action="/" id="mainForm">

<input type="text" name="css_class"> <br/>

<select name="additional_script">

<option value="">No additional scripts</option>

<option value="/static/js/additional1.js">1</option>

<option value="/static/js/additional2.js">2</option>

</select><br/>

<input type="submit" name="submit"><br/>

</form>

<script src="/static/js/additional2.js"></script>

And what if we somehow change the URL? Let’s write a script that shows another alert box, post it on Pastebin, and then manipulate the choices list, for instance by tampering with the submitted option value.

This was possible because we were able to change the link to point to an external, untrusted source.

<form method="POST" action="/" id="mainForm">

<input type="text" name="css_class"> <br/>

<select name="additional_script">

<option value="">No additional scripts</option>

<option value="/static/js/additional1.js">1</option>

<option value="/static/js/additional2.js">2</option>

</select><br/>

<input type="submit" name="submit"><br/>

</form>

<script src="https://pastebin.com/raw/R570EE00"></script>

Two inputs and two vulnerabilities. This does not look promising. Of course, we have to take measures to deal with these problems, but what if the same developer, most probably unaware of the threats, fails again in the future? Or an attacker finds another way to inject malicious code into our website? Can we take a more global approach? Check the second part of the article, which shows how to implement CSP using the Python frameworks Flask and Django (coming next week). See also django-trench, our own open-source library providing multi-factor authentication for Django.

Note: this is not a technical manual, though; if you wish to check how to apply CSP in your particular use case, please look at the reference links. The correctness of the implementation details should be verified by a professional pentester or security consultant.

If you’re interested in raising your programming skills in the security area, why not do it with our backend team? Here are our job offers for backend developers!

References

English:

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP 

https://www.w3.org/TR/CSP2/ 

https://caniuse.com/   

https://www.blackhat.com/docs/us-17/thursday/us-17-Lekies-Dont-Trust-The-DOM-Bypassing-XSS-Mitigations-Via-Script-Gadgets.pdf 

Polish:

https://sekurak.pl/wszystko-o-csp-2-0-content-security-policy-jako-uniwersalny-straznik-bezpieczenstwa-aplikacji-webowej/ 

https://sekurak.pl/xss-w-google-colaboratory-obejscie-content-security-policy/ 

 
