The security implications of jQuery are not terribly exciting, but they are commonly misunderstood and can have a large impact. In many situations, we see this costing significant time and money for larger organizations; jQuery vulnerabilities are often raised with risk blown out of proportion and are a common source of disagreement between penetration testers and developers.
It’s worth noting that almost all jQuery security issues surround functions that were so commonly misused that the jQuery team modified their behavior to protect developers. Although these changes have been widely interpreted as bug fixes, it can easily be argued that the vulnerabilities in question are nothing more than developer error. It is our hope that this article can be used by organizations to better assess the risk of common jQuery security issues.
$() is identical to the jQuery() function and is its most commonly written form. It returns a jQuery object: essentially a chunk of content to be written to the DOM.
In most use cases, a jQuery function takes a selector, element, or object as a parameter. A selector prefixed with a hash (#) identifies existing HTML content in the current DOM by its ID. In the following example we use the jQuery html() function to modify the element matched by the #myDivTag selector:
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<div id="myDivTag">My old div tag text!</div>
<script>
$( "#myDivTag" ).html("<b>My new div tag text!</b>");
</script>
Notice that "My old div tag text!" does not show. jQuery modifies the DOM at runtime to replace the text of our div element.
This capability is not new. Before jQuery, the above code would have been written similarly to the example below:
<div id="myDivTag">My div tag text!</div>
<script>
document.getElementById("myDivTag").innerHTML = "My new div tag text!";
</script>
As you can see, the jQuery function is similar to the getElementById() function. But there is an important difference: jQuery accepts more than just a selector ID; it will also accept HTML and script content. An example of this is shown below:
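The original example is not shown, so here is a minimal reconstruction (the markup is our own): when $() is given an HTML string instead of a selector, jQuery parses it into DOM nodes, which can then be appended to the page:

```html
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<script>
// Instead of a selector, $() receives an HTML string.
// jQuery parses it into a new element rather than querying the document.
$("<b>Hello world!</b>").appendTo("body");
</script>
```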
Our ‘Hello world!’ is now bound to the browser DOM and visible on screen.
Application pentesters may already see the attack vectors in the examples above; however, there is one point we cannot stress enough: jQuery() and some element-specific sub-functions are execution sinks. Penetration testers must evaluate the sources of data consumed by jQuery functions and determine if and how they are bound to the DOM. For many applications this can be an extremely time-consuming manual process. Fortunately, there is some help from tools like Burp Suite Professional’s passive scanner, which will recognize simple occurrences of certain DOM properties placed within a jQuery function (example: $(location.hash)). The unfortunate part is that more complex instances of DOM XSS cannot be reliably detected with automated methods.
If you arrived at this page today because a vulnerability titled “jQuery XSS Vulnerability” was raised on a pentest report, you’re not alone. At the time of this writing there are no known direct XSS vulnerabilities in the jQuery framework (not including jQuery plugins). Unfortunately, it is extremely common for jQuery’s defensive behavior changes to be interpreted as fixes for bugs in the library itself.
Let’s take a closer look at the behavior change that has caused so many headaches. Below is an example of the most common vulnerable code:
<html><body>
<script src="https://code.jquery.com/jquery-1.6.1.js"></script>
<script>
$(window.location.hash).appendTo("body");
</script>
</body></html>
On this page we can introduce arbitrary script directly into the browser DOM; this even bypasses Chrome’s XSS Auditor.
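As a hypothetical illustration (the host name is a placeholder), an attacker would deliver a URL with a crafted fragment; jQuery 1.6.1 parses the fragment as HTML, and the injected element’s event handler fires when it is appended to the body:

```
http://victim.example.com/page.html#<img src=x onerror=alert(1)>
```

Note that window.location.hash includes the leading ‘#’ character, which is relevant to the behavior change discussed next.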
This XSS vector was so common that jQuery eventually changed its selector handling to prevent such attacks. A change was soon put in place to block HTML strings starting with a ‘#’ character. This requirement defeats XSS vectors sourced from the window.location.hash property, as its content always starts with a hash.
In the following example, using jQuery 1.6.1, an XSS bug is simulated by passing script that begins with a # character, as it would when being consumed from the location.hash property:
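The simulated test is not shown above, so here is a reconstruction (our own markup) of what it would look like:

```html
<script src="https://code.jquery.com/jquery-1.6.1.js"></script>
<script>
// Simulates content sourced from location.hash: the string starts with
// a '#' but also contains an HTML tag carrying a script-bearing attribute.
$("#<img src=x onerror=alert(1)>").appendTo("body");
</script>
```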
The code successfully executes.
In the example below we upgrade jQuery to 1.6.3 and run the same code:
The code no longer runs because the string starts with a # character. Not long after this change, an additional behavior change was made to further fine-tune jQuery’s HTML detection: in version 1.9.0b1 it became mandatory for HTML content to start with a < character. The discussion can be found here.
The jQuery ajax $.get() function (not to be confused with the .get() function) is used to make, as you might have guessed, ajax GET requests. It was found that versions prior to 1.12.0 would automatically evaluate response content, potentially executing any script contained in a response.
Unlike the selector handling issue described above, we believe this behavior should be considered dangerous and potentially unexpected even to savvy developers. The important caveat to that statement is that the scenarios in which this issue may manifest are far less likely than those of the previous issue.
This behavior may facilitate two potential vulnerabilities in an application.
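A minimal sketch of the risky pattern is shown below (the endpoint and its attacker influence are hypothetical):

```html
<script src="https://code.jquery.com/jquery-1.11.0.js"></script>
<script>
// Hypothetical: 'endpoint' stands in for any URL that is attacker-influenced,
// or whose response an attacker can tamper with. In jQuery versions prior to
// 1.12.0, if the response is served with a script MIME type (e.g.
// text/javascript), $.get() will evaluate it as code automatically.
var endpoint = "/api/feed";
$.get(endpoint, function (data) {
  // A malicious script response has already executed by this point.
});
</script>
```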
Like almost all modern software, jQuery aims to be powerful and versatile. There are countless safe and legitimate functions which can contribute to security vulnerabilities when misused. The jQuery issues described here were all a result of software which functioned as designed but was implemented improperly.
We see a lot of confusion regarding the X-XSS-Protection header and thought it might be worthwhile to go over exactly what this header is and what it isn’t.
X-XSS-Protection: 1; mode=block
X-XSS-Protection: 1; report=http://example.com/your_report_URI
XSS Auditor isn’t a solution to XSS attacks. As Justin Schuh of Google mentions, “XSS auditor is a defense-in-depth mechanism to protect our users against some common XSS vulnerabilities in web sites. We know for a fact it can’t catch all possible XSS variants, and those it does catch still need to be fixed on the affected site. So, the auditor is really an additional safety-net for our users, but not intended as a strong security mechanism.” Because of this, XSS Auditor bypasses are rated as ‘SecSeverity-None’ and, if you were wondering, are not eligible for bug bounty payments.
XSS Auditor takes a blacklist approach to identify dangerous characters and tags supplied in request parameters. It also attempts to match query parameters with response content to identify injection points. If a query parameter can’t be matched to content in the response, the auditor will not be triggered. Because the browser will never have insight into server-side code, an application that mangles an XSS payload before reflecting it will always render the XSS Auditor useless in preventing attacks.
Taking a quick look at the code behind Chrome’s XSS Auditor gives an idea of the inner workings of its detection mechanisms:
Just by looking through the function names we can see that the auditor searches for script tags, valid HTML attributes, and other XSS injection vectors. Before rendering the response in the Document Object Model presented to the user, XSS Auditor searches for instances of (malicious) parameters sent in the original request. If a detection is positive, the auditor is triggered and the response is “rewritten” to a non-executable state in the browser DOM. Chrome’s ‘view-source’ has a built-in component that highlights in red the sections of code that caused the XSS Auditor to fire.
A bypass of XSS Auditor should not be considered a vulnerability. While the Chromium team does actively improve the auditor, there are likely to always be a number of bypasses for it. We will not go in depth on specific bypasses, as they change over time and are likely to become outdated quickly. At the time of this writing, two examples that are functional in the latest version of Chrome can be found here and here.
There is one trend that has remained consistent across the internet over the last twenty years: attacks have become more sophisticated, more common, and more malicious every year. In 2013, the Cryptolocker virus became one of the first tools used by criminal organizations to extort money from victims on a mass scale. When the malware infected a machine, document files were encrypted with a unique public key; the private key was maintained on remote servers, leaving victims with no way to decrypt their data without paying the ransom.
Organized cybercrime is a massive industry of its own, with its own struggles of saturation, technical advancement, and economic pressure like any other industry. As more career criminals enter the industry, criminal hackers must try harder to make the most profit from every attack. What we’re seeing now are criminals looking to monetize breaches by extorting their victims.
To date, Cryptolocker has compromised almost a quarter million computers and has fetched over $27,000,000 in ransom payouts. Not only does this allow criminals to reinvest substantial funds into newer and more advanced attacks, it sets a precedent for other would-be criminals looking to profit.
In June 2014, Code Spaces was notified by an unknown attacker that they had gained access to the company’s Amazon EC2 admin tools. Along with the communication came a demand for a large sum of money. When Code Spaces did not deliver the payment, the attacker wiped all data backups, virtual servers, and live virtual machines. Code Spaces was forced to close its doors and cease all business operations.
Also in June 2014, Domino’s systems in France and Belgium were compromised. The group Rex Mundi claimed responsibility and threatened to publish the stolen data of Domino’s customers unless a ransom of $40,000 was paid. Domino’s announced it had no intention of paying, and at the time of this writing there is no public resolution available.
We often hear IT staff dismiss potential threats because their data would not be useful to an attacker; people often ask, “Why would an attacker be after this information?” Extortion is becoming a more popular way to monetize data regardless of its direct usefulness to the attacker. Chances are, if the data is valuable to you, it is now inherently valuable to the attacker as well.
Want to go to AppSec USA for FREE? We are giving away a FULL conference pass to AppSec USA this week in New York City. This is open to all security professionals, so please send us your LinkedIn if you win. The winner will be announced on our Twitter tomorrow, November 19th, at 12pm EDT.
Step 1: Retweet the contest announcement
Can be found at: https://twitter.com/VirtueSecurity
Step 2: Tweet us your best bad joke
We love bad jokes, and we like technical jokes too. Be creative and good luck! We will grab an open mic at the conference and read the best jokes we receive.
CSP in Ethical Hacking
Many organizations rely on vulnerability assessments to provide a complete security review of their applications. In many cases, we (security professionals) are the only link between the W3C security communities and real-world deployment of the technology. While raising an issue on an assessment because of a non-existent policy may not be appropriate, a note suggesting the application could benefit from one will often be well received.
Inevitably, security professionals should also expect situations where CSP is used in an attempt to mitigate or lower the risk of XSS vulnerabilities. Many application owners may be tempted to use CSP in place of a costly code change. At the time of this writing, it is our strong opinion that CSP is not a strong enough control to mitigate the risk of XSS or any other arbitrary content loading vulnerability.
The policy is delivered to the user via a ‘Content-Security-Policy’ header; CSP 1.1 also adds an experimental meta tag delivery method. Below is an example CSP header:
Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com
This header also takes two other forms: X-Content-Security-Policy and X-WebKit-CSP. As browsers mature, the ‘X-’ prefixed and WebKit variants will be deprecated. For the best possible support, it is recommended a policy be delivered with all three headers. Below is an ideal response using all three variations:
Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com
X-Content-Security-Policy: default-src 'self'; script-src 'self' cdn.example.com
X-WebKit-CSP: default-src 'self'; script-src 'self' cdn.example.com
Ethical hacking professionals should be aware that if CSP is in use by an application but is not delivered on particular pages, this likely indicates an oversight by application developers and should be raised as an issue. CSP is effective on a per-page basis, so it cannot prevent an XSS vulnerability if the header is not delivered on the vulnerable page.
This article will not cover all CSP directives, but we will cover some new and important features which impact many applications.
Script nonces are one such addition: an inline script is only allowed to execute if its nonce attribute matches a value declared in the policy:

Content-Security-Policy: default-src 'self'; script-src 'nonce-Nc3n83cnSAd3wc3Sasdfn939hc3'

[…]

<script nonce="Nc3n83cnSAd3wc3Sasdfn939hc3">
alert("Allowed because nonce is valid.")
</script>
connect-src – Controls where WebSockets, XMLHttpRequests, and Server-Sent Events can connect. This can mitigate parameter tampering vulnerabilities where these connection targets are generated dynamically.
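For illustration, a hypothetical policy (the host name is a placeholder) restricting these connection types to the application’s own origin and a named API host might look like:

```
Content-Security-Policy: connect-src 'self' api.example.com
```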
reflected-xss (experimental) – This serves as a direct replacement for the X-XSS-Protection header.
reflected-xss allow
reflected-xss filter
reflected-xss block
Has the following equivalents:
X-XSS-Protection: 0
X-XSS-Protection: 1
X-XSS-Protection: 1; mode=block
Chrome – As of version 25, Chrome includes full support for CSP, as well as a mandatory subset of features imposed on extensions. Mobile versions also include full unprefixed support.
Mozilla Firefox – The experimental header has been supported since version 4, however, version 23 includes full unprefixed support. Mobile versions support X-Content-Security-Policy only.
Safari – CSP is supported through the X-Webkit-CSP header.
Internet Explorer – IE 10 supports a (very) limited subset of CSP via the X-Content-Security-Policy header.
CSP 1.1 introduces reporting capabilities. When a violation of your policy occurs, the user’s web browser will send the violation details in JSON format to a destination of your choosing. It should be understood that this does open the door to new abuse cases and should be used with the same caution as any other functional component of your application.
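As a sketch of what to expect (the URLs and values here are illustrative), a violation report posted by the browser looks similar to the following:

```
{
  "csp-report": {
    "document-uri": "http://example.com/page.html",
    "referrer": "",
    "blocked-uri": "http://evil.example.net/script.js",
    "violated-directive": "script-src 'self'",
    "original-policy": "default-src 'self'; report-uri http://example.com/your_report_URI"
  }
}
```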
CSP can also operate in “report only” mode, where policies are not enforced but violation reports are still sent to you. This can be very useful for testing a policy before deployment, since it can be difficult to determine how CSP will affect a large application. To use CSP in this mode, the policy should be delivered via the Content-Security-Policy-Report-Only header:

Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://example.com/your_report_URI
Policies are often best generated by hand, but a generation tool will give you something to start with. Mal Curtis has a very useful CSP generation tool that can help you quickly draft a policy.
Up to date details for browser support – http://caniuse.com/contentsecuritypolicy
HTML5 Rocks Tutorial – http://www.html5rocks.com/en/tutorials/security/content-security-policy/
Full CSP 1.1 specification working draft – https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html