I am exploring how I could/should use Content Security Policy (CSP) and Subresource Integrity (SRI) on my website. I'm looking to deploy a CSP something like this:
script-src 'strict-dynamic' 'nonce-RandomValue' 'unsafe-inline' https:;
NOTE: I followed the recommendations at https://www.websec.be/blog/cspstrictdynamic/, so don't give me a hard time about the 'unsafe-inline' directive or about not using whitelisted URIs.
My site also uses Subresource Integrity checks for all externally sourced scripts, so I am thinking that I should probably also include this CSP directive:
Content-Security-Policy: require-sri-for script
My site also contains some inline scripts (which is why I want to use a CSP random nonce, but that means I have to use it for all of them).
Does it make sense to use SRI as well as implementing CSP random nonce?
And if not, what is good practice?
Say I have the following webpage:
document.write('querystring=' + location.search.substr(1));
I open it at a URL like this:
In all browsers I tried (Chrome 57, Firefox 52 and Safari 10) the result is:
Because angle brackets (<>) are not valid URL characters, they seem to be automatically encoded by the browser.
This leads me to believe that simply rendering the query string directly on the client using document.write is safe, and not a possible XSS vector. (I realize that there are many other ways in which an app can be vulnerable, of course, but let's stick to the precise case described here.)
My question: Am I correct in my assumption? Is the encoding of unsafe characters in the URL in some way standardized or mandated across all reasonable browsers? Or, is this just a nicety / implementation detail of certain (modern?) clients on which I shouldn’t rely?
Not relevant to the question, but an interesting aside: if I decode the URI first with document.write(decodeURI(location.search.substr(1))); then browser behavior is different. The XSS Auditor in both Chrome and Safari blocks the page, while Firefox shows the alert.
So basically, I want to know how an attacker would try to steal, break, or destroy data on the server. What would they do to test the security of the app and the server? My main concern is that Meteor seems to require a little more attention to detail when securing it.
My understanding is that:
- Removing the insecure and autopublish packages
- Adding rules to deny updates for all collections (including and especially users)
- Using methods with client stubs and server side counterparts that check user is validated (and any other business rules)
should be all that is needed, but I wanted to check with you guys for my own sanity, and for the record, so anyone else out there who loves this framework but isn't 100% sure how to achieve server and data security can get an easy guide and peace of mind going in.
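To make the third bullet concrete, here is a framework-free sketch of the pattern I mean: a server-side method that refuses to run unless the caller is authenticated. (Meteor's actual API spells this with Meteor.methods and this.userId; the names here are illustrative.)

```javascript
// Sketch: wrap a method handler so unauthenticated callers are rejected
// before any business logic runs.
function secureMethod(handler) {
  return function (context, ...args) {
    if (!context.userId) {
      throw new Error('not-authorized'); // reject unauthenticated callers
    }
    return handler(context, ...args);
  };
}

// Hypothetical method: business-rule checks and the actual collection
// update would go where the comment is.
const updateProfile = secureMethod((context, newName) => {
  // ...validate newName, check ownership, then perform the update...
  return `updated ${context.userId} -> ${newName}`;
});
```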
Our product is running into an issue specific to Java 8. Java 6/7 runs fine.
We have a package of Java applets that multiple customers use, so the domain this package is deployed to is always different. The package is properly signed with the certificate from Verisign.
When the end user launches a page in the browser, the expected dialog with our application name and publisher appears, and asks the end user to accept the security warning. The end user accepts and clicks ‘Do not ask again’ and the page runs fine.
But sometimes the popup appears again, with the application name and publisher set to UNKNOWN. There does not seem to be any pattern to this: the applet package is confirmed to be signed correctly with a valid certificate from Verisign, yet it still occurs.
I recognize the initial popup is unavoidable, but all of these downstream popups, especially those where the application/publisher are UNKNOWN, don't make sense to me, and I'm not sure how to debug this further. The Java console trace logs do not show any more details.
Any ideas? Please feel free to ask for more detail if anything here is unclear.