Cross-Site Request Forgery is dead (scotthelme.co.uk)
320 points by edward on Feb 20, 2017 | 76 comments



The "SameSite" cookie parameter is only supported in Chrome, but you can vote for it in other browsers' issue trackers.

Firefox has an open bug https://bugzilla.mozilla.org/show_bug.cgi?id=795346

Microsoft does, too https://wpdev.uservoice.com/forums/257854-microsoft-edge-dev...

And so does WebKit https://bugs.webkit.org/show_bug.cgi?id=159464 WebKit allows voting, too, by filing duplicate issues in Apple's private "radar" issue tracker.

Which is to say, the WebKit bug is already filed in Radar as rdar://problem/27196358

Apple has said publicly that if you want to "vote" for a given Radar issue, you should file duplicates for that Radar. (I find that weird, but that's the way they do it.) To do that, go here: https://bugreport.apple.com/

You can copy and paste the data from OpenRadar, a community tool where people share Radar issues that they want people to be able to search for and/or duplicate. https://openradar.appspot.com/radar?id=4963174633701376

Be sure to mention in the bug description that you're filing a duplicate of rdar://problem/27196358.

EDIT: And while you're in there voting for browser security features, consider voting for Subresource Integrity on Apple WebKit and Microsoft Edge.

https://openradar.appspot.com/radar?id=4980317458792448

https://wpdev.uservoice.com/forums/257854-microsoft-edge-dev...


> Apple has said publicly that if you want to "vote" for a given Radar issue, you should file duplicates for that Radar. (I find that weird, but that's the way they do it.)

Given that Radar is a private store with a public write-only channel (bug report submissions), the only way it could work for non-Apple-employees to vote for something is to request that they describe it again themselves and then merge all the duplicates on the Apple-private side.

Not saying that Radar being private is not itself kind of weird, but the submission policy necessarily follows from that.


Radar sucks, yes, but that's why Open Radar got created: to help coordinate exactly this kind of thing.

https://openradar.appspot.com/page/1


Seems like "The Cross-Site Request Forgery killer" might be a better title. It sounds like I can probably stop using anti-csrf tokens 8-10 years from now. I still support IE8.


Not supported in most browsers, contrary to what the headline suggests: http://caniuse.com/#feat=same-site-cookie-attribute


Even worse, if a browser doesn't support it I guess it will default to ignoring the policy!


I think this is a smart enough idea that most of the browsers will implement it soon, and since most of the browsers that matter are now evergreen, most people will have it eventually.


Depends what he means by "most". Depending on what source you look at, Chrome has the majority market share, so even if it's the only one to implement it, it still counts as "most".


most : the majority of, nearly all of.

"most browsers" would become "the majority of browsers". If there were 5 browsers, "the majority of 5 browsers" would be "3 browsers or more". It doesn't make any difference which ones are most used, the sentence (apparently now changed or removed) wasn't "most of the requests will be protected", or even "most users will be protected"


This is the same "type vs instance" issue that plagues discussions about copyright. In the sense of how effective something is at preventing errors, "most browsers" seems to map most reasonably to "most browser instances", rather than "most types of browser software".


At the same time, though, you can pretty easily make the argument that the majority of browsers is Chrome. After all, if there were five internet browsers and four of them had one user each, it would be a bit silly to say "Well, most browsers haven't implemented this feature".


But if "the majority of browsers" means "Chrome" (following your reasoning) it would be more than a little silly to ever use that comparatively convoluted phrase. You would just say "Chrome".


It's this kind of mentality that led to 'This site works best in Internet Explorer'.


Do you mean innovation and driving forward the web? How else do things improve?


The problem is that the sites that carried that moniker tended to be old, badly written and ignorant of the actual web standards. Firefox's tracker suggests that standardization work on the feature is ongoing. So if you implement it now, you're going to be stuck with a Chrome-only version that may or may not end up being the standard. (Usually it does, because Chrome's market share beats objections over its not-well-thought-out features, aka the "IE effect".)


Most browsers != the most frequently used browser in the context of an audience that develops software for a living.


In the US, sure: https://clicky.com/marketshare/us/web-browsers/

In Germany, nope: https://clicky.com/marketshare/de/web-browsers/

Germans sure love their Firefox. Now, that may not matter for you depending on what market you're targeting but not everyone is as lucky.


Chrome, aka the IE of 2017!


Clickjacking is dead, because we have X-Frame-Options! (said no one ever).

An opt-in solution is not a solution. But it's still useful.


XFO is a pretty solid response to "clickjacking", which is nowhere near as prevalent as CSRF as a result. If you can trivially mitigate a vulnerability from a single location in your code, that vulnerability doesn't have much life left in it.


Hm, okay, except 90% of my clients still have it :D


Because they don't enable XFO? Do they think they need arbitrary frames?


Nope, because they, their frameworks, or something in between don't set XFO. Which proves it is ineffective, even after so many years.


What framework support are we talking about? Isn't the beauty of XFO as a countermeasure that it doesn't require framework support? Unlike CSRF protection, you could add XFO in an nginx conf if you really wanted.


Last time I checked, Express.

It is trivial to add, as long as you remember to do it.

Actually, it's silly that nginx and Apache don't send those headers by default.
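For reference, a minimal sketch of what adding it by hand looks like in an Express app (TypeScript, illustrative only; the route and port are made up):

    import express from "express";

    const app = express();

    // Send X-Frame-Options on every response so browsers refuse to render
    // the site inside a frame on another origin.
    app.use((_req, res, next) => {
      res.setHeader("X-Frame-Options", "DENY"); // or "SAMEORIGIN" if you frame your own pages
      next();
    });

    app.get("/", (_req, res) => {
      res.send("not frameable");
    });

    app.listen(3000);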


And SQL injection is dead because we have prepared statements!

Yeah, the headline is certainly exaggerating quite a bit.


Other comments have mentioned the obvious issue that you can't use this feature across multiple browsers. So you still need CSRF tokens, and the title is just wrong.

He also mentions checking the origin/referrer header. I would strongly recommend against this strategy; as he says, it doesn't work everywhere. Specifically, regular form submissions will not include the origin header in most browsers, and the referrer header is simply not reliable.

More importantly, using multiple strategies for CSRF protection is bad. You need to fall back on tokens anyway, so the "check origin first" method is basically just an extra bypass for attackers to abuse. Two checks in this case are significantly worse than one, because if either is broken you are insecure.
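For anyone unfamiliar with the token approach being recommended here, a rough per-session token sketch in Express/TypeScript (the names and the session lookup are simplified stand-ins, not any framework's real API):

    import crypto from "crypto";
    import express from "express";

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // In a real app the token lives in the server-side session store;
    // a Map keyed by session id stands in for that here.
    const tokensBySession = new Map<string, string>();

    function issueToken(sessionId: string): string {
      const token = crypto.randomBytes(32).toString("hex");
      tokensBySession.set(sessionId, token);
      return token; // embed this in a hidden <input> in every form you render
    }

    function verifyToken(sessionId: string, submitted: string): boolean {
      const expected = tokensBySession.get(sessionId);
      if (!expected || expected.length !== submitted.length) return false;
      // Constant-time comparison so the check itself doesn't leak the token.
      return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(submitted));
    }

    app.post("/settings", (req, res) => {
      const sessionId = req.header("cookie") ?? ""; // stand-in for a real session lookup
      if (!verifyToken(sessionId, String(req.body.csrf_token ?? ""))) {
        res.status(403).send("CSRF check failed");
        return;
      }
      res.send("saved");
    });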


Sounds like a great tool. But saying "CSRF is dead" is a sure sign you aren't taking security problems seriously enough. The post itself describes how the feature has built-in self-weakening features. So CSRF is dead... so long as you use this feature on all appropriate cookies, work around it only sparingly, and keep in mind the very common use cases where it badly breaks expected behavior in a way that will encourage workarounds that reinstate the CSRF risk. But totally dead.


If I understand correctly: if the user is not using a browser that respects this new policy, an attack can still occur. Even if 99% of my users have a browser that supports SameSite cookies, 1% of the users are still vulnerable. A banking app cannot afford that kind of risk.

So CSRF is not dead after all.

edit: Typo


I don't know, at some point not supporting this feature would be equivalent to running a browser with a flawed TLS implementation. The burden is on the user to keep their software up to date.


Yep.

> at some point

In the meantime, I think it's better to mitigate this kind of danger on the server side.


If your app is a Single Page App and it doesn't rely on HTTP/REST (e.g. it uses WebSockets instead), then you can easily use localStorage instead of cookies for holding session IDs/JWTs on the client side (and add your own logic to pass them to the server). That's another way to sidestep the issue, and it works in all modern browsers.

It makes a case for companies to use WebSocket-based APIs.

Also, localStorage works better on mobile when using frameworks like Cordova, React Native, PhoneGap and others. That is because your .html files are usually sitting on the mobile device itself, so the local domain for the file won't match the one where your REST API/backend is hosted, which means cookies won't be sent and don't work.
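A rough browser-side sketch of that pattern (TypeScript; the endpoints and message shape are made up):

    // Suppose the login endpoint returns a JWT in its JSON body rather than a Set-Cookie header.
    async function login(username: string, password: string): Promise<void> {
      const res = await fetch("https://api.example.com/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ username, password }),
      });
      const { token } = await res.json();
      // We manage the credential ourselves instead of letting the browser
      // attach it automatically, which is what takes CSRF off the table.
      localStorage.setItem("authToken", token);
    }

    // For a WebSocket API, authenticate the connection explicitly with the stored token.
    function connect(): WebSocket {
      const socket = new WebSocket("wss://api.example.com/ws");
      socket.onopen = () => {
        socket.send(JSON.stringify({ type: "auth", token: localStorage.getItem("authToken") }));
      };
      return socket;
    }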


The big issue with using localStorage for authentication token storage is that values are accessible by any JS from the same origin as the JS that sets the value. Example: I bundle my JS and its dependencies into a single file (bundle.js) using webpack. One of the dependencies in the bundle has malicious JS that sends values in localStorage to a remote server, or uses the authentication token to make requests impersonating the user.


If you're shipping malicious JS on your origin, you're fucked anyways.


Exactly, they can still make requests to your back end by hijacking your session with your cookie, even if you use the httpOnly flag.

The only advantage of the cookie (with httpOnly) in this scenario is that the malicious code can't access your session ID and use it for later (but they can still hijack the session in-place without knowing what your session id is)...

Since sessions expire anyway, there is a sense of urgency; because of this, an effective XSS attack would typically be carried out in-place on the page (while the session is active). So in practice, there is very little added security value in the cookie approach.

In my opinion, XSS mitigation is the last barrier of defence.


Isn't that true for cookies as well?


No, you can set cookies to HTTP-only, so JavaScript can't access them. JS injected into a domain can still do things with your permissions for that domain by making more requests to the target domain that have the cookie attached, but at the moment that's basically how the Web security model works, so that is in some sense not a hole. The injected JS can't, however, steal the cookie and send its contents somewhere else.
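For concreteness, setting such a cookie from Express looks roughly like this (a sketch; the names are illustrative):

    import express from "express";

    const app = express();

    app.post("/login", (_req, res) => {
      // httpOnly keeps document.cookie (and therefore injected JS) from reading the value;
      // secure restricts the cookie to HTTPS.
      res.cookie("session", "opaque-session-id", { httpOnly: true, secure: true });
      res.send("logged in");
    });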


Why would an XSS attacker need to steal a temporary session ID from a cookie (which will probably expire soon), if they can just hijack the session right there on the spot from the user's own browser?


Because an attacker might save the session token to their own server for reuse from their own machine anytime they want (spying purposes).

Sessions usually implement `touch` functionality, which extends the session every time a request is made with it.


Not exactly 'anytime' because the session will expire as soon as the user logs out. Even if the user doesn't log out, the session will typically timeout on its own anyway (at least if the auth is implemented correctly).


"You may be tempted to use JWT instead of a database of session cookies. Please don't. Here's why: http://cryto.net/~joepie91/blog/attachments/jwt-flowchart.pn... "

source: https://twitter.com/j4cob/status/831286673644216320


I may be wrong, but I don't think JavaScripts from a CDN can access the same localStorage as some JavaScript from another origin. (Source: "Cross-origin data storage access" https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...)


You are wrong. The 'origin' of a script is the domain which loads it, not the domain where it is hosted. (Those can be the same, though.)


TIL. Thanks!


It's not the protocol you use to communicate with the server that allows CSRF, it's the practice of authenticating based on a value in a cookie. Most of the time this is because you want to use built-in anchor tags and forms to communicate with the server instead of JavaScript/AJAX. The default behavior is for your server's cookies to be sent along with every request to your server, no matter where the request came from, so hello CSRF.

If your JavaScript adds a custom header to every HTTP request with a secret that you keep in localStorage, and your server always authenticates requests by checking that header instead, you can prevent CSRF attacks without switching to WebSockets.
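A rough client-side sketch of that custom-header approach (TypeScript; the header name, storage key and API host are arbitrary):

    // Wrapper around fetch that attaches the secret held in localStorage.
    // A cross-site <form> or <img> can't add this header, and cross-origin JS
    // would need explicit CORS permission to send it, so the server can treat
    // its presence (with the right value) as proof the request came from our app.
    async function apiFetch(path: string, init: RequestInit = {}): Promise<Response> {
      const headers = new Headers(init.headers);
      headers.set("X-Session-Token", localStorage.getItem("sessionToken") ?? "");
      return fetch(`https://api.example.com${path}`, { ...init, headers });
    }

    // Usage: apiFetch("/posts", { method: "POST", body: JSON.stringify({ text: "hi" }) });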

Always be careful to avoid serving or running untrusted JavaScript, of course.


There's a big difference between mostly dead and all dead..


With all dead, well, with all dead there's usually only one thing you can do. Go through his clothes and look for loose change.


There's no news here, though, right?

The initial RFC draft had been submitted in April 2015 [1]. It was updated multiple times [2] but eventually expired in December 2016 [3], unfortunately.

Luckily, that draft has been revived this month and updated again today [4], so that's probably the only news.

For people who need immediate support in PHP, I published a small library last June [5].

Still, the major problem today is the lack of browser support. Chrome (desktop and Android), Opera (desktop and Android) and the Android WebView have long supported this attribute, but Mozilla, Apple and Microsoft have not shipped it (yet).

[1] https://tools.ietf.org/html/draft-west-first-party-cookies-0...

[2] https://tools.ietf.org/html/draft-west-first-party-cookies-0...

[3] https://tools.ietf.org/html/draft-ietf-httpbis-cookie-same-s...

[4] http://httpwg.org/http-extensions/draft-ietf-httpbis-cookie-...

[5] https://github.com/delight-im/PHP-Cookie


SameSite works if I use the same domain for my API as for my CDN, and don't integrate third party APIs.

It doesn't seem to actually solve the real security problem with the web, which is that people don't know how it works, and that there is no workable security model that matches the physical security model people do understand.


This is a handy defence-in-depth mechanism, somewhat like the other cookie flags (httpOnly and secure).

Once browser support gets a bit better, it would seem like a good idea to start making use of it on that basis, but I don't think I'd ever rely on it as the only form of CSRF protection on a site...


This guy explains how to add that to a Rails application https://gist.github.com/will/05cb64dc343296dec4d58b1abbab7aa...


Help me understand: Strict means I break links, the basic way in which the web works? Links seem to me like a useful feature (ironic!).

I can see that in certain cases it is a good idea to be strict (e.g. banking, e-commerce), but that is a relatively small percentage of the web. And the Lax policy doesn't solve the issue because, well, someone could always screw up and forget to only accept submissions via POST.

Don't get me wrong, it is a useful and powerful tool. What I'm simply saying is that CSRF is paraphrasing Mark Twain right now: "The reports of my death are greatly exaggerated".

P.s: also the other comments are right...


Strict doesn't break links; it breaks sites that set cookies they expect to always be present to Strict. Those cookies won't be available in some situations where they might have been expected, such as when navigating to the site directly.

So if news.ycombinator.com cookies were set to strict, they will not be sent when I open the site, or indeed at all unless I was navigating from within the site (thus stopping cross origin attacks, but also stopping them being sent on first page load). If these cookies were used to identify a logged in user, the user will not be logged in as the cookie would not be sent. The link still works, but the behaviour is perhaps unexpected.

One solution is to have a trusted, strict cookie, and it is required for any actions that originate from the origin site - upvoting, new posts, comments, etc. You then have a second, untrusted, non-strict cookie that is used to identify a user. As long as this cookie is not used for any trusted operations, you have restricted the potential attack surface a lot.

None of this breaks links, it only breaks user experience expectations if utilised naively (like hacker news using strict cookies for their persistent login tokens).
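A sketch of what that two-cookie split could look like on the server (raw Set-Cookie headers from an Express handler; cookie names and values are made up):

    import express from "express";

    const app = express();

    app.post("/login", (_req, res) => {
      res.setHeader("Set-Cookie", [
        // Trusted cookie: never sent cross-site, required for actions such as
        // upvoting, posting and commenting.
        "auth=opaque-auth-token; Path=/; Secure; HttpOnly; SameSite=Strict",
        // Untrusted cookie: also sent on top-level navigation into the site,
        // used only to render "logged in as ..." and other non-sensitive state.
        "who=display-name-only; Path=/; Secure; SameSite=Lax",
      ]);
      res.send("logged in");
    });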


Well, from a user perspective it breaks links, in the sense that they work in an unexpected way. For sure it is a much-needed improvement, but it is not a fix-it-all feature.


The feature itself doesn't break links in that way at all.

If, and only if,

(1) a website uses strict cookies on a cookie

(2) the website assumes that cookie (1) is always available for 'valid requests'

(3) the website assumes that 'valid requests' (2) include top-level navigation events (such as typing the website's url into the browser bar)

then any behaviour that relies on the existence of that cookie will break in some situations.

That is, only a naive use of this feature will cause breakage. It's worth knowing about for developers, but in an ideal world it would never cause issues for users.

In an ideal world there would be a solution without this shortfall, but it seems like an almost necessary feature due to how some cross origin requests are made (by spawning a new window etc).


So if the user refreshes after landing on the strict site, cookies will be sent, right? That kind of breaks the idempotency of GET.


Correct, strict cookies will be sent with a request that originates from the same origin as the cookie.

It doesn't really break the idempotency of GET, as the idempotency assumes the cookies sent with the request are the same. That is, the expectation remains that if you send the same GET request with the same headers (including the same cookies) you should get the same response back.

Note that this is a client-side feature. The browser is choosing to not send cookies based on this policy. If you're crafting a request yourself it is up to you to include the correct cookies, just as it always has been.

All this is doing is providing a way for websites to ask browsers "never send this cookie unless the user is already on my site" (strict) and "never send this cookie unless the user is already on my site, or is performing a top-level request with a safe method" (lax).

The protocol still expects idempotent behaviour, but the user may be surprised that the browser didn't include their auth token with a top-level request, if the site had requested it be a strict cookie. If that happens it's a shortfall of the site, not of this addition to the protocol.


Thank you for the detailed information, Cogito.


I do not see Same-Site cookies as a great solution. If you use a framework like Laravel, CSRF tokens are set up out of the box. I cannot imagine it being any simpler.

The two-cookie solution needed to fix problems with `SameSite=Strict` is more complicated than just using CSRF tokens.

And the `SameSite=Lax` solution creates a new way for developers to screw up. The Lax setting gives you no CSRF protection on GET requests, and it is too easy to accidentally accept GET requests on a critical form that should be POST only.
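To make that failure mode concrete, here is the kind of route that slips straight past Lax (Express/TypeScript sketch; the endpoint is hypothetical):

    import express from "express";

    const app = express();

    // Stand-in for whatever state change the form is meant to trigger.
    function performTransfer(to: string, amount: number): void {
      /* imagine a database write here */
    }

    // SameSite=Lax still sends the cookie on top-level GET navigations, so a plain
    // link on an attacker's page reaches this handler with the victim's cookie attached.
    app.get("/transfer", (req, res) => {
      performTransfer(String(req.query.to), Number(req.query.amount)); // BAD: state change on GET
      res.send("done");
    });

    // The Lax protection only helps once the state change is POST-only.
    app.post("/transfer", (_req, res) => {
      res.send("ok");
    });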


I agree that it's still too easy to screw this up using lax same site cookies.

I do think that using them does provide some additional defence in depth, and specifically provides uses that CSRF tokens can't. These are listed under 'additional uses' in the post, and essentially boil down to the fact that the cookies are not sent at all.

In the wild, this would help today with any timing attacks looking to expose info based on whether and when a cookie is included in a request.


Not supported in most browsers, as said in this thread, but there are many other mitigation strategies that do work. Here's a complete writeup: https://dadario.com.br/what-is-csrf/


This seems so obvious. Is that just due to hindsight? Why wasn't this implemented 3 years ago?


As I mused elsewhere in this thread: why aren't Secure, HttpOnly and SameSite the default for cookies, something you have to take action to disable? Gravity and backwards compatibility, mostly.


It's too bad SameSite wasn't built many, many years ago. It's good that we are moving there now. However, it'll be many years before browser adoption will allow us to make use of this.


Sane web frameworks already solve this in some way, as long as you don't mutate any sensitive state on non-POST requests, so this is really a solution in search of a problem.


In my experience, until developers have to work extremely hard to create a security bug, disabling numerous defaults along the way, it is never going to be a "solved" problem.

Prepared statements fix SQLi. Encoding HTML entities on output fixes XSS. Etc. etc. Even SPA frameworks that default to encoding entities get hit with XSS.

Unless it is completely stamped out by default or simply no longer possible, I encourage solutions like this. And this is mostly just an XSS problem, but XSS that leverages simple CSRF-vulnerable endpoints is still a problem. To be safer you now need three flags on a cookie (HttpOnly, Secure, and SameSite).

This is a pain for developers and almost ensures weird edge case CSRF type vulns will still creep up on occasion. Similar to how memory corruption mitigations often force an exploiter to combine multiple vulns to achieve the effect of one old school vuln, this raises the bar, but does not eliminate the possibility.


No one is going to set a strict SameSite cookie; it just breaks too much. It's the same level of hubris as thinking developers are going to care about CSP. It just breaks too much shit.

So we're left with Lax SameSite cookies, which is pretty much the same as what existing web frameworks do.

The best you can say is that this is less likely to have bugs in the implementation.


I'd say surely banks will use this for their online banking systems.

Then I wonder, because of some of the delightful vulnerabilities I've heard about.

So I'll just say banks certainly should use Strict SameSite cookies, and in fact they seem designed to suit banking workloads (especially as banks don't usually offer persistent logins, because that's too dangerous).


Many frameworks have had and probably still have bugs in their CSRF handling. E.g. a quick scan on the Django security issues [1] shows 5 issues involving CSRF, the latest one in September 2016.

[1] https://docs.djangoproject.com/en/1.10/releases/security/


Sure, the fact that browser vendors have better security sense than framework developers is a plus, but it's not going to be an improvement on the rest.

Most people aren't getting hacked via an obscure Google Analytics + Django CSRF interaction. People aren't really getting hacked via client-side webapp vulns of any sort anyway.


Note: 5 issues including advisories since 2008


This is an old idea that is finally coming to fruition.

See a Microsoft paper from 2011 that prominently features cookie isolation [1]. Or a 2012 proposal in Mozilla's bug tracker [2] which resulted in some proof-of-concept code early on, a writeup in 2013 hosted on Github [3]; blogged about independently and contemporaneously by others [4], which hit HN [5].

I've always taken a dim view of cross-domain requests in general [6] and of the sprawling set of specifications (like most security headers) we developers have to learn and implement properly to stay one step ahead [7]. I'm not particularly enthused that this is opt-in instead of a heavy-handed mandate like some other recently-introduced features, and since the default opt-in is the more-secure but essentially session-destroying version, it's guaranteed to encourage a long and impassioned debate about whether Strict or Lax is the preferred balance.

It's fascinating to go way back to ~2006-2008 and read about when CSRF was first starting to be recognized by mainstream evangelists, commentators, developers and decision-makers as a problem instead of a feature of just how the web works.

This article on DarkReading from 2006 [8] was soon after cited by the OWASP wiki [9], Jeff Atwood first wrote about it in 2008 [10] and admitted that its subtle yet serious nature took him by surprise, and yet it's amusing that going back to 2003 you can find references to CSRF by that name and instructions on how to protect against it [11] -- the author of the 2003 article, Chris Shiflett, is credited in the announcement about the 2008 Felten & Zeller paper [12]: "On the industry side, I'd like to especially thank Chris Shiflett and Jeremiah Grossman for tirelessly working to educate developers about CSRF attacks."

[1] https://www.microsoft.com/en-us/research/publication/atlanti...

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=795346

[3] https://github.com/mozmark/SameDomain-cookies/blob/master/sa...

[4] http://homakov.blogspot.com/2013/02/rethinking-cookies-origi...

[5] https://news.ycombinator.com/item?id=5183460

[6] https://hn.algolia.com/?query=niftich%20crossdomain&type=com...

[7] https://hn.algolia.com/?query=niftich%20another%20damn%20hea...

[8] http://www.darkreading.com/risk/csrf-vulnerability-a-sleepin...

[9] https://www.owasp.org/index.php?title=Cross-Site_Request_For...

[10] https://blog.codinghorror.com/cross-site-request-forgeries-a...

[11] http://shiflett.org/articles/foiling-cross-site-attacks

[12] http://freedom-to-tinker.com/2008/09/29/popular-websites-vul...


Wouldn't an attacker just use an older browser that doesn't support 'SameSite' to launch a CSRF attack?

I feel like I'm missing something, but relying on the browser to protect your site is leaving yourself wide open.


> Wouldn't an attacker just use an older browser that doesn't support 'SameSite' to launch a CSRF attack?

With a CSRF attack, it's the victim's browser performing the request, which the attacker doesn't have control over.

> I feel like I'm missing something but relying on the browser to protect your site is leaving yourself wide open.

Sites can just add in the 'SameSite' attribute in addition to whatever CSRF mitigation measure they use.


Ha, that's a broad statement. I wouldn't trust the browser's handling of the cookies. And I wouldn't trust the browser's "private" feature either... Ever heard of evercookie?


evercookie isn't relevant here.

The feature is for site owners to put additional security/trust restrictions on their cookies.



