
Many times over the past N years, I've needed my own page (ABC.com) to get some data from a different origin (XYZ.com) and display it (all in JavaScript, no server fetching).

This doesn't work because XYZ.com doesn't have ABC.com in its Access-Control-Allow-Origin header. If the header did include ABC.com, then my browser's cookies (namely the auth cookie) for XYZ.com would be sent along with the request to XYZ.com. I totally understand why the browser would want to stop ABC.com from making authenticated requests to XYZ.com if it didn't have access.

But in all of my scenarios, the requests made to XYZ.com are for resources that are available to the public; no authentication or cookies are needed, and anyone can grab them. I know there are workarounds (have the ABC.com server request the data from XYZ.com, or have XYZ.com publish JSONP). But in my cases, sometimes I'm serving my file from the local file system, so there is no server, and routing through one is a PITA. And lastly, I haven't been in control of XYZ.com and can't force it to publish JSONP.

The meat of the question: if ABC.com isn't in the Access-Control-Allow-Origin header for XYZ.com, why wouldn't the browser allow ABC.com's JavaScript to make a request to XYZ.com but NOT send any of the cookies the browser is storing for XYZ.com for that user? If browser makers did that, would it open the user up to some other vulnerability? Because I can't think of anything. What am I missing? Or is it just a manpower thing, and it would take too much time to program?
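For what it's worth, the modern Fetch API comes close to the model described here. The sketch below (URLs are placeholders) builds a cookie-less cross-origin request; note that even with cookies omitted, the browser still withholds the response body unless XYZ.com opts in via Access-Control-Allow-Origin.

```javascript
// Sketch of the request model the question proposes.
// fetch() can already OMIT cookies with credentials: 'omit'; the remaining
// restriction is on *reading* the response, not on sending the request.
function publicRequestInit() {
  return {
    method: 'GET',
    mode: 'cors',        // still subject to the Access-Control-Allow-Origin check
    credentials: 'omit', // never attach XYZ.com's cookies
  };
}

// In a browser this would be used roughly as:
//   fetch('https://XYZ.com/public-data.json', publicRequestInit())
//     .then(r => r.json());
```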

  • What about intranet sites? They might be available without authentication, but not want their data going out on the internet.
    Commented Feb 22, 2017 at 10:43
  • Minor clarification: you can normally send the request without cookies. What you can't do is read the response.
    – paj28
    Commented Apr 24, 2017 at 10:02

2 Answers


History

The original XMLHttpRequest implementations did not have the concept of "allow credentials" - all requests had credentials attached. Of course, this would be extremely dangerous to allow cross-domain, so XMLHttpRequest was originally allowed same-origin only. When CORS was added, they wanted to minimise changes to reduce the risk of unintended consequences, so keeping the same-origin behaviour for non-CORS sites made sense.

Unexpected authentication

When a request does not have "allow credentials" there is a risk that the browser inadvertently includes credentials. One example the browser can't avoid is network restrictions - such as an Internet site accessing an Intranet site. There may be other examples, such as client certificates. I wouldn't be surprised if some browsers still attach a client cert, even on a non-credentialed request.

Safety first

The above reasons are not that strong. CORS could have been designed to allow non-credentialed requests by default. However, that is the more risky design. When CORS was developed, browser designers were well aware of security issues and chose to implement the less-risky design.

  • I hope there is more development in this direction. The current design severely cripples the future of web apps, and Electron keeps thriving.
    – tejasvi
    Commented Jun 27, 2021 at 12:18

I believe this is part of the layered defense in the browser/server ecosystem that helps fight Cross-Site Request Forgery (CSRF).

References: the Wikipedia and OWASP articles on CSRF.

Basically, browsers don't know which web resources might or might not need cookies or auth before they request them. Sending the cookies blindly, on the say-so of ABC.com, means the browser doesn't know whether the user wants to authenticate to XYZ.com or not. So it errs on the side of caution unless told otherwise.

From your first paragraph, you already understand that. So now assume ABC.com hosts a message board (for example) and a malicious user posts some JavaScript. When the victim reads that message, their browser reads and executes "just JavaScript" from ABC.com that neither ABC nor XYZ endorses or knows anything about. Basically, the browser can't tell what is "from" ABC.com's owner and what might have been injected by some bad actor (perhaps via a hosted ad network).

In any case, cross domain requests can be made, but they shouldn't be auth'ed automatically.

For example, imagine you are logged into MyBank.com and visit ABC.com, where BadGuy has stashed a JavaScript request in an ad you get served, which basically says `https://MyBank.com/xfer-money?to=BadGuy&amount=100.00`. If your browser sent along the auth cookies, BadGuy could make your browser do something you don't want. The example is obviously over-simplified, but you get the idea.
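The forged request above can be sketched as follows (all names and URLs are made up for illustration). Note that the attacker never needs to read the response; the whole danger lies in the cookies the browser would attach when sending it.

```javascript
// Hypothetical CSRF payload hidden in an ad served on ABC.com.
// A "simple" GET like this triggers no CORS preflight, so the request
// would go out regardless; only the attached cookies make it dangerous.
const forgedUrl = 'https://MyBank.com/xfer-money?to=BadGuy&amount=100.00';

function launchCsrf(url) {
  // In a browser this could be fired as:  new Image().src = url;
  // (fire-and-forget -- the response is never read).
  return url; // returned here only so the sketch can be exercised outside a browser
}
```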

[Edit] Also, I just found an existing answer on Security Stack Exchange which, I think, reads better than mine; adding it as a reference.

  • He said it doesn't include authentication, so the bank example is not to the point. Why doesn't the browser make a sandboxed request, without cookies, authentication, or anything else? Just a plain request.
    – EralpB
    Commented Apr 24, 2017 at 8:12
  • Because the browser can't tell the difference between a bank and a picture of a cat. It can't know, a priori, whether it needs to send cookies/auth info/etc. to any particular URL to have the request fulfilled. As the answer by @paj28 above points out, a browser might accidentally send info it shouldn't, so the design is one of "fail closed" and safety first.
    – JesseM
    Commented May 2, 2017 at 0:06
  • @EralpB This, 100x. Every time the question is asked, this argument pops up, which basically equates to "that's how it has been done".
    – tejasvi
    Commented Jun 27, 2021 at 12:21
