So this is something I wrote last week about how I've come to believe that it's really not appropriate to rely on vendor code from public CDNs for anything more complex than a JSFiddle.
Basically, if you need to include something like jQuery, you have to have code that makes use of it. Since, by definition, you can't serve that code from a public CDN, you are going to need to serve those static assets somehow.
I now believe that you should serve the vendor code from the same 'place' as you serve your application's static assets, because by relying on external resources, you are adding more moving parts that all have to be in perfect working order for your app to actually be able to run.
This doesn't really have anything to do with how reliable the CDN itself is, but rather with how reliable the client's connection is.
I did read the jsDelivr post, and it actually looks like a really well-thought-out system. I just don't think I will use something like this for anything where I have the choice not to.
IMO, the possible benefits of using a public CDN don't outweigh the fragility that gets added. It just feels like trying to optimize best-case performance when the worst case is far more important.
I'm not against CDNs as a concept, though; I just think you should serve all the code that is needed for normal operation from the same one.
You could just have loaded the local file the first time and had it work every time, without all those ifs and maybes involved.
It can take several seconds for the request to fail, during which your application won't have loaded or started working.
The fallback adds complexity to your code that ultimately didn't need to be there.
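For reference, the kind of fallback being argued against here usually looks something like this (just a sketch; the CDN URL and the local path are placeholders, not anything from the article):

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
    <script>
      // If the CDN copy failed to load, window.jQuery is undefined,
      // so fall back to a locally hosted copy instead.
      window.jQuery || document.write(
        '<script src="/js/jquery.min.js"><\/script>'
      );
    </script>

The local copy is only requested after the CDN request has already failed or timed out, which is exactly the delay described above.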
I just think that it's better to reduce the ways that your application can break, before trying to make it a little bit faster for the cases in which all the conditions are met.
On principle, even in the theoretical best case scenario, I think it's not the right approach to be taking.
I also don't think that the best-case scenario can be predicted reliably enough to be something you should try to optimize for. There are many CDNs and many different versions of the many different libraries. Especially on mobile devices, cache size is limited.
You're overthinking this. There are benefits to using a CDN beyond caching; you exaggerate the risk of the CDN going down and the effort it takes to create a fallback for that. For most applications, a widely used (cached) free CDN is a no-brainer and is nice to your end users.
I see this daily, but then I don't have a sub-20ms ping time and a cable modem where I live. The CDN doesn't have to go down. The user's connection doesn't even have to go down. Things just have to get momentarily flaky _somewhere_.
I'm working on a project that uses Google Fonts, and multiple times per day the page loads with the fonts timing out. I get to sites where just the HTML loads and none of the CSS (or worse: just some of the CSS) all the time. Especially on 3G, but regularly on ADSL too. Or what if you're sharing a saturated line?
(And waiting for a timeout to kick in before you trigger a callback is just not an option. By then the user has probably closed the page anyway. Rather just make it work in the first place.)
You should optimize the common path, not the edge cases. Most of the time the user benefits from using a CDN (cached content = faster load). On the other hand, the CDN being unreachable is not a very common scenario, so it is acceptable for that case to be a little slower.
The common case is that the user doesn't have the file yet and then has to do another DNS lookup (for the CDN) and establish another HTTP connection (to the CDN).
There are so many public CDNs now, and so many versions of libraries scattered about, that the chances that the user already has the resource cached are rather slim.
Your build tool might as well have combined it all and sent it with the same "cache this forever" header. And in the common case that would have come from the same host as some other files, so one DNS lookup and one HTTP connection.
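The "cache this forever" part is just a far-future Cache-Control header on a fingerprinted bundle. A minimal sketch, assuming an Express static server (Express and the file names are my own example, not something from this thread):

    // Serve the build output (e.g. a combined app-3f2a9c0.js produced by the
    // build tool) with a one-year max-age; the hash in the filename changes
    // whenever the contents change, so stale caches are never a problem.
    var express = require('express');
    var app = express();

    app.use('/static', express.static('build', { maxAge: '365d' }));

    app.listen(3000);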
I think public CDNs optimise an edge case for little benefit. But maybe it is clearer to me because I always have high latency where I live and my connection speed can be described as "medium" at best. I see sites load only some of their resources, because their files are split across CDNs, on a daily basis.
I can understand that if you have a sub-20ms ping time to your local CDN and a cable modem this might be difficult to understand.
Making everything work or everything fail together is a lesson I've learned many times.
I think your statement is a bit extreme. While relying exclusively on a CDN to serve your static assets is risky, doing it with require.js and a local fallback works very well. You still benefit from CDN caching while preventing your assets from becoming unavailable (even in China).
Ultimately, the timeout issue will be problematic for countries that block access to CDN servers (China, for example).
For the rest of the users, there are three cases:
- the user has accessed the application in the past (or any other webpage that uses the CDN): a cached version of the library is served by her browser.
- the user has never accessed the page and the CDN is unavailable:
  * either the Internet connection has failed, and our application won't be available anyway,
  * or the CDN is indeed down, and the user has to wait for the timeout before her browser fetches the local fallback.
The third case sounds extremely rare (look at the statistics of the major CDNs out there; they often have better uptime and response times than your own servers). And the advantages provided by the first case do more than compensate, in my opinion.
It does not add complexity if your project already makes use of require.js: it is literally specifying an array of paths for a library instead of a simple string.
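Roughly like this, in other words (a sketch; the CDN URL and local path are placeholders rather than the poster's actual config — require.js appends ".js" itself, so the extension is omitted):

    require.config({
      paths: {
        // Try the CDN first; if that request fails, require.js retries
        // the next entry in the array (the locally hosted copy).
        jquery: [
          '//ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min',
          'lib/jquery.min'
        ]
      }
    });

    require(['jquery'], function ($) {
      // $ is whichever copy ended up loading.
    });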
If my server is not available, my web application will ultimately fail anyway.
If you are worried enough about performance to start using external resources in this way, you are probably going to benefit much more from using r.js and a build step to build a single JavaScript file (or a few partitioned ones).
We do use r.js and create modules (a main one that gets downloaded systematically, and smaller ones for less-used features). However, as we update often, we don't want our users to re-download the large third-party libraries we use with every release, so having them as separate downloads is actually much better.
For the large libraries we use that are hosted on a CDN (with a local fallback), we use the "empty:" value for the optimiser, so they are excluded from our modules.
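In an r.js build file, that looks roughly like this (a sketch with made-up module names and paths, not their actual config):

    ({
      baseUrl: 'js',
      mainConfigFile: 'js/main.js',
      dir: 'build',
      modules: [
        { name: 'main' },                       // downloaded by everyone
        { name: 'reports', exclude: ['main'] }  // a smaller, less-used feature
      ],
      paths: {
        // "empty:" tells the optimiser not to inline this dependency;
        // at runtime it is still loaded from the CDN (or the local fallback).
        jquery: 'empty:'
      }
    })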
http://daemon.co.za/2014/03/from-trenches-js-cdn-point-of-fa...