Is anyone else having the same issue with these services as I am - namely that Rackspace Cloud (and Amazon Cloudfront) do not support gzip compression? That means that any advantage in latency that you gain from serving a file locally is offset by the fact that it must be served uncompressed. It seems like it should be an absolute requirement for any CDN provider (and the smaller ones like SimpleCDN and MaxCDN do support gzip). I've repeatedly asked Rackspace if they plan to support gzip and they haven't given any indication that it's coming; this e-mail confirms it's not even on their roadmap.
This is something we are getting with Akamai: content will be served compressed or uncompressed based on the client's Accept-Encoding header. So you will be able to store your uncompressed content in Cloud Files and serve it compressed to users whose clients send the Accept-Encoding: gzip header.
I'll check with the team at Rackspace working on Cloud Files + CDN with Akamai to see if gzip compression will be supported. (You can also request it here: http://feedback.rackspacecloud.com where it can be voted on by others.) I do see that Akamai itself supports "Content-Encoding: gzip" when used with the "Vary: Accept-Encoding" header. I'll post what I learn here, or you can email me directly at robot AT rackspace DOT com to follow up.
More details on the relationship between Rackspace and Akamai will be posted at this link: http://www.rackspace.com/akamai . The partnership is just beginning; your feedback and requests are and will be appreciated!
There is a hack to get around this. You can gzip your files manually before uploading them (or have a process that does this automatically) and keep two folders in your bucket (e.g. bucket/gzip/ and bucket/nongzip/). At the top of your web pages you include one JavaScript file that is stored gzipped and simply sets a variable (e.g. var gzipEnabled = true). If the browser supports gzip, the variable gets set; if not, the file is gibberish and the variable stays unset. You can then check the value of this variable when including other assets on the page and request them from the appropriate folder. Of course, you have to weigh whether this approach has other performance drawbacks and whether the additional dev time is worth it.
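A minimal sketch of that trick (the variable name, folder names, and CDN hostname here are illustrative placeholders, not anything the providers define):

```javascript
// detector.js — upload this file pre-gzipped with "Content-Encoding: gzip".
// A browser that supports gzip decompresses and runs it, setting the flag;
// a browser that doesn't sees gibberish and the flag stays undefined.
var gzipEnabled = true;

// In the page, pick the folder based on whether the flag was set.
// (The bucket and hostname are placeholders for your own CDN container.)
function assetUrl(file) {
  var folder = (typeof gzipEnabled !== 'undefined' && gzipEnabled)
    ? 'gzip'
    : 'nongzip';
  return 'http://cdn.example.com/bucket/' + folder + '/' + file;
}
```

So a call like `assetUrl('app.js')` resolves to the gzipped copy only when the detector file actually executed.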
That's exactly how MaxCDN and SimpleCDN work, but Cloudfront and Rackspace Cloud don't pass through requests from your own server; rather, they require you to manually upload the files you want to serve via CDN in advance.
But Cloudfront now supports custom origins which I believe allows gzip by forwarding the Accept-Encoding header and caching different versions of the file depending on the value of that header.
This is great to know, but it seems like a lot of unnecessary work on our part. For each file we have to manually create and upload a compressed version, keep the compressed and uncompressed versions in sync, and properly set up custom origins for the files.
Instead, Amazon's front end should just check the incoming accept-encoding header and automatically compress as needed.
He wants the transfer to be compressed, which is based on the capabilities of the browser, not the content. Admittedly, the names of the headers that control all this kind of make it confusing.
This has been a big issue in the Rackspace Cloud user community. There has been no support for CNAME, SSL, or invalidation, so lots of customers (including me) have looked to other providers. I moved all of my projects to Amazon Cloudfront for this reason. Hopefully with the move to Akamai, Rackspace can play some serious catch-up feature-wise.
Interestingly, I was just brainstorming yesterday on how it could be possible to serve my jekyll blog entirely on CloudFront, thanks to default root object and CNAME support, among other features. http://paulstamatiou.com/amazon-cloudfront-cdn-origin-pull
CNAME, SSL, Edge Purge (invalidation), and more are coming with the addition of Akamai to the Rackspace Cloud Files product. We're excited, too. A big reason for this new partnership is the feedback from the Rackspace Cloud user community received here: http://feedback.rackspacecloud.com.
I agree; this was a showstopper for a previous project. We had briefly looked into a stopgap solution of an SSL proxy, but that really negated the CDN benefits. I'm glad we chose to go with Amazon, because they have continued their lead in the cloud services space.
I am very glad to see Rackspace (and others) continue to expand their offering into this market.
Not on any public projects, but I have been experimenting with Cloudfront RTMP streaming. I'm interested to see if there are any big projects using this.
We've got 500K objects on Amazon, and any one of them might need to be accessible via SSL. That, plus the price, made Amazon an easy choice over a myriad of competitors.