Rackspace Cloud switching CDN from Limelight to Akamai (rackspacecloud.com)
35 points by jread on Jan 12, 2011 | 21 comments



Is anyone else having the same issue with these services as I am - namely that Rackspace Cloud (and Amazon Cloudfront) do not support gzip compression? That means that any advantage in latency that you gain from serving a file locally is offset by the fact that it must be served uncompressed. It seems like it should be an absolute requirement for any CDN provider (and the smaller ones like SimpleCDN and MaxCDN do support gzip). I've repeatedly asked Rackspace if they plan to support gzip and they haven't given any indication that it's coming; this e-mail confirms it's not even on their roadmap.


I'm a dev for cloud files at Rackspace.

This is something that we are getting with Akamai. Content will be able to be served compressed or uncompressed based on the Accept-Encoding header. So you will be able to store your uncompressed content in Cloud Files and serve it compressed to users whose clients send the Accept-Encoding: gzip header.
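To illustrate what that should look like from the client side, here is a hypothetical transcript (the hostname is a placeholder for your Cloud Files CDN URL, not a real endpoint):

    $ curl -I -H "Accept-Encoding: gzip" http://cdn.example.com/js/app.js
    HTTP/1.1 200 OK
    Content-Type: application/javascript
    Content-Encoding: gzip
    Vary: Accept-Encoding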


Thanks for your response; this is fantastic news!


I'll check with the team at Rackspace working on Cloud Files + CDN with Akamai to see if gzip compression will be supported. (You can also request it here: http://feedback.rackspacecloud.com, where it can be voted on by others.) I do see that Akamai itself supports "Content-Encoding: gzip" when used with the "Vary: Accept-Encoding" header. I'll post what I learn here, or you can email me directly at robot AT rackspace DOT com to follow up.

More details on the relationship between Rackspace and Akamai will be posted at this link: http://www.rackspace.com/akamai . The partnership is just beginning; your feedback and requests are and will be appreciated!


Using AWS CloudFront's custom origins, the Accept-Encoding header is passed through to the origin and each encoding is cached separately under the same filename:

    $ curl -I http://statics.sodahead.com/js/generated/sodahead/utils/main...
    HTTP/1.0 200 OK
    Date: Wed, 12 Jan 2011 04:34:51 GMT
    Server: Apache/2.2.11 (Unix)
    Last-Modified: Wed, 12 Jan 2011 02:11:26 GMT
    ETag: "14d09-4999cb7e48f80"
    Accept-Ranges: bytes
    Content-Length: 85257
    Cache-Control: max-age=5184000
    Expires: Sun, 13 Mar 2011 04:34:51 GMT
    Content-Type: application/javascript
    X-Cache-Lookup: HIT from images.sodahead.com:80
    Vary: Accept-Encoding
    Age: 53417
    X-Cache: Hit from cloudfront
    X-Amz-Cf-Id: 493c86c13ceff5001d1a55c596153dca84009e4f34c67a565fd329760f8418f6b3fc35d1ac2980cb
    Via: 1.1 squid.wap-lax-106.sodahead.com:80 (squid/2.7.STABLE7), 1.0 b67f54b549c6579a21be3a5a67642d7a.cloudfront.net:11180 (CloudFront), 1.0 cc184e2737613cc16ea7b900e5384df1.cloudfront.net:11180 (CloudFront)
    Connection: close

    $ curl -I --compressed http://statics.sodahead.com/js/generated/sodahead/utils/main...
    HTTP/1.0 200 OK
    Date: Wed, 12 Jan 2011 04:34:37 GMT
    Server: Apache/2.2.11 (Unix)
    Last-Modified: Wed, 12 Jan 2011 02:11:26 GMT
    ETag: "14d09-4999cb7e48f80"-gzip
    Accept-Ranges: bytes
    Cache-Control: max-age=5184000
    Expires: Sun, 13 Mar 2011 04:34:37 GMT
    Content-Encoding: gzip
    Content-Length: 26799
    Content-Type: application/javascript
    X-Cache-Lookup: HIT from images.sodahead.com:80
    X-Cache-Lookup: MISS from images.sodahead.com:80
    Vary: Accept-Encoding
    Age: 53439
    X-Cache: Hit from cloudfront
    X-Amz-Cf-Id: e1c3dc2910370a51bdd8ffb9b590cbdd4c7e4f155d381b3783c09649b7cd2b8517eb2746e2d783ed
    Via: 1.1 squid.wap-lax-106.sodahead.com:80 (squid/2.7.STABLE7), 1.0 squid.wap-lax-104.sodahead.com:80 (squid/2.7.STABLE7), 1.0 b67f54b549c6579a21be3a5a67642d7a.cloudfront.net:11180 (CloudFront), 1.0 107edf28374a08a9e88792cfd1fdd16b.cloudfront.net:11180 (CloudFront)
    Connection: close


There is a hack to get around this. You can manually gzip your files before uploading them (or have a process that does this automatically) and keep two folders in your bucket (e.g. bucket/gzip/ and bucket/nongzip/). At the top of your web pages you include one JavaScript file that is stored gzipped, and in that file you set a variable (e.g. var gzipEnabled = true). If the browser supports gzip, the variable gets set on your page; if not, the file is gibberish and the variable stays unset. You can then check the value of this variable when including other assets on the page and request them from the appropriate folder. Of course you have to weigh whether this approach has other performance drawbacks and whether the additional dev time is worth it.
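A minimal sketch of that trick, assuming hypothetical folder names and a marker file called gzip-check.js that contains only the line var gzipEnabled = true; and is uploaded pre-gzipped with Content-Encoding: gzip:

    <!-- gzip-check.js lives in the gzip/ folder and is served gzipped. -->
    <script src="http://cdn.example.com/gzip/gzip-check.js"></script>
    <script>
      // If the browser decoded the gzipped marker file, gzipEnabled was set;
      // otherwise the file was gibberish and the variable is still undefined.
      var base = (window.gzipEnabled === true)
        ? 'http://cdn.example.com/gzip/'
        : 'http://cdn.example.com/nongzip/';
      document.write('<script src="' + base + 'main.js"><\/script>');
    </script>

One obvious cost is that the marker script blocks loading of the assets chosen after it, which is part of the performance trade-off mentioned above.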


The CDN should pass through whatever you give it. If your servers return gzip content for a request, the CDN will cache it.

It sounds like you want a configurable distributed web server instead of a CDN proxy cache.
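With a pass-through cache, the compression decision belongs to the origin. A minimal Apache sketch of that idea (assuming mod_deflate and mod_headers are enabled; this is illustrative, not specific to CloudFront or Akamai):

    # Compress text assets at the origin; a pass-through CDN just caches what comes back.
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
    # Tell caches to keep separate entries for gzipped and plain responses.
    Header append Vary Accept-Encoding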


That's exactly how MaxCDN and SimpleCDN work, but Cloudfront and Rackspace Cloud don't pass through requests from your own server; rather, they require you to manually upload the files you want to serve via CDN in advance.


But CloudFront now supports custom origins, which I believe allow gzip by forwarding the Accept-Encoding header and caching different versions of the file depending on the value of that header.


This is great to know, but it seems like a lot of unnecessary work on our part: for each file we have to manually create and upload a compressed version, ensure that the compressed and uncompressed versions always stay in sync, and properly set up custom origins for the files.

Instead, Amazon's front end should just check the incoming Accept-Encoding header and automatically compress as needed.
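If anyone does go the pre-compressed route, the sync step can at least be scripted. A rough sketch, where the bucket name, paths, and the s3cmd upload calls are all assumptions (swap in whatever upload tool you actually use):

    #!/bin/sh
    # Upload a plain and a gzipped copy of each asset so the two stay in sync.
    for f in static/js/*.js static/css/*.css; do
      key="${f#static/}"
      gzip -9 -c "$f" > "$f.gz"
      # Plain copy for clients that don't send Accept-Encoding: gzip.
      s3cmd put "$f" "s3://my-bucket/nongzip/$key"
      # Pre-compressed copy, flagged so browsers decode it transparently.
      s3cmd put --add-header="Content-Encoding: gzip" "$f.gz" "s3://my-bucket/gzip/$key"
    done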



He wants the transfer to be compressed, which is based on the capabilities of the browser, not the content. Admittedly, the names of the headers that control all this kind of make it confusing.


This has been a big issue in the Rackspace Cloud user community. There has been no support for CNAME, SSL, or invalidation, so lots of customers (including me) have looked to other providers. I moved all of my projects to Amazon Cloudfront for this reason. Hopefully with the move to Akamai, Rackspace can play some serious catch-up feature-wise.


Interestingly, I was just brainstorming yesterday about how I could serve my Jekyll blog entirely from CloudFront, thanks to the default root object and CNAME support, among other features. http://paulstamatiou.com/amazon-cloudfront-cdn-origin-pull

Really impressed with AWS these days.


CNAME, SSL, Edge Purge (invalidation), and more are coming with the addition of Akamai to the Rackspace Cloud Files product. We're excited, too. A big reason for this new partnership is the feedback from the Rackspace Cloud user community received here: http://feedback.rackspacecloud.com.

(See this link for today's announcement: http://bit.ly/RackCloudAkamai and for ongoing updates on the partnership between Rackspace and Akamai, see http://www.rackspace.com/akamai ).

Robert J Taylor
Sr Systems Engineer, Rackspace Hosting
robot AT rackspace DOT com


I agree; this was a showstopper for a previous project. We briefly looked into a stopgap solution of an SSL proxy, but that really negated the CDN benefits. I'm glad we chose to go with Amazon, because they have continued their lead in the cloud services space.

I am very glad to see Rackspace (and others) continue to expand their offering into this market.


Are you guys only delivering VOD content? Any live streaming?


Not on any public projects, but I have been experimenting with CloudFront RTMP streaming. I'm interested to see if there are any big projects using this.


It was an e-commerce system for photographs; we needed to store and display the assets from an SSL page without that pesky warning.


If you are interested in a CDN performance comparison as seen by end users, take a look at our charts: http://cedexis.com/data/charts.html?country=223&provider...

Gabriel (yegg) wrote about our services on his blog -- http://www.gabrielweinberg.com/blog/2010/12/testing-cdn-perf...


We've got 500K objects on Amazon and any one of them needed to be accessible via SSL. That and the price made Amazon an easy choice from a myriad of competitors.





