Google Storage for Developers (googlecode.blogspot.com)
91 points by mcantelon on May 19, 2010 | 25 comments



I just compared the pricing of S3 and Google Storage, and at this stage S3 wins hands down.

http://www.manu-j.com/blog/amazon-s3-vs-google-storage/490/


I keep trying to find a cloud service that will let me back up all my computers (3TB) for a reasonable price (<$500/yr) and let me manage it myself (so no Backblaze, Mozy, etc.).

It continues to amaze me that nothing comes close to simply colocating a NAS.

This has to be a problem that affects absolutely every computing professional (and even a lot of nonprofessionals--gamers, ad agencies, etc.). How can the only viable solution be roll-your-own?

And if you think about it, even colocating 3TB is horribly inefficient. Splitting a Backblaze Pod 20 ways might cut the cost in half.


> It continues to amaze me that nothing comes close to simply colocating a NAS.

If you colocate a NAS, there's a significant chance that you'll lose all your data.

Amazon and Google, quite sensibly, don't want the bad publicity which would come with losing their users' data, so they replicate across multiple datacenters -- even Amazon's reduced reliability storage replicates to two datacenters -- which obviously increases costs.


[edit: figures were wildly off]

By definition, it's already the second copy of my data (since it's a backup).

For the price of S3 (~$5500/yr, not even including bandwidth), I could colocate 7 RAIDed NASes ($700 NAS, 4-year life, $50/month colocation fee).

If a couple of friends and I agreed to exchange NASes to save on colocation fees, I could have 31 NASes for the price of S3.

For maybe 50% of the price of S3, I could colocate a Backblaze Pod and have 67TB of data storage, 22x my need.
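The arithmetic behind those figures can be sketched out explicitly. All the inputs below are the commenter's own 2010 assumptions (~$0.15/GB/mo for S3 excluding bandwidth, a $700 RAIDed NAS amortized over 4 years, $50/mo colocation), not official prices:

```python
# Back-of-the-envelope comparison of S3 vs. colocated NASes,
# using the assumptions stated in the comment above.
S3_PER_GB_MONTH = 0.15        # assumed 2010 S3 rate, bandwidth excluded
DATA_GB = 3 * 1024            # the 3TB backup set

s3_yearly = S3_PER_GB_MONTH * DATA_GB * 12   # ~$5530/yr for S3

nas_yearly = 700.0 / 4        # NAS hardware amortized over 4 years
colo_yearly = 50.0 * 12       # colocation fee per year

# How many NASes the S3 budget buys, with and without colo fees:
print(int(s3_yearly // (nas_yearly + colo_yearly)))  # 7, with colocation
print(int(s3_yearly // nas_yearly))                  # 31, if colo is free
```

Both results match the comment's claimed 7 and 31 NASes.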


Amazon S3 just announced a "Reduced Redundancy" version of S3 priced about 33% cheaper.

http://aws.amazon.com/s3/#pricing


I like CrashPlan. It can back up to multiple destinations, both their own data center and other machines (such as friends etc.) that you have an agreement with (e.g. you can act as reciprocal remote backup destinations). I don't know if that's hands-on enough for you, but it works for me.

Since the guts of it are in Java, it runs on Windows, Mac, Linux and even my Solaris (Nexenta) box.

Not directly applicable to your case, but I'm using the data center plan (CrashPlan Central), which under the "family plan" is 180 USD / 3 years for all machines in your house. It says it's unlimited; we'll see :) I have around 10T of storage across all the machines here at home (mostly 7T in a ZFS raidz pool), but only about 500G of the most important stuff is currently backed up since I signed up, which was a couple of months ago. I upload about 10G a day, sometimes more, sometimes less. All encrypted, data deduplication, etc. - I'm using the Plus client.

I did at one point consider continuing with my home-grown rdiff-backup approach, with some chunking (to reduce object count) and sync to S3 with s3sync, but S3 is quite expensive for larger amounts of data.
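The chunking step described here (splitting a large backup into a bounded number of objects before syncing) can be sketched roughly as follows. This is a toy illustration, not rdiff-backup's or s3sync's actual logic; the chunk size, directory layout, and function name are all assumptions:

```python
# Split a large backup archive into fixed-size chunk files so the
# object count stays low and each chunk can be synced to S3 (or
# Google Storage) independently.
import os

CHUNK_SIZE = 64 * 1024 * 1024  # assumed 64MB per stored object

def split_into_chunks(path, chunk_size=CHUNK_SIZE, out_dir="chunks"):
    """Write path's contents as numbered chunk files; return their names."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    names = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            name = os.path.join(out_dir, "chunk-%06d" % index)
            with open(name, "wb") as dst:
                dst.write(data)
            names.append(name)
            index += 1
    return names
```

The resulting chunk files can then be handed to any sync tool; concatenating them in order reconstructs the original archive.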


It's no cloud, but I've used rsync.net as an offsite backup. Unlimited bandwidth, and you just pay for storage, with discounts for volume. The cheapest rate they show is $0.32/GB/mo.

Although for 3TB that comes out to roughly $983/mo, a far cry from your target of $500/yr.

The problem with colocating a NAS is that you also have to support it.


I'm working on a backup web service that might meet your needs. When you say 'and let me manage it' what exactly do you mean? I'd love to hear from you and anyone else on HN: leonhard attt backupwebservice dotttttt com.


Mozy is a piece of shit. Avoid it like the plague.

My favorite bit of their crap software is it freaking out at odd times on my Mac and sucking up all I/O, rendering the computer unusable. And no, not at scheduled backup times. Perhaps these morons are reading and rehashing the entire backup set instead of watching for changes? Dunno. This behavior persisted even after I told it to stop backing anything up at all.

What I do know is the client sucks ass and the web has many stories of people unable to recover their data when they needed it.


Earl, I've heard this repeatedly -- do you happen to have a pointer or an explanation as to why Mozy is a piece of shit? Just interested, as to a non-specialist it looked like an interesting offering.


I'll echo the sentiment, except from the Windows side. I had Mozy Pro deployed on 5 users' laptops that were frequently connected to 100Mbit internet connections. I was never able to get all 5 machines to keep backing up on their own. I constantly had to force backups, add more resources (even though it should have auto-purchased more storage), and deal with users complaining their computers were much slower.

The worst part was the Outlook archives: large multi-gigabyte files it would try to compress and encrypt without splitting.

We use Jungle Disk now; nobody notices it and it works.


It's a labs release -- they don't want every S3 customer on day one. This pricing is a good way to get some beta users and do a soft launch. I'm sure they wouldn't be in this if they weren't going to match S3 pricing at some point.


I'm really sad there aren't some price wars going on here between S3 and Google. Maybe S3 should fire the first shot.


Would you lower your prices if your new competitor already costs more?


Decreasing prices is not a good competitive strategy. They can compete on the quality of their service, the quality of their customer support, the simplicity of their interface, or additional services bundled with the storage (such as the Prediction API) -- but never on price.


Looks like their 'waitlist' is a Google Docs spreadsheet + form, which, incidentally, isn't loading. I'm not sure how well that bodes from a capacity-planning standpoint.


Confusing... why would I use this instead of S3, which costs less, has proven itself over the last four years (OK, not 100%, but damn close), and has countless client programs and libraries?


You might want to use this if you were building an application on top of other Google infrastructure. But beyond that, I can't see anything here aside from some 20% Googlers saying "Amazon S3 is cool, let's build our own version".


Over time it will integrate with other Google stuff. We already see it with their other announcements building on top of this.


I think this makes more sense if you look at it in combination with one of the other things they just released: App Engine for Business. Sure, on its own this is priced higher than S3, but in the context of a company running Google Apps + enterprise App Engine + Google Storage, it starts to make a lot more sense just from a convenience standpoint.


That's a dangerous road to go down, though. Nobody should try to be the inferior option that piggy-backs on other, more competitive products.


...but does it support a root index.html file? That's about the only thing that keeps me from hosting static sites entirely on S3.


You could already do that on App Engine. Just put everything in a static directory and map it as the default handler in app.yaml.

For more see http://code.google.com/appengine/docs/python/gettingstarted/...
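For reference, a minimal 2010-era app.yaml along those lines might look like this (the application id and directory names are illustrative, not from the linked tutorial):

```yaml
application: mystaticsite   # illustrative app id
version: 1
runtime: python
api_version: 1

handlers:
# Serve the root URL from a static index.html
- url: /
  static_files: static/index.html
  upload: static/index.html

# Map every other path straight onto the static directory
- url: /(.*)
  static_files: static/\1
  upload: static/(.*)
```

With this, the whole site is served from `static/` and no Python handler code is needed.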


You can use App Engine's "supercharged" free quotas plus S3 or Google Storage. Just place a minimal static HTML file.

http://izuzak.wordpress.com/2009/08/27/how-to-supercharge-yo...

The problem I find with S3 and CloudFront is that there's no way to restrict access based on referrer. It can be addressed by moving the files around and serving dynamic HTML, but that would kill caching and might break on some browsers.


I was trying to do this last week and found this: http://drydrop.binaryage.com/ It updates a Google App Engine app from a GitHub repository. Push your changes and they're online. It's very easy to set up.



