I still think he’s missing the point here. Android’s ability to run native apps IS its weakness. ChromeOS, believe it or not, employs the Apple method of success. Apple imposes all kinds of restrictions on its developers, but the end result is applications that are remarkably uniform. By making ChromeOS web-only, Google is imposing “use the cloud” on its developers and in doing so creating an environment that fulfills the promise of portability (Android and iOS come close but aren’t anywhere near seamless).
As for the two points that will lead to Chrome’s demise…
ChromeOS will lose because it will lack native apps – That seems short-sighted to me. If Android is brought to the point where it supports web apps built for ChromeOS, there will be very little reason to develop native Android apps. Assuming ChromeOS is not dead on arrival, it makes all the sense in the world to target two platforms instead of one (especially when the development tools are the same ones everyone uses on the web). Especially given those apps also have a good chance of running on Palm’s OS, Blackberry, and Windows Phone 7 with very little modification (and even on the iPhone, in a crippled way).
Touch vs Mouse - I wrote a post after working on a touch application so I’ll just direct you there: http://tomstechblog.com/post/A-Quick-Indictment-Of-The-Finge.... The gist was that touch really doesn’t replace the mouse in practical situations (and I wrote that before owning an iPad so you can add “your screen gets dirty much quicker” to the list of reasons).
"Native" really isn't that significant. When Android first came out, its "native" applications were all apps written more or less from scratch using the Android APIs, and there were essentially zero of them. "Native" usually isn't raw ARM binaries either; it's Dalvik VM bytecode, optimized not for speed but for executable size. The only time native is useful is for truly performance-critical applications, and the vast majority of apps aren't performance critical. JavaScript runtimes are getting faster with each release, driven by the huge amount of competition between browser vendors. Chrome also has Native Client, which would be the equivalent of Android's NDK.
If by "native" you mean user-interface consistency, acknowledging that web apps will be significant pretty much destroys that anyway. Lots of iOS/Android apps don't even use the native UI widgets, so it's not really a worthy goal.
Android started with a huge disadvantage: not having any apps. Yet only a few years later, the platform has hundreds of thousands. Chrome and the web as a platform are much more approachable, and for every existing Android or iOS app, rough equivalents can be built in short order.
From the start, Chrome has millions. All of them can be accessed through a consistent interface, where there's a single notion of an application. If Android takes over, you'll be constantly switching contexts between the browser and the "native" applications.
This rather assumes that the browser could not be improved upon.
I kind of feel like Tom... Chrome OS is actually what Android will turn into, not the other way around. The downloadable app paradigm is working fine for Apple, but it's been pretty miserable for Google. Their marketplace is pretty atrocious, and the method of selling them has been a constant headache.
I think Chrome OS is a stopgap to an OS where the browser doesn't appear as a bunch of tabs, but as something that probably looks more like iOS anyway. I think it shows Google thinking a little too small and releasing too early. Had it baked a couple of years and been branded as Android 4 or something ("Android... everywhere"), I think that would have been better. I wonder if they're not launching early in order to push things like Cloud Print, so by the time they are ready to do something groundbreaking, the technology is already in place.
"If Android is brought to the point where it supports web apps built for ChromeOS there will be very little reason to develop native Android apps."
From what I've seen, the gap between a native Android app and web apps is getting bigger, not smaller. Among other things, the Android API includes:
1. A SIP-based VoIP protocol stack
2. Near Field Communications
3. Gyroscope and other sensors
4. Multiple cameras support
5. Speech recognition engines
Even if these things were available tomorrow to web apps, developers might still choose the native option because a) there would be millions of 'old' devices that only supported the native API; and b) there would be thousands of developers who were already familiar with the native API.
There's accelerometer support in Firefox, Chrome, Opera, and Safari (at least mobile Opera and Safari) now. It's only a matter of time until we have useable APIs for NFC, camera, etc. I'm wondering when there's going to be a browser USB API. I can think of some things I'd like to do with that.
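For what it's worth, the accelerometer access mentioned here is already simple to use from script. A rough sketch via the `devicemotion` event (the threshold value and the "shake detector" framing are my own illustration, not anything from a particular app):

```javascript
// Magnitude of an acceleration reading in m/s^2. Pure function, so it
// behaves the same whether the data comes from a real sensor or a fixture.
function accelerationMagnitude(accel) {
  return Math.sqrt(accel.x * accel.x + accel.y * accel.y + accel.z * accel.z);
}

// A simple shake detector: invokes the callback whenever a motion event's
// reading exceeds the given (arbitrary) threshold.
function makeShakeDetector(threshold, onShake) {
  return function (event) {
    var accel = event.accelerationIncludingGravity;
    if (accel && accelerationMagnitude(accel) > threshold) {
      onShake(accel);
    }
  };
}

// In a browser you'd wire it up like this; guarded so the snippet is
// harmless in other environments.
if (typeof window !== 'undefined' && window.addEventListener) {
  window.addEventListener('devicemotion', makeShakeDetector(25, function () {
    console.log('shake!');
  }));
}
```

The point being that once the event is exposed, the app-side code is a few lines; the hard part is the browsers agreeing to expose each sensor at all.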
But they don't have to use the cloud. Meaning native apps can (and in my experience do) leave data on the device, which in turn means the user experience isn't seamless from device to device. You're so focused on the technical aspect that you're missing the philosophical side of this, which is the important part. By forcing app developers to use the cloud you create a guaranteed seamlessness, and that's the whole point.
Web apps don't "have to use the cloud" either (there are APIs for local storage).
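To make that concrete, a minimal sketch of a web app keeping data on the device via the browser's `localStorage` API (the in-memory stand-in is only so the snippet also runs outside a browser; the key names are made up):

```javascript
// A minimal in-memory stand-in with the same interface as the browser's
// localStorage, so the snippet also runs outside a browser.
function memoryStorage() {
  var data = {};
  return {
    setItem: function (k, v) { data[k] = String(v); },
    getItem: function (k) {
      return Object.prototype.hasOwnProperty.call(data, k) ? data[k] : null;
    },
    removeItem: function (k) { delete data[k]; }
  };
}

// In a browser, the real localStorage persists across page reloads with
// no server round-trip at all: nothing "cloud" about it.
var store = (typeof localStorage !== 'undefined') ? localStorage : memoryStorage();

store.setItem('draft', 'my offline note');
```

So nothing in the platform forces a web app's data to the server; it's a choice the developer makes.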
I don't dispute that the "web apps only" approach is more "pure". My prediction is simply that the Android approach will win out because users value functionality over purity.
Part of the reason web apps aren't saving much data locally is because they don't have anything to save (where's that API for accessing the camera?). BTW, the fix for native apps saving locally is to make all of the local storage automatically sync into the cloud, much like dropbox does (and like iPhone does with all of the app databases when it syncs). I think that's how the Sidekick worked (which was also created by Andy Rubin).
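That Dropbox-style "write locally, sync in the background" idea is easy to sketch. Everything below is hypothetical (the `push` callback stands in for whatever network call the platform would make); it's just to show the shape of the approach:

```javascript
// A tiny write-through store: every write lands locally first, and the
// key is queued for a later background push to the cloud side.
function makeSyncedStore(push) {
  var local = {};   // stands in for localStorage / an app database
  var dirty = {};   // keys changed since the last successful sync
  return {
    set: function (key, value) {
      local[key] = value;   // the app never waits on the network
      dirty[key] = true;
    },
    get: function (key) { return local[key]; },
    // Called on a timer or a connectivity event: push only what changed.
    flush: function () {
      for (var key in dirty) {
        if (Object.prototype.hasOwnProperty.call(dirty, key)) {
          push(key, local[key]);  // e.g. an XMLHttpRequest in practice
          delete dirty[key];
        }
      }
    }
  };
}
```

The app stays responsive offline, and the user gets the seamlessness as soon as a connection appears, which is roughly what the iPhone's sync and the old Sidekick did at the whole-database level.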
Web apps don't have to use the cloud – I suppose it's possible for a web developer to decide to locally cache all of an application's data, but it seems like you're grasping at straws here. It would require a developer to specifically disregard the overall philosophy of web-based apps and a cloud OS. Don't get me wrong: in my perfect world Google would force developers not to do this, but as it is I don't think it's something you're likely to see much of (my understanding is that Google is focusing on SD usage for media just to prevent this).
On users valuing functionality – Again the question comes back to one of "what functionality will they really be losing?" Particularly with WebGL and Flash both available to developers, I really don't see a functionality gap.
Forcing Android to the cloud – If you just desperately want to save Android you could absolutely do this, but what's the point? If you're going to force everyone to the cloud and you can build your development environment around web-based tools, then why wouldn't you? More to the point, why would Google spend a bunch of time and money turning Android into a cloud-based OS when they already have a cloud-based OS?
I'm not trying to "save Android". I'm simply predicting that Google will continue developing and improving it because they are already shipping millions of Android devices and it has immense strategic value to them.
The one thing I haven't seen much discussion on with regards to cloud based operating systems is the handling of local peripheral devices. Printers, scanners, video cameras, etc. These devices are used by a wide range of users, from someone who emails pictures to their family, to an amateur who wants to shoot their next film project.
Right now, these devices are not very cloud friendly. In general you plug your printer, scanner, etc. into a USB or Firewire port. From there the operating system needs to work with the printer through some kind of driver. With the sandboxing that I've seen with Chrome OS, it appears to be a rather daunting task to support a wide array of such devices.
Another issue is the bandwidth cost. Let's say I somehow make a video camera "web enabled". Instead of interfacing with the PC through USB, it connects through wifi or wired ethernet. From there you could upload your video to a cloud app that does video editing, or upload it straight to YouTube. However, what happens when it's an hour-long video of a family wedding? Depending on the camera's storage format, this could turn out to be a file many gigabytes in size, and it will take a ridiculously long time to upload over a standard broadband connection.
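To put rough numbers on that (the bitrates here are assumptions, picked as typical figures for a consumer camcorder and a residential connection of the time):

```javascript
// Back-of-the-envelope: one hour of consumer HD video versus a home upstream.
var bitrateMbit = 17;        // assumed AVCHD-class recording bitrate, Mbit/s
var seconds = 60 * 60;       // one hour of footage
var fileGB = bitrateMbit * seconds / 8 / 1000;   // size in gigabytes

var upstreamMbit = 1;        // assumed typical residential upload, Mbit/s
// Upload time scales with the ratio of recording bitrate to upstream:
// here the upload runs at 1/17th of real time.
var uploadHours = bitrateMbit / upstreamMbit;    // hours to upload 1 hour
```

Under those assumptions the wedding video is about 7.65 GB and takes around 17 hours to push up, which is exactly the problem described above.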
Until this issue has a good solution, I see it as extremely difficult to achieve this "live in the cloud" philosophy. That's not to say it's impossible, just that it needs to be well thought out.
I think the answer to both your questions is that the only way Google will succeed with ChromeOS is if they consider it a long-term play. The goal should be to have the most refined cloud based system 5-10 years from now. When looking at your questions from that perspective I’d give you two answers…
On Peripherals – Google is already addressing that problem with Cloud Print right now. The same concepts could easily be translated to other peripherals. Anyone who's tried to use it will tell you the concept is very rough right now, but it has promise, and several companies have agreed (in theory) to support it in their hardware in future versions.
On Large Files – I don’t know of anyone who doesn’t think our current bandwidth is just a fraction of what will be available to us 10 years from now. For video editors who need to use raw files I don’t think Chrome is appropriate but Google’s said as much by targeting the OS at the netbook/low-end notebook market to start. Again the point is to refine the OS through iteration (as Google does with all their products) so when editing video in the cloud is a possibility they'll already have the most refined OS for it.
re: Large Files... there's no reason the files have to be transmitted to the server anyway, the app could access local storage just fine. This probably indicates that the "save" paradigm is broken, and instead we'll need to start thinking about how to present "local save" to "cloud save" to users.
ChromeOS needs to expose the hardware capabilities of its host machine. That is really the only reason for native apps these days -- access to camera, accelerometers, various coprocessors, etc. Perhaps that will become viable if NaCl takes off, I don't know.
But speaking as a developer, the ChromeOS development model is much saner than those of its mobile counterparts. iPhone/Android development is antiquated and miserable in comparison. We are stifling ourselves by perpetuating a development model that worked in the 1990's.
That is really the only reason for native apps these days -- access to camera, accelerometers, various coprocessors, etc
Or the CPU running at its full potential rather than interpreting Javascript, or storage that doesn't have 100+ millisecond network latency, or UI frameworks that were designed for applications rather than text documents...
The web is great for lots of things. The insistence that it should be the One True Way for applications strikes me as a case of man-with-a-hammer syndrome.
But poll any dev and ask which is more pleasant to write UI-heavy apps in: web tools, or Java? The forced decoupling of the presentation (HTML) from the styling (CSS) and application logic (JavaScript) is really a nice way of doing things.
I don't think this is man-with-a-hammer syndrome, merely that the approach to building web apps translates well for 90% of native apps. For everything else, there's an FFI or NDK.
The major failing of ChromeOS, in my opinion, is that it needs an internet connection to work. (WebOS, for example, does not). The closest analog to "local" apps are browser extensions, and that is still a reasonable approach; the capabilities of such just need to be expanded a little. HTML5-style offline functionality is another option.
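The HTML5 offline route mentioned here is mostly declarative: the era's mechanism is the application cache manifest, which tells the browser which resources to keep locally so the app shell loads without a connection. A minimal example (the file names are placeholders):

```
CACHE MANIFEST
# v1 -- resources the browser keeps for offline use
index.html
app.js
style.css

NETWORK:
# everything else still requires a connection
/api/
```

The page opts in with `<html manifest="app.appcache">`; after the first visit, the cached resources load even with no network, which addresses exactly the "needs an internet connection to work" complaint for the app itself (the data it talks to is a separate problem).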
FWIW, most of my work is done in C these days. Different tools for different tasks.
And my point still stands. The majority of apps are either front ends to web services, or can be constructed using a similar approach that we use for the web today. They are UI-heavy. I did say you should be able to drop to native code if you really need the power.
> For example, interacting with my FriendFeed page involves the coordination of thousands of individual processors and disks owned by a dozen different entities, including you, Facebook, Amazon, Google, your ISP, and many intermediate ISPs.
Put that way, the entire idea of "the cloud" sounds scary. I have a sudden urge to replace all the webapps I use with native apps.
Good post, but there are two important things you overlooked here.
First, dumb terminals are about control. Not speed. Not behind-the-scenes smart shenanigans. Control at the user level. By that definition, web browsers and iPhones are dumb. Jailbroken iPhones, (unlocked) Android phones, and PCs are smart. That's what cloud OS detractors talk about when they say "dumb terminal".
Second, viewing the internet as a secure vault that you can summon at will can be dangerous. Like, get-your-identity-stolen dangerous. (You know this, but many people don't.) There are also systemic risks associated with the centralization encouraged by the (re-)rise of dumb terminals. And you wouldn't want anyone looking at your private data (say, your e-mail), or running automatic semantic analysis on it so they can directly extract the good stuff… and use it to their advantage.
Now, I agree the "cloud + dumb terminal" combination is extremely convenient and has an unmatched potential for ease of use. If only it didn't require the user to relinquish his right to privacy (and even other freedoms in some cases), that would be great. Personally, I'm looking forward to seeing Eben Moglen's FreedomBox.
So true: This global super-computer enables us to do things that would have been impossible not long ago, such as instantly search billions of documents, .... leak embarrassing diplomatic cables, etc.
I agree completely. Having all your stuff accessible wherever you are is a worthy goal. Ripping out the ability to use local storage and run native apps when appropriate isn't.
You can still use local storage in a cached form in ChromeOS. Given that, let me ask you this: if technologies like WebGL give web apps the ability to work and act like native ones, would you support ChromeOS then?
Once "web" apps are using local storage and running offline and using the full power of the hardware, it's not clear what distinguishes them from "native" apps. Other than having to be written in the often-less-than-ideal HTML+JS+CSS, and even that isn't necessarily the case with NaCl.
I can think of at least one thing. When you are developing a web app, you don't have to worry about people using out-of-date versions of your app. On the flip side, when you use a web app, upgrades tend to be mandatory.
It's a completely different computing paradigm. The current model is "work offline unless you have some reason to be online"; the cloud model is "work online unless you can't".
You're not going to see me using an OS where I have to use an SSH client written in JavaScript and upload my keys to a random site on the internet.
By not supporting local applications, lots of the "cloudy" stuff I do, for instance hooking up to remote sessions via SSH and RDP, is not viable or is inherently much more complicated and less smooth. The "everything is in the cloud" idea goes both ways: prevent me from accessing it and your OS is actually less "cloud-capable". Like ChromeOS is.
Edit:
To come off a little bit less rantish, what I think is the key point here is that the proponents of ChromeOS, who want it to succeed, have their vision somewhat clouded (pun intended).
I don't think anyone here questions the potential of the web as a platform. For certain kinds of applications. We have all seen the amazing stuff happening on the web, while the desktop has been at a relative standstill. We know what this technology can do, and it is finally getting some momentum. Change is happening at a fantastic pace and we're moving forward towards a new web with even more capabilities.
But in all this excitement it's easy to forget that just because a platform can do something doesn't mean it's the best fit for the task. Certain things just work better as local applications. Remember the catch-phrase "use the right tool for the job"?
Why do you want to shoehorn everything into the web, even where it doesn't provide you any benefit? Why should we scrap all our working applications just because someone out there has written the same application, somewhat limited, in HTML and JS? And why are you so eager to lose control of your data?
Lots of common computer use-cases depend on reasonable I/O capabilities. Until everyone has at least a 100 Mbps symmetric internet connection and the internet as a whole has a backbone to support this, these kinds of applications will be severely impaired.
The web is nice. Being forced to go web only when it's not practical is less so.
I think the question is one of what should be mainstream. I think most people will value portability over all else once it can come close to matching local apps. Because while it might not take you or me much effort to move our data from computer to computer your average user dreads that. Your average user lives in fear of their computer crashing because they'll lose access to their data and have to pay someone $300+ to get it fixed. Or that they'll get a virus and lose access to everything. Or whatever else.
For them the right tool is one that gives them the ability to walk over to another system, put in their info, and be back up and running.
For people like you I think there will always be Linux versions out there and who knows what else. The cloud isn't right for everyone. Those of us who support ChromeOS just realize most people aren't like you.
I have to agree with the author. There's nothing super unique and powerful about Chrome OS that can't be replicated and done better on top of Android. Further, iOS and Android have incredible momentum right now, and it's hard seeing Chrome OS picking up much traction, especially once Android 3.0 tablets start coming out.