> If the client is partitioned from the majority of replicas, it's game over. At that point, the system has to give up either consistency or availability from the perspective of that client.
It sounds like the article's author simply shifted the problem from "eventually consistent" to "eventually 100% available to 100% of the clients" -- e.g. once the fiber optic cable is repaired.
That can be a perfectly fine tradeoff, but it would be clearer if the author spelled that out explicitly.
I believe his more notable point is that the current crop of immature databases pushes the logical reasoning about "eventually consistent" too far up into the application layer, which in turn causes unnecessary pain for developers. I think focusing on this one concept would make for a better blog post, because it addresses many real-world requirements of "eventual consistency".
Even under that analysis, a consistent system is completely unavailable (0% of clients can get service) when a majority of replicas suddenly go down.
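A toy sketch of the majority-quorum arithmetic behind that claim (the replica counts are made-up parameters, not from the article):

```python
# Toy illustration of why a majority-quorum system goes fully
# unavailable when most replicas are unreachable.

def majority(n_replicas: int) -> int:
    """Smallest number of replicas that constitutes a majority."""
    return n_replicas // 2 + 1

def can_serve(n_replicas: int, n_reachable: int) -> bool:
    """A strongly consistent read or write needs a majority quorum."""
    return n_reachable >= majority(n_replicas)

# 5 replicas, 3 suddenly go down: no client can be served,
# no matter which side of the partition it is on.
print(can_serve(5, 2))  # False -> 0% availability
# Once enough replicas come back (the "fiber is repaired" case),
# service resumes for every client that can reach the majority.
print(can_serve(5, 3))  # True
```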
The point about eventual consistency pushing inconsistency too high in the stack makes sense to me, though. The Spanner paper makes a similar argument.
I thought that the original justification for eventual consistency was that, for some kinds of data, availability is much more important than consistency. It could be that, in reality, that kind of data doesn't exist, because any data worth keeping is worth the trouble of keeping consistent. Or it could be that the use case exists, but it's uncommon and seldom worth maintaining a separate database system for if you are not Facebook. The article didn't make those arguments, though. It argues that weak guarantees by the database make client code harder to write, which ought to be fairly obvious if consistency is what you need.
> It could be that, in reality, that kind of data doesn't exist, because any data worth keeping is worth the trouble of keeping consistent
Not necessarily. What is important is not "dropping" data on the floor in an unpredictable fashion. The sane eventually consistent models present the conflicting versions to the user. (Optionally, all sides can pick the same value as the winner, but the other versions are never thrown away.)
That is what Riak does in its correct (sadly non-default) configuration, and it is what CouchDB does.
This bubbling up of eventual consistency to the very top layer is the correct behavior. The database might find that both you and your friend withdrew $100 from the same account, perhaps leaving it with a negative balance. But the important thing is that it keeps both transactions. Something above can then decide how to resolve the conflict: pick a winner (using a timestamp, say), pick neither and cancel both, or freeze the account on suspicion of fraud.
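A minimal sketch of what that "something above" might look like, using made-up sibling records and a hypothetical resolve() policy; this is not Riak's or CouchDB's actual API, just the shape of the application-level decision:

```python
from dataclasses import dataclass

# Hypothetical sibling records an eventually consistent store might
# hand back after a partition heals: both withdrawals were accepted,
# and neither was silently dropped.
@dataclass
class Withdrawal:
    account: str
    amount: int
    timestamp: float  # wall-clock time of the withdrawal

def resolve(siblings: list[Withdrawal], balance: int) -> list[Withdrawal]:
    """Application-level policy for conflicting withdrawals, instead of
    letting the database pick (or drop) one on its own."""
    total = sum(w.amount for w in siblings)
    if total <= balance:
        return siblings                        # both fit: keep both
    # Not enough money for both: pick a winner by timestamp...
    winner = min(siblings, key=lambda w: w.timestamp)
    if winner.amount <= balance:
        return [winner]
    # ...or cancel everything and escalate (e.g. flag for fraud review).
    return []

conflicting = [
    Withdrawal("acct-42", 100, timestamp=1000.0),  # you
    Withdrawal("acct-42", 100, timestamp=1000.5),  # your friend
]
print(resolve(conflicting, balance=150))  # only the earlier withdrawal survives
```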