Hacker News

No, the CAP requirements are not at all as absurd as that article claims.

For the specific quote you gave, that is an obvious assumption. A client only has access to some of the nodes in the distributed system. Of course we want any node to give the correct answer - the whole purpose is to reduce the burden on the client. The client is not responsible for searching all of the nodes. And note that the proof doesn't actually require that all nodes return the right answer - the contradiction is reached as long as all the nodes that the client has access to return the wrong answer.
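That structure of the argument can be sketched in a few lines of Python (all names here are illustrative, not from any real system): the contradiction needs only the nodes *reachable by the client* to return the stale value, even if some unreachable node holds the correct one.

```python
# Hypothetical sketch: a client can only query nodes on its side of a
# partition, so it returns whatever those reachable nodes report.
nodes = {
    "n1": {"partitioned": False, "value": "stale"},
    "n2": {"partitioned": False, "value": "stale"},
    "n3": {"partitioned": True,  "value": "fresh"},  # holds the write, unreachable
}

def client_read(nodes):
    # The client is not responsible for searching all of the nodes;
    # it can only return what some reachable node gives it.
    reachable = [n for n in nodes.values() if not n["partitioned"]]
    return reachable[0]["value"] if reachable else None

assert client_read(nodes) == "stale"  # wrong answer, though n3 is correct
```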

Another bad claim in the article is that the proof of CAP requires the partition to be permanent. Perhaps it's written that way for simplicity, but the proof only requires the partition to last longer than the client's bound on response time. If the client is willing to wait an hour for a response, then any partition event lasting two hours leads to the same conclusion. Since clients never have unbounded time to wait, and since partition duration is unbounded even if not permanent, the argument still holds.
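Concretely, a client with a bounded wait cannot distinguish a sufficiently long partition from a permanent one. A minimal sketch, with purely illustrative names and durations:

```python
# Hypothetical sketch of the timeout argument. Times are in seconds.
CLIENT_TIMEOUT = 3600       # client waits at most one hour
PARTITION_DURATION = 7200   # partition actually heals after two hours

def read_with_timeout(partition_duration, timeout):
    """Return the reply only if the partition heals before the client gives up."""
    if partition_duration <= timeout:
        return "value"      # node became reachable in time
    return None             # to this client, the partition is effectively permanent

assert read_with_timeout(PARTITION_DURATION, CLIENT_TIMEOUT) is None
assert read_with_timeout(100, CLIENT_TIMEOUT) == "value"
```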

Also, major network outages that disconnect whole regions of the internet for hours from the rest of the world happen somewhat regularly (more than once a year). Whole AWS regions have become disconnected, ~half of Japan was disconnected for a few hours, Ukraine has been disconnected several times, etc. If you run a large distributed system for a significant amount of time, you will hit such events with probability approaching 1.



I can only repeat what I told you earlier: our distributed consistency model meets the SQL standard's requirements for consistency and tolerates such outages. This is a fact.

CAP is a bad model for more reasons than the ones listed in that article. My favourite is that it requires Linearizability, which no SQL database provides. The disconnect in saying "SQL is not consistent" is just too much for me. CAP is built on a poorly defined idea that comes from a presentation which was wrong in what it said.

That you need to tolerate outages of entire regions is a good argument in itself; there's no need to point at CAP. My answer to it is that there is a way to define consistency that handles partitions more gracefully, and that is the model we use. If you require communication to establish consistency and, at the same time, stream the changes associated with a specific timeslot, then a partition means the global state moves on without the changes from the partitioned regions, and those changes show up once the region rejoins the global network. While separated, a region can still perform (SQL-)consistent reads and writes for the (intra-region) records it can modify.
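The rejoin behaviour described above can be sketched roughly as follows. This is a toy model under my own assumptions, not the commenter's actual system; every class and key name is hypothetical:

```python
# Illustrative sketch: each region keeps serving reads/writes for records
# it owns during a partition, buffers those changes, and streams them into
# the global state when it rejoins.
class Region:
    def __init__(self, name, owned):
        self.name = name
        self.store = {}     # region-local view of the data
        self.pending = []   # changes made while partitioned
        self.owned = owned  # keys this region is allowed to modify

    def write(self, key, value):
        if key not in self.owned:
            raise PermissionError("only intra-region records may be modified")
        self.store[key] = value
        self.pending.append((key, value))

    def rejoin(self, global_store):
        # Stream buffered changes; the global state moved on without them.
        for key, value in self.pending:
            global_store[key] = value
        self.pending.clear()
        self.store.update(global_store)

global_store = {"eu:a": 1}
eu = Region("eu", owned={"eu:a"})
eu.store = dict(global_store)

eu.write("eu:a", 2)          # during the partition: consistent intra-region write
global_store["us:b"] = 9     # meanwhile, the global state moves on without EU

eu.rejoin(global_store)      # EU's changes show up once it rejoins
assert global_store == {"eu:a": 2, "us:b": 9}
```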


Are you saying that it's possible for an SQL server to let you successfully commit a transaction that modifies a record, and then, in a subsequent query, return the old value of that record? I very much doubt this is true of any single-node SQL database.

In contrast, any distributed system exhibits this property in the face of partitions (or it must refuse the query entirely).
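The scenario in question can be reduced to a two-node sketch (hypothetical names throughout): a commit succeeds on the primary, but a partitioned, asynchronously updated replica still answers a later query with the old value, which is exactly the stale read a single-node SQL server cannot produce.

```python
# Illustrative sketch of a stale read under a partition.
primary = {"x": "old"}
replica = dict(primary)  # asynchronously replicated copy

def commit(store, key, value):
    store[key] = value   # the transaction commits successfully on the primary

def replicate():
    replica.update(primary)  # suppressed while the link is partitioned

commit(primary, "x", "new")
# Partition: replicate() never runs, so a subsequent query against the
# replica returns the old value of the record just committed.
assert replica["x"] == "old"
assert primary["x"] == "new"
```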



