Okay, so we can move beyond CAP. We don't talk about implementation in either the science paper or in the intro for it (which is what this thread is about). I mostly write about implementation, Mark mostly writes about the science. So yes, redundancy will only make latency spikes less likely. Notice, however, that the way we establish strong consistency is also based on communication. There's a part in the essay I keep linking where I talk about what happens to nodes with flaky connections; it's called "Stability".
There are three separate mitigations that go into how total-order strong consistency can keep marching on even if a specific child group is intermittently isolated.
The first one is redundancy by deterministic replication: there will always be many replicas that aren't just shards but full copies of the _consistency ledger_. It's not a database, not a cache, just the thing that establishes consistency between nodes. These instances all "race each other" to be the first to deliver the outputs to other nodes.
The second one is the latency-mitigation we talked about earlier, I don't think we need to waste more breath on that.
The third one is that the consistency mechanism requires an explicit association instruction to interleave a child's versions into its parent's (so that these versions can be exposed to nodes observing it from afar). If the child goes AWOL, it can't associate its versions to its parent, so it won't hold everyone else up either. In that case the total order simply isn't affected by the child group's local order, which is still allowed to make progress, as long as it's not trying to update any record that is distant to it.
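To make that third point a bit more concrete, here's a toy sketch in Python. It's not our implementation and the names are invented; it just shows the shape of the idea: a child group keeps appending to its own local order without blocking on anyone, and the parent's total order only picks those versions up once the child explicitly associates them.

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    group: str      # which child group produced this version
    seq: int        # position in that group's local order
    payload: str

@dataclass
class ChildGroup:
    name: str
    local_order: list = field(default_factory=list)

    def write(self, payload: str) -> Version:
        # Local progress never waits on anyone else.
        v = Version(self.name, len(self.local_order), payload)
        self.local_order.append(v)
        return v

    def associate(self, parent: "ParentLedger") -> None:
        # The explicit association step: only now do this group's versions
        # become visible in the parent's total order.
        parent.interleave(self.name, self.local_order)

@dataclass
class ParentLedger:
    total_order: list = field(default_factory=list)
    seen: dict = field(default_factory=dict)   # group name -> versions already interleaved

    def interleave(self, group: str, local_order: list) -> None:
        start = self.seen.get(group, 0)
        self.total_order.extend(local_order[start:])
        self.seen[group] = len(local_order)

# An isolated child keeps making local progress...
parent = ParentLedger()
eu = ChildGroup("eu-west")
eu.write("x=1")
eu.write("x=2")
assert parent.total_order == []            # ...without touching the total order...
eu.associate(parent)                       # ...until it reconnects and associates.
assert [v.payload for v in parent.total_order] == ["x=1", "x=2"]
```

And because the replication in the first point is deterministic, any full copy of the ledger can hand out the same answer, which is why the replicas can race each other instead of coordinating.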
1. I don't quite understand what a "consistency ledger" is and how it's not a database, but it sounds like a log. Many distributed systems solutions have logs, e.g. Raft. The question you haven't covered is how you keep the ledger itself consistent. I believe you're using Kafka underneath (correct me if I'm wrong), but it doesn't matter much: any queuing system is as subject to CAP as anything else, so it's not clear to me how this resolves anything CAP related.
2. Redundancy in network connections. Yes, this can help. Again, it doesn't resolve anything CAP related, it just reduces the likelihood of a certain class of distributed systems failures. Note, there are LOTS of ways to get unbounded latency that this does not resolve, anything from misconfigured routers to dying disk drives. Again, not resolving anything CAP states, just reducing some probabilities.
3. If I understand this correctly, you are saying some data is "homed" in certain regions, and if that region becomes partitioned from the rest of the world, they can still modify the "homed" data there. They own it. This doesn't address anything CAP related.
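If that's the model, then what I'm picturing is essentially a home-region ownership check, roughly like this (the names are mine, just to confirm I've read you right):

```python
# Rough model of "homed" data: each key has an owning region, and during a
# partition a region only accepts writes for keys it owns.
# (Hypothetical names; this is just my reading of the architecture.)

OWNER = {"cart:123": "eu-west", "cart:456": "us-east"}

def accept_write(region: str, key: str, partitioned: bool) -> bool:
    if not partitioned:
        return True                      # normal operation: anyone may write
    return OWNER.get(key) == region      # partitioned: only the home region may write

assert accept_write("eu-west", "cart:123", partitioned=True)      # owned locally -> accepted
assert not accept_write("eu-west", "cart:456", partitioned=True)  # owned elsewhere -> rejected
```

Which is a perfectly reasonable design, but it's choosing availability for the keys a region owns and giving up writes for everything else during the partition, which is exactly the kind of trade-off CAP describes.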
Assuming I understand the architecture correctly, then yes, some architectures will benefit from this. Some might not. But all of these decisions align with the existing understanding and the decisions architects already make in the face of CAP.
It feels to me that you believe CAP defines one very particular database architecture (something like PostgreSQL), that your architecture addresses limitations in that, and that therefore your architecture solves the limitations of CAP. But that's just not true. Take Riak, Cassandra, BigTable, Spanner, CockroachDB: all of these are architectures defined in the face of CAP, and they all have different trade-offs. They don't look a lot like PostgreSQL. But they cannot get around the simple laws of physics for how information is communicated.
The reason you don't understand why my claims are different from just another solution with clocks and mutexes is that you haven't actually engaged with the essence of it. I'll give you a hint: the consistency mechanism is decoupled from collision resolution, and that makes the consistency ledger both deterministic and inherently non-blocking. I don't want to go into more specific questions around the implementation, but I can tell you with certainty that we provide much better guarantees around global-state progress and sustained write latency than anything else, and our client-centric consistency guarantees local-only reads.
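To at least gesture at what I mean by the decoupling, here's a toy Python sketch, emphatically not our implementation and with made-up names: the ledger only records writes, deterministically and without ever blocking on conflict checks, while collision resolution is a separate pure function that any replica can apply to the same entries and get the same answer.

```python
from typing import NamedTuple

class Entry(NamedTuple):
    version: tuple   # (seq, group) -- a position in the order, not a wall clock
    key: str
    value: str

def append(ledger: list, entry: Entry) -> None:
    # Recording a write never waits on conflict checks or locks.
    ledger.append(entry)

def resolve(ledger: list, key: str):
    # Collision resolution is a pure function of the ledger: deterministic,
    # so every replica holding the same entries derives the same answer.
    entries = [e for e in ledger if e.key == key]
    if not entries:
        return None
    return max(entries, key=lambda e: e.version).value

ledger: list = []
append(ledger, Entry((1, "us-east"), "profile:7", "alice"))
append(ledger, Entry((2, "eu-west"), "profile:7", "alicia"))  # concurrent write, still no blocking
assert resolve(ledger, "profile:7") == "alicia"
```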
But this is about the solution, not the science. The science is basically unlimited; its only real restriction is our budget.
If you want to understand how the client-centric consistency mechanism takes care of these things, I write about it on medium and Twitter all the time.
But again, I don't feel like you owe me your time or attention.
I have read everything you've given me. If you really believe in what you're selling then I am absolutely the person to convince, because if you can convince me you can convince anyone. All you have done is post links to the same two articles, which multiple people have told you don't answer the questions they've asked, and your only response is to accuse me of not engaging. C'mon, this is ridiculous. You have several people giving you their undivided attention and you aren't threading the needle.
There's a mental leap you need to make before it clicks into place, and I can't do it for you. I have several people who understand what I say and why I say it, but I get that this isn't an easy step to make. If you read the essay, you understand how we turn time into data. You also understand how we construct partial and global hierarchies of orders, how these are both strongly ordered and still move independently, and how observers can construct their own consistent view from the data available to them. There's a formal proof in the science paper that we can do consistency all the way to snapshot.
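And to gesture at what "observers construct their own consistent view" means in practice, here's another toy sketch (invented names, not the real mechanism): an observer applies whatever contiguous prefix of the total order it has received and answers reads purely from that local snapshot, so its reads stay consistent even when it's behind.

```python
# Toy sketch: an observer streams entries that already carry a position in the
# total order, applies them in that order, and answers reads from its local
# snapshot only. It is always consistent with *some* prefix of the total order.

import heapq

class Observer:
    def __init__(self):
        self.snapshot = {}        # key -> value as of the applied prefix
        self.applied_up_to = 0    # highest contiguous total-order position applied
        self.pending = []         # entries received out of order, waiting for gaps

    def receive(self, position: int, key: str, value: str) -> None:
        heapq.heappush(self.pending, (position, key, value))
        # Apply only a contiguous prefix of the total order -- never skip a gap.
        while self.pending and self.pending[0][0] == self.applied_up_to + 1:
            pos, k, v = heapq.heappop(self.pending)
            self.snapshot[k] = v
            self.applied_up_to = pos

    def read(self, key: str):
        # Local-only read: consistent with the prefix ending at applied_up_to.
        return self.snapshot.get(key)

obs = Observer()
obs.receive(2, "theme", "light")   # arrives early: held back until position 1 shows up
assert obs.read("theme") is None
obs.receive(1, "theme", "dark")    # gap filled: positions 1 and 2 both apply
assert obs.read("theme") == "light"
```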
To summarise: we have a consistent system that works via one-way streaming over redundant channels. Best possible cache invalidation. Reads are both consistent and locally available. The upper bound on write time is predictable and doesn't suffer from blocking. I'm not sure what other improvements anyone could want from such a system; this is the best such systems can ever be. These improvements go well beyond the limits CAP sets out, and they basically make the whole argument moot.
https://medium.com/p/5e397cb12e63#373c