As I understand it, OmniLedger retrofits highly available multi-master writes onto your existing SQL tables on existing databases, transparently to application devs, by intercepting JDBC API calls. This lets an enterprise unify hundreds of siloed SQL instances into a single globally consistent view without changing any application. To avoid funneling all transactions through a single master authority (which would obviously be too slow), the sysadmin supplies a configuration that maps each topic (i.e. table) to a single authority (i.e. database). So "wide" transactions (e.g. "change email for user", where the user table is repeated across hundreds of databases in your enterprise) may need to coordinate with many databases before committing, and the transaction is delayed while it negotiates with the user-table authority (coordination OmniLedger handles transparently via the JDBC interception). "Narrow" transactions, say against a single application-specific topic/table, are routed to that one authority, which is likely co-located with the application, giving local performance (no routing through a central master DB). The difference between this and, say, Spanner is that you don't have to migrate or rewrite your entire enterprise of applications to adopt it: OmniLedger intercepts the JDBC calls your applications are already making, with no application source code changes needed!
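For intuition on why the interception itself needs no application changes: JDBC is pure interface, so a wrapper can sit between the app and the real driver. A minimal sketch using `java.lang.reflect.Proxy` (the class and method names here are my own illustration, not OmniLedger's actual code):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: shows how JDBC calls can be observed without touching
// application code, by proxying the java.sql.Connection interface.
public class JdbcInterceptSketch {
    public static final List<String> capturedSql = new ArrayList<>();

    // Wrap a real Connection; every prepareStatement(sql, ...) call is
    // recorded before being delegated to the underlying driver.
    public static Connection intercept(Connection real) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    if (method.getName().equals("prepareStatement")
                            && args != null && args[0] instanceof String sql) {
                        capturedSql.add(sql); // a real interceptor would route/coordinate here
                    }
                    return method.invoke(real, args);
                });
    }
}
```

The application keeps calling `connection.prepareStatement(...)` as before; only the driver registration changes.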
Source: I've met with the founder. Andras, how did I do?
Almost there, Dustin, thank you. This article is about the mental model behind scaling consistency across arbitrary geographical distances, and about how this model lets us communicate time-information the same way we now communicate data. It's an intro to the scientific paper, not the implementation.
Regarding the implementation (which is omniledger.io): we basically make SQL scale via Kafka by piggybacking on the semantics of both technologies.
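One Kafka semantic worth spelling out, since it's what a version ledger can piggyback on: records with the same key always land on the same partition, where the broker totally orders them, giving per-key ordering for free. A simplified sketch of that key-to-partition mapping (Kafka's default partitioner actually murmur2-hashes the serialized key bytes; this uses `String.hashCode` for brevity):

```java
// Simplified illustration of Kafka's key -> partition mapping. Records
// sharing a key always map to the same partition, where they are totally
// ordered -- the ordering guarantee a replicated ledger can build on.
public class KeyPartitionSketch {
    public static int partitionFor(String key, int numPartitions) {
        // mask the sign bit rather than Math.abs (Math.abs(Integer.MIN_VALUE) stays negative)
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```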
There's no central authority for any of the tables. The version-ledger is totally schema-agnostic; only the clients understand it. In fact, it federates schemas the same way it federates records. Tables are not topics either; we scale by importing namespaces via a command-line interface, which specifies a schema. Any database can register to a namespace, after which it becomes as much of a "master" of that namespace as any other. There's no hierarchy between them; the ledger does everything for them. Since the ledger itself is deterministically replicated, a separate instance of it can be co-located with each instance that runs the JDBC connections to the database, raising its availability to that of the whole system. Each component in the setup scales with the number of partitions created for it.
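To unpack "deterministically replicated": every replica that consumes the same ordered log computes the same state, which is why a ledger instance can be co-located with each JDBC endpoint without any replica being special. A hypothetical sketch under that assumption (the `Entry`/`replay` names are mine, not OmniLedger's API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of deterministic replay: the ledger is an ordered log
// of opaque (key, version, payload) entries -- schema-agnostic, since only
// clients interpret the payload. Replaying the same log on any replica
// yields the identical latest-version map, so co-located copies stay in sync.
public class VersionLedgerSketch {
    public record Entry(String key, long version, String payload) {}

    public static Map<String, Entry> replay(List<Entry> log) {
        Map<String, Entry> state = new HashMap<>();
        for (Entry e : log) {
            Entry current = state.get(e.key());
            if (current == null || e.version() > current.version()) {
                state.put(e.key(), e); // latest version wins, deterministically
            }
        }
        return state;
    }
}
```

Because `replay` depends only on the log's contents and order, two replicas never need to talk to each other to agree, only to the log.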