If scale is a function of reads/writes, very large. In fact, with relatively minimal (virtual) hardware it's not unreasonable to see a cluster doing around 1M writes/second.
I was talking more about large file storage like HDFS, and the MapReduce model of bringing computation to data. HBase does the latter, and it's strongly consistent like FoundationDB, though FoundationDB provides stronger guarantees. As a K/V store, I understand what you and the OP are saying.
Truthfully, at Wavefront we've fed the JSON `status` output directly into Telegraf. Plus a bunch of Python tooling to massage additional telemetry on a cluster's health (coordinator reachability, for example).
Plus even more tooling (mostly Ansible) for managing large fleets.
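To make the coordinator-reachability check concrete, here's a minimal sketch of the kind of Python tooling involved. It assumes you'd capture `fdbcli --exec "status json"` output (inlined here as a sample string rather than a live call); the field names mirror FoundationDB's status JSON, but treat the exact shape as an assumption, not a spec.

```python
import json

# Sample stand-in for `fdbcli --exec "status json"` output.
# Field layout (client.coordinators.coordinators[].reachable) is an
# assumption based on FoundationDB's status JSON, not a guaranteed schema.
SAMPLE_STATUS = """
{
  "client": {
    "coordinators": {
      "quorum_reachable": true,
      "coordinators": [
        {"address": "10.0.0.1:4500", "reachable": true},
        {"address": "10.0.0.2:4500", "reachable": false}
      ]
    }
  }
}
"""

def unreachable_coordinators(status_json: str) -> list:
    """Return addresses of coordinators the client could not reach."""
    status = json.loads(status_json)
    coords = status.get("client", {}).get("coordinators", {})
    return [
        c["address"]
        for c in coords.get("coordinators", [])
        if not c.get("reachable")
    ]

print(unreachable_coordinators(SAMPLE_STATUS))  # → ['10.0.0.2:4500']
```

From here it's a short step to emitting a gauge per coordinator into Telegraf, or paging when the quorum flag flips.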