I've been starting to write a paper on bcache/bcachefs's b-tree design, which is also a sort of b-tree/trie hybrid (it uses binary trees in Eytzinger layout). Pretty cool to see the same idea coming up in multiple places.
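For anyone unfamiliar with the Eytzinger layout: the sorted keys are stored in BFS order (root at index 1, children of i at 2i and 2i+1), so the top levels of the implicit tree share cache lines. A minimal sketch of build and search over that layout; hypothetical code, not bcachefs's actual implementation:

```c
#include <assert.h>

/* Build the Eytzinger (BFS-order) layout: an in-order walk over heap
 * indices 1..n assigns the sorted values, so the implicit tree is a
 * valid search tree.  Illustrative sketch only. */
static void eytzinger_build(const int *sorted, int *eytz, int n,
                            int *pos, int i)
{
    if (i > n)
        return;
    eytzinger_build(sorted, eytz, n, pos, 2 * i);
    eytz[i] = sorted[(*pos)++];
    eytzinger_build(sorted, eytz, n, pos, 2 * i + 1);
}

/* Search by pure index arithmetic: descend left or right without
 * pointers, then strip the trailing right-turns (the low set bits of
 * i's complement) to land on the lower bound.  Returns the heap index
 * of `key`, or 0 if absent.  __builtin_ffs is a GCC/Clang builtin. */
static int eytzinger_find(const int *eytz, int n, int key)
{
    int i = 1;
    while (i <= n)
        i = 2 * i + (eytz[i] < key);
    i >>= __builtin_ffs(~i);
    return (i && eytz[i] == key) ? i : 0;
}
```

The payoff is that each level of the implicit tree is contiguous, so the first few cache lines fetched already cover many comparisons, which is where this layout beats binary search on a plain sorted array.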
Running index-microbenchmarks now to see which is faster :)
Yeah, Masstree has settled into the standard set of systems to compare against in most research since it was published - and not as a strawman ("this system was crap, so let's pretend we've done good work by beating it!"), but as a real challenge to do better than.
So what are the advantages over adaptive radix trees or good old judy-dict/array?
Apart from Judy being too damn complicated, and too old to be optimized for vector-compare instructions. (I think the fancy hand-coded x86 vector comparisons are the main reason ART is competitive with Judy at all, considering that ART misses at least the key compression and the clever allocator, and that it is not as optimized for squeezing every byte out of every fetched cache line.)
Cool, looking forward. Please put Judy in as well :)
Seriously, the 17-year-old Judy is still pretty good, despite its lack of the cool vector load / compare / etc. instructions for quickly traversing tree nodes that are too small for a full radix search (ART does a "simultaneous search", i.e. it compares against all stored keys in a single instruction, while Judy afaik runs a linear search).
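The "simultaneous search" is easy to sketch: in ART's 16-entry node, the search byte is broadcast and compared against all 16 stored key bytes with one SSE compare, and the resulting bitmask picks the child slot. A hedged sketch (the SIMD path assumes x86 SSE2; the fallback is the Judy-style linear scan; not the ART authors' code):

```c
#include <assert.h>
#include <stdint.h>
#ifdef __SSE2__
#include <emmintrin.h>
#endif

/* Find the child slot for byte `b` among the first `nkeys` key bytes
 * of a 16-entry radix node (ART's Node16).  With SSE2 this is one
 * compare plus a movemask; otherwise it degrades to a linear scan.
 * Returns the slot index, or -1 if the byte is not present. */
static int node16_find(const uint8_t keys[16], int nkeys, uint8_t b)
{
#ifdef __SSE2__
    __m128i cmp = _mm_cmpeq_epi8(_mm_set1_epi8((char)b),
                                 _mm_loadu_si128((const __m128i *)keys));
    unsigned mask = (unsigned)_mm_movemask_epi8(cmp) & ((1u << nkeys) - 1);
    return mask ? __builtin_ctz(mask) : -1;
#else
    for (int i = 0; i < nkeys; i++)
        if (keys[i] == b)
            return i;
    return -1;
#endif
}
```

Both paths do the same work conceptually; the SIMD one just answers all 16 "is it this byte?" questions in one instruction instead of up to 16.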
It would be pretty cool if someone vectorized that part of Judy, and replaced the null-terminated strings with a binary-safe key representation. Unfortunately, all implementations are old and very hard to read.
My super high-level partial understanding of Judy is the following:
Start with ART, which is pretty simple. Then do the obvious improvements: First, we want fast access to "element number N in sorted order", i.e. you also store the number of descendants. Next, you do key compression: storing the portion of the key that can be reconstructed from the tree traversal is silly. Next: an 8-bit tag for signaling one out of 4 node types? Are you, like, filthy rich? More node types it is. Next: spending 64 bits on a node pointer? Are you crazy? Put the type tag into the parent's pointer (earlier resolution of the branch!), and use a custom allocator to get smaller pointers (you allocate big segments and your node pointers are offsets).
You end up with an unintelligible monstrosity. Give it a cute name, forget about SIMD because it is 2001, and you end up with something like Judy.
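The pointer-tagging step above deserves a concrete sketch: heap nodes are at least 8-byte aligned on common 64-bit platforms, so the low 3 bits of a child pointer are always zero and can carry one of 8 node types, letting the parent branch on type before the child's cache line even arrives. All names here are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Node types to stash in the tag; 3 low bits give room for 8 of them.
 * (Hypothetical names, not Judy's actual node taxonomy.) */
enum node_type { T_LEAF = 1, T_BITMAP = 2, T_UNCOMPRESSED = 3 };

typedef uintptr_t tagged_ptr;

/* malloc returns storage aligned for any object, at least 8 bytes on
 * typical 64-bit targets, so bits 0-2 of the address are free. */
static tagged_ptr tag_ptr(void *p, enum node_type t)
{
    assert(((uintptr_t)p & 7) == 0);
    return (uintptr_t)p | (uintptr_t)t;
}

static enum node_type ptr_type(tagged_ptr t)
{
    return (enum node_type)(t & 7);
}

static void *ptr_node(tagged_ptr t)
{
    return (void *)(t & ~(uintptr_t)7);
}
```

Judy goes further still and shrinks the pointer itself into an offset within a big allocator segment; the sketch only shows the tag half of the trick.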
I don't know of one, but it's such a natural idea that I'd guess it's been studied. There are standard implementations of LRU caches that use e.g. a hash map and a linked list to get both fast lookup and ordering, but for real performance I think you'd want to try and minimise the number of data structures to avoid having competing cache behaviours.
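The two-structure LRU mentioned above is quick to sketch: a hash table gives O(1) key lookup, and an intrusive doubly linked list keeps recency order, so a get() moves the node to the front and a full put() evicts from the tail. A fixed-capacity toy with int keys and chained hashing; illustrative only, not tuned for the competing-cache-behaviour problem being discussed:

```c
#include <assert.h>
#include <string.h>

#define CAP 4                 /* cache capacity */
#define NBUCKET 8             /* hash buckets */

struct node { int key, val, prev, next, hnext; };

struct lru {
    struct node n[CAP + 1];   /* n[0] is the recency-list sentinel; index 0 = "null" */
    int bucket[NBUCKET];      /* bucket -> head node index, 0 = empty */
    int count;
};

static void lru_init(struct lru *c) { memset(c, 0, sizeof *c); }

/* Return the slot (bucket head or some hnext field) holding key's
 * node index, or the terminating 0 slot if the key is absent. */
static int *chain_slot(struct lru *c, int key)
{
    int *p = &c->bucket[(unsigned)key % NBUCKET];
    while (*p && c->n[*p].key != key)
        p = &c->n[*p].hnext;
    return p;
}

static void list_unlink(struct lru *c, int i)
{
    c->n[c->n[i].prev].next = c->n[i].next;
    c->n[c->n[i].next].prev = c->n[i].prev;
}

static void list_push_front(struct lru *c, int i)
{
    c->n[i].prev = 0;
    c->n[i].next = c->n[0].next;
    c->n[c->n[0].next].prev = i;
    c->n[0].next = i;
}

static int lru_get(struct lru *c, int key, int *val)
{
    int i = *chain_slot(c, key);
    if (!i)
        return 0;
    list_unlink(c, i);        /* touching a key makes it most recent */
    list_push_front(c, i);
    *val = c->n[i].val;
    return 1;
}

static void lru_put(struct lru *c, int key, int val)
{
    int *p = chain_slot(c, key), i = *p;
    if (!i) {
        if (c->count < CAP) {
            i = ++c->count;
        } else {
            i = c->n[0].prev;                 /* tail = least recently used */
            *chain_slot(c, c->n[i].key) = c->n[i].hnext;
            list_unlink(c, i);
        }
        c->n[i].key = key;
        p = &c->bucket[(unsigned)key % NBUCKET];
        c->n[i].hnext = *p;                   /* prepend to hash chain */
        *p = i;
        list_push_front(c, i);
    } else {
        list_unlink(c, i);                    /* update = touch */
        list_push_front(c, i);
    }
    c->n[i].val = val;
}
```

Even in this toy you can see the cache-behaviour tension: every operation walks both structures, and the hash chain and the recency list pull different memory into cache.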