A few months back I tried using io_uring for some performance-critical network I/O and found it was slower than epoll. A bit sad, because epoll has a notoriously janky API and the io_uring API is much nicer. (This part is also sad for me as a Unix/Linux fanboy, because io_uring's API is very similar to what Windows was doing 20 years ago.)
I've spent essentially the last year trying to find the best way to use io_uring for networking inside the NVMe-oF target in SPDK. Many of my initial attempts were also slower than our heavily optimized epoll version. But now I feel like I'm getting somewhere and I'm starting to see the big gains. I plan to blog a bit about the optimal way to use it, but the key concepts seem to be:
1) create one io_uring per thread (much like you'd create one epoll group)
2) use the provided buffer mechanism to post a pool of large buffers to an io_uring. Bonus points for the newer ring based version.
3) keep a large (128k) async multishot recv posted to every socket in your set at all times
4) as recvs complete, append the next "segment" of the stream to a per-socket list.
5) parse the protocol stream. As you make it through each segment, return it to the pool*
6) aggressively batch data to be sent. You can only have one outstanding at a time per socket, so make it a big vectored write. Writes are only outstanding until they're confirmed queued in your local kernel, so it is a fairly short time until you can submit more, but it's worth batching into a single larger operation.
* If you need part of the stream to live for an extended period of time, as we do for the payloads in NVMe-oF, build scatter-gather lists that point into the segments of the stream and maintain a reference count on each segment. Return a segment to the pool when its count drops to zero.
Everyone knows the best way to use epoll at this point. Few of us have really figured out io_uring. But that doesn't mean it is slower.
> Few of us have really figured out io_uring. But that doesn't mean it is slower.
seastar.io is a high level framework that I believe has "figured out" io_uring, with additional caveats the framework imposes (which is honestly freeing).
It's also worth noting that io_uring has had at most 10-15 engineer-years' worth of performance tuning vs. the many (?) hundreds of years that epoll has received. I work with Jens, Pavel, and others and can confidently say that low-queue-depth perf parity with epoll is an important goal of the effort.
As an aside, it's great to see high praise from an SPDK maintainer. One of the big reasons for doing io_uring in the first place was that it was impossible to compete on performance with full kernel bypass unless you changed the syscall approach.
I'd be very interested to read that blog post. Besides your tips for maximum performance, I'm curious about the minimum you have to do to get a significant improvement. I can easily imagine someone basically using it to poll for readiness like epoll and being disappointed. But if that's enough to benefit, I'd be surprised and intrigued. More likely you need to actually use it to enqueue the op, but folks have struggled with ownership. Is doing that in a not-quite-optimal way (extra copies on the user side) enough? Or do you need to optimize those away? Do you need the buffer pooling and/or multishot stuff?
Do fixed buffers help for network I/O? In August 2022 @axboe said "No benefits for fixed buffers with sockets right now, this will change at some point."
I had a similar result with storage I/O to NVMe SSDs. io_uring was slightly slower than my optimised Linux thread pool at 4k random-access I/O at about 2.5M IOPS in my benchmarks, and this despite the syscall overhead in the thread pool version being measurable.
io_uring was only a little slower, and there are some advantages to io_uring with regard to adaptive performance (because Linux doesn't expose some information to userspace that's useful for this, so userspace has to estimate with lag - see Go's scheduler), but I was hoping it would be significantly faster. Then again it was good to have an alternative to validate the thread pool design.
IOCP certainly was ahead of its time, but it only does the completion batching, not the submission batching. io_uring is significantly better than anything available on Windows right now.
It comes with its own set of challenges. In the integration I've seen, it basically meant that all the latency in the system went into the io_uring_enter() call, which then blocked for far longer than any individual IO operation we've ever seen. Your application might prefer to pause 50 times for 20us (+ syscall overhead) in an event-loop iteration instead of a single time for 1ms (+ less syscall overhead), because the latter means some IO will just sit around for 1ms, totally unhandled.
The only way to avoid big latencies on io_uring_enter is to use the submission queue polling mechanism with a background kernel thread, which has its own set of pros and cons.
This sounds abnormal; are you calling io_uring_enter in a way that asks it to wait for CQEs before returning?
I don't have much of a feel for this because I'm on the "never calling io_uring_enter" plan, but I expect I would have found it alarming if it took 1ms while I was using it.
For many syscalls, the primary overhead is the transition itself, not the work the kernel does. So doing 50 operations one by one may take, say, 10x as much time as a single call to io_uring_enter for the same work. It really shouldn't be just moving latency around unless you are doing very large data copies (or similar) out of the kernel such that syscall overhead becomes mostly irrelevant. If syscall overhead is irrelevant in your app and you aren't doing an actual asynchronous kernel operation, then you may as well use the regular syscall interface.
There are certainly applications that don't benefit from io_uring, but I suspect these are not the norm.
You need to measure it for your application. A lot of people think "syscalls are expensive" because that's been repeated for years, but often the cost is actually in the syscall's implementation, not the transition overhead.
E.g. a UDP send syscall will do a whole lot of route lookups, iptables rule evaluations, potential eBPF program evaluations, copying data into packets, splitting packets, etc. I measured this to be far more than 10x the syscall overhead. But your mileage may vary depending on which calls you use.
As for the applications: these lessons were collected in a CDN data plane. There are hardly any applications out there that are more async-IO intensive.