We chose Rust not just for the expected speed and safety, but also because we needed to create a shared object that could provide the API (written in Go) with exactly the same parsing and matching engine as our edge (initially Nginx for web traffic, written in C and Lua).
The key was to produce consistent behaviour in how we work with filter expressions, so that there is no difference in behaviour that malicious users can leverage later. That is, if a customer used a filter to create a security rule and that filter later behaved even slightly differently, that would be a security incident in the making and we would have failed our customer.
Rust stood out for being a safer language than C (we had that bug), for being able to produce a shared object we can use in our API (unlike Lua, which does not make this easy), and for not coming with garbage collection.
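For anyone wondering what "a shared object both sides can load" looks like in practice, here is a minimal, purely illustrative sketch of the C ABI a Rust cdylib can expose; the function name and the validate-only behaviour are made up for the example and are not our actual interface.

```rust
// Build with `crate-type = ["cdylib"]` in Cargo.toml to get a shared object.
// The name and behaviour below are hypothetical; they only show the C ABI shape.
use std::ffi::CStr;
use std::os::raw::{c_char, c_int};

/// Parse a filter expression and report whether it is valid.
/// Returns 1 for a valid expression, 0 for an invalid one or a null pointer.
#[no_mangle]
pub extern "C" fn filter_validate(expr: *const c_char) -> c_int {
    if expr.is_null() {
        return 0;
    }
    // SAFETY: the caller promises `expr` is a NUL-terminated C string.
    let expr = unsafe { CStr::from_ptr(expr) };
    match expr.to_str() {
        // Stand-in for a real parser: accept any non-empty UTF-8 string.
        Ok(s) if !s.is_empty() => 1,
        _ => 0,
    }
}
```

The Go API can load that same .so through cgo and the Lua side of the edge through a Lua FFI binding, which is what gives both sides identical parsing behaviour from a single codebase.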
We already have some other small bits of Rust in parts of our dev pipeline, so we were comfortable selecting it, but this is the first time we would be shipping Rust globally and running it at the edge.
Our main expectation and hope is performance.
The matching engine that applies the filters is on the hot path for handling requests: it runs early in the request lifecycle, and all traffic on a zone (customer domain) needs to be evaluated to see whether it matches any existing filter. So the numbers we are looking to gather relate to the time it takes to execute expressions similar to those we have already, as well as more complex expressions, and what this does to CPU load. Those two things will dictate how this new project affects throughput.
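For illustration, here is a minimal sketch of the kind of per-expression timing we mean. The predicate is a stand-in, not our engine, and a real comparison would use a proper benchmark harness under production-shaped traffic, since CPU load under concurrency is the other half of the picture.

```rust
use std::hint::black_box;
use std::time::Instant;

fn main() {
    // Stand-in for evaluating one compiled filter expression against a request;
    // the real engine evaluates a parsed expression against many request fields.
    fn evaluate(path: &str, user_agent: &str) -> bool {
        path.starts_with("/api/") && user_agent.contains("curl")
    }

    let iterations: u32 = 10_000_000;
    let start = Instant::now();
    let mut hits: u64 = 0;
    for _ in 0..iterations {
        // black_box keeps the optimizer from folding the whole loop away.
        if evaluate(black_box("/api/v1/login"), black_box("curl/7.64.1")) {
            hits += 1;
        }
    }
    let elapsed = start.elapsed();
    println!(
        "{} evaluations ({} hits) in {:?}, ~{} ns each",
        iterations,
        hits,
        elapsed,
        elapsed.as_nanos() / u128::from(iterations)
    );
}
```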
The hope is that a more powerful matching engine that is fast and doesn't increase CPU load will allow us to remove code from our system whilst providing customers with fine-grained control of all features. Today a lot of features implement their own matching on paths, headers, etc., and these implementations are not always efficient and are not consistent with one another.
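To make the "one engine instead of many ad-hoc matchers" idea concrete, here is a deliberately simplified sketch of a single expression tree matching on both a path and a header. The fields, operators, and names are invented for the example and are not the real grammar or data model.

```rust
use std::collections::HashMap;

/// A tiny, illustrative expression tree. A real engine has a proper
/// parser and many more field types and operators.
enum Expr {
    PathPrefix(String),
    HeaderEquals(String, String),
    And(Box<Expr>, Box<Expr>),
    Or(Box<Expr>, Box<Expr>),
}

struct Request {
    path: String,
    headers: HashMap<String, String>,
}

fn matches(expr: &Expr, req: &Request) -> bool {
    match expr {
        Expr::PathPrefix(prefix) => req.path.starts_with(prefix),
        Expr::HeaderEquals(name, value) => {
            req.headers.get(name).map_or(false, |v| v == value)
        }
        Expr::And(a, b) => matches(a, req) && matches(b, req),
        Expr::Or(a, b) => matches(a, req) || matches(b, req),
    }
}

fn main() {
    // "path starts with /admin AND header x-forwarded-proto == http"
    let rule = Expr::And(
        Box::new(Expr::PathPrefix("/admin".into())),
        Box::new(Expr::HeaderEquals("x-forwarded-proto".into(), "http".into())),
    );
    let req = Request {
        path: "/admin/users".into(),
        headers: HashMap::from([("x-forwarded-proto".into(), "http".into())]),
    };
    println!("rule matches: {}", matches(&rule, &req));
}
```

Every feature that today does its own path or header check could instead hand an expression like that to one shared engine, so both the parsing and the evaluation cost are paid in one consistent place.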
Performance is therefore what we are focused on measuring and improving. If the numbers are good, we hope Rust will give us the opportunity to remove other code and increase throughput, without the fear that such a change has opened us up to other, more fundamental risks.
Once bitten, twice shy, eh? I'm not saying you are overreacting, because I remember the bug and it was bad news. I use C pretty much exclusively (and have for 20 years), but I would rather see new development for handling user input and frequent memory (re)allocation done in Rust.
As a daily systems language Rust is not quite there (for me) yet.
Rust's allocation API is actually making great progress! The RFC process really speeds these things along, so that their merits can be tested (through the unstable API) before being stabilized: https://github.com/rust-lang/rfcs/pull/1398
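To make that a bit more concrete, here is a minimal sketch of a custom global allocator using the `GlobalAlloc` trait that is already stable; the richer per-collection allocator trait discussed in the linked RFC is, as far as I know, still behind an unstable feature.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// A global allocator that wraps the system allocator and counts allocations.
/// Purely illustrative; a real deployment would pick something tuned for the workload.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v: Vec<u32> = (0..1024).collect();
    println!(
        "allocations so far: {} (vec len {})",
        ALLOCATIONS.load(Ordering::Relaxed),
        v.len()
    );
}
```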