While langsec can't easily mitigate Spectre, because the processor is trying to hide where the performance comes from, it's worth noting that several new languages are working on ways to write code where you can actually assert, and have the compiler check, that the timing of the code you write is bounded and uniform.
It's very easy, I think, to throw up our hands and say, "Well gosh all this language stuff is useless because timing attacks are so scary!" But in reality, they're pretty well studied and many of them are actually pretty simple to understand even if they can be hard to recognize.
Both hardware AND software sides of our industry need to start taking correctness, both at compile and runtime, seriously. The days where we can shrug and say, "But that's too slow, don't run code you don't trust" are dead. We killed them by ceding the idea of hardware ownership to big CSPs. The days where we can say, "This is too complicated to do!" or "This doesn't deliver customer value!" are also going away; the threat of combination attacks easily overshadows any individual attack, and small vulnerabilities tend to multiply the total surface area into truly cataclysmic proportions.
But also gone is the day when we can say things like, "Just use Haskell or OCaml!" We've seen now what these environments offer. It's a great start and it's paved the way for a lot of important understanding, but even that is insufficient. Our next generation of programming environments needs to require less abstract category theory, needs to deliver more performant code, and needs to PROVE properties of code to the limit of runtime resolution. The hardware and OS sides of the equation need to do the same thing. And we as engineers need to learn these tools and their techniques inside and out; and we shouldn't be allowed to sell our work to the general public if we don't.
> several new languages are working on ways to write code where you can actually assert and have the compiler check that the timing of the code you write is bounded and uniform.
I'm interested in how that would avoid the halting problem. Let's say I write code, compile it, and run some "timing verifier" on it. That verifier either runs my code and verifies that its timing is correct on that run, or inspects the machine code against a known specification of the hardware I'm running it on right then and ensures all of the instructions obey my timing constraints. How would you check that the code's timing is bounded and uniform on subsequent executions? Or on other hardware? Or in the face of specifications that are incorrect regarding the timing characteristics of machine code instructions (CPU/assembly language specs are notoriously incomplete and errata-filled)?
I suspect something fundamental would have to be changed about computer design (e.g. a "CPU, report thine own circuit design in a guaranteed-accurate way") to make something like this possible, but am not sure what that would be, or if it's feasible.
It avoids the halting problem the same way literally all sound static analysis does: with false positives. The Java type checker will reject programs that would not have type errors at runtime. And a system that verifies timing assertions with SMT or whatever will reject some programs that would not fail the assertion.
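A toy sketch of the point above, with entirely made-up rules: a deliberately conservative "constant-time" checker that accepts a Python function only if its body is straight-line code. Everything it accepts trivially has input-independent control flow, but, like any sound analysis, it has false positives: it rejects loops that are in fact constant-time.

```python
import ast

# Control-flow constructs the toy checker refuses to reason about.
BANNED = (ast.For, ast.While, ast.If, ast.IfExp)

def is_provably_constant_time(src: str) -> bool:
    """Accept only straight-line code; conservative, hence sound."""
    return not any(isinstance(node, BANNED) for node in ast.walk(ast.parse(src)))

# Accepted: straight-line arithmetic, no branches or loops.
assert is_provably_constant_time("def f(x):\n    return x * 3 + 1")

# Rejected, a false positive: this loop always runs exactly 4 times,
# so its timing is uniform, but the conservative rule can't see that.
assert not is_provably_constant_time(
    "def g(x):\n    for _ in range(4):\n        x += 1\n    return x")
```

That's the whole trade: the checker never wrongly blesses a program, at the cost of turning away some safe ones.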
The halting problem has never actually stopped static analysis tools. Static analysis tools that check timing assertions have been around for a very long time.
Sure, but this isn't a static analysis tool in the same way a type system is. This is an analysis tool which checks for mostly platform-unrelated, entirely runtime behavior which can vary based on a lot of external factors.
When you say "Static analysis tools that check timing assertions have been around for a very long time.", what are you referring to? I've used analyzers that check for potentially-inescapable loops, do assembly/machine code operation counts, and look for the presence of interruptions that are known to take a potentially infinite/nondeterministic amount of time (waiting for I/O), or ones whose lower bound in time is known (scheduler interactions like sleep). How would you analyze for instructions which have theoretically-constant times, but which in practice are vulnerable to "constant plus or minus"-type timing attacks like Spectre? How would that analysis yield results we don't already know, a la "in this section of code, you should obfuscate/prevent the gathering of timing data, and/or try to defeat speculative execution"?
No one here is saying that there is a runtime that can protect against Spectre. Spectre is just one extreme example of the timing attacks which have been troubling our industry for the better part of a decade.
It's entirely possible to prove that some types of code do not significantly vary their running time based on the content of their input. A classic example of this is a hashing algorithm whose execution time is only dependent on the length of the input.
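The same length-only-timing property is what fixes the password oracles mentioned below. Here is a minimal sketch of a comparison whose running time depends only on the length of the inputs, never on where the first mismatching byte occurs (in real code you'd reach for the stdlib's `hmac.compare_digest`, which exists for exactly this):

```python
import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without an early exit on mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate differences; loop always runs to the end
    return diff == 0

assert constant_time_equal(b"hunter2", b"hunter2")
assert not constant_time_equal(b"hunter2", b"hunter3")
# Stdlib equivalent, hardened for the same purpose:
assert hmac.compare_digest(b"hunter2", b"hunter2")
```

A naive `a == b` can bail out at the first differing byte, and that difference in exit point is the oracle.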
I'm not sure if people recall password oracles but they're still valid attacks today. We can only eliminate these by starting at the bottom and working our way up.
If your response to Spectre is to give up on your computer security, I don't think I want you writing software for me. These are the challenges our industry has to face. Failure is not really an option.
The Halting problem as a burden for this kind of code analysis is very overstated. What you do is instruct the compiler how to infer timing inductively on a language of all basic internal and I/O ops.
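Here's a toy sketch of that inductive inference over a tiny expression language (all op costs and node shapes are invented for illustration): each basic op has a known cost, and compound forms combine their children's bounds, so a sequence adds, a branch takes the max, and a loop with a static iteration bound multiplies. Anything the rules can't bound is simply rejected.

```python
# Invented per-op worst-case costs, in arbitrary "cycles".
OP_COST = {"add": 1, "mul": 3, "load": 4, "store": 4}

def bound(node):
    """Infer a worst-case timing bound inductively over the AST."""
    kind = node[0]
    if kind == "op":                        # ("op", "add")
        return OP_COST[node[1]]
    if kind == "seq":                       # ("seq", child1, child2, ...)
        return sum(bound(c) for c in node[1:])
    if kind == "branch":                    # ("branch", then, else)
        return max(bound(node[1]), bound(node[2]))
    if kind == "loop":                      # ("loop", max_iters, body)
        return node[1] * bound(node[2])
    raise ValueError(f"cannot bound {kind!r}: reject the program")

prog = ("seq",
        ("op", "load"),
        ("loop", 8, ("seq", ("op", "add"), ("op", "mul"))),
        ("op", "store"))
assert bound(prog) == 4 + 8 * (1 + 3) + 4  # = 40
```

A real tool would carry these rules down to I/O ops and calibrate the cost table per target, which is where the "your software needs training" step comes in.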
And of course it's 2018; nothing stops us from saying "Your software needs training." We can start from a somewhat conservative bound and then refine it with runtime metrics until it's very precise for a given deployment, and as long as we hold the system static for that measurement, it's pretty fair.
As for hardware verification, I agree it's a harder problem in some ways, but since there is no obfuscation of the important timing quantities, it also gets easier.