Hacker News

I feel like there is an impedance mismatch between what CPU designers think memory protection guarantees and what developers think memory protection offers. For production code, I never got much higher level than “C with classes” and if you asked me 15 years ago if leaking bits of information through the branch predictor or memory buffer was a failure to conform to the x86 memory protection model I would’ve said no. Memory protection is mainly there to keep one app from crashing another. If you’ve got untrusted machine code running on your system, you’ve already been compromised. I feel like CPU designers are still in that mindset. They still measure their success by how well they do on SPEC.

Maybe instead of slowing everything down for the sake of JavaScript, we need a special privilege mode for untrusted code that disables speculative execution, sharing a core with another hyper-thread, etc.



> If you’ve got untrusted machine code running on your system, you’ve already been compromised.

You are exactly correct. However, the advent of "The Cloud" changed that.

"The Cloud" by definition runs untrusted machine code right next to yours. IBM and other players have been screaming about Intel being insecure for decades--and it fell on deaf ears.

Well, suck it up buttercup, all your data are belong to us.

While I hear lots of whingeing, I don't see IBM's volumes rising as people move off of Intel chips onto something with real security. When it comes down to actually paying for security, people still blow it off.

And so it goes.


Why should people move to IBM? Remember, their POWER processors were also vulnerable to both Meltdown and L1TF - IBM systems are probably the only non-Intel servers that were. (Note that I really do mean non-Intel here, since AMD wasn't affected.) Their z/OS mainframes were probably vulnerable too, but they don't release public security information about them. The only reason no researchers had discovered this is that IBM hardware is too expensive to test.


Red Hat released info about z/OS mainframes, they're also vulnerable to Meltdown as well as Spectre. ARM also has one new design that's vulnerable to Meltdown, and everyone has Spectre vulnerable designs.


I kind of blame Google for creating the whole trend of hosting all this mission-critical stuff on x86 PCs: https://www.pcworld.com/article/112891/article.html. (AltaVista was run on DEC iron. Google pioneered running these services on disposable commodity hardware.) That being said, POWER got hit with some of this stuff too.


Speaking about Google, how vulnerable are they, and how many CPUs will they need to replace?

Did someone already demonstrate that speculative-execution bugs are observable in Google's cloud stack?


[flagged]


I think you're being a little harsh.

These things end up having unintended consequences. It isn't about 'fuck Google', it's about identifying the root cause of a problem. x86 PCs come from a long line of non-multi-user, non-multi-tasking computers, whereas DEC mainframes are perhaps the more natural choice for what Google wanted to do.


So, let me get this straight.

You're saying Google should have foreseen Spectre et al 20 years ago and therefore should have used DEC mainframes as its infrastructure?

And further, it's all Google's fault that the cloud uses x86 infrastructure?

Wat?


No, I said it's not about 'fuck Google', it's about identifying the root cause of a problem.

I didn't foresee this, and I don't recall anyone else predicting it, so no, I don't think Google should have foreseen it either. But it has happened nevertheless, so we should endeavour to understand why, so it doesn't happen again. It isn't about blame, it isn't about pointing fingers.

Now that we've identified an issue, the next time an industry moves from big iron to commodity x86 PCs we can ask the question: is this going to be a problem?


I think he or she is drawing an analogy between Intel and Google both "cutting corners" to save costs, which worked well for them in the short term but had unforeseen consequences for everyone else over the longer term. This could be an instance of the famous "tragedy of the commons".


If DEC had won, would we have the same issue?

"In some cases there’s non-privileged access to suitable timers. For example, in the essentially obsolete Digital/Compaq Alpha processors the architecture includes a processor cycle counter that is readable with a non-privileged instruction. That makes attacks based on timing relatively easy to implement if you can execute appropriate code."

I still call bullshit on his entire hypothesis.

https://hackaday.com/2018/01/10/spectre-and-meltdown-attacke...



