Here's a question I've been pondering: suppose you are a program delivered from an origin (trusted) to a client machine (untrusted). You start without any credentials, but you have the option of dialing out and asking the origin for a shared secret (e.g., a private key). Is there any useful way for the origin to require that you, the delivered program, prove that the client machine you're running on can be trusted with the shared secret? If it's possible at all, I'm guessing it involves a "secure boot" with a TPM chip.
Why would I want a machine, which I own, to trust someone other than its owner?
And no, I do not trust the TPM in its current iteration. We mere owners are prevented from knowing its private key. Nor can we generate and store our own private key (or buy a chip with a known private key).
So you can, for example, participate in a distributed computing project where the results sent in by your machine can be trusted.
(An online game that calculates physics client-side is a special case of a distributed computing project ;)
It doesn't require you to give up ownership over your entire computer, mind you. If your own OS ran in a hypervisor that was in one TPM "domain" (you have the key to this domain), but then applications could request to be run directly on the hypervisor with a separate TPM domain (and thus keep keys your own OS wouldn't ever be able to touch), that'd be good enough to allow for any secure distributed computation you might want to do. At any time, you'd still be able to wipe out those domains (and thus kill the apps running in them)--but you wouldn't be able to otherwise introspect them.
Basically, it's like the duality of "OS firmware" and "baseband firmware" on phones--except it would all be being handled on the same real CPU.
"you wouldn't be able to otherwise introspect them"
Can't implement it until you define its behavior. If you define its behavior you can emulate it (which, outside this discussion, is really useful). If you can emulate it, you can single step it, breakpoint it, dump any state of the system including memory, reboot it into "alternative" firmware...
Your only hope is playing games with timing. So here's a key, and it's only valid for one TCP RTT. Well, if they want to operate over satellite they must allow nearly a second, so move your cracking machine next door, emulate a long TCP path, and you've got nearly a second. On the other hand, if instead of running over the internet you merely wanted to prove Bluetooth distances or Google Wallet NFC distances, suddenly you've gone from something I can literally do at home easily to a major lab project.
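The timing game looks roughly like this as a sketch (the `MAX_RTT` bound and the `transport` callable are my assumptions, standing in for a real client link):

```python
import time
import secrets

MAX_RTT = 0.05  # seconds; assumed bound for a "local", non-satellite client

def challenge_response(transport):
    """Issue a fresh nonce and reject correct answers that arrive too slowly.

    `transport` is any callable that echoes the nonce back -- a
    stand-in for the real network link to the client.
    """
    nonce = secrets.token_bytes(16)
    start = time.monotonic()
    reply = transport(nonce)
    rtt = time.monotonic() - start
    if reply != nonce:
        return False, rtt
    return rtt <= MAX_RTT, rtt

# A fast (local) echo passes; a slow one -- e.g. a cracking rig hiding
# behind an emulated long TCP path -- fails the bound despite being correct.
ok_fast, _ = challenge_response(lambda n: n)
ok_slow, _ = challenge_response(lambda n: (time.sleep(0.1), n)[1])
```

The catch, as above, is that the bound you can afford to enforce is set by the slowest legitimate link you must support.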
Another thing that works is "prove you're the fastest supercomputer in the world by solving this CFD simulation in less than X seconds". Emulating that would take a computer much faster than the supposedly fastest computer. So this is pretty useful for authenticating the TOP500 supercomputer list, but worthless for consumer goods.
This is inane. My question was about mathematically provable secure computation, not kludges that any old advanced alien civilization could bypass by sticking your agent-computer in a universe simulator. :)
Let's ignore the computers. You are a spy dispatched from Goodlandia to Evildonia. You want to meet with your contact and exchange signing keys. You can send a signal at any time to Goodlandia that will tell them to cut off all contact with you, because you believe you have been compromised. (A certificate revocation, basically.)
Your contact, thus, expects one of three types of messages from you:
1. a request for a signing key with an attached authentication proof;
2. a message, signed with a key, stating you have been compromised and to ignore all further messages sent using that key;
3. or a message, signed with a non-revoked key, containing useful communication.
Now, is there any possible kind of "authentication proof" that you could design, such that, from the proof, it can be derived that:
1. you have not yet been compromised;
2. you will know when you have been compromised;
3. and that, in the case of compromise, you will be allowed to send a revocation message before any non-trusted messages are sent?
You can assume anything you like about the laws of Evildonia to facilitate this--like that it is, say, single-threaded and cooperatively multitasking--but only if those restrictions can also carry over to the land of Neoevildonia, a version of Evildonia running inside an emulator. :)
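The three message types could be modeled like so, with HMAC standing in for real signatures and a placeholder where the open "authentication proof" question lives (all names and the placeholder check are my assumptions):

```python
import hmac
import hashlib
import secrets

keys = {}        # agent_id -> current signing key
revoked = set()  # agent_ids whose keys have been burned

def issue_key(agent_id, proof):
    # Message type 1: a key request with an attached "authentication
    # proof". What that proof could even be is exactly the open problem;
    # here it's just a placeholder truthiness check.
    if not proof:
        return None
    key = secrets.token_bytes(32)
    keys[agent_id] = key
    return key

def sign(key, message):
    return hmac.new(key, message, hashlib.sha256).digest()

def handle_signed(agent_id, message, tag):
    key = keys.get(agent_id)
    if key is None or agent_id in revoked:
        return "ignored"
    if not hmac.compare_digest(sign(key, message), tag):
        return "forged"
    if message == b"COMPROMISED":
        revoked.add(agent_id)  # message type 2: revocation
        return "revoked"
    return "accepted"          # message type 3: useful traffic

key = issue_key("agent", proof=True)
r1 = handle_signed("agent", b"intel", sign(key, b"intel"))
r2 = handle_signed("agent", b"COMPROMISED", sign(key, b"COMPROMISED"))
r3 = handle_signed("agent", b"more intel", sign(key, b"more intel"))
```

The mechanics of types 2 and 3 are routine; everything interesting is hidden in that `proof` parameter, which is the part Neoevildonia can emulate.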
It might be possible to exclude enough realistic current-day threats to eventually end up with something that "works", but I don't think that's useful in any way.
Nonetheless, if you want to exclude computers, the human equivalent of "stick it in an emulator" is the old philosophers' "brain in a vat" problem. That's well-traveled ground: no, there is no proof you're not in a vat.
There is no way to prove you have not been compromised, because there is no way to prove that no theoretical advancement will ever occur in the field (or not even an advancement, just an NSA declassification, etc.). So you're limited to a snapshot in time, at the very least.
You're asking for something that's been trivially broken innumerable times outside the math layer.
It's hard to say whether you're asking for steganography (which isn't really "math"), an actual math proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.
> It's hard to say whether you're asking for steganography (which isn't really "math"), an actual math proof, or just a Wikipedia pointer to the Kerberos protocol, which is easily breakable but might eventually fit your requirements if you add enough constraints.
None of those; I know the current state of the art in cryptography/authentication, and that it doesn't quite cover what I'm asking for. I'm basically just waiting for you to say that the specific kind of designed proof I asked for is impossible even in theory, so I can go and be sad that my vision for a distributed equivalent to SecondLife[1] will never happen.
My own notion would be that the Goodlandian agent would simply request that his contact come and look at the machine itself, outward-in, and verify to him that he's running on a real, trusted piece of hardware with no layers of emulation, at which point the contact gives him an initial seed for a private key he will use to communicate with from then on. The agent stores that verification on his TPM as a shifting nonce (think garage-door openers), so that whenever the TPM is shut down, it immediately becomes invalid as far as the contact is concerned--and must be revalidated by the contact again coming and looking at the physical machine. All we have to guarantee after that is that any method of introspecting the TPM on a piece of currently-trusted-hardware fries the keys. Which is, I think, a property TPMs already generally have?
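The shifting-nonce idea in toy form: a hash chain seeded at physical inspection, held volatile so that a TPM power cycle invalidates it and forces re-inspection (the hash-chain construction and all names here are my assumptions, not how any real TPM works):

```python
import hashlib
import secrets

def step(nonce):
    """Advance the rolling code one step, garage-door-opener style."""
    return hashlib.sha256(nonce).digest()

# Physical inspection seeds both sides with the same value.
seed = secrets.token_bytes(32)
tpm_state = seed      # held volatile in the TPM; lost on shutdown
contact_state = seed  # held by the Goodlandian contact

def authenticate():
    global tpm_state, contact_state
    token = step(tpm_state)
    tpm_state = token  # roll forward; old tokens are never reusable
    ok = token == step(contact_state)
    if ok:
        contact_state = token  # contact rolls forward in lockstep
    return ok

ok1 = authenticate()
ok2 = authenticate()
tpm_state = secrets.token_bytes(32)  # simulate TPM shutdown/wipe
ok3 = authenticate()
```

After the simulated power cycle, no token the agent can produce matches the contact's rolled-forward state, so the contact has to come back and look at the hardware again.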
Besides being plain-ol' impractical [though not wholly so; it'd be fine for, say, inspecting and then safe-booting military hardware before each field-deployment], I'm sure there's also some theoretical problem even here that renders it all moot. I'm not a security expert. :)
---
[1] More details on that: picture a virtual world (technically, a MOO) to which any untrusted party can write and hot-deploy code to run inside an "AI agent"--a self-contained virtual-world object that gets a budget of CPU cycles to do whatever it likes, but runs within its own security sandbox. Also picture that people who are in the same "room" as each AI agent are running an instance of that agent on their own computers, and their combined simulation of the agent is the only "life" the agent gets; there is no "server-side canonical version" of the agent's calculations, because there are no servers (think Etherpad-style collaboration, or Bitcoin-style block-chain consensus.)
Now, problematically, AI agents could sometimes be things like API clients for out-of-virtual-world banks. How should those go about repudiating their own requests?
Eh, either way, it's the same problem. Imagine you're an agent for BigBank, thinking you're running on Alice's computer. If you authenticate yourself to BigBank, BigBank gives you a session key you can use to communicate securely with them--and then you will take messages from Alice and pass them on to BigBank.
But you could also be running, instead, on an emulator on Harry's computer--and Harry wants Alice's credit card info. So now Harry reaches in and steals the key BigBank gave you, then deploys a copy of you back into the mesh, hardcoded to use that session key. Alice then unwittingly uses Harry's version of you--and Harry MITMs her exchange.
In ordinary Internet transactions, this is avoided because Alice just keeps an encryption key (a pinned cert) for BigBank, and speaks to them directly. If you, as an agent, are passed a request for BigBank, it's one that's already been asymmetrically encrypted for BigBank's eyes only. And that works... if the bank is running outside of the mesh.
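A toy model of that pass-through property: `seal`/`open_sealed` here are an XOR-keystream stand-in, emphatically not real crypto; in reality the blob would be public-key encrypted under the bank's pinned cert. The point is just that the agent only ever relays an opaque blob:

```python
import hashlib
import secrets

# Stand-in for "encrypted for BigBank's eyes only": in reality this would
# be asymmetric encryption under the bank's pinned certificate. Here the
# pinned copy is modeled as a shared secret only Alice and the bank hold.
bank_secret = secrets.token_bytes(32)
alice_pinned = bank_secret

def seal(key, plaintext):
    # Toy XOR keystream; limits plaintext to 32 bytes. Illustration only.
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def open_sealed(key, blob):
    nonce, ct = blob[:16], blob[16:]
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

# The agent (even Harry's emulated copy of it) only sees ciphertext;
# there is no session key in the agent for Harry to steal.
blob = seal(alice_pinned, b"card=4111...")
relayed_by_agent = blob  # pure pass-through
recovered = open_sealed(bank_secret, relayed_by_agent)
```

Harry can still drop or replay the blob, but he never learns Alice's card number, because the secret lives at the endpoints rather than in the agent.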
But if the bank is itself a distributed service provided by the mesh? Not so much. (I'm not sure how much of a limitation that is in practice, though, other than "sadly, we cannot run the entire internet inside the mesh.")
There is no way to prove you have not been compromised, as it's possible to be compromised without knowing it -- e.g., by someone listening to the EM leakage as information is sent from one chip to another.
Without regressing into Plato's-cave, brain-in-a-jar, emulation-type philosophical discussions...
The standard way of handling distributed clients is with trust calculations. A simple trust calculation is to duplicate a packet of work across multiple clients and/or process questionable ones yourself. If the results match, you bump up the trust score for that client.
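A minimal sketch of that duplicate-and-compare scheme (function names, quorum size, and the trust-score increments are my assumptions, not any standard algorithm):

```python
import random

def trusted_compute(work_unit, clients, trust, quorum=3):
    """Send one work unit to `quorum` clients; agreement with the
    majority raises a client's trust score, disagreement lowers it."""
    chosen = random.sample(list(clients), quorum)
    results = {c: clients[c](work_unit) for c in chosen}
    votes = list(results.values())
    majority = max(set(votes), key=votes.count)
    for client, result in results.items():
        trust[client] += 1 if result == majority else -5
    return majority

clients = {
    "alice":   lambda x: x * x,
    "bob":     lambda x: x * x,
    "mallory": lambda x: 0,  # cheating client sends back garbage
}
trust = {c: 0 for c in clients}
answer = trusted_compute(5, clients, trust)
```

Cheaters get outvoted and their trust score drains fast; honest clients slowly accumulate trust and can eventually be sent unduplicated work.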
Public key crypto is all fine and good. It's great for executables so you know who they came from (hmm, trust again..).
So why can't I have the private key to a tpm I buy or have integrated in my motherboard?
I do not think there is a way of doing this for a general-purpose computer, but I remember hearing a talk about doing it with small embedded devices (like, say, the microcomputers monitoring the coolant system for your power plant). The idea is that you send the device enough random bits to completely fill its read/write memory, then require it to send that same stream of bits back. After this, you can re-send it the actual program you want it running.
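A toy model of that memory-filling trick (sizes and class names assumed; real versions also have to rule out the device compressing the filler or stashing it off-chip, usually via strict timing bounds):

```python
import secrets

MEM_SIZE = 1024  # assumed total read/write memory of the embedded device

class Device:
    """Toy model of the embedded device's read/write memory."""
    def __init__(self):
        self.mem = bytearray(MEM_SIZE)

    def load(self, data):
        self.mem[:len(data)] = data

    def dump(self):
        return bytes(self.mem)

def attest_and_program(device, program):
    # 1. Fill *all* writable memory with fresh random bits, leaving no
    #    room for resident malware to hide while still answering.
    filler = secrets.token_bytes(MEM_SIZE)
    device.load(filler)
    # 2. The device must echo the exact stream back.
    if device.dump() != filler:
        return False
    # 3. Only then send the real program (padded to full memory size).
    device.load(program + bytes(MEM_SIZE - len(program)))
    return True

device = Device()
ok = attest_and_program(device, b"\x90" * 16)
```

A compromised device that wants to keep malware resident can't simultaneously store the full random filler, so the echo in step 2 fails.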
"prove that the client machine you're running on can be trusted"
Well, your remote server (the verifier) is going to need a bunch of unit tests and some kind of load-testing system for scalability testing, so rather than piling dozens of physical clients on your desk you may as well make a client emulator that can run in a virtual image... Oh, whoops. That means anyone can run the emulator until it authenticates, then hit pause and start dumping memory to see your "secret"
Basically, if you can define "it" to run on a Turing-complete machine, "it" can be run on any other Turing-complete machine. You may have some interesting games to play with timing and speed, but that's usually not too hard to work around if the emulator is more powerful than what's being emulated, or if the protocol is sloppily implemented (which is the norm)
> That means anyone can run the emulator until it authenticates, then hit pause and start dumping memory to see your "secret"
I would presume that "semantically-correct" emulation of a TPM-based authentication protocol would require that the VM software runs in ring 0 and uses the host's TPM for guest storage. Anything else would be emulating "something that acts like TPM, but offers none of the guarantees of TPM."
I'm just not sure how an emulated TPM would be able to figure out that it's not fulfilling its semantic function.
That seems to be the key, here: is there anything a TPM chip could investigate before saying "yep, I'm not running emulated--I can see the Real Hardware I was created for right there--so I'll let programs trust me"?
I do apologize for using "secret" in a context where it could apply to either a public key encryption scheme or to the content which is being restricted by the whole scheme.
Another issue is that "tamper resistant" is just a big wrapper around a security-through-obscurity design. It's not "mathematically provable"; it's just "here's a secret number we hope doesn't show up on WikiLeaks."
But the TPM is just a chip on the LPC bus, right? Couldn't you do a man-in-the-middle and have the TPM think it's talking to real hardware when in reality it's talking to an emulated system?
I think the idea is that if TPM is enabled, the ROM bootstrap code only gives control to a signed trusted bootloader, which only gives control to a signed trusted kernel, which carefully prevents untrusted code from making requests to the TPM hardware. Like DRM, it's game over when the first vulnerability in this trusted code is found, though it'll continue to inconvenience legitimate users (because vendors have little incentive to ensure the machine is practically usable with TPM disabled, or with trust anchors chosen by the user).
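That chain of checks, sketched with HMAC standing in for the vendor's signature (real secure boot uses public-key signatures so the ROM holds only a public key; all names here are assumptions):

```python
import hashlib
import hmac

vendor_key = b"assumed-vendor-signing-key"  # modeled as baked into ROM

def sign(image):
    # Stand-in for the vendor signing a boot-stage image.
    return hmac.new(vendor_key, image, hashlib.sha256).digest()

def verify_chain(stages):
    """Each (name, image, signature) stage runs only if its check
    passes; one bad signature halts the boot right there."""
    for name, image, sig in stages:
        if not hmac.compare_digest(sign(image), sig):
            return "halt at " + name
    return "booted"

bootloader, kernel = b"bootloader-image", b"kernel-image"
good = verify_chain([("bootloader", bootloader, sign(bootloader)),
                     ("kernel", kernel, sign(kernel))])
bad = verify_chain([("bootloader", bootloader, sign(bootloader)),
                    ("kernel", b"patched-kernel", sign(kernel))])
```

Which is exactly why one leaked signing key, or one exploitable bug in any stage after its signature check, collapses the whole chain.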
Yeah, the "remote attestation" part of the trusted computing stuff was aiming to do that. Didn't really take off.
Nowadays people have dreamed up applications for it in server hosting, where you can talk to the DRM in your rented server and get some assurance that the software you're running there hasn't been compromised.
What's your definition of trust? Trust to execute code faithfully or trust to be who you think it is? In either case, what's required is a base unit that you trust on faith. In the former case the base unit is the signed firmware and the latter case, the root CA.