"'What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?' They would have to control enough of the network, and know who you are and how you are interfacing with the network, to stop you."
OK, but if the assumption is that out of the n parties in the network no more than k parties are malicious, why not just use a k+1 out of n secret sharing scheme? You broadcast a signed message once per month, and if the message does not arrive for some number of months the parties all broadcast their shares and recover the secret.
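The (k+1)-out-of-n scheme described above could be plain Shamir secret sharing. A minimal sketch (illustrative only, not hardened, with a small example prime):

```python
import random

# Shamir secret sharing over a prime field: any k+1 shares recover the
# secret; k or fewer reveal nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

def split_secret(secret, k, n):
    """Split `secret` into n shares; any k+1 of them reconstruct it."""
    # Random degree-k polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, k=2, n=5)
assert recover_secret(shares[:3]) == 123456789   # any 3 of the 5 suffice
assert recover_secret(shares[1:4]) == 123456789
```

The secrecy here is unconditional: with k or fewer shares, every candidate secret is equally consistent with what the adversary sees, no matter how much computing power he has.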
At best, the role of "proof of work" systems here is in combating sybil attacks, which is only relevant if you want to remove the requirement that I know the people I am issuing shares to. If that is truly advantageous, the system might look like this: first, I broadcast a public key for some non-malleable encryption scheme. Each party willing to participate will then use that key to encrypt a randomly generated ID string that they keep secret. Once I have received the IDs, I broadcast a random string, and each party will use their chosen ID and the random string as a "starting point" in a proof-of-work scheme. The output of the proof of work is then used as the seed to derive a key for a symmetric cipher (using an appropriate key derivation function). The parties encrypt the proof-of-work outputs and send the ciphertext to me; I check the proofs and derive the keys locally. Then I encrypt each party's share using the party's symmetric key and send the encrypted share. Then I proceed as before, sending a periodic message.
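The "starting point" / proof-of-work / key-derivation steps above might be sketched as follows. The hash-based proof of work and all names here are my own illustrative choices, and the encryption of the proof-of-work output back to the dealer is omitted:

```python
import hashlib
import os

def proof_of_work(start: bytes, difficulty_bits: int) -> bytes:
    """Search for a nonce such that SHA-256(start || nonce) has
    difficulty_bits leading zero bits; return the winning nonce."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(start + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce.to_bytes(8, "big")
        nonce += 1

# The party's secret ID plus the dealer's broadcast random string form
# the "starting point" of the proof of work.
party_id = os.urandom(16)    # chosen and kept secret by the party
broadcast = os.urandom(16)   # the random string broadcast by the dealer
nonce = proof_of_work(party_id + broadcast, difficulty_bits=16)

# The proof-of-work output seeds a symmetric key via a KDF.
pow_output = hashlib.sha256(party_id + broadcast + nonce).digest()
sym_key = hashlib.pbkdf2_hmac("sha256", pow_output, b"share-encryption", 100_000)
assert len(sym_key) == 32
```

The dealer, knowing the proof-of-work output, runs the same KDF and gets the same symmetric key, so the share can be encrypted to exactly the party that did the work.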
I suspect, though, that such a construction is overkill; also I have not really evaluated the security of it.
"I think big computational networks with incentives unlock some interesting doors"
Maybe so, but right now I see a solution in search of a problem.
"If you can assume that the computational majority"
Why should I need to assume anything about the computational resources of the participants? We can have threshold secret sharing with unconditional security, and we only need to trust one of the parties for the switch to be secure regardless of the computing power of the rest of the parties.
"At best, the role of "proof of work" systems here is in combating sybil attacks, which is only relevant if you want to remove the requirement that I know the people I am issuing shares to."
That seems pretty fundamental to making the mechanism accessible. If we are talking about switches as a service, and there is a "fixed" pool of switches, then once an exploit is found that allows you to compromise each switch component, you are out of luck, because you didn't actually make materializing the secret difficult.
By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people.
"If we are talking about switches as a service, and there is a "fixed" pool of switches, then once an exploit is found that allows you to compromise each switch component, you are out of luck, because you didn't actually make materializing the secret difficult."
If an exploit is found that allows you to compromise each component, then the adversary can just have the components ignore your messages and open your secret. It makes no difference how the system is structured at that point.
"By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people."
By using a threshold secret sharing scheme, you ensure that the adversary cannot get the secret regardless of their own computing resources. You also avoid wasting electricity for the sake of your switch. You also have the advantage of having a well-defined security model that can actually be analyzed formally.
The only reason you would ever want to burn through some CPU cycles is to thwart sybil attacks. Unlike Bitcoin, you do not need to keep doing proofs of work after that, because once the shares are distributed, there is nothing more to do. If the adversary increases his computing power after that, he gains nothing by it, because he will not be given any more shares. Hence the suggestion in my previous post: have the proof of work be coupled to the generation of a public key, and just have the public keys be generated when someone needs to set up a switch.
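A one-time, setup-coupled proof of work as suggested could look like the following sketch (hypothetical names; `pubkey` is a stand-in for a real, freshly generated public key):

```python
import hashlib
import os

def register_key(pubkey: bytes, difficulty_bits: int) -> int:
    """One-time cost paid when the switch is set up: find a nonce that
    binds work to this public key. No further work is ever needed,
    because no more shares will be handed out afterwards."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify_registration(pubkey: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Anyone can check the work cheaply before accepting the key."""
    target = 1 << (256 - difficulty_bits)
    h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < target

pubkey = os.urandom(32)  # placeholder for a real public key
nonce = register_key(pubkey, difficulty_bits=16)
assert verify_registration(pubkey, nonce, 16)
```

A sybil attacker must pay the work cost per registered key before shares are distributed; buying more hardware afterwards buys him nothing.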