Interesting counterargument. But you can still use the "trillion hashes" as a cap on how parallelizable the work is, and then use m to increase the average amount of work done, even though that extra work can be done in parallel. You are right that this increases unpredictability. You can trade off the "trillion hashes" figure against the value of m, striking a balance between how much work you have to do to compute it and how predictable and non-parallelizable the unlocking work will be (increasing m makes it more parallelizable and less predictable).
e.g. you can work for an hour and multiply the unlocker's work by a factor of 10,000, but that ten-thousandfold increase will be parallelizable and slightly unpredictable; or you can work for 1,000 hours (about 41 days) and increase the work by a factor of just 100 in the worst case - but that smaller increase is still parallelizable, and the unlocker might get lucky.
So you can really balance how much work you're doing against how parallelizable and predictable the reverse process is.
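To make the construction concrete, here is a minimal sketch of one way it could work - not the exact scheme from the thread. At each step the locker appends a random m-bit salt, hashes, and discards the salt; I've added per-step check tags (my own assumption, not in the original) so the unlocker can verify each brute-forced guess. The names `lock` and `unlock` and the tag scheme are all hypothetical:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    """SHA-256 as the one-way function."""
    return hashlib.sha256(data).digest()

def lock(seed: bytes, steps: int, m: int):
    """Locker's cheap forward pass: one hash per step.
    Each step mixes in a random m-bit salt that is then discarded.
    A per-step check tag lets the unlocker confirm each guess."""
    h = seed
    tags = []
    for _ in range(steps):
        salt = secrets.randbelow(1 << m).to_bytes(4, "big")
        h = H(h + salt)
        tags.append(H(h + b"check"))
    return h, tags

def unlock(seed: bytes, tags, m: int) -> bytes:
    """Unlocker's expensive pass: sequential across steps, but the
    2**m salt search within each step is embarrassingly parallel
    (done serially here for simplicity)."""
    h = seed
    for tag in tags:
        for guess in range(1 << m):
            cand = H(h + guess.to_bytes(4, "big"))
            if H(cand + b"check") == tag:
                h = cand
                break
        else:
            raise ValueError("no salt matched; wrong parameters?")
    return h
```

The asymmetry is the point: the locker does `steps` hashes, while the unlocker does about `steps * 2**(m-1)` on average - parallelizable within each step, but forced to proceed step by step.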
Make your value of m different for each hash, and then choose a distribution of m's such that you get your desired unlock time at least probabilistically. You can balance parallelizability against the level of probabilistic trust you need by setting the distribution of m and the number of hashes.
- They might find the key earlier or later in their search (it adds volatility to the amount of time required).
- Searching for the right value of m is highly parallelizable.
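To see how a chosen distribution of m's shapes the unlock time, here is a small Monte-Carlo sketch (my own illustration; `simulate_unlock_work` is a hypothetical helper). It models the search cost at step i as uniform on 1..2^m_i guesses and reports the mean and spread of the total work:

```python
import random
import statistics

def simulate_unlock_work(ms, trials=2000):
    """Estimate total brute-force work, in hash evaluations, when
    step i hides an m_i-bit salt. The unlocker's guess count per
    step is modeled as uniform on 1..2**m_i. Returns the mean and
    standard deviation of the total over many simulated unlocks."""
    totals = [
        sum(random.randint(1, 1 << m) for m in ms)
        for _ in range(trials)
    ]
    return statistics.mean(totals), statistics.stdev(totals)

# Example: 1000 steps with m drawn from a mixed distribution,
# trading a higher mean for wider (less predictable) unlock times.
ms = [random.choice([4, 8, 12]) for _ in range(1000)]
mean, sd = simulate_unlock_work(ms)
```

Tightening the spread means shrinking the m's (less added work per step) and compensating with more sequential hashes, which is exactly the trade-off described above.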