
I'm a complete novice at this matter, but from the sounds of it, even if LC4 does not completely remove the biases but only reduces them, wouldn't it still be an improvement? I'm kinda assuming that the remaining biases would be smaller, and thus harder to exploit in an attack?


Particularly since we're talking about hundreds of characters here, not megabytes of material to work with or gigabytes you can bounce off an oracle. Humans will notice if you ask them to decrypt 20 near-identical messages and report which ones give a MAC error.


I mean, again, this is not a branch of cryptography I take especially seriously, but I'd assume that if you were defining any kind of new cipher, you'd want to avoid constructions that were known to have fatal flaws embedded in them.

Either way, I brought it up because the author mentions this in their paper, but doesn't seem to fully address the literature on the attack they're trying to defend against (I may have missed something, though).


It seems to me that the LC4 key is essentially equivalent to the RC4 state, and thus the RC4 early-keystream bias does not apply, as there is no key-expansion phase.

Edit to clarify: the LC4 key has to be a permutation of 36 elements, while RC4's state is a permutation of 256 elements that is constructed from a byte-string key, and the issue lies in how this string-to-state transformation works (i.e. you have to run the generator more than 500 times to get unbiased output).
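For anyone curious, the early-keystream bias in question is easy to demonstrate in a few lines of Python (a sketch of plain RC4, not of LC4; the threshold in the test is just an illustrative cutoff). The best-known example is the Mantin-Shamir result that, over random keys, the second keystream byte is 0 about twice as often as it should be:

```python
import random

def ksa(key):
    # RC4 key scheduling: expand a byte-string key into a 256-element permutation.
    # This is exactly the phase LC4 skips by making the key itself a permutation.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga(S, n):
    # RC4 output generator: produce n keystream bytes from state S.
    S = S[:]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Over many random 16-byte keys, count how often the *second* output byte is 0.
# An ideal cipher gives 1/256 ~= 0.0039; RC4's key schedule gives roughly 1/128.
trials = 20000
zeros = sum(
    prga(ksa(bytes(random.randrange(256) for _ in range(16))), 2)[1] == 0
    for _ in range(trials)
)
print(zeros / trials)
```

This bias lives in the string-to-state transformation, which is why the usual mitigation ("RC4-drop") is to discard the first several hundred keystream bytes, and why a design that starts directly from a random permutation sidesteps the problem.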


In other words, brute-force attacks depend on the ability to keep trying essentially indefinitely. It's a similar situation with payment card CVVs: although they're only 3-4 digits, which makes for a tiny keyspace, any attempt to brute-force one will be quickly detected and blocked.


Of course, but that wasn't my point. Not only are brute-force attacks foiled by humans; attacks like padding oracles will also be impossible, because the user will notice.
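To put rough numbers on why a human "oracle" kills such attacks: the textbook CBC padding-oracle attack (LC4 doesn't use CBC; this is just for scale) needs up to 256 guesses per plaintext byte, each guess requiring one decryption attempt and one valid/invalid answer. Even a short message means thousands of queries, which no human decrypting by hand would sit through:

```python
# Back-of-the-envelope query count for a classic CBC padding-oracle attack.
# Assumed parameters (illustrative only): AES block size, a 64-byte message.
block_size = 16       # bytes per block
message_blocks = 4    # a short 64-byte message

# Each byte takes at most 256 trial queries, ~128 on average.
worst_case = 256 * block_size * message_blocks
average = 128 * block_size * message_blocks
print(worst_case, average)  # 16384 8192
```

Against an automated server answering in microseconds that's nothing; against a person who must hand-decrypt each attempt and report whether it "worked", it's plainly infeasible, and they'd notice something was wrong long before query ten.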




