Hacker News

It's fine if they account for the number of tests they ran when they calculate their significance levels. If they just kept trying different options until they landed on p < 0.05, the result is almost guaranteed to be noise.
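To put a number on "almost guaranteed": a quick sketch (my own illustration, not from the thread) of the chance that at least one of k independent tests clears p < 0.05 purely by chance, assuming the null is true for all of them.

```python
# Family-wise error rate for k independent tests at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 10, 20, 60):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> P(>=1 false positive) = {fwer:.2f}")
```

At 60 tests the family-wise error rate already exceeds 95%, which is what "kept trying options until p < 0.05" amounts to.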


They used p < 0.001. This isn't the social sciences; their anti-noise filters are stricter.


Ah, that's not too bad. Though to be fair, you also need the sample size. That's only a one-in-1,000 chance under the null. If their dataset is small, or they automated testing of cofactors, there's still a decent chance of a false positive.



