Hacker News

Care to clarify?

My understanding was that a 95% confidence interval implies a 5% chance of a false positive.

From http://www.mit.edu/~6.s085/notes/lecture2.pdf

"In particular, we can ask for an interval around p̂ for any sample so that in 95% of samples, the true mean p will lie inside this interval. Such an interval is called a confidence interval."
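The quoted definition can be checked with a quick simulation (a sketch with assumed parameters: true proportion p = 0.3, sample size 500, normal-approximation interval; none of these numbers are from the thread):

```python
import random

random.seed(0)
p, n, trials, z = 0.3, 500, 2000, 1.96  # z = 1.96 for a 95% interval

covered = 0
for _ in range(trials):
    hits = sum(random.random() < p for _ in range(n))
    p_hat = hits / n
    se = (p_hat * (1 - p_hat) / n) ** 0.5   # standard error of p_hat
    if p_hat - z * se <= p <= p_hat + z * se:
        covered += 1

print(covered / trials)  # ≈ 0.95: the true p lies inside about 95% of the intervals
```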



The false positive rate has nothing to do with positive examples (hugely counterintuitively). A false positive happens when there's nothing there but you think there is. The rate of these is measured against all cases where there's nothing there: it's the fraction of negative instances that you flag as positive. No actual positives come into the picture.
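Concretely, with hypothetical confusion-matrix counts (made up for illustration), the computation only ever touches the negative instances:

```python
# Hypothetical counts for illustration (not from the thread).
false_positives = 50   # negatives incorrectly flagged as positive
true_negatives = 950   # negatives correctly left alone
true_positives = 90    # actual positives found (unused below!)

# False positive rate: flagged negatives / all negatives.
# Note the actual positives never enter the formula.
fpr = false_positives / (false_positives + true_negatives)
print(fpr)  # 0.05
```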


Ah, you're right, that makes a lot of sense. Thanks for the explanation.


Suppose you test 1001 cases, of which exactly 1 is a true positive.

At a 5% false positive rate, you are likely to find the actual positive plus about 50 false ones (5% of the 1000 negatives).

However, that's with a single test. If you run 5+ different tests you are likely to be able to distinguish the true positive.
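The arithmetic above can be sketched as a quick expected-value calculation (assuming, as in the thread, a 5% false positive rate per test and independence between tests):

```python
negatives = 1000   # 1001 cases, 1 of which is the true positive
fpr = 0.05         # assumed false positive rate per test

# Expected false positives from a single test: 5% of all negatives.
single_test = negatives * fpr
print(single_test)   # ≈ 50

# With 5 independent tests, a negative must fool every one of them,
# so the per-case false positive probability is fpr ** 5.
five_tests = negatives * fpr ** 5
print(five_tests)    # ≈ 0.0003: essentially no false positives survive
```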


To add to this, you can't just run 5+ tests; they have to be wholly independent ones.
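A minimal numeric contrast of why independence matters (same assumed numbers as above): rerunning the identical flawed test adds nothing, because a case that falsely passed once will falsely pass again, whereas genuinely independent tests multiply the error rates.

```python
negatives = 1000
fpr = 0.05  # assumed false positive rate per test

# Perfectly correlated "retests" (e.g. rerunning the same test):
# the same ~50 negatives pass every time, no matter how often you retest.
correlated = negatives * fpr
print(correlated)    # ≈ 50

# Five wholly independent tests: error probabilities multiply.
independent = negatives * fpr ** 5
print(independent)   # ≈ 0.0003
```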



