"In particular, we can ask for an interval around p̂ for any sample so that in 95% of samples, the true mean p will lie inside this interval. Such an interval is called a confidence interval."
The false positive rate has nothing to do with positive examples (which is hugely counterintuitive). A false positive happens when there's nothing there but you think there is. The rate of these is measured against all instances where there's nothing there: it's the fraction of negative instances that you mistakenly call positive. No actual positives enter the calculation at all.
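To make that concrete, here's a minimal sketch computing the false positive rate from labeled examples. The labels and predictions are made up for illustration; note that the denominator counts only the negative instances, so the actual positives never appear in it.

```python
# Illustrative data (not from the thread): 1 = positive, 0 = negative
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 0]

# False positives: predicted positive where the truth is negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

# Denominator: all negative instances -- actual positives play no role
negatives = sum(1 for t in y_true if t == 0)

fpr = fp / negatives
print(fpr)  # 0.25: one false alarm out of four negatives
```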
My understanding was that a 95% confidence interval implies a 5% chance of a false positive.
From http://www.mit.edu/~6.s085/notes/lecture2.pdf
"In particular, we can ask for an interval around p̂ for any sample so that in 95% of samples, the true mean p will lie inside this interval. Such an interval is called a confidence interval."
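The quoted definition can be checked by simulation: draw many samples, build a 95% interval around p̂ each time, and count how often the true p lands inside. A minimal sketch using the normal-approximation (Wald) interval for a proportion, with an assumed true p and sample size chosen for illustration:

```python
import random
import math

random.seed(0)
p = 0.3        # true proportion (assumed for the simulation)
n = 500        # observations per sample
trials = 2000  # number of repeated samples
z = 1.96       # normal quantile for a 95% interval

covered = 0
for _ in range(trials):
    # Draw one sample of n Bernoulli(p) observations
    successes = sum(random.random() < p for _ in range(n))
    p_hat = successes / n
    # Wald interval: p_hat +/- z * sqrt(p_hat (1 - p_hat) / n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    if p_hat - half <= p <= p_hat + half:
        covered += 1

print(covered / trials)  # empirical coverage, close to 0.95
```

The fraction printed should hover near 0.95, matching the "95% of samples" phrasing in the quote. Note that this coverage guarantee is a statement about the sampling procedure, which is exactly why equating it directly with a per-test false positive rate takes care: the 5% only becomes a false positive rate in the hypothesis-testing sense, where a value outside the interval would reject a true null.

```

```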