A major problem I've found is that many people graduating with PhDs in biomed suffer from the misconception that rejecting the null hypothesis somehow allows them to conclude that their favored explanation is the correct one.
This is a widespread, top-to-bottom, worldwide error. I was trained to think this in grad school, surrounded by people who thought it, and have read innumerable papers by those who pretty much explicitly write that down and publish it.
Since rejecting the null hypothesis is much easier than coming up with and deciding between multiple research hypotheses, these people appear to be much more productive (when measured in publication output) than those who try to do it right. I tried, and no longer believe it is possible to do a good job in the current academic environment.
It is just not possible to compete with people who only reject a null hypothesis and skip everything else, not according to the metrics being used to assess performance. It's a race to the bottom.
There is another problem with this null hypothesis focus. The literature simply does not include the information required to actually develop and test quantitative models (i.e., to do a good job). Instead, nearly every claim is of the form "A makes B higher/lower". That is not the kind of information we need; it is nearly useless!
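To make the contrast concrete, here is a minimal sketch (all numbers are made up for illustration, nothing from any real study) of the difference between the directional claim and the quantity a model actually needs:

```python
# Sketch only: hypothetical data, assumed effect size, no real study behind it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.3, size=20)  # measurements of B without A (made up)
treated = rng.normal(1.4, 0.3, size=20)  # measurements of B with A (made up)

# The typical literature claim: reject the null, report "A increases B".
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p = {p_value:.3g} -> 'A makes B higher'")

# What a quantitative model needs instead: an effect size with uncertainty,
# in stated units, that can constrain a parameter.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
print(f"effect = {diff:.2f} +/- {se:.2f} (units of B)")
```

Both lines come from the same data, but only the second one can ever feed into a model.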
Here is an example of someone trying to figure out what's going on, but being unable to really check, because the information required to constrain the parameter values is not being reported in the literature:
>"Although the signals that transduce the external cues to the GTPase network are becoming clear (21), most of the chemical parameters remain unknown. Because many of the reaction coefficients in Fig. 1 B are also unknown, we allocated a number of possible parameter sets to qualitatively analyze the kinetics of these reactions."
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1366631/
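For anyone who hasn't run into it, "allocating possible parameter sets" amounts to something like the sketch below: since the rate constants are unmeasured, you sample them over wide ranges and only look at qualitative behavior. The two-state activation/inactivation toy model and the sampling ranges here are my own illustrative assumptions, not the network from the paper:

```python
# Toy sketch: a single GTPase activation/inactivation cycle with unknown rates.
# The model and parameter ranges are assumptions for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def gtpase(t, y, k_act, k_inact):
    active = y[0]
    # d[active]/dt = activation of the inactive pool minus inactivation
    return [k_act * (1.0 - active) - k_inact * active]

t_eval = np.linspace(0, 60, 200)
for _ in range(5):
    # Each rate constant is only "known" to within a couple of orders of magnitude.
    k_act, k_inact = 10.0 ** rng.uniform(-2, 0, size=2)
    sol = solve_ivp(gtpase, (0, 60), [0.0], args=(k_act, k_inact), t_eval=t_eval)
    print(f"k_act={k_act:.3g}, k_inact={k_inact:.3g} -> "
          f"steady-state active fraction ~ {sol.y[0, -1]:.2f}")
```

You learn which qualitative regimes are possible, but nothing pins down which one the real cell is in, because the measurements that would constrain the rates were never reported.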
Here is another example: we don't even know how many cells are in the human body to within 8 orders of magnitude, let alone how quickly they are supposed to be dividing (which is related to mutation rates), etc. How can you expect a cure for cancer in the absence of fundamental quantitative data like this:
>"First, we noticed that these data were typically mentioned
in the literature without citing a reference; second, we
observed wide ranges among data reported by different
sources, ranging from 10^12–10^20."
https://www.ncbi.nlm.nih.gov/pubmed/23829164
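To see why that spread matters, here is a back-of-the-envelope sketch; the per-division mutation figure is an assumed round number purely for illustration, and the point is only that an 8-order-of-magnitude uncertainty in the input swamps everything downstream:

```python
# Back-of-the-envelope only: the mutation figure below is an assumed placeholder.
mutations_per_division = 1.0  # assumed: order-one new mutations per genome per division

for n_cells in (1e12, 1e16, 1e20):  # the range reported across sources
    # Producing n_cells from one cell takes roughly n_cells divisions,
    # so even the developmental mutation burden is uncertain by the same factor.
    print(f"{n_cells:.0e} cells -> ~{n_cells * mutations_per_division:.0e} "
          f"mutational events just to build the body")
```

Any downstream estimate (division rates, lifetime mutation burden) inherits the same eight orders of magnitude of uncertainty.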