Why Almost Everything You Hear About Medicine Is Wrong (newsweek.com)
76 points by gatsby on Jan 29, 2011 | 42 comments


Negative results sit in a file drawer, or the trial keeps going in hopes the results turn positive.

Therein lies the problem. We know the solution: tweak the publishing process.

Currently: 1) Do study. 2) Submit paper, including conclusions. 3) Journal accepts, based on correctness of methodology and interestingness of conclusion.

To prevent this result, just change the order of things.

1) Submit paper explaining methodology, and "alternate endings" (positive and negative). 2) Journal accepts paper based on methodology only. Methodology section of paper is published. 3) Authors do study and submit results. Journal publishes appropriate conclusion section. 4) If necessary, authors submit additional observational results.


_Negative results sit in a file drawer, or the trial keeps going in hopes the results turn positive._

There is a more unfortunate scenario that often plays out: a different statistician is brought in whose goal is to get positive-looking results out of a negative data set through various means, often by carefully manipulating the wording of the success criteria to match their best attempt at making the data work. This is quite common; my partner (in the girlfriend sense) works in medical journal publishing and sees it a lot.


...manipulating the wording of the success criteria to match their best attempt at making the data work.

While not impossible, it is very tricky to do this before you actually have the data. I'm advocating that the wording of the test criteria, the statistical tests to perform, etc., all be decided before the trial.

Bringing in a different statistician post hoc is pointless: all they could do is run the exact predetermined statistical test. In fact, you can do even better: submit code to the journal before the study, and submit data after the study. The journal runs the code on the data and decides which conclusion to publish.
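To make the "submit code before the study" idea concrete, here is a minimal sketch (mine, not from the thread) of what pre-registered analysis code could look like: a one-sided permutation test in which the statistic, the significance level, and even the RNG seed are all frozen before any data exist, so the journal can run it unchanged on whatever data come back.

```python
import random
from statistics import mean

ALPHA = 0.05             # significance level, fixed before the trial
N_PERMUTATIONS = 10_000  # also fixed in advance

def prereg_analysis(treatment, control, seed=0):
    """Pre-specified one-sided permutation test on the difference of means.

    Every choice here is frozen before data collection, so a statistician
    brought in after the fact has nothing left to tune.
    """
    rng = random.Random(seed)
    observed = mean(treatment) - mean(control)
    pooled = list(treatment) + list(control)
    n = len(treatment)
    hits = 0
    for _ in range(N_PERMUTATIONS):
        rng.shuffle(pooled)
        if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
            hits += 1
    p = (hits + 1) / (N_PERMUTATIONS + 1)
    return ("positive", p) if p < ALPHA else ("negative", p)
```

The journal would then publish the "positive" or "negative" conclusion section depending on the returned label, with no room for post-hoc rewording of the success criterion.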


Very good point. The "good" statistician then gets a bonus at the end of the year for being productive and helpful. The "honest" statistician gets slowly pushed out.

Accumulate this over the whole range of companies and studies and we end up with a disaster.


It's not just the publishing process that's broken; the funding process is problematic as well. Among the competitors for grant money, it is the investigators who promise to do a lot with the minimum funding that can plausibly be believed feasible for the project who end up winning. This optimizes the funding process to select for proposals that collect the bare minimum of data the statistician says is required to see a "significant" result (because data collection is the expensive part of doing a study). If the investigator's estimate of the effect size is off, then the sample size is insufficient and spurious results are more likely. Change the funding process to select for proposals that collect a sample larger than strictly necessary for the expected effect size, by some margin of safety, and you'll get research results that are more likely to actually be true.
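A rough numeric sketch of that underpowering problem, using the textbook normal-approximation formulas for a two-sample comparison (the effect sizes of 0.5 and 0.3 standard deviations are hypothetical, chosen only to illustrate the point):

```python
import math
from statistics import NormalDist

Z = NormalDist()

def sample_size(effect, alpha=0.05, power=0.80):
    """Per-arm n for a two-sample z-test, normal approximation."""
    z_a = Z.inv_cdf(1 - alpha / 2)
    z_b = Z.inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) / effect) ** 2)

def achieved_power(n, true_effect, alpha=0.05):
    """Power actually achieved with n per arm if the real effect differs."""
    z_a = Z.inv_cdf(1 - alpha / 2)
    return 1 - Z.cdf(z_a - true_effect * math.sqrt(n / 2))

n = sample_size(0.5)          # budget for the hoped-for effect of 0.5 SD: 63 per arm
low = achieved_power(n, 0.3)  # but the true effect is only 0.3 SD: power ~0.39
```

Fund the bare-minimum sample for an optimistic effect size, and a modest overestimate quietly turns an 80%-power study into something closer to a coin flip.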


The people commissioning these studies need to somehow be liable for the accuracy of the results.

If you were doing a private study where the results would only be for your own private usage, you would go the extra mile and pay the extra money for good, accurate science.

But the financiers here don't really care about whether the result is accurate. They usually are optimizing something else entirely and accuracy is just some minimum threshold to avoid being accused of fraud.


Liable seems too strong: there are lots of ways for studies to go wrong that you may not figure out for years, through no particular malice or lack of competence.


The International Committee of Medical Journal Editors has done something fairly similar (http://www.icmje.org/urm_main.html), and in the US the government backs it up with a mandatory prepublication database of all ongoing trials: http://clinicaltrials.gov/

Also see Ioannidis's paper on why most published research is wrong: http://medicine.plosjournals.org/perlserv/?request=get-docum...


> Negative results sit in a file drawer, or the trial keeps going in hopes the results turn positive.

I can't say a whole lot about it (I am pretty sure I am still bound by agreements and truthfully it wouldn't be polite), but I once was employed in clinical trials and this line is a load of crap. I have seen trials stopped. Some came back, but it wasn't the same trial.


I presume you are right, but this doesn't inspire confidence in the system. Of course it's not the same trial: if you have a weak effect, it's better to modify a superficial variable and start the trial again. Why try to climb out of the hole when you can roll the dice again? Do this 20 times and you might stumble on something statistically significant. From a 'true science' perspective, the right thing to do is to carry the trial through to conclusion and publish the negative results. But it's hard to make a living doing this.
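The "roll the dice again" arithmetic is easy to check: with no real effect at all, each restarted trial independently has a 5% chance of a spurious p < 0.05, so across 20 restarts a "significant" result is more likely than not.

```python
alpha = 0.05   # false positive rate of a single honest trial under the null
restarts = 20  # superficially modified re-runs of the same dead-end trial

p_at_least_one_hit = 1 - (1 - alpha) ** restarts
print(round(p_at_least_one_hit, 3))  # 0.642
```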


Given that an FDA trial can last years and cost a lot of money, a company can't really afford to "carry on" if it is obvious to themselves and the people conducting the trial that something is very different from the predicted results.

They had a belief about what the outcome should be before the trial (else they would not go through with, or spend money on, the trial). A deviation from the expected results really needs to be looked at, and you really need to find out why all the prelim testing is not accurately predicting what is actually going on.


Yes, it's not that the drug companies are actually running trials and randomly hoping for spurious correlations, but that the traditional measures of statistical significance can't be used in cases where trials might be stopped early. Even inspecting the results midway through a trial to see if the effect is showing up is problematic.

Yes, the deviation from expectation really needs to be looked at. The problem is that the financial reward for the companies might be greater to just 'move on' and never publish those unexpectedly negative results. But this means that the unexpectedly positive results for the next test are given undue weight.

I don't think the problem is really the science, or the publishers, or the drug companies, but the unrealistic weight placed on erroneously calculated statistics. This combined with a lot of money at stake forces the companies to "produce results" whether or not this is of long term public benefit.
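The problem with interim peeking can be demonstrated with a small simulation (a sketch with arbitrary batch sizes, not modeled on any real trial): simulate trials of a drug that does nothing, test the running result after every batch of patients, and stop at the first p < 0.05. The false positive rate climbs far above the nominal 5%.

```python
import math
import random
from statistics import NormalDist

Z = NormalDist()

def trial_with_peeking(rng, max_n=100, peek_every=10, alpha=0.05):
    """One simulated null trial (true effect is zero), with interim looks.

    Returns True if any interim z-test crosses p < alpha, i.e. the trial
    would be stopped early and declared a success by a fluke.
    """
    total = 0.0
    for n in range(1, max_n + 1):
        total += rng.gauss(0.0, 1.0)       # one patient's outcome: pure noise
        if n % peek_every == 0:
            z = abs(total) / math.sqrt(n)  # z-statistic for the running mean
            p = 2 * (1 - Z.cdf(z))
            if p < alpha:
                return True
    return False

rng = random.Random(42)
rate = sum(trial_with_peeking(rng) for _ in range(2000)) / 2000
# rate lands well above 0.05 (around 0.2 with ten looks per trial)
```

This is why sequential trial designs use corrected stopping boundaries rather than the fixed-sample p < 0.05 threshold at each look.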


If trials are stopped and data not published, this wastes the time of other researchers and drastically biases the results of studies that ARE published. If an experiment is abandoned 100 times by independent researchers who never communicate, but succeeds once due to a fluke, it could easily get published and no one would be the wiser.

This entirely destroys the assumptions underlying the supposed statistical power of a test.


Always relevant: the article from Ioannidis, the man who dedicated his entire career to examining the way research is done rather than the research itself.

http://www.plosmedicine.org/article/info:doi/10.1371/journal...

#1 in downloads on PLoS for many years.


Thomas Kuhn is another one worth reading, although more philosophical (nonetheless relevant):

"The Structure of Scientific Revolutions"

http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Rev...


And nonetheless widely misused.


For a bit more detail:

The New Yorker

http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_...

The actual referenced article (PDF)

http://www.edwardtufte.com/files/Study1.pdf



I've been writing about health and medical topics for a long time and this piece does a good job of discussing how the science can get 'icky.'

First, let me say that in general, modern medicine has done wonders, increasing duration and quality of life in astonishing ways.

That said, the system is set up to promote positive studies/findings, and there are a lot of ways you can play with the data. Very competent statisticians and researchers are sometimes capable of helping studies be more positive than they might otherwise be.

That, plus the many millions of dollars you can make from a drug, has often resulted in questionable - but predictable - results.


Right, less money to be made if people just ate much less and stopped smoking.


And there's the deeper trouble with method:

http://en.wikipedia.org/wiki/Paul_Feyerabend#Nature_of_scien...


Kuhn, Feyerabend, and Lakatos are all worth reading, even though they might be a tad too postmodern for some people.

Nonetheless, they are all valuable thinkers.


What do you mean by 'postmodern'? Is this a real criticism of what they are saying?


Oversimplifying things a bit, these writers are saying that scientific objectivity is impossible. This line of argument tends to reduce the credibility of science in the eyes of the public, with negative consequences for research funding. This is a bad thing because, even if not "objective", science has singlehandedly transformed life for the better in the past century by reducing disease, poverty, manual labor, etc, and should continue to be funded.


I don't think any would claim otherwise.

But you don't need objectivity for that; all you need is usefulness.


Except if some people think science is not objective, they conclude that religion is just as valid, and that, therefore, Creationism (er, Creation 'Science') needs to be taught in schools alongside the 'fact' that gays and liberals are evil, diseased beings. After all, if it's all subjective, everything is equally valid!


Science isn't objective as such. It's testable.

That is the defining factor between religion and science.


The vitamin D claim is misleading. Sure, most people have enough vitamin D for bone health. But vitamin D does a lot more than make your bones healthy, so supplementation is still probably useful. Luckily, unlike vitamin C, too much vitamin D is not harmful so all you're wasting is money if the other claims for vitamin D also get disproved.


First of all: it's probably not a good idea to be dispensing (or receiving) medical advice on HN.

That being said-- Vitamin D is fat soluble, and too much vitamin D is definitely harmful. Take a look here: http://en.wikipedia.org/wiki/Hypervitaminosis_D

Vitamin C, on the other hand, is water soluble, and much harder to overdose on.


The biggest Vitamin D supplements I've seen have 1000 IU per tablet, and the bottle suggests one tablet per day. According to the article you linked, 10,000 IU/day is safe (we can get that much from sunshine in a day), and long-term overdose has been observed at 77,000 IU/day. You'd have to go pretty hog-wild with the Vitamin D supplements to overdose.


2000 and 5000 IU are quite common and can be bought at any US pharmacy and many grocery stores. It looks like 10000 to 50000 IU pills are easily available online: http://www.amazon.com/s/?url=index%3Dblended&keywords=vi...

For what it's worth, I supplement using the 5000 IU pills taken at about 7/week (irregularly, making up doses as necessary). I recently had my blood tested for D3 levels, and found it was close to but not exceeding the standard recommended level.


That article is just about hypervitaminosis D; other harmful effects are possible (depending on the individual) when supplementing at high levels like 10,000 IU/day.


Did anyone else find themselves clicking on that link very cautiously? Wikipedia should really institute some sort of official warning system for articles with shocking medical images.

(this article does not have images)


Vitamin D can be overdosed. One side effect is over-calcification (kidney stones, etc.) because D is involved in that process.


I should have phrased it as "the body has a mechanism to get rid of excess Vitamin D". That mechanism can be overloaded, though.


What mechanism are you talking about? The main mechanism for vitamin D that I know of is to limit creation from sunlight. But if one takes supplements without any caution, as you seem to suggest, then one is bypassing it.

Supplementing vitamin D is a great idea (I do it), but it is dangerous for those that are naive about it and think there is no risk.


I agree that it seems misleading. I was diagnosed with vitamin D deficiency, which maybe isn't that surprising given that on an average day I might have gotten no sun at all, and I don't eat foods like fish, eggs, or butter.

Now I do make a point of a midday walk and a vitamin D supplement. Severe vitamin D deficiency can be pretty yucky.


While this article is correct and talks about Ioannidis's work—which is good, respectable stuff, by the way—it does paint a picture as though this should be a shock to you.

Folks, it isn't just medicine; science in general must be comfortable with the proposition of being wrong. The more complex the field, the more often replication will fail (and the tighter standards need to be). There are very few pursuits that our race engages in that are more difficult than understanding life itself.

One of the biggest problems is that physicians and the press tend to get very hasty with pushing new research. It's not their fault, mind you; they're trying to inform us about health. If a doctor could save a life with an experimental treatment that has yet to reach thorough replication, shouldn't s/he? And it's the media's job to get us interesting new stories (albeit they could be far better at putting their information into context).

The nature of science is such that refinements occur over time. Unfortunately, the nature of most people receiving medical treatment is to believe it works or doesn't work based on extremely subjective data. If we want to attack that problem head on, we should begin with education. All the improvements in methodology imaginable cannot stop science from being what it is, and as such cannot eliminate the sawtoothed slope of medical progress.


Also, Jaynes had a very interesting take on how to address these issues.

Probability Theory: The Logic of Science http://bayes.wustl.edu/etj/prob/book.pdf (draft)

http://www.amazon.com/Probability-Theory-Logic-Science-Vol/d...


One of the problems with drug studies is that the very point is to develop a drug, and that intent is generally commercial in nature. At no point does anyone seem to stop and wonder what might be the best solution. In addressing my health issues, I have had to infer a lot of things from drug studies because there is a dearth of good studies on other topics (at least specific to my medical condition). Yet the general consensus is that diet and lifestyle play a major role in every major deadly medical condition. Encouraging people to live right just doesn't have the splash and dash and isn't generally as lucrative.

It also gets a lot of resistance from the intended audience: An awful lot of folks don't want to do the hard work of changing their lives. In many cases, they just want a new drug, one that won't make them sick...etc (I'm sure someone can quote song lyrics better than I can).


Which just goes to show that you should learn about settled science before worrying about the latest findings.

http://lesswrong.com/lw/ow/the_beauty_of_settled_science/


A place to hear things about medicine that are more often than not correct:

http://www.sciencebasedmedicine.org/?p=8198

http://www.sciencebasedmedicine.org/?cat=11



