Hacker News

> TFA is extremely clear that the presence of citations (in the aggregate, as a count) on “weak” papers is something the author considers a problem and perhaps a moral failure on the part of citing authors. The author also believes that citations should be “allocated” to true claims.

As I see it, there are two independent properties that the author is saying ought to be dependent. And I think you (and I) actually agree on this. If citations are going to be treated as a metric, then the way they are currently written (without regard for quality or accuracy) is bad. Conversely, if citations are going to keep being written without regard for quality and accuracy, then they shouldn't be used as a metric. Either one of these models would be fine. What is not fine is the present reality: citations are written without regard for quality and accuracy, and yet are still used as a metric ubiquitously. Impact factors, the most common method of ranking journals, are literally measures of citations.

> Yes, after extensively complaining about the fact that citations aren’t used by authors in a manner that reflects the way they’re used as a metric, then complaining further about the fact that authors do not use them this way and repeatedly urging them to change the way citations are used — the author then admits that their use of a metric is problematic and should be ended.

The crux of your point, though, seems to be that nobody uses them as a metric, and I'm just going to have to fundamentally disagree with that. It's true that authors, when writing papers, appear not to give citations the care that a metric would deserve. What is not true is that citations aren't used as prima facie evidence of quality/importance throughout academia.



>The crux of your point though seems to be that nobody uses them as a metric

I didn't say that at all: what I said is right there in my post.

What I did say is that citation counts are a bad, noisy metric: a side effect of measuring a tool that exists for a very different purpose, and one that researchers can legitimately use for a variety of reasons that don't require them to validate the technical correctness of every cited work. Nor would it even make sense for citations to be used that way.

The author's criticism in TFA (which he delivers in the strongest and most explicitly moralistic terms) is that, judged by this citation-count metric, which he himself selected, the field is broken because bad work gets cited. That's his criticism and his choice of measure. But since it is quite normal for people to cite work they haven't carefully reviewed for technical correctness, the criticism is essentially bunk.

At a deeper level, the criticism fails to appreciate how people use citations as a measure for academic promotion. In most cases tenure committees care about aggregate statistics like total citations, h-index or i10-index. If a researcher publishes a work that receives hundreds of citations for ten years and then fails to replicate, then it basically doesn't matter if the work stops receiving future citations. A retraction might matter. Reports of the failed replication might matter. But nobody is going to lose out on a promotion specifically because some random paper receives 8,000 citations in the first ten years and then zero citations after the failed replication.
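The aggregate statistics mentioned here have simple definitions, and spelling them out shows why a paper's late-arriving citations barely move them. A minimal sketch (the citation counts below are hypothetical, not taken from TFA or the thread):

```python
def h_index(citations):
    # h-index: the largest h such that the author has h papers
    # with at least h citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(counts) if c >= i + 1)

def i10_index(citations):
    # i10-index: the number of papers with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

# Hypothetical record: one blockbuster paper plus several smaller ones.
papers = [8000, 120, 45, 12, 9, 3, 0]
print(h_index(papers))    # 5 (five papers with >= 5 citations each)
print(i10_index(papers))  # 4 (four papers with >= 10 citations)
```

Note that if the 8,000-citation paper stopped receiving citations tomorrow, neither index would change at all: the citations it already accumulated keep counting. That is the mechanical reason later citations matter so little for these promotion statistics.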


> At a deeper level, the criticism fails to appreciate how people use citations as a measure for academic promotion. In most cases tenure committees care about aggregate statistics like total citations, h-index or i10-index. If a researcher publishes a work that receives hundreds of citations for ten years and then fails to replicate, then it basically doesn't matter if the work stops receiving future citations. A retraction might matter. Reports of the failed replication might matter. But nobody is going to lose out on a promotion specifically because some random paper receives 8,000 citations in the first ten years and then zero citations after the failed replication.

This is his point, though. Not only are authors still getting tenure after failed replication, they're still getting citations! Citations that don't even mention the failure to replicate!

The fact that citations are used as a metric to get tenure is the problem. There are two solutions to that problem: Change the culture around citing things, or change the metrics people use. That is the whole point of the post.


> This is his point, though. Not only are authors still getting tenure after failed replication, they're still getting citations!

And his point is irrelevant. For two reasons that I've already explained multiple times now!

(1) As I've tried to point out (I've written three posts now explaining this; why are we still debating it?), there are plenty of valid reasons why non-replicating works might get cited. To prevent these works from being cited, you would have to fundamentally change citation practices in a way that would harm researchers' ability to use citations for their intended purpose.

(2) As I also explained above: even if you somehow managed to force all other researchers to alter their citation practices, it almost certainly wouldn't matter for promotion decisions anyway. For promotion purposes, the influence of each incremental citation drops off exponentially: after a work has been receiving citations for a few years, later incremental citations have at most a negligible influence on a researcher's record.

Unless replication failures happen extremely quickly, it doesn't really matter whether future researchers do or do not cite the failed work. The early citations will still exist and will vastly dominate later ones in promotion decisions. The only situation where citation bans would help is one where citing authors could somehow intuit that a work will fail to replicate before seeing an actual failed replication. (TFA claims this is easy. I think TFA is not credible.)

TL;DR: forcing the entire field to change how it uses citations is (1) harmful to researchers and (2), despite that, unlikely to have any major benefit anyway, since later citations carry little weight in promotion decisions, and replication attempts generally occur late in a work's citation lifetime.



