What's wrong with social science and how to fix it (2020) (fantasticanachronism.com)
138 points by pkkm on Dec 11, 2022 | hide | past | favorite | 142 comments


We absolutely need to change the way we do social sciences. My favourite pet peeve in social sciences is idea laundering.

"It’s analogous to money laundering. Here’s how it works: First, various academics have strong moral impulses about something. For example, they perceive negative attitudes about obesity in society, and they want to stop people from making the obese feel bad about their condition. In other words, they convince themselves that the clinical concept of obesity (a medical term) is merely a story we tell ourselves about fat (a descriptive term); it’s not true or false—in this particular case, it’s a story that exists within a social power dynamic that unjustly ascribes authority to medical knowledge.

Second, academics who share these sentiments start a peer-reviewed periodical such as Fat Studies—an actual academic journal. They organize Fat Studies like every other academic journal, with a board of directors, a codified submission process, special editions with guest editors, a pool of credentialed “experts” to vet submissions, and so on. The journal’s founders, allies and collaborators then publish articles in Fat Studies and “grow” their journal. Soon, other academics with similar beliefs submit papers, which are accepted or rejected. Ideas and moral impulses go in, knowledge comes out. Voilà!

Eventually, after activist scholars petition university libraries to carry the journal, making it financially viable for a large publisher like Taylor & Francis, Fat Studies becomes established. Before long, there’s an extensive canon of academic work—ideas, prejudice, opinion and moral impulses—that has been laundered into “knowledge.” (source: https://www.wsj.com/articles/idea-laundering-in-academia-115...)

I was one of the extreme "trust the science" people, until I joined a startup that worked with academia. The amount of pettiness, vindictiveness, cutthroat power games I have seen surpassed even the most hardcore of startups.


This has been one of the tragedies of the Covid pandemic: the way “defense of science” has identified science to the public with the institutions currently involved in science, and then browbeaten anyone who doesn’t agree with their pronouncements. Science is not an institution, but a process and method of knowledge production that no institution holds a monopoly on.


Slight offshoot.

Fear was made into a virtue. Really think about that.

The fear wasn’t rational for an extreme majority of people, but it was required - and that’s true for different reasons depending who you ask.

So, of course it was an immediate appeal to authority. You were a good person to be afraid - what an easy excuse that you didn’t need to question or understand anything, because “The Science” had spoken.

So I agree with you, but I think the reason it all happened was something other than a normal appeal to authority.


James Lindsay has a lot to say on this. The guy is probably the most intellectual commentator in conservatism. Here's the first part of his three part series on the death of the university.

https://newdiscourses.com/2022/10/strange-death-university-p...

He says that the decline of universities was all started by Herbert Marcuse in the 60s. James reads all of Marcuse's old books and deconstructs them and shows how all the crazy stuff going on today in universities and soft sciences makes perfect sense from the perspective of these leftist theoreticians and their goals. Their goals are not to improve scientific understanding, but to turn absolutely everything in higher education into a political weapon to transform society.


The first page of the referenced UNESCO document refers to the 'red thread', with the link you provided linking that to Communism. Now, whoever at UNESCO actually wrote that may well have been a communist and may well be pushing communist ideology, but googling the phrase brings up a traditional Chinese myth which would give a different sense to the phrase - that of an inevitable and inherently compatible match. I'm a bit unclear on whether or not this is the right interpretation, but it seems a bit more likely than obliquely referencing Communism when it comes to sustainability, and the link you provided seems to simply be implying the link to Communism rather than stating why this phrase is unambiguously linked to it (other than the colour red).

To me this might be like seeing the phrase 'red letter day' and implying a link to Communism.

Any clarity possible?


James Lindsay is a clown who confidently predicted that the Democrats were going to put people into reeducation camps, that LGBT people are part of a vast pedophilia conspiracy, that the World Economic Forum used COVID to engineer a "great reset" to eliminate American sovereignty, that Jewish people are responsible for anti-Semitism because it's a reactionary backlash against liberalism, and that Critical Race Theory is in actuality a conspiracy of heresy launched by the Black Southern Baptists. He is a limitless fount of the stupidest quips on the Internet, and given the Internet I'm referring to, that's a remarkable achievement.

Apparently, Lindsay suffered a serious head injury within the last few years (it came up in a Rogan interview). It would explain a lot. We're talking about someone who managed to get himself into a Twitter fight with the Auschwitz Memorial.

Your claim here is a grave insult to conservatism, which is an intellectually serious movement with plenty of serious thinkers. Thomas Sowell is a serious conservative. Arthur Brooks is a serious conservative. Patrick Deneen (gah). You can rattle off dozens more. Nobody is ever going to put Rufo and Lindsay in that lineup.

"I'm not kidding. The entire program is based on Hegelian alchemy. They actually think they're wizards. In this case, twerking, based on "black girl" identity, breaks open the rules of decorum and expectation to free the Divine shard of liberated nature from the confine of society."


The guy actually reads leftist literature and analyzes it, and yes, it says crazy stuff. For example, Klaus Schwab, president of the World Economic Forum, wrote a book entitled "COVID-19: The Great Reset", and then you say this guy is crazy because he thinks the World Economic Forum is advocating using COVID-19 to do some Great Reset because he read it in some book. Except it's the book that the president of the World Economic Forum wrote and published. I read it. It's a pretty radical book. I even did a close highlight of all the kooky stuff, but I'm not going to argue with you about it because you obviously didn't read it or have never heard of it.

If you read famous new left literature, it's totally in outer space, and he just says: here's what it says. Then you say he's crazy for saying leftists are serious about what they wrote in their books.

That's the irony of your comment. All the stuff that you say is crazy is stuff he reads and repeats straight out of new left books. He sounds crazy because, instead of just repeating traditional conservative principles in different ways like the conservatives you mention, his method is to actually engage with the lunatic left and tell people what they say. And what they say is crazy, so he sounds crazy just for repeating it and then critiquing it.


Yes. You got me. He read "twerking a trick to use 'black girl' identity to free the divine shard" out of a leftist book.


>Conservatism, which is an intellectually serious movement with plenty of serious thinkers.

Do you have any recommendations for good news sources or blogs that can act as a counterbalance to the more mainstream media houses like NYT or WaPo? The issue I face is that it's very difficult to find good-quality conservative content. Even something like National Review is only marginally palatable, and only when it is inward-looking (e.g., as was the case with the recent midterms).


Reason, the Washington Examiner, the WSJ, and, yeah, the National Review and Bulwark. I'm not a conservative, so my own reading is on the "lite" side of the movement.


https://www.weforum.org/focus/the-great-reset

"The pandemic represents a rare but narrow window of opportunity to reflect, reimagine, and reset our world" - Professor Klaus Schwab, Founder and Executive Chairman, World Economic Forum.

So it is at least partially true


Yep. I am constantly baffled by people who make fun of people talking about the WEF's "great reset". The consensus on the internet seems to be that it's a completely fabricated conservative conspiracy theory.

It's absolutely not. You can go to the WEF's website and read the great reset white paper. The main thrust is "stakeholder" capitalism. Which at first sounds good; then by the time you get to the end you realize what they are talking about is essentially making all existing large corporations into monopolies, seizing assets from the general populace by means of economic manipulation, and then, once power is consolidated within these corporations, they will turn around and rent you everything you need and the corporations will determine how goods are distributed amongst the populace. It's really fucking dark.

They actually produced commercials featuring the line "you will own nothing and you will be happy".

Sound fringe? Like it could never happen? Well, do you happen to work for a Fortune 500 company? Your CEO probably went to Davos last year to get schmoozed by the WEF. It's an enormously popular event with buy-in from some of the largest companies in key industries like banking, health care, tech, etc.

It's seriously concerning, but people are being gaslit into thinking it's some sort of nonsense conspiracy.


It is a completely fabricated conspiracy theory.


How can you possibly think that, when it's right there on their website? Klaus Schwab wrote a book about it. These people were not elected, they don't represent the people, they are power hungry oligarchs, and don't appear accountable to anyone.


Nobody is reading this thread anymore, but you should start by reading more about what the World Economic Forum is. To be an oligarch you need more than the oli; you also need the arch. People try to hook various arch-isms up to these olis, but it's all six degrees of Kevin Bacon logic. "John Kerry, the nation's envoy for climate, agreed with Klaus Schwab that equity is important! The Great Reset is happening!"

Here's a fun fact: companies that attend WEF events or participate in the WEF on average underperform the S&P 500 by double-digit percentages.


I don't get your point. Quote one line of my comment that is "fabricated". Everything I said is true, and objectively so. So why are you hell-bent on acting like this shit doesn't exist? Maybe you think it's not a serious organization that could effectively implement its goals? You could make that argument. But no, you're saying that what I'm saying is fabricated, and that is undeniably, objectively, complete bullshit. More of the same bizarre gaslighting.



The thing about James is he reads all these old books cover to cover, line by line, and explains the context and ideas around them. Nobody reads academic leftists because they are eye-wateringly dull and they redefine many words. The people who wrote this Wikipedia article are trying to do a gigantic gaslight about 60+ years of academic left literature not really being serious, or not existing, or nobody actually taking it seriously. That, and thousands and thousands of academics have been trained in this stuff and written doctoral theses on it, and the guy who wrote this article thinks it's all just a conspiracy theory that it had any effect on anyone. Yeah right.

James takes apart all their old works and cross-references to earlier works and explains how all their stuff ties together. The guy is an academic mathematician so he's precise and his reasoning is refreshingly clear. He even jokes about how most leftists today haven't even read any leftists to even know where their ideas even came from. In these old works they describe how the dumbed down activist techniques work and how to teach them, but people are ignorant of all this stuff or gaslight that it doesn't exist as if it's some organic will of the people rising up all by itself.


Are you really sure about where your ideas come from?

I remember the Christian anti-Harry-Potter craze when I was a kid. Preachers would have hour-long television programs about how this, this, and this were in those books. But none of it was. I knew. I had read them.

I have never trusted what others wanted to convince me a book was 'about' since then.

Education (so far as I see it) should leave one, as best is possible, with an understanding of a body of work. To agree or disagree with that body of work is a separate affair. What those preachers and what people like James Lindsay want, so far as I am concerned, is a so-called understanding that leads to an end other than understanding in and of itself. Quite simply, if someone were to read Lindsay's work and come away thinking that X, Y, or Z in fact had a point, I doubt he would be happy with that outcome.


I could see how some fundamentalist Christians could draw a line from the practices of the mystery cults to some Harry Potter stuff like secret knowledge, a secret society, and an elite of magical lineage, but it's fiction after all, so why make a big fuss? If Harry Potter had an afterword that called on children everywhere to form secret societies with blood oaths, initiation rituals, and all the stuff the people who wrote the Bible more than 2000 years ago were talking about when they referenced Satan, then the fundamentalist Christians could validly criticize the books.

However, the academic leftist stuff is not fiction. It's a completely serious endeavor, and that's why it's important to take it seriously and criticize its goals, because they intend to foment a revolution and they are deadly serious.


There is no single "they". Half of leftist theorists argue with the other half more than three quarters of the time, the other quarter they argue with themselves.

And as for revolution, context is king. To the Southern slave holders, emancipationists were radicals intent on overthrowing the very foundations of civilization itself. The birth myth of the United States is revolution (though I admit we may have been better off eventually going the route that Canada took).

Of course it's serious stuff, people who try to figure out the world generally do take themselves seriously. A bunch of people sit around and try to figure out what the structures that we create are, how they actually operate, and if that is helpful or harmful to the human beings within those structures, serious stuff indeed. Their conclusions may be right or wrong or a mixture of both. Criticism, however, does not properly take the form of 'this idea is produced by an academic leftist, therefore it is bad and wrong'. Where's the marketplace of ideas in that?

Self-avowed conservative intellectuals have been wrong in the past as well. They have no less wanted to produce 'revolutions'. They have no less produced disastrous and failed revolutions.

And if we want to say that things shouldn't change from what they are now, at all, then that might be the most radical proposition of all, because it has never before happened in history, though many have tried it because they too thought their place in history perfect.


It's one thing to try and figure out the world. It's another to start with an ideological outlook and try to make the world fit that. To the extent people in academia and social science do that, it's a problem, because then it gets presented as scientific. Whatever the political slant might happen to be. Science isn't about how we want the world to be. Same with history. And yes, both right and left can be guilty of this.


Exactly. Some scientists are also activists, and that’s okay as long as they approach science as a scientist, with an empirical mindset. In the social sciences, too often the activist mindset is all they have.


> Nobody reads academic leftists because they are eye-wateringly dull and they redefine many words.

FWIW, Das Kapital is one of the most cited books in all of economics (and probably all of academia), so I can only assume it is also one of the most read of all academic works. There are also leftist authors such as Howard Zinn with A People’s History of the United States and Stephen Jay Gould with The Mismeasure of Man. Both are best-sellers, and the former is probably the best-selling history book of all time. Even in the field of feminism, bell hooks has a couple of best-sellers that are widely read by the general public.

Seems to me that left-wing academics appeal plenty to the general public and their works get read and understood in record numbers.


If you take anything from Wikipedia seriously that is politically charged, then you clearly aren't paying attention. That, or you're part of the problem.


Always check the talk tab.


Maybe Cultural Marxism is just a conspiracy theory but the criminal Gang of Four was a very real conspiracy, and their "Great Proletarian Cultural Revolution" killed millions of people, led to destruction of cultural heritage on a scale unprecedented in modern times and plausibly set their country back quite significantly. The fact that these clearly problematic ideas are once again being taken seriously in the West (after a well-documented first flare-up of Western Maoism in the 1970s) should give us pause.


I'm not a communist, so I have little trouble in denouncing or disliking Stalinist Russia, Maoist China, etc.

I also, however, find these sorts of discussions completely disingenuous, in that it is conveniently left unmentioned how many millions might or might not be calculated to have died because of capitalism, how many cultures have been destroyed, etc.

No one is actually willing to lay everything on the table for examination.


I think it's because capitalism is decentralised as much as possible, and decisions are made - and rewards given - as close to the people taking the risks/doing the work as possible, and so it's hard to make the case that "letting people get on with it" is to blame. Although there will be some emergent negative effects, there will also be vast emergent positive effects.

Whereas deaths due to Marxist ideology or socialist top-down policies are actually undeniably due to them, and they seem to happen pretty much everywhere it's tried.


"Idea laundering" is a great term for this. In a nutshell:

Adding a thin veneer of academic legitimacy to a concept that's divorced from reality, but politically expedient.


Grievance studies is where I first learned about this.

https://en.m.wikipedia.org/wiki/Grievance_studies_affair


The fraudulent "study" where the authors pretended to get super lurid papers accepted into high-impact journals but actually got much more boring papers into coin-operated journals?

What's so weird about this is, you can look at some of the higher-impact journals they submitted to and see ridiculous-seeming papers. I don't know if they are ridiculous, any more than I know how to read a high-end economics paper, but they sure seem that way. Like, their original premise that journals will accept silly-seeming papers seems obvious.

But then they had to go lie about it? How weak is that?


The point is that there is a process for laundering ideas. These papers are then used by people who say things like "follow the science". But it's clearly not science. That's the point.


That was not the point of the supposed hoax, whatever else you think it might demonstrate.


What was your conclusion from their findings / demonstration?


One might be forgiven for thinking that 'coin-operated journals' was already a pretty damning indictment.


There are coin-op journals in every field of academia. The grievance hoaxers are counting on their audience not to know that. The only people they have more contempt for than social scientists are their audience.


I mean, I agree that coin-op journals are a problem in their own right, but most of those coin-op journals aren't considered leading journals of their field:

https://en.wikipedia.org/wiki/Gender,_Place_%26_Culture "leading international journal in feminist geography"

That's the one that published the study claiming that dog parks in Oregon engage in rape culture, a paper that won an award from the journal. Inasmuch as the leading journal of the field is a coin-op journal that awarded a hoax paper, I think that is going to reflect poorly on the field.


That journal was, when I looked a few months ago, the #15th(!) ranked journal in gender studies, and the paper they got submitted was ostensibly based on 1,000 hours of logged field work; as I remember, the authors had to sign a contract confirming they actually did that work, which they lied about. Here, the Sokal hoaxers are counting on their audience not to understand another important academic distinction: the one between journal review and reproduction/verification.

Last time we had a thread about this, after I rattled off my own experience doing journal review (ACM and Usenix), someone else in a different STEM field jumped in to say that in their entire academic career they'd never even heard of someone requesting and verifying the raw data of an article they were reviewing for a journal. That's not what journal review is. Journal reviewers spend less than an hour, typically, with each submission.

Again: the only thing these people have more contempt for than the woke academics are the rubes they're selling this story to. It's comprehensively premised on their audience being ignorant of how science works:

* That there are low- and high- status journals in every field

* That there are pay-for-play journals of last resort in most fields

* That journal review and reproduction are radically different enterprises (reproductions are research projects in their own right!)

* That "review-and-resubmit" is not in fact an indication that a journal is primed to publish your paper, but rather a nice way of saying "no" (several of the papers they crowed about were exclusively review-and-resubmit)

* That 1,000 hours of field work is approximately 2/3rds of a full time job for a research assistant or grad student

* That getting published in a journal doesn't establish your ideas in the scientific firmament, but rather puts them on the stage to be engaged with and debated by other scientists (this note from Stefan Savage)

We could probably go on and on. It's a pretty sickening stunt, and the people involved should be embarrassed.


> That journal was, when I looked a few months ago, the #15th(!) ranked journal in gender studies

> That there are pay-for-play journals of last resort in most fields

Sure, but this is supposedly the #1 journal in feminist geography, at least if Wikipedia can be trusted. Seems bad for that field's leading journal to be coin-operated.

Honestly, the fact that peer review does so little is part of the problem. I do think that the journals should be responsible for minor details like whether the people doing the study actually exist and can be contacted, and reviewers should catch it if someone reports something about 33% of an n=2 sample.

The fact that there's no real review in journal publishing is itself part of the scandal, though you're right that this is by no means unique here. And I don't share your experience: I've yet to find anyone who has heard of this who isn't aware that the academic publishing model is pretty much broken everywhere.

> Last time we had a thread about this, after I rattled off my own experience doing journal review (ACM and Usenix), someone else in a different STEM field jumped in to say that in their entire academic career they'd never even heard of someone requesting and verifying the raw data of an article they were reviewing for a journal

I've yet to see nonsense of this type published in Cell, Nature, Science, ACM, or Usenix, though I won't claim it impossible.

Are you aware of hoax papers in anything close to a leading journal in other fields?


At the point where we're litigating what peer review is across all of academia, we've stopped discussing things productively. Each time this story comes up, somebody always wants to make the whole thing about peer review being intellectually bankrupt. That's only a live debate if you have a misconception about what a journal article is meant to represent in the sciences --- all of the sciences, not just the floofy ones.

It's telling that you rattled off Cell, Nature, Science, ACM, and Usenix: top-tier journals. Yes: top-tier journals do better than low-impact journals-of-last-resort at being selective! That's why they're top journals!

I just looked: Gender, Place, and Culture is currently the 21st-ranked gender studies journal. From what I can tell, it is the only journal dedicated to feminist geography. Also: we have now officially reached the point in this discussion where I took time to go figure out what feminist geography was, and where feminist geographers publish (not in G.P.C.! there are higher-ranked cultural geography journals, just not feminist-specific ones).


Yeah, I think most people have a misconception that Geography is memorizing maps or whatever they did as a kid in school, but it's a lot more than that.


> How weak is that?

Ironic, even.


I’ve always wanted a list of all social science concepts that have been laundered into existence by well-meaning academic activists, but that eventually quietly went away, either by falling out of fashion or by being proven to be bunk.

Trigger warnings, implicit bias tests, cultural appropriation, lived experience, micro aggressions and a few others come to mind.


> Trigger warnings, implicit bias tests, cultural appropriation, lived experience, micro aggressions and a few others come to mind.

All of those are now part of the public discourse, to the point South Park dedicated a season to it with PC Principal. I don't know whether they've been debunked and fallen out of favor in academia, but they sure are commonly used words today.


I don't think any of those have gone away or fallen out of fashion.


I struggle to see what part of social science isn't morals or quasi-religious beliefs masquerading as science.

There is no cure.


At the very least we need to start demanding two things: falsifiability and reproducibility. We should stop calling "science" things that are unfalsifiable.


In order to even do science we must first make certain non-falsifiable assumptions, fundamentally. This is a basic point of metaphysics and epistemology.


Nevertheless we can easily point to the useful practical inventions of physics, chemistry, biology etc. So whether they are based on non-falsifiable assumptions is kind of an abstract philosophical point. And that is an interesting discussion but it does not follow that in the real world that hard sciences and social sciences are on equal footing.


Yes. But then we must also realize that science occupies a very narrow band in society.

To quote from Mike Alder's Newton's Flaming Laser Sword

> It must also be said that, although one might much admire a genuine Newtonian philosopher if such could be found, it would be unwise to invite one to a dinner party. Unwilling to discuss anything unless he understood it to a depth that most people never attain on anything, he would be a notably poor conversationalist. We can safely say that he would have no opinions on religion or politics, and his views on sex would tend either to the very theoretical or to the decidedly empirical, thus more or less ruling out discussion on anything of general interest. Not even Newton was a complete Newtonian, and it may be doubted if life generally offers the luxury of not having an opinion on anything that cannot be reduced to predicate calculus plus certified observation statements. While the Newtonian insistence on ensuring that any statement is testable by observation (or has logical consequences which are so testable) undoubtedly cuts out the crap, it also seems to cut out almost everything else as well.


Sure, but that's why metaphysics and epistemology aren't considered sciences.

You wall off the non-falsifiable stuff in a box labeled "not science" and focus on the stuff that is falsifiable.


To do so is to take on metaphysical and epistemological positions inherently.

That is not necessarily a bad thing.

It's something that human consciousness seemingly has to do.


Yes, but that's a limitation of humans.

Science doesn't care about which ones you pick. You can be doing science with any set of initial non-falsifiable assumptions that humans are capable of holding.


>The amount of pettiness, vindictiveness, cutthroat power games I have seen surpassed even the most hardcore of startups.

Can you elaborate on this? Is this across every department, or is it in specific areas? A lot of people I know say this kind of stuff, but I have not really experienced it, at least not beyond a normal amount of pettiness and vindictiveness.

I don't disbelieve you. I am just curious about this problem.


To better understand this phenomenon, Helen Pluckrose and James Lindsay (both of "sokal-squared" fame) have written a book [0] in scholarly detail to trace its roots.

[0] https://www.amazon.com/Cynical-Theories-Scholarship-Everythi...


XKCD calls it "citogenesis": https://xkcd.com/978/

The slight difference between what you describe and the comic is that "idea laundering" is a deliberate and coordinated effort, while "citogenesis" is driven by carelessness.


It's like people have muddled the dividing line between subjective and objective, and if the latter can't really be grasped epistemologically, then it opens the door to activist pressure in the area of knowledge. The argument often heard from the latest generation in academic social science is: there is no escape from ideology / all is influenced by power dynamics / objectivity cannot exist; it's a relic of an old-school set of ideas that, by the way, oppressed people. Even (especially) hard science can't be objective, because it will always 'serve a purpose', in a power dynamic or social context.

Maybe less political is just the fact that social science can't be hard science, because it isn't existentially against making claims that can be neither proven nor disproven. That's not its fault, since a lot of fields in social science have value to society (psychology, economics, sociology, etc.). But it's overstepping its bounds, I think is your point.

What I don't like is how we're trying to establish some sort of priesthood, where people with (certain) degrees feel righteous in areas where they're probably being a little over-confident. I've had many conversations with people from this background where they take what I've said and do not address the claim or rebuttal as a logician would, but instead say, "Oh, you're wrong because you're misinformed, you didn't read this paper, you are such-and-such and such groups don't understand this fact I am stating, [or some other claim that isn't a rebuttal]", without really tackling my point or defending their position. If only they could be that self-aware to state it like that...


> in this particular case, it’s a story that exists within a social power dynamic that unjustly ascribes authority to medical knowledge.

No it isn’t. It is asking a critical question about prejudice and other biases that exist among the medical profession and can affect their work. In this case a social scientist looks at population data and asks a question: is obesity dangerous because of the lifestyle of the obese person, or is there some societal bias that prevails inside the medical profession which increases the chance of fat people being misdiagnosed? I.e., a patient really has some underlying illness that can be treated, but a doctor will not see it because of their bias and will rather prescribe a lifestyle change to the patient.

This is not idea laundering but an important task of social scientists.

> I was one of the extreme "trust the science" people, until I joined a startup that worked with academia. The amount of pettiness, vindictiveness, cutthroat power games I have seen surpassed even the most hardcore of startups.

You need to apply this to medical scientists as well as social scientists, because it is equally applicable. In fact it is the job of social scientists to describe this dynamic and point out where it can be dangerous.

EDIT: Your source is actually one of the anti-woke camp who resigned from Portland State because apparently universities are a “Social Justice factory”. I’m not surprised that he has a problem with people taking a critical look at the societal effects of diversity of body types. You should take what he says with a pinch of salt. https://en.wikipedia.org/wiki/Peter_Boghossian


That's a really nice question! We could address it by looking at places where obesity has significantly increased in recent decades, independently of any widespread access to the medical system or advocacy of "lifestyle changes". And it looks like we can actually see real dangers from obesity in such cases, dangers that can't be easily explained by iatrogenic causes. The lesson is that there's a difference between posing a question and having a bias for answering that question in a prejudiced way (blaming "privileged" doctors for adverse outcomes).


> i.e. a patient really has some underlying illness that can be treated, but a doctor will not see it because of their bias and rather prescribe a lifestyle change to the patient.

Or you have those who won't even see a doctor about potentially serious medical conditions because they assume the doctor will put it down to excess weight and not bother diagnosing them properly. It's easy to be skeptical that's such a big issue if you don't struggle with your weight, but from what I've heard 2nd hand it likely at least partly explains why hospitalisation for particular conditions is more likely among overweight people.


Reading through this quickly I see several problems that make me wonder if the author has worked in academia.

1. Publication venues aren’t created equal. People outside of academia don’t understand that anyone can and will start a journal/conference. If I want to launch the Proceedings of the Confabulatory Results in Computer Science Conference, all I need is a cheap hosting account and I’m ready to go. In countries like the US this is explicitly protected speech, and the big for-profit sites run by Elsevier et al. will often go ahead and add my publications to their database as long as I make enough of an effort to make things look legitimate.

In any given field there are probably a handful of “top” venues that everyone in the field knows, and a vast long tail ranging from mid-tier to absolute fantasist for-profit crap. If you focus on the known conferences you might see bad results, but if you focus on the long tail you’re doing the equivalent of searching the 50th+ page of Google results for some term: that is, you’re deliberately selecting for SEO crap. And given the relatively high cost of peer-reviewing the good vs not peer-reviewing the bad, unfiltered searches will always be dominated by crap (surely this is a named “law”?). Within TFA I cannot tell if the author is filtering properly for decent venues (as any self-respecting expert would do) or if they’re just complaining about spam. Some mention is made of a DARPA project, so I’m hopeful it’s not too bad. However even small filtering errors will instantly make your results meaningless.

2. Citations aren’t intended as a goddamn endorsement (or metric). In science we cite papers liberally. We do not do this because we want people to get a cookie. We don’t do it because we’re endorsing every result. We do it because we’ve learned (or been told) about a possibly relevant result and we want the reader to know about it too. When it comes to citations, more is usually better. Just as it is much better to let many innocent people go free rather than imprison one guilty one, it is vastly better to cite a hundred mediocre or incorrect papers than to miss one important citation. Readers should not see a citation and infer correctness or importance unless the author specifically states this in the text at which point, sure, that’s an error. But most citation-counting metrics don’t actually read the citing papers, they just do text matching. Since most bulk citations are just reading lists, this also filters for citations that don’t mean much about the quality of a work.

The idea of using citation counts as a metric for research quality is a bad one that administrators (and the type of researchers who solve administrative problems) came up with. It is one of those “worst idea except for all the others” solutions: probably better than throwing darts at a board and certainly more scalable than reading every paper for quality. But the idea is artificial at best, and complaints like “why are people citing incorrect results” fundamentally ask citations to be something they're not.

Overall there are many legitimate complaints about academia and replicability to be had out there. But salting them with potential nonsense does nobody any favors, and just makes the process of fixing these issues much less likely to succeed.


1. He addresses this repeatedly throughout the piece. Journal impact factor is (largely) uncorrelated to replication probability.

2. Yes, but this hardly seems like a defense of citing something false (without comment), or something that has literally been retracted years ago, which is a large part of his complaint.

He is not suggesting the use of citation count as a metric for quality. I have no idea how you could have possibly gotten that from reading this article. A bullet point in his "what to do" section is literally "ignore citation counts".


> He is not suggesting the use of citation count as a metric for quality. I have no idea how you could have possibly gotten that from reading this article.

TFA is extremely clear that the presence of citations (in the aggregate, as a count) on “weak” papers is something the author considers a problem and a perhaps a moral failure on the part of citing authors. The author also believes that citations should be “allocated” to true claims.

* “Yes, you're reading that right: studies that replicate are cited at the same rate as studies that do not. Publishing your own weak papers is one thing, but citing other people's weak papers?” Here citations are clearly treated as a bulk metric, and “weak” is a quality metric.

* “As in all affairs of man, it once again comes down to Hanlon's Razor. Either: Malice: [the citing authors] know which results are likely false but cite them anyway. or, Stupidity: they can't tell which papers will replicate even though it's quite easy.” Aside from being gross and insulting — here the author claims that the decision to cite a result can have only two explanations, malice and stupidity. Not, for example, the much more straightforward explanation that I mention above (and that the author even admits is likely.)

* “Whatever the explanation might be, the fact is that the academic system does not allocate citations to true claims.” The use of “allocate citations” clearly recognizes that citation counts are treated as a metric, and indicates that the author wishes this allocation to be done differently.

* “This is bad not only for the direct effect of basing further research on false results, but also because it distorts the incentives scientists face. If nobody cited weak studies, we wouldn't have so many of them.” Here the author makes it clear that they see citation count (correctly) as a metric that encourages researchers, and believes the optimal solution is to remove all (but perhaps explicitly negative?) citations to those papers.

> A bullet point in his "what to do" section is literally "ignore citation counts".

Yes, after extensively complaining about the fact that citations aren’t used by authors in a manner that reflects the way they’re used as a metric, then complaining further about the fact that authors do not use them this way and repeatedly urging them to change the way citations are used — the author then admits that their use of a metric is problematic and should be ended.

We agree! The only problem here was that the author took a detour to a totally absurd place to get there.


> TFA is extremely clear that the presence of citations (in the aggregate, as a count) on “weak” papers is something the author considers a problem and a perhaps a moral failure on the part of citing authors. The author also believes that citations should be “allocated” to true claims.

As I see it, there are two independent properties that the author is saying ought to be dependent. And I think you (and I) actually think the same. If citations are going to be treated as a metric, then the way they are written (without regard for quality or accuracy) is bad. If citations are not going to be written without regard for quality and accuracy, then they shouldn't be used as a metric. Either one of these models would be fine. What is not fine is the present reality: Citations are written without regard for quality and accuracy, and then still used as a metric ubiquitously! Impact factors, the most common method of ranking journals, are literally measures of citations.

> Yes, after extensively complaining about the fact that citations aren’t used by authors in a manner that reflects the way they’re used as a metric, then complaining further about the fact that authors do not use them this way and repeatedly urging them to change the way citations are used — the author then admits that their use of a metric is problematic and should be ended.

The crux of your point though seems to be that nobody uses them as a metric, and I'm just going to have to fundamentally disagree with that. It's true that authors, when writing papers, appear not to give them the care that a metric would deserve. What is not true is that citations aren't used as prima facie evidence of quality/importance throughout academia.


>The crux of your point though seems to be that nobody uses them as a metric

I didn't say that at all: what I said is right there in my post.

What I did say is that citation counts are a bad and noisy metric, one that is a side-effect of measuring a tool that has a very different purpose, and which researchers can use (usefully) for a variety of reasons that don't require them to validate the technical correctness of all cited works. Nor would it even make sense for citations to be used that way.

The author's criticism in TFA (which he delivers in the strongest and most explicitly moralistic terms) is that based on using this citation count metric, which he selected, the field is broken because bad work gets cited. That's his criticism and his choice of measures to make it. But since it is quite normal for people to cite work that they haven't carefully reviewed for technical correctness, this criticism is essentially bunk.

At a deeper level, the criticism fails to appreciate how people use citations as a measure for academic promotion. In most cases tenure committees care about aggregate statistics like total citations, h-index or i10-index. If a researcher publishes a work that receives hundreds of citations for ten years and then fails to replicate, then it basically doesn't matter if the work stops receiving future citations. A retraction might matter. Reports of the failed replication might matter. But nobody is going to lose out on a promotion specifically because some random paper receives 8,000 citations in the first ten years and then zero citations after the failed replication.
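For concreteness, those aggregate statistics are mechanical to compute from a list of per-paper citation counts (a rough sketch; the citation counts below are made up):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [8000, 120, 45, 12, 9, 3, 0]  # hypothetical citation counts
print(h_index(papers), i10_index(papers))  # -> 5 4
```

Note how the 8,000-citation outlier dominates the total no matter what happens to it later: halting its future citations barely moves these aggregates.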


> At a deeper level, the criticism fails to appreciate how people use citations as a measure for academic promotion. In most cases tenure committees care about aggregate statistics like total citations, h-index or i10-index. If a researcher publishes a work that receives hundreds of citations for ten years and then fails to replicate, then it basically doesn't matter if the work stops receiving future citations. A retraction might matter. Reports of the failed replication might matter. But nobody is going to lose out on a promotion specifically because some random paper receives 8,000 citations in the first ten years and then zero citations after the failed replication.

This is his point, though. Not only are authors still getting tenure after failed replication, they're still getting citations! Citations that don't even mention the failure to replicate!

The fact that citations are used as a metric to get tenure is the problem. There are two solutions to that problem: Change the culture around citing things, or change the metrics people use. That is the whole point of the post.


> This is his point, though. Not only are authors still getting tenure after failed replication, they're still getting citations!

And his point is irrelevant. For two reasons that I've already explained multiple times now!

(1) As I've tried to point out (I've written three posts now to explain this, why are we still debating this!), there are plenty of valid reasons why non-replicating works might get cited. To prevent these works from being cited, you would need to fundamentally change citation practices in a manner that would harm researchers' ability to use citations for their intended purpose.

(2) As I also explained above: even if you somehow managed to force all other researchers to alter their citation practices, it almost certainly wouldn't matter for the purposes of promotion decisions anyway. For the purposes of promotion, the influence of each incremental citation drops off exponentially. After a few years of a work receiving citations, later incremental citations have at most a negligible influence on a researcher's record.

Unless replication failures happen extremely quickly, it doesn't really matter whether future researchers do or do not cite the failed work. The early citations will still exist and will vastly dominate later ones in promotion decisions. The only situation where citation-bans would help is one where citing authors could somehow intuit that a work would fail to replicate, prior to seeing an actual failed replication. (TFA claims this is easy. I think TFA is not credible.)

TL;DR: forcing the entire field to change the way they use citations is (1) harmful to researchers, and (2) despite that is unlikely to have any major benefit anyway, since later citations are not heavily weighted when promoting researchers, and replication attempts generally occur later in a work's citation lifetime.


> Citations aren’t intended as a goddamn endorsement (or metric).

This. I may well write the equivalent of:

"So, those clowns of [18] thought they could sneak it past us to only model the near-trivial case which accounts for less than 5% of real-life occurrences (as per the distribution estimated in [5])."

The second citation is barely-endorsing, and the first citation is an anti-endorsement. Or I could write:

"A bunch of useless attempts have recently been published to achieve what I'm actually going to get right. [4, 7, 8]."

Now, I wouldn't use this language, I'd be more polite and much more circumspect, but still, those are legit and not-entirely-uncommon citations.


You are only making the author seem more correct. You have a system where citations act as cookies and endorsements (the administrators' fault), but that is not the researchers' intent.

> Whatever the explanation might be, the fact is that the academic system does not allocate citations to true claims. This is bad not only for the direct effect of basing further research on false results, but also because it distorts the incentives scientists face. If nobody cited weak studies, we wouldn't have so many of them. Rewarding impact without regard for the truth inevitably leads to disaster.

You also argue that high quality journals are good at filtering quality. The author presents evidence that this is specious.

You furthermore question whether the author worked in academia, which is answered.

Given your reading comprehension skills, I wonder if you are in the correct job.


> Just as it is much better to let many innocent people go free rather than imprison one guilty one

Why is that a "rather"? I'm guessing you mixed up "innocent" and "guilty" there, but if so, I'm not sure I could confidently agree with a claim that general (certainly not if the "many guilty" were serial killers that went on to commit many more homicides, vs the "one innocent" being given a community service sentence for a minor misdemeanor they didn't actually commit).


In fact, we even cite papers when we are saying that the work is wrong or deficient.


> Citations aren’t intended as a goddamn endorsement (or metric).

Works for GNU Parallel. :)


> for-profit crap

And of course we all know that making something for profit means it's bad.


For-profit in this case means "I pay you $2500 and you let me give you a list of the peer-reviewers I'd like to review my paper, all of whom happen to be my friends and business associates."


>>>> Increase sample sizes and lower the significance threshold to .005.

I'm a physical scientist in industry, so I don't publish, or use p-values. But if the familiar "square root of n" rule holds, then a 10x improvement in any kind of "noise" measure requires a 100x increase in sample size. This is probably impossible for many kinds of experiments. Even in my work, desiring such an improvement usually requires me to scrap an experiment and come up with a better one, rather than slogging away to collect more data.

Worse, we may suffer from the fact that for some kinds of studies, sample size is limited by the number of potential subjects per researcher. You can only ask undergraduates to do so much. More researchers in the field means weaker statistics.
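The "square root of n" rule above can be sketched in a few lines (a minimal illustration of how the standard error of a mean scales with sample size, not tied to any particular study design):

```python
def standard_error(sigma, n):
    """Standard error of the mean for n i.i.d. samples with spread sigma."""
    return sigma / n ** 0.5

# Cutting the noise by 10x requires a 100x larger sample:
se_small = standard_error(1.0, 30)    # a typical small-study sample
se_large = standard_error(1.0, 3000)  # 100x the sample size
print(se_small / se_large)  # ratio is sqrt(3000 / 30) = 10
```

Which is why tightening precision without redesigning the experiment mostly just multiplies data-collection costs.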

A possible minor improvement is to eliminate the requirement of publishing a masters thesis project. My friends who were psych grad students all had to do a master's en route to the PhD, and it had to be published to fulfill their requirement. This might have seemed like a quality check, but instead created an incentive to load the literature with low quality research. And many of the students were not looking forward to research careers anyway. Their term for finishing the degree was "published and outa here."


Replication is a huge issue, but I've been wondering lately about refusal to publish.

Even if all of the published papers were peer reviewed and replicated, there's a lot of science that never sees the light of day.

Even non-results are important - if not glamorous - and (even worse) publishing is disincentivised if the results go against what the funders wanted or expected.


The replication crisis cannot be overstated, though, because "science" that cannot be replicated isn't science at all. The very base definition of science is that it's a method that produces testable, predictive propositions about our world. If a theory or paper does not deliver that, it's guesswork, a superstition, or a religious belief in extreme cases. But certainly not science, no matter how many people or institutions refer to it as such. Just as the Democratic People's Republic of Korea isn't actually democratic.

Do not trust the science. Verify the science.


Science is about the process, not the results. Somebody discovers a phenomenon, studies it, fails to take something relevant into account, and publishes a faulty result, and that's science. Somebody else tries to replicate the study, fails, and can't figure out what went wrong, and that's science. Then somebody discovers the flaw, gets a different result, and publishes it, and that's science.


When there is no reliable review mechanism, then the process is inherently flawed. It may be science in your book but following that definition, everything anyone studies and publishes is science. Homeopathy is science by that definition. Flat earth studies too.

There needs to be proper scrutiny to ensure minimal quality standards are at least followed, else the whole thing is bunk. A study that can not be replicated by anyone is not research, it's monkeys hitting keys without understanding what they're doing. Sadly monkeys with PhDs in some cases but that doesn't change anything.

The distinguishing feature of what "real science" is in the eye of the believer seems to be someone holding an academic title, i.e. an argument from authority, not verifiable standards that exclude randomness. There is nothing scientific about belief.


Mechanisms and rules are the problem, not the solution. The reason we got the reproducibility crisis in the first place is because too many people believe in rules. They believe that if you discover something by following the accepted rules of science, you have a scientific result.

But when you have a system, you always have people gaming it. People who believe that it's ok to bend the rules a bit. People who believe it's ok to break them, as long as you don't get caught. If you don't get caught, people who believe in the rules are inclined to accept your results, as you followed the rules.

By changing the rules, you deal with specific symptoms, but the underlying problem remains. The people gaming the system always win in the end.

And sometimes the rules are simply insufficient.

When science works, it works because people go beyond the rules. It works because people are skeptical of their own ideas and willing to show their best efforts to test them. You can't cultivate attitudes like that with rules and regulations.


Sure, but it's a visible problem; refusal to publish concerns me (in large part) because it's impossible to even estimate how large the problem is.


> refusal to publish

My impression is that in social science, this happens when the results are "wrong". Maybe compounding the replication crisis if only the "right" results get published but "right" was actually wrong.


Even in the "hard" sciences, this bias can show up. In biology, for instance, this can be a big problem. For example, the Columbia professor who studied rapid-onset gender dysphoria and its apparent social contagion was told to retract her paper on the matter.


It's a great article which unfortunately makes it sound like the problems are restricted to social science. That isn't the case; similar problems crop up in many other fields (or perhaps you could argue the scope of social science is wider than appreciated).

A good example of a field with the same problems is epidemiology. I wrote a brief article with many examples a couple of years ago [1]. Unfalsifiable claims, objectively false claims, non-replicable work, citations that were invalid in various ways (e.g. citing a paper in support of a claim which didn't actually support the claim), and most surprisingly, a lot of circular logic. This field seems to routinely lose track of what comes from assumptions vs simulations vs reality. Papers would make claims that sounded like factual statements about observed data, but when the citation was checked the number would turn out to be a simulation output or even an arbitrary assumption used as simulation input. Validation had gone AWOL long ago with the substitutes being unscientific, e.g. claiming a model was validated because it roughly matched the output of other models.

You could argue that epidemiology is actually a social science rather than a hard science. Maybe the distinction isn't interesting though.

[1] https://plan99.net/~mike/epidemiology.pdf


Needs (2020)

100 comments at the time: https://news.ycombinator.com/item?id=24447724


If I had a bunch of money, I'd set up a grant specifically for funding replication studies. Real science can be replicated, and the only way to replicate is to do it.


If the incentive is to get the grant and not to actually produce replicable studies, then what impact would you expect this to have? If the inputs are mostly garbage, then all this might do is confirm what we already know.

I wonder if you'd be better off having a two-phase grant mechanism. One smaller payment for the research, and a second larger payment if it is replicable. Actually incentivize the "market" to produce good work. As it is, this incentive doesn't actually exist.


Also, replication studies could be a great way for young scientists to get started in their career.


I mean, the incentives do seem to blow the other way. "We have invalidated the well-paid and instrumentally useful papers of a dozen other scientists! Please hire us!"


> Hey we found those issues in so and so papers, we identified mistakes of a dozen other scientists and we do our due diligence. Please hire us!

I think it's a lot about the phrasing.


It may seem that way, but there will always be a PI involved, with different incentives.


When I was doing my physics masters (at a pretty good school, i.e. one with a couple of Nobel prize winners) there was at least a handful of students whose PhD projects involved "testing" theories of gravitation and pushing them to their limits.

Not quite the same as experimental replication, but along the same lines I'd say.


Given that they are testing theory, yes. What theory underlies psychology? How does one relate the findings?


Yes, and this is already how many PhD students spend their first year or two


There needs to be a prestigious award with a large monetary reward for scientists who definitively disprove the most previously published studies each year. Then track the scientists who have their names attached to the most garbage studies and publicly drag them.

Best way to clean house on an untrustworthy institution that's been politicized. Do science for the science, not to fit some shit narrative.


"Yes, you're reading that right: studies that replicate are cited at the same rate as studies that do not ... Astonishingly, even after retraction the vast majority of citations are positive, and those positive citations continue for decades after retraction."

The academic influence of social science papers is uncorrelated with their scientific quality.


In some communities, notably AI, there are attempts to fight this which are quite successful:

e.g. https://paperswithcode.com/ and the recent phenomenon of people hosting their system demonstrations on huggingface.


The worst part is when these studies lead to policy changes.



A few of the suggestions are about making new financial incentives. But the problem in the first place is the financial incentive structures! It's much better to let people do their jobs and not have a bunch of incentives. We know this is true for programmers, why do people expect it's not true about everyone else?


Because if you don't add money as incentive the only other strong incentive is for people who join the social sciences because they want to push some kind of policy that they feel strongly about (In this case, their personal beliefs and values serve as their incentive). There is even a joke among psychologists that people who join psychology departments are mostly those who themselves need psychotherapy (and they choose to study psychology instead).


There are lots of incentives to be a programmer today, and incentives are known to be involved in engineering decision making (hence the common criticism of resume-driven development).


The incentive is "have a job". Idiotic incentives are "paid per number of lines written", "paid by bugs solved", etc. That's what I'm talking about.


Generally people do work for money.


Yes. But not per line of code. Or per bug fixed.


The actual title is "What's Wrong with Social Science and How to Fix It". Not exactly a neutral title edit...

Especially as this comes from "a part of DARPA's SCORE program, whose goal is to evaluate the reliability of social science research"


> The actual title is "What's Wrong with Social Science and How to Fix It".

Yeah. I couldn't fit the whole thing into the hard 80-character limit so I decided to drop the word "social" because I thought the subtitle was important to keep. I guess it's moot now that the title has been edited, I assume by a mod.


The original title is too long and I assume op thought the findings can be extended to all science anyway.

On the other hand, removing the later part as the previous submission did makes the title sound overly presumptuous, imo.


This has always been the part of academia I have not understood. Academics are supposed to "know" more than the public. They are paid to have some understanding of reality in a way that the lay public does not so they can have tenure - that is they can't be fired (or it is very hard) for researching controversial subjects with payoffs that are hard to define by someone not in the field.

Everyone else in academia is working towards this end result or drops out somewhere along the path and works in industry. The exception being those specialist degrees that are quasi-academic such as law and medicine, but which have more applied (and therefore measurable) results.

And yet, we don't hold these people to the standards they set for themselves as a cohort? I wouldn't expect every paper to be replicable, or every researcher to always be right. But I see things like the https://en.wikipedia.org/wiki/EmDrive and wonder how many billions of dollar were put into this thing. It's essentially a state backed way of defrauding the public out of tax dollars to pay people with doctorates that made friends with other people with doctorates. What if the US spent the money that was used on the EmDrive on housing or social services for instance? Or building libraries or bridges?

All it would take would be for a few academic journals to demand that the statistical replication of papers be required, but it would mean that the editors would have to be guided by principles other than maximizing revenue.


lol is this the first right wing thread on hacker news? Are the plebs finally waking up?


[flagged]


This middlebrow dismissal is way too broad and makes me suspect that your only interaction with social science is via popular media. You'd have a good point if you said "poorly done science" rather than "not science" and restricted your dismissal to the kind of n = 30 social psychology studies that get paraded around by journalists when they confirm their biases. But it's false as stated; even in psychology alone, there are many findings which can withstand scrutiny and replication, e.g.:

  - Spaced repetition (you remember things longer if you space out your learning over time than if you cram).
  - Primacy/recency (you usually remember the first and last items in a sequence better than items in the middle).
  - Stroop effect (you respond slower when there are incongruent stimuli distracting you).
  - Fitts's law (a model of human movement that can be used to develop better UIs).
  - Strongly negative influence of sleep deprivation on mental performance.
And that's not to mention other social sciences such as economics.
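Of those, Fitts's law is the one with a standard closed form; a minimal sketch using the Shannon formulation (the a and b constants here are illustrative, in practice they're fitted empirically per device and user):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

# A small, far-away target takes longer to acquire than a big, nearby one:
print(fitts_movement_time(800, 20) > fitts_movement_time(100, 80))  # -> True
```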


It's interesting that your first three items are memory related. Is research into individual memory capability actually 'social' science at all? Technically it falls under the umbrella of Psychology, but the examples you cited seem to be researching things closer to psychophysics than anything socially influenced. For that matter, I think that general response applies to all items on your list. Unless you can make a compelling case that sleep deprivation leading to poor performance is the result of anything socially related, or that the forgetting curve is cultural or something, which is unlikely.


This is a matter of definitions but my impression is that in common usage, the whole field of psychology is considered a part of the social sciences.


Right, it's definitely technically under the umbrella of 'social science', but the examples you listed lean far more towards neuroscience and psychophysics than towards social or personality psychology, let alone fields like sociology. I can't think of a single social effect on how sleep deprivation negatively affects us, for example - that seems to be straight-up individual biology that's essentially invariant across all individuals and societies.

I'm happy to grant Psychology those wins, since they are good examples, but realistically Psychology has to be broken down further before you can critique it or praise it. (To be clear, I'm not saying that everything else in Psychology is worthless, there's good research in personality psychology, for example, but it's definitely on shakier ground).


Well, that’s just plain wrong. There is such a thing as social psychology, but psychology generally focuses on the person.


Experimental psychology and economics are not normal among the social sciences. Having a hypothesis and testing it, as experimental psychologists do, is not present in any other field of social science. Neither is the mathematical rigor that some fields of economics apply.

These fields are the "physics" of social science. Most of the rest of the social sciences involves very little science (the process) or math.


This is some sort of reverse no-true-Scotsman fallacy. There are plenty more fields within social science where hypothesis-driven research and testing is the norm. Anthropology comes quickly to mind, as does behavioral psychology. Urban engineering, population control, and transportation design use research from social scientists, which gets tested all the time in their work and in turn feeds back into the theory of social systems.

In fact, I would say that economics is probably the least “scientific” of the social sciences, since it is based more on mathematics than on discovering and applying properties of the real world. But that is just my definition of science, which IMO shouldn't involve assumptions and the derivation of truths from those assumptions; the latter is philosophy.


Anthropology is not a beacon of the scientific method, no. The "hypothesis driven" nature of the field is about as hypothesis driven as literature analysis. If you are treating the process of "I have a hypothesis, I look for evidence, and I find it" as scientific, then History and English are sciences too. The difference that makes a science is that the process of looking for evidence is about systematically trying to falsify your hypothesis - and once there are no alternative explanations left, you have a good theory about what is going on. Anthropology, literature studies, history, and even "harder" disciplines of social science (like linguistics and several other kinds of psychology) don't really work this way. They can produce useful knowledge, sure, but that doesn't make them sciences.

As to the other fields you mentioned, all of them are engineering disciplines, which are very different from sciences. Engineering is about producing useful work from the results of science. The results that those particular disciplines use largely come from psychology.


Economics is as unfalsifiable as every other social science, they’ve just had longer to construct a heap of math to hide behind.


I like to think of economists as similar to the string theorists in physics. Their work, too, is practically unfalsifiable but mathematically beautiful.


I definitely don't object to calling out particular subfields when they're non-empirical and heavily influenced by ideology, which I know happens. What I object to are sweeping generalizations that the whole thing is "not science" and "entirely populated by ideological activists". That's intellectually lazy and unfair to the many people who do good work trying to unravel the workings of attention, memory, perception, and reasoning. We should try to talk with a higher level of detail and distinguish rigorous work from low-quality work rather than throw out the baby with the bathwater.


1. What’s science? This is a surprisingly hard question. I'm going to guess physics is your model and you’re going to cite Popper about falsifiability to me. A lot of activities which don’t look at all like physics are science.

2. Citation needed??? The professional incentives around publishing are bad - I don’t dispute the author’s claim there. But to say we are ideological activists suggests you haven’t met very many social scientists.


This attitude is dismissive and reductive

At the same time, a couple years ago I read a bunch of social sciences crap and realized at the end of it that none of the ideas were actually falsifiable. It was just a bunch of ideas and nothing more. At least with computer science, you can actually apply the crap you read in books to real world stuff

So with respect to social sciences and how much time and money they've wasted so far, I'm completely on board with being both dismissive and reductive


"Science" is the name of how you work with something, not of a subject.


And there is no science in social science.


Yes, which I believe is why people rightly dismiss social science, as not being scientific at all.


What is the science process that social scientists do not follow?

Hypothesis refutability?

Replication? (probably their weakest point btw)

I mean, in this thread I've read about successes in sociology and economics, probably the most math-based social sciences, but I'd say linguistics is probably the second most successful science of the last 40 years, and that's considered a social science. History has also made a lot of new discoveries recently (by criticizing older works and getting past them). Archaeology has slow successes, but still, we understand Neanderthals way, way better than we did ten years ago.


> the most math-based social sciences

Please do not confuse "with papers using math which is cavalierly assumed to model reality well" with "math-based".


Sigh. This over-the-top reaction is very common among people who don’t know better. I’ll just link my answer from the last time (here, about economics, but it works fine for soc sci in general): https://news.ycombinator.com/item?id=31411593


Science is not an organization, business, or any single entity and discussing it as such is ridiculous.

How would you implement changes?


It doesn’t appear that you read the article. Social science journals definitely are organizations, businesses and/or single entities that could change.


Journals just publish studies and things like that


Whether or not that's the case, it's obviously possible for loose-knit organizations to change. That's basically the role of philosophy, or more specifically, convincing people to do something different with argument.


> How would you implement changes?

It's pretty easy: you add requirements to accepting federal grant money.


It is an institution though. With very strict rules regarding the scientific method.


These rules are not as strict as one might think, especially when they’re tied up with people’s careers—consider the reproducibility crisis for instance.
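The mechanics of that crisis are easy to demonstrate: with a small true effect and a typical n = 30 sample, even a real finding mostly fails to reach significance again. Here's a toy Python simulation (the effect size, sample sizes, and crude z-test are illustrative assumptions, not from any particular study):

```python
import math
import random
import statistics

def study_significant(true_effect, n, rng):
    """Run one simulated study: n observations of a real effect of size
    `true_effect` (in standard-deviation units), tested with a crude
    two-sided z-test at p < .05. Toy numbers, not a real study design."""
    sample = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return abs(mean / se) > 1.96

rng = random.Random(0)
sims = 2000
for n in (30, 200):
    power = sum(study_significant(0.2, n, rng) for _ in range(sims)) / sims
    print(f"n={n}: chance of a significant result ≈ {power:.0%}")
```

With a true effect of 0.2 SD, the n = 30 study detects it only a small fraction of the time, while n = 200 detects it most of the time - so an exact replication of an underpowered study is expected to "fail" even when the effect is real.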


I am actually following a course right now, self-paid. It costs roughly USD 1000 and is run by a renowned university.

The lecturer just released an exam which is utterly unreadable. He is not a native English speaker, and there are several grammatical errors. The progression is bad, he introduces non-existent terms (like "innovation application"), and the whole thing is incoherent.

Obviously the teacher did his best, but I really think it raises the question of whether that is good enough.

People should not be doing things they are not competent at.


They ought to be, though not all people have the integrity.


No, there are absolutely no strict rules regarding the scientific method. At most, there are conventions about the structure of papers in some journals.


That regards form, which indeed is not part of the institution.


Fauci told us he was the science.



