
> DNA doesn't strike me as a good example. I don't think it was a paradigm shift.

> I don't know that I have, really. Sure, some things have changed. But I'm still writing OO code that isn't that different than what I was writing in the late 1980s.

Hmm, okay, I think the issue we're hitting here is something like "no true paradigm shift" -- I would have thought that the introduction of, say, the world wide web in the 1990s would count as a paradigm shift with respect to technology. Perhaps it is incremental? That you have experienced no paradigm shifts working in tech since the 1980s (or at least none you have adopted) seems like a surprising claim.

> Crick and Watson didn't discover it; they just showed how this particular molecule fit well into people's expectations for what was going on.

With respect to Watson and Crick, I have to admit I have only surface knowledge of the history of science here. I can say that googling for "Watson Crick discovery" does turn up a bunch of pages discussing a discovery, many of which seem to treat it as a paradigm shift.

> The notion is that they change more slowly than a completely rational actor would, especially when social status is on the line. The actual speed depends on a variety of factors. Planck exaggerated for rhetorical effect.

I agree with this. Inferential differences in humans have been experimentally demonstrated in the cognitive-biases literature (psychology, not sociology).

> I think the question with science is, "Is it essentially different than almost anything else people do?" And I think the answer there is no.

This is a place where we disagree, then, although you may (perhaps rightly) come back and claim I'm taking a "no true scientist" position. To me the remarkable thing about science is how radically it differs from normal human cognition. The desire to submit ideas to falsification, and to discard them in the face of data, is not a very natural one for humans, at least judging by history.

> it's still a social enterprise among people embedded in status-driven primate dominance hierarchies.

And here's the bit where you can claim I'm no-true-scientisting: I think much of good science is about subverting the status hierarchy. This is why you're linking material on electron charge (which requires a stunning amount of agreement on physics to be of interest). If science and scientists behaved like the rest of society, it seems to me we'd still be debating whether atoms exist. For another, more concrete difference: willfully falsifying results isn't always grounds for dismissal in other professions (it mainly depends on whom you falsified them to). That's not true for science.

Which is to say I agree with you broadly ("yes, science is done by scientists who live in a social hierarchy"), but I disagree that this is a particularly useful insight -- if you had tremendous amounts of experience with other professions operating in social hierarchies, your predictions about scientists would be poor.

> [link] That's easily explained if you treat science as another human social activity, but hard to explain otherwise.

"A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule." - https://doingbayesiandataanalysis.blogspot.com/2011/07/horse...

Which is to say I don't think the principal feature of that story is that scientists didn't want to embarrass themselves or others, but rather that there was a real possibility that their experimental apparatus was faulty. The Feynman quote you linked doesn't even suggest they were doing it for status reasons, but rather something akin to the https://en.wikipedia.org/wiki/Streetlight_effect
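To make the Kruschke joke concrete, here's a toy Bayesian update (every number below is invented purely for illustration): a strong prior on "horse" plus donkey-ish evidence can leave "mule" as the most probable compromise hypothesis.

```python
# Toy Bayes update for the horse/donkey/mule joke. All numbers are
# made up for illustration; nothing here is calibrated to anything.
hypotheses = ["horse", "donkey", "mule"]
prior      = {"horse": 0.70, "donkey": 0.05, "mule": 0.25}  # "vaguely expecting a horse"
# Likelihood that the glimpsed animal looks donkey-ish, under each hypothesis:
likelihood = {"horse": 0.05, "donkey": 0.90, "mule": 0.60}

unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
z = sum(unnorm.values())
posterior = {h: unnorm[h] / z for h in hypotheses}
print(max(posterior, key=posterior.get))  # the compromise hypothesis, "mule", wins
```

The same structure applies to the oil-drop story: a strong prior that Millikan was about right, plus noisy apparatus, rationally pulls each new estimate only partway toward the data.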



As to the web, I would hesitate to call it a paradigm shift in technology. For me, a paradigm shift requires going from an existing dominant paradigm (that is, overarching conceptual framework) to a new dominant paradigm. E.g., the Copernican Revolution.

The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. It created new possibilities, but I'd call it a new frontier. The day before, we were writing daemons to output text over a network socket; we did the same thing the day after. Likewise, we were showing people text and images and getting them to enter data in forms. Indeed, I think the rapid spread of the web was only possible because it wasn't a paradigm shift.

I'd say a better example of a tech paradigm shift would be from mainframes to personal computers. Or from physical servers to whatever thing comes after virtual servers. Or from isolated computers to networked computers.

As to Crick and Watson, they did discover something, but I don't think some people on the Internet saying it's a paradigm shift means that it meets Kuhn's criteria for a paradigm shift.

I agree that the scientific method is important and valuable, but disagree that the social enterprise of science is therefore essentially different. Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

As an example, look at the story of Barry Marshall. Sure, he eventually got the Nobel. But he endured enormous resistance because his opinions did not accord with those of the people with power in his field. There is no way to estimate the number of people we've never heard of because they were not as stubborn as Marshall, but I'm sure it's not zero.

Finally, I disagree on your interpretation of Feynman's story. Humans are, like all their cousin species, intensely status-focused. They published wrong numbers because they didn't want to be wrong in public. "Wrong" here being defined not by actual factual correctness, but by social conformance. They were looking under a streetlight, but they all picked the same streetlight through a social process, not one imposed by the scientific method.

We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.


> The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. [...] The day before, we were writing daemons to output text over a network socket; we did the same thing the day after.

Again, I think we have a case of "no true paradigm shift" on our hands. My inner you says the same thing about the newspaper business, actually: "the day before we were writing news stories and selling ads; we did the same thing the day after." Of course, in the technology case the kinds of software we were writing drastically shifted, which algorithms were important shifted, etc., but that's also analogously true for news: the beats and topics changed in value.

I'll go so far as to call this the central problem of sociology: the terms are vague enough that theories built on them can be bent to explain anything. Medicine has this problem too, in that tons of diseases are essentially catch-alls for unexplained pain, discomfort, or inflammation. Chemistry, Physics, or Math, not so much. Psychology sometimes. I do think sociology has some value, but not more value than, say, science fiction or poetry (which also allow readers to think personally interesting thoughts that work with personal/non-objective categories).

^ for the record that's not an original idea. The https://en.wikipedia.org/wiki/Logical_positivism folk beat me to it.

> Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

To be concrete, consider the "medicine" offered by https://en.wikipedia.org/wiki/Traditional_Chinese_medicine and then compare it with actual medicine. If you'd like to call that a difference in degree, that's your right, I suppose... but from an external-results perspective they look to me like a difference in kind. Maybe that's splitting hairs? Is a permanent marker different from a whiteboard marker by degree or by kind? I dunno; I guess maybe I see the glass as 90% full rather than 10% empty here.

I do know that a dataset on the history of TCM would perform extremely poorly at predicting the progress that Chemistry has enjoyed... a dataset on the history of Physics, however, would do rather well. That seems like a difference of kind to me. Certainly a difference of category from an ML clustering perspective.
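As a sketch of what I mean by the clustering claim (all the "history" series below are invented, and the growth-rate feature plus tiny 1-D 2-means are just one crude choice of method): fields whose cumulative progress compounds separate cleanly from fields whose progress stays flat.

```python
# Toy illustration: cluster "fields" by the shape of invented
# cumulative-progress curves. The data is fabricated for the sketch;
# only the clustering mechanics are real.

def growth_rate(series):
    # Crude feature: average progress per step over the second half.
    half = len(series) // 2
    return (series[-1] - series[half]) / (len(series) - half)

def two_means(values, iters=20):
    """Tiny 1-D 2-means: returns a 0/1 label for each value."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for k in (0, 1):
            members = [v for v, lab in zip(values, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels

fields = {
    "physics":   [1, 2, 4, 8, 16, 32, 64],  # compounding (invented)
    "chemistry": [1, 2, 3, 6, 12, 25, 50],  # compounding (invented)
    "tcm":       [1, 1, 2, 2, 2, 3, 3],     # flat (invented)
}
labels = two_means([growth_rate(s) for s in fields.values()])
print(dict(zip(fields, labels)))  # physics and chemistry share a label; tcm gets the other
```

The point isn't the toy numbers; it's that no reasonable featurization puts the two kinds of trajectory in the same cluster.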

> I disagree on your interpretation of Feynman's story. They published wrong numbers because they didn't want to be wrong in public.

As I'm sure you know from the link, Feynman disagrees with you here:

> When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard.

Fortunately for us, your claim can actually be tested: we just need to see whether scientists care about being wrong when the error will never be public. They do. As evidence I #include Newton's massive catalog of unpublished works, and Feynman's book title "The Pleasure of Finding Things Out". Much of science is an intensely single-player puzzle game. You play because it's fun, not to show off your high score.

I do agree that no one enjoys being wrong in public -- we spellcheck our work for others, not for ourselves, etc. I just think the effect size is far smaller here than in most areas of human endeavor (somewhat elevated, but on the order of the status effects visible in crossword-puzzle-solving behavior). I think Feynman believes this too, as evidenced by the above quote and his autobiography more generally.

> We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.

I do have a category of "science pretenders" that I use to explain the essentially non-scientists to whom someone has accidentally given tenure. In my experience and opinion, if you just ignore these people you can still do just fine at predicting the future of science. They don't publish anything interesting.

This is what I meant when I earlier freely admitted to being guilty of playing "no true scientist" -- if some university, say, adds a fashion-sciences department, that won't make the people employed there scientists, or fashion sciences a science. What matters is individuals applying scientific thinking (i.e., puzzle-solving-type thought). Because it all depends on small groups of puzzle solvers, that's where the modeling data is. Contrast this with non-sciences, where the personal tastes of big players need to be modeled, or, worse yet, other people's guesses about what the big players' good taste is.

As you move to softer sciences I suppose I can see an argument for needing to study primate behavior more; I just don't see it for the hard sciences, at least if you've selected a good group to work with. Hmm... maybe not needing large grants to do your research is also important there. So essentially I think it's important "for non-scientific purposes related to science".


Ok. I don't think it's my job to argue you into understanding Kuhnian paradigms as distinct from other uses of the word. I also believe you continue to misunderstand what's going on with both the oil-drop experiment and with Feynman's take on it, but I don't see more words from me helping there either. I'm happy to agree to disagree.


I will say that sociologists are better than most at making model disagreements sound like the other person's fault. Perhaps my understanding of a Kuhnian paradigm shift is wrong? It doesn't feel wrong, and I read a fair deal on it back in my early college days.

I never got the chance to talk with Kuhn himself, but I'd imagine if we could bring him back and include him here we'd have a third notion of what he was talking about -- and all the models would be consistent with the text he wrote -- not a bad thing in my view, rather like differing opinions on a poem's meaning.

Take care and thanks for chatting.



