
Physics theories are of course true (or not) regardless. But their "truth" (that is, the set of claims a community broadly agrees upon as true) definitely does vary as people die. Relativity has always been true, but it only gradually became "true" over the first half of the 20th century. Luminiferous aether theory has always been false, but it was "true" for hundreds of years before that.

And science really does work that way, which is exactly Planck's point. And Kuhn's, of course. It's a human enterprise, an essentially social one.

I also think that you're overfocused on Kuhn's exact words. He was being modestly hyperbolic. People are modestly capable of relearning, but the ability declines with age, and they're better at marginal learning than at foundational change.

Technology is not a counterexample. I've been coding for 30+ years now, and my dad started coding 50 years ago. I work very hard to keep up, but it's easier for someone new because they don't have to unlearn anything. They don't have to reconcile new data with a vast amount of old data.

A lot of technological progress happens because our field has been continuously expanding for decades, providing a flood of new people who seize upon the latest trends. And we work in a commercial context that heavily rewards innovation. Most major tech companies were founded by people who were young. There's a reason for that.



I think relativity is a red herring in this discussion because it was so famously hard to prove. That a theory which took on the order of a generation to satisfactorily prove also required a generation for mass acceptance isn't, in my opinion, evidence for Kuhn's hypothesis, or particularly noteworthy. If you look at the different (though also foundational) example of Watson and Crick's discovery of DNA, you won't see skepticism from the old guard but rather excitement. This is because the discovery, though revolutionary, was easily proved -- every cell has DNA, as anyone can verify once they are told how to look for it.

More broadly, I think you will find that for every relativity-like theory that was slow on the uptake (which is to say, difficult to prove), there were also 10-100 promising theories that were discarded... and that the very real risk that a theory could be wrong is the principal reason for the eventually-winning theories' slow uptake among scientists.

(Incidentally, while double-checking this critique and my DNA example, I found out it's one of the more common critiques of Kuhn's work. See http://plato.stanford.edu/entries/thomas-kuhn/#6.1)

With that out of the way, let's take a closer look at your claim that technological progress isn't a counterexample. Your point that the expansion of new people into tech should count as new generations is well taken, and I think a good and interesting one... but you also admit you yourself have changed paradigms in your lifetime. Does that not count as you putting yourself forward as a counterexample, and agreeing with respect to tech more generally?

I do agree that as I've gotten older I grumble a bit more when I have to learn a new way of thinking about something I'm already familiar with... but when it can be shown concretely that the new way is better (for example, the results from deep learning) I do spend the time to relearn. This reticence seems more than enough to account for the data Kuhn is using, so I don't see why a fancier hypothesis involving me (and more broadly everyone) secretly refusing to give up on lesser ideas is needed.


DNA doesn't strike me as a good example. I don't think it was a paradigm shift. Crick and Watson didn't discover it; they just showed how this particular molecule fit well into people's expectations for what was going on.

As to this:

> you also admit you yourself have changed paradigms in your lifetime

I don't know that I have, really. Sure, some things have changed. But I'm still writing OO code that isn't that different than what I was writing in the late 1980s. I still build systems on Unix-ish OSes on collections of discrete servers. The major difference is that the servers are virtual, but that's hardly a difference.

The phrase "virtual server" is a sign that, as an industry, we're still struggling to make a paradigm shift. It's like "radio with pictures" or "horseless carriage". But look at how much hate the possible alternatives, like containerization or serverless computing, get. That pattern of hate is a common thing in technology: a large proportion of people just won't use anything new unless circumstances force them. [1]

> Does that not count as you putting yourself forward as a counterexample, and agreeing with respect to tech more generally?

No, because nobody is claiming that people never change. The notion is that they change more slowly than a completely rational actor would, especially when social status is on the line. The actual speed depends on a variety of factors. Planck exaggerated for rhetorical effect.

> so I don't see why a fancier hypothesis involving me (and more broadly everyone) secretly refusing to give up on lesser ideas is needed.

I don't think that's the right question to look at.

The pattern of people holding on to old ideas because they're comfortable or socially beneficial is pervasive. For example, consider this graph:

http://content.gallup.com/origin/gallupinc/GallupSpaces/Prod...

The change there is very close to the death rate. Or look at the way religions change.

I think the question with science is, "Is it essentially different than almost anything else people do?" And I think the answer there is no. Science is somewhat better due to having real data. But it's still a social enterprise among people embedded in status-driven primate dominance hierarchies. This leads to results like the issues surrounding the measurement of the charge of the electron:

https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan.2...

That's easily explained if you treat science as another human social activity, but hard to explain otherwise.

[1] https://en.wikipedia.org/wiki/Technology_adoption_life_cycle


> DNA doesn't strike me as a good example. I don't think it was a paradigm shift.

> I don't know that I have, really. Sure, some things have changed. But I'm still writing OO code that isn't that different than what I was writing in the late 1980s.

Hmm, okay, I think the issue we're hitting here is something like "no true paradigm shift" -- I would have thought that the introduction of, say, the world wide web in the 1990s would count as a paradigm shift with respect to technology. Perhaps it is incremental? That you have experienced no paradigm shifts working in tech since the 1980s (or at least none you have adopted) seems like a surprising claim.

> Crick and Watson didn't discover it; they just showed how this particular molecule fit well into people's expectations for what was going on.

With respect to Watson and Crick I have to admit I only have surface knowledge of the history of science here. I can say that googling for "Watson Crick discovery" does show a bunch of pages discussing a discovery, many of which seem to think of it as a paradigm shift.

> The notion is that they change more slowly than a completely rational actor would, especially when social status is on the line. The actual speed depends on a variety of factors. Planck exaggerated for rhetorical effect.

I agree with this. That human inference falls short of the rational ideal has been experimentally demonstrated in the cognitive biases literature (psychology, not sociology).

> I think the question with science is, "Is it essentially different than almost anything else people do?" And I think the answer there is no.

This is a place where we disagree, then, although you may (perhaps rightly) come back and claim I'm taking a "no true scientist" position. To me the remarkable thing about science is how radically it differs from normal human cognition. The desire to submit ideas to falsification, and to discard them in the face of data, is not a very natural one for humans, at least judging by history.

> it's still a social enterprise among people embedded in status-driven primate dominance hierarchies.

And here's the bit where you can claim I'm no-true-scientisting: I think much of good science is about subverting the status hierarchy. This is why you can link material on electron charge at all (it requires a stunning amount of agreement on physics to even be of interest). If science and scientists behaved like the rest of society, it seems to me we'd still be debating whether atoms exist. For another, more concrete difference: willfully falsifying results isn't always grounds for dismissal in other professions (it mainly depends on whom you falsified them to). That's not true in science.

Which is to say I agree with you broadly ("yes, science is done by scientists who live in a social hierarchy"), but I disagree that this is a particularly useful insight -- if all you had were tremendous amounts of experience with other professions operating in social hierarchies, your predictions about scientists would be poor.

> [link] That's easily explained if you treat science as another human social activity, but hard to explain otherwise.

"A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule." - https://doingbayesiandataanalysis.blogspot.com/2011/07/horse...

Which is to say I don't think the principal feature of that story is that scientists didn't want to embarrass themselves or others, but rather that there was a real possibility their experimental apparatus was faulty. Even the Feynman quote you linked doesn't attribute it to status seeking, but to something akin to the https://en.wikipedia.org/wiki/Streetlight_effect


As to the web, I would hesitate to call it a paradigm shift in technology. For me, a paradigm shift requires going from an existing dominant paradigm (that is, overarching conceptual framework) to a new dominant paradigm. E.g., the Copernican Revolution.

The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. It created new possibilities, but I'd call it a new frontier. The day before, we were writing daemons to output text over a network socket; we did the same thing the day after. Likewise, we were showing people text and images and getting them to enter data in forms. Indeed, I think the rapid spread of the web was only possible because it wasn't a paradigm shift.
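
To make concrete how little changed under the hood, here's a minimal sketch of the kind of daemon I mean (modern Python, purely illustrative; the address, port, and payload are made up):

    import socket

    # Purely illustrative: accept a connection, write some text, hang up.
    # Swap the payload format and this same loop is a finger server, a
    # gopher server, or a tiny HTTP server. The loop itself never changed.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))  # made-up address and port
    srv.listen(1)
    while True:
        conn, _addr = srv.accept()
        conn.recv(1024)  # read and ignore whatever request text arrives
        conn.sendall(b"HTTP/1.0 200 OK\r\n"
                     b"Content-Type: text/plain\r\n\r\n"
                     b"hello\r\n")
        conn.close()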

I'd say a better example of a tech paradigm shift would be from mainframes to personal computers. Or from physical servers to whatever thing comes after virtual servers. Or from isolated computers to networked computers.

As to Crick and Watson, they did discover something, but some people on the Internet calling it a paradigm shift doesn't mean it meets Kuhn's criteria for one.

I agree that the scientific method is important and valuable, but disagree that the social enterprise of science is therefore essentially different. Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

As an example, look at the story of Barry Marshall. Sure, he eventually got the Nobel. But he endured enormous resistance because his opinions did not accord with those of the people with power in his field. There is no way to count the people we've never heard of because they were not as stubborn as Marshall, but I'm sure the number isn't zero.

Finally, I disagree with your interpretation of Feynman's story. Humans are, like all their cousin species, intensely status-focused. They published wrong numbers because they didn't want to be wrong in public -- "wrong" here being defined not by actual factual correctness, but by social conformance. They were looking under a streetlight, but they all picked the same streetlight through a social process, not one imposed by the scientific method.

We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.


> The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. [...] The day before, we were writing daemons to output text over a network socket; we did the same thing the day after.

Again, I think we have a case of "no true paradigm shift" on our hands. My inner you says the same thing about the newspaper business, actually: "the day before we were writing news stories and selling ads; we did the same thing the day after." Of course, in the technology case the kinds of software we were writing drastically shifted, which algorithms mattered shifted, etc... but that's analogously true for news: the beats and topics changed in value.

I'll go so far as to call this the central problem of sociology: the terms are vague enough that theories built on them can be bent to explain anything. Medicine has this problem too, in that tons of diseases are essentially catch-alls for unexplained pain, discomfort, or inflammation. Chemistry, physics, or math, not so much. Psychology sometimes. I do think sociology has some value, but not more value than, say, science fiction or poetry (which also allow readers to think personally interesting thoughts that work with personal/non-objective categories).

^ for the record that's not an original idea. The https://en.wikipedia.org/wiki/Logical_positivism folk beat me to it.

> Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

To be concrete, consider the "medicine" offered by https://en.wikipedia.org/wiki/Traditional_Chinese_medicine and then compare it with actual medicine. If you'd like to call that a difference in degree, that's your right, I suppose... but from an external results perspective they look to me like a difference in kind. Maybe that's splitting hairs? Is a permanent marker different from a whiteboard marker in degree or in kind? I dunno, I guess maybe I see the glass as 90% full rather than 10% empty here.

I do know that a model trained on a dataset of the history of TCM would perform extremely poorly at predicting the progress that Chemistry has enjoyed, while one trained on the history of Physics would do rather well. That seems like a difference of kind to me -- absolutely a difference of category from an ML clustering perspective.
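
To illustrate what I mean by that, here's a toy sketch in Python -- every feature value below is invented purely for illustration, not real data:

    # Toy sketch: score each tradition on two crude, invented features,
    # e.g. (fraction of claims dropped when falsified, predictive hits
    # per decade), then cluster. The groups separate cleanly: a
    # difference of kind, not of degree.
    import numpy as np
    from sklearn.cluster import KMeans

    fields = ["physics", "chemistry", "TCM"]
    X = np.array([[0.90, 0.80],   # invented scores
                  [0.85, 0.75],
                  [0.10, 0.05]])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(dict(zip(fields, labels.tolist())))
    # physics and chemistry land in one cluster, TCM in the other,
    # for any reasonable choice of features along these lines.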

> I disagree with your interpretation of Feynman's story. They published wrong numbers because they didn't want to be wrong in public.

As I'm sure you know from the link, Feynman disagrees with you here:

> When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard

Fortunately for us, your claim can actually be tested: we just need to see whether scientists care about being wrong when the error will never be public. They do. As evidence I #include Newton's massive catalog of unpublished works, and Feynman's book title "The Pleasure of Finding Things Out". Much of science is an intensely single-player puzzle game. You play because it's fun, not to show off your high score.

I do agree that no one enjoys being wrong in public -- we spellcheck our work for others, not for ourselves, etc. I just think the effect size is waaaaay smaller here than in most areas of human endeavor (somewhat higher than, but on the order of, the status effects you can see in crossword-puzzle-solving behavior). I think Feynman also believes this, as evidenced by the quote above, and his autobiography more generally.

> We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.

I do have a category of "science pretenders" that I use for the essentially non-scientists someone has accidentally given tenure to. In my experience and opinion, if you just ignore these people you can still do just fine at predicting the future of science. They don't publish anything interesting.

This is what I meant when I earlier freely admitted to playing "no true scientist" -- if some university, say, adds a fashion sciences department, that won't make the people employed there scientists, or fashion sciences a science. It will still come down to individuals applying scientific thinking (i.e., puzzle-solving-type thought). Because it all depends on small groups of puzzle solvers, that's where the modeling data is. Contrast that with the non-sciences, where the personal tastes of the big players need to be modeled, or, worse yet, other people's guesses as to what the big players' tastes are.

As you move to the softer sciences I suppose I can see an argument for needing to study primate behavior more; I just don't see it for the hard sciences, at least if you've selected a good group to work with. Hmm... maybe not needing large grants to do your research is important there too. So essentially I think it's important "for non-scientific purposes related to science".


Ok. I don't think it's my job to argue you into understanding Kuhnian paradigms as distinct from other uses of the word. I also believe you continue to misunderstand what's going on with both the oil-drop experiment and Feynman's take on it, but I don't see more words from me helping there either. I'm happy to agree to disagree.


I will say that sociologists are better than most at making model disagreements sound like the other person's fault. Perhaps my understanding of a Kuhnian paradigm shift is wrong? It doesn't feel wrong, and I read a fair deal on it in my early college days.

I never got the chance to talk with Kuhn himself, but I'd imagine that if we could bring him back and include him here, we'd have a third notion of what he was talking about -- and all the models would be consistent with the text he wrote. That's not a bad thing in my view; it's rather like differing opinions on a poem's meaning.

Take care and thanks for chatting.



