Unfortunately we have some people who put their beliefs and politics above science and will actively act against investigation only because a line of inquiry could result in evidence against their view.
You say that's people putting beliefs and politics above science, but while that's a rational thing to suggest, there are two problems with it.
Firstly, a lot of issues don't and can't have any scientific rationale behind them. They're moral judgements. For example, if you want to go looking for scientific evidence for why the death penalty is a terrible idea you won't find any. You'll only find ethical and moral rhetoric about why killing innocent people by mistake is bad, or why the economics of keeping people in prison for decades when you could just kill them is irrational. Science has nothing to say. Science doesn't judge.
Secondly, people don't always believe evidence even if it's there. For example, if you take a random sample of teams and find the ones that make the best decisions are more diverse[1], some people will insist "positive discrimination" is a terrible idea and that a meritocratic system must be better, even in the face of the evidence, or they'll just argue that the evidence is plain wrong. How do you persuade those people to change their minds?
While there may be evidence that diversity leads to better decision making, this isn't it. It's a marketing piece for a company that sells a decision making co-working platform. Direct from the white paper:
"The study was able to measure when teams made better decisions by tracking how often the decision maker changed their mind based on the input of the team. This is presumed to be a better decision since the Cloverpop process ensures that decisions are well framed with clear goals, adequate information and multiple alternatives to avoid groupthink."
In other words, this study defines better decisions as ones which make their product relevant, which unless you're a Cloverpop marketing exec, probably isn't a good metric.
You know, I think science is a lot more capable of converting "is" into "ought" than it is commonly given credit for. I think there's room for a kind of "moral pragmatism". After all, our morals all came from somewhere - some type of reasoning drove their creation, be it conscious or evolutionary. There's a rock bottom, and science can expose contradictions between our core axiomatic beliefs, and our more shallow cached beliefs.
As an example, I'll run through a sample "scientifically pragmatic" argument against the death penalty: The death penalty is a terrible idea because it fails a cost-benefit analysis. The benefit is a deterrent against certain types of crime, and a cost saving compared to life imprisonment. These are measurably small (research has shown that the severity of a penalty has a nonlinear relationship with its deterrent effect). The cost is the violation of a moral imperative not to kill. A society with the death penalty has made a conscious decision to compromise a moral imperative. This weakens the authority of moral imperatives, particularly the one against killing. The purpose of such moral imperatives is to promote a more civil, less violent society, because such societies enjoy an evolutionary advantage, hence their evolution in the first place.
How the argument proceeds from this point depends on core axioms - whether one values happiness or survival more, for instance. But the point is that it's possible to logically break down what appear to be moral questions into purely pragmatic ones, in service of deeper axioms.
The problem with this isn't that it's ineffective - it's that it's complicated, error prone, and relies heavily on a correct accounting of second-order effects and beyond. Science is hard - you need research and facts. Moral judgement is easy - you just say what pops into your head. As such, people mistrust the very idea that morals could be calculable, because it removes their agency. If someone has a strong, irrational feeling that Billy Murderer has it coming and should fry for what he did to those kids, my above argument (fleshed out properly to appeal to their core values, provided that's possible) will not convince them that it's a bad idea in the long run, no matter how scientific or testable it is. So they say things like "science has nothing to say". In fact, it does - we just don't want to listen.
Is that a Newtonian moral imperative, a quantum moral imperative, or a relativistic moral imperative? Science can help optimise toward a pre-existing ought (the moral imperative not to kill, for example) but it has nothing to say about what should be a moral imperative in the first place.
Well, the point was that moral imperatives don't materialize out of nowhere. The question of why humans feel this way is a scientific one, not a moral one. So you can evaluate it, and its effects, in the context of whatever broader goal you're trying to achieve, be it a space-faring civilization, a happy civilization, or even just a surviving one. Most moral imperatives are cached thoughts, subordinate to larger imperatives. Most people can probably be convinced to kill, for example, if they think it's justified.
I generally support workplace diversity and mechanisms to promote it, but you're making it sound like it's settled science based on some PR piece by a consulting firm promoting a white paper that almost certainly wasn't rigorous enough to meet any reasonable standard - how do you properly control for confounding factors, and how do you define a "better" decision? That's miles away from being published in a peer-reviewed journal, which in isolation is also miles away from settled science. This is a microcosm of the political climate we're in - we're too eager to accept what we want to believe, while setting impossible standards for claims that we're disinclined to believe.
I don't really see your problem - and would counter with "we can't understand the world without data, but we also can't understand the world with data alone"[1]. With your firstly and secondly you are basically just making the normative-positive statement distinction: sure, you can't prove normative statements - that's the point! But you can learn more about them with positive statements, which you can falsify, verify, etc.
People not accepting evidence is not really a problem of science, but of science communication. But yeah, it's also becoming an increasingly severe problem, with examples such as climate change denial and anti-vaxxers on the rise.
I think Rosling would argue it differently. It's not a problem of "science communication", because there is no way to spin plain facts, hypotheses, and theory refinement in a way to make it automatically palatable to someone who doesn't want to believe it. (fwiw, I think Factfulness is the most valuable and important book I've read in at least a decade, and I recommend it to everyone.)
Rosling points out that smart, well-educated, well-meaning people who believe in science also see the world incorrectly - so incorrectly that they perform worse than random chance on multiple-choice questions. That's fascinating, and it suggests the problem is far more fundamental than "science communication". When facts are at odds with our instincts or cultural biases, we tend to choose the instincts and biases.
>people don't always believe evidence even if it's there.
I frankly admit some of the "facts" [0] known to me are wrong [1]. I know that for certain, as some of the "facts" are conflicting with each other, however that alone doesn't help with telling which is the wrong one, and which ones to base decisions & judgements off of. This causes various headaches; I end up resorting to fallible heuristics to try to sort out the good ones in time to make the necessary judgements and decisions. I actively try to gather more facts, hoping to improve decisive power in time.
However there's also a meta aspect: the trustworthiness of any given "fact" we learn. It's common to see people acting vigorously on information that's high impact but low trustworthiness. Another common sight is, as you say, people refusing to learn a new "fact" because it is in conflict with the other "facts" they already know, with little regard for whether the new one is more trustworthy.
Somewhere along the road we fail, or maybe even refuse, to associate the "facts" we know with how much trust we can put in them. This is a matter of handling and processing meta information, and frankly our current education and upbringing curricula don't seem to help much with it.
I hold it to be generally immoral to perform high impact acts based off of "facts" that are known with only low trustworthiness. And as you say, science helps us with obtaining an ever better set of facts.
--
[0] scare quotes to differentiate between information as it is known vs. idealized truthful facts
[1] either running counter to the idealized truthful facts, or imprecise enough to be misleading
Someone once told me that facts are political, that maybe we shouldn't evaluate all facts based on what's true but whether believing in them or not will create a better society and world to live in.
In many cases this will overlap, but in some it won't, and that means those facts that create a worse world if everyone believes them to be true should be considered as false no matter the actual truth.
My first feeling is that this might create severe trouble down the line at some point, but it might be less trouble than the alternative? An idea to ponder.
edit: The ideas in question touched worth of people, for example. We tie worth to things like earning power, intelligence and beauty. Changing how society views these things changes society. This is on the surface, but some aspects can go much deeper into who we are as a people, since we're storytellers.
The problem with moral judgements is that morality is not absolute. It always changes. That's why it's absurd to judge events that happened 100 years ago by current moral norms. In another 100 years there will be very different moral norms, by which many of the things we're doing now will be considered absolutely immoral.
> Firstly, a lot of issues don't and can't have any scientific rationale behind them. They're moral judgements. For example, if you want to go looking for scientific evidence for why the death penalty is a terrible idea you won't find any. You'll only find ethical and moral rhetoric about why killing innocent people by mistake is bad, or why the economics of keeping people in prison for decades when you could just kill them is irrational. Science has nothing to say. Science doesn't judge.
I wanted to talk more about your example of the death penalty.
However, if you do look into the death penalty, you'll find:
- in places that lock people up for life instead of killing them, sometimes people are later proved to be innocent and then released.
- in places where the penalty is death, a jury that would've sent a person to life in prison will often choose to release the person instead of kill them, as they aren't 100% certain and the consequence is irreversible.
I think science absolutely has some things to say about this. You could take my two statements as hypotheses and do tests to see if they are true. (This would be a lot like medical tests, but it wouldn't be ethical to have a test group and a control group; you could, however, create a regression model between two similar societies (or the same one at different times) and control for various differences, chief among them being 'uses death penalty' or not, and answer these questions ("prove"/"disprove" the hypotheses) with some confidence.)
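A minimal sketch of the kind of regression this describes, on purely synthetic data. All variable names, effect sizes, and the confounder (GDP per capita) are invented for illustration; the point is only that the coefficient on a 'uses death penalty' indicator, estimated alongside controls, is a testable quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical jurisdiction-year observations

# Invented variables for illustration:
#   death_penalty: 1 if the jurisdiction uses capital punishment
#   gdp: a confounder we want to control for
#   homicide: the outcome of interest
death_penalty = rng.integers(0, 2, n)
gdp = rng.normal(30, 5, n)

# Simulated ground truth: GDP matters, the death penalty does not.
homicide = 10 - 0.2 * gdp + 0.0 * death_penalty + rng.normal(0, 1, n)

# Ordinary least squares with an intercept: the coefficient on
# death_penalty estimates its effect *after* controlling for GDP.
X = np.column_stack([np.ones(n), death_penalty, gdp])
coef, *_ = np.linalg.lstsq(X, homicide, rcond=None)
print(coef)  # coef[1] (death penalty effect) should come out near 0
```

A real study would need far more controls and would still only support a conclusion "with some confidence", as the comment says, but the hypothesis itself is empirical, not moral.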
I agree that while you could do a study to see if people are happier and economies fare better in societies with capital punishment ["science"], the rationale and the outcome have nothing to do with science. It is a lot like the TED talk "Teaching kids real math with computers"[0], where the speaker explains that math has four steps:
1. Posing the right questions
2. Real world -> math formulation
3. Computation
4. Math formulation -> real world verification
If step 3 was 'Do Science', then it becomes obvious that the other steps lie outside the domain of science, but it does not become obvious that we can't use the tools of science to reason about problems that people disagree on, including moral quandaries.
> if you take a random sample of teams and find the ones that make the best decisions are more diverse
The Forbes article that you cited doesn't include the term "best decisions". The term they use is "better business decisions". Business decisions often relate to optimizing products for the lowest common denominator among consumers in a particular market. In this context, the benefit of having a diverse team is unsurprising.
Do you know of any research of which the results indicate that diverse teams of mathematicians or physicists make better decisions?
In a capitalist society, I will see in a negative light any statement or action that negatively impacts my livelihood. I don’t care if the most vocal activists think that it’s “right” in an absolute sense to enact a policy of reverse discrimination. Anything that detracts from my ability to provide for my family is bullshit.
Just to clarify, you're saying that being on a team that isn't diverse and consequently makes provably worse decisions means you're likely to earn less than the amount you'd earn on a diverse team making better decisions, so you're absolutely in favor of maximizing your earnings by enforcing diversity through positive discrimination, right?
The studies that found diverse teams are more effective often use some fishy methodology. In particular, one of the more widely circulated studies reaches its conclusion by making diverse and non-diverse teams plan a wedding using dances, rituals, and food from two or more cultures. A lot of these studies start with a conclusion determined a priori and then design an experiment to support it.
Capitalism has nothing to do with it. Whether it's social security recipients or public school teachers, people whose income is determined by the government are as defensive of their livelihoods as people whose pay is set by market forces, perhaps even more so.
Capitalism has everything to do with it: a commercial entity fires people on a whim, whereas public school teachers basically cannot be fired. The NYC public school system is full of underperforming teachers who shouldn't be employed to begin with. Their pay and conditions of employment are determined via collective bargaining, so they're protected from any conceivable equivalent of a PIP or other "adverse action" by the employer.
Likewise, hiring practices in the private sector change like the weather.
[1] https://www.forbes.com/sites/eriklarson/2017/09/21/new-resea...