Replication is a huge issue, but I've been wondering lately about refusal to publish.
Even if all of the published papers were peer reviewed and replicated, there's a lot of science that never sees the light of day.
Even non-results are important, if not glamorous, and (even worse) publishing is disincentivised when the results go against what the funders wanted or expected.
The replication crisis cannot be overstated, though, because "science" that cannot be replicated isn't science at all. The very base definition of science is that it's a method that produces testable and predictable propositions about our world. If a theory or paper does not deliver that, it's guesswork, superstition, or in extreme cases religious belief. But it is certainly not science, no matter how many people or institutions refer to it as such, just as the Democratic People's Republic of Korea isn't actually democratic.
Science is about the process, not the results. Somebody discovers a phenomenon, studies it, fails to take something relevant into account, and publishes a faulty result, and that's science. Somebody else tries to replicate the study, fails, and can't figure out what went wrong, and that's science. Then somebody discovers the flaw, gets a different result, and publishes it, and that's science.
When there is no reliable review mechanism, the process is inherently flawed. It may be science in your book, but by that definition, everything anyone studies and publishes is science. Homeopathy is science by that definition. Flat-earth studies too.
There needs to be proper scrutiny to ensure that at least minimal quality standards are followed, or else the whole thing is bunk. A study that cannot be replicated by anyone is not research; it's monkeys hitting keys without understanding what they're doing. Sadly, monkeys with PhDs in some cases, but that doesn't change anything.
The distinguishing feature of what "real science" is in the eye of the believer seems to be someone holding an academic title, i.e. an argument from authority, not verifiable standards that exclude randomness. There is nothing scientific about belief.
Mechanisms and rules are the problem, not the solution. The reason we got the reproducibility crisis in the first place is that too many people believe in rules. They believe that if you discover something by following the accepted rules of science, you have a scientific result.
But when you have a system, you always have people gaming it. People who believe it's OK to bend the rules a bit. People who believe it's OK to break them, as long as you don't get caught. And if you don't get caught, people who believe in the rules are inclined to accept your results, since you appeared to follow them.
By changing the rules, you deal with specific symptoms, but the underlying problem remains. The people gaming the system always win in the end.
And sometimes the rules are simply insufficient.
When science works, it works because people go beyond the rules. It works because people are skeptical of their own ideas and willing to show their best efforts to test them. You can't cultivate attitudes like that with rules and regulations.
My impression is that in social science, this happens when the results are "wrong" — which may compound the replication crisis if only the "right" results get published and "right" was actually wrong.
Even in the "hard" sciences, this bias can show up. In biology, for instance, it can be a big problem. For example, the Columbia professor who studied rapid-onset gender dysphoria and its apparent social contagion was told to retract her paper on the matter.