"Research" is not some monolithic single concept. One might also ask, "How can research produce ChatGPT when for decades research failed to produce ChatGPT?"
Exactly. So why are we now trusting 'Research' that tries to predict the future of other 'Research'? The linked article is just some estimates of the error built into current LLM models.
How can we extrapolate from that to "well, gosh darn, these LLMs are already played out, guess we're all done"?