I heard Schmidt talking about the book on podcasts, and from what I understand he bases his view on the success of NLP models like GPT-*. I think this is sorely misguided: nothing suggests that these models, which can generate readable text, are intelligent in any form or have any kind of agency. Nothing in the structure of a transformer indicates a capacity for such things. GPTs seem to be more like a "central pattern generator" for language, akin to the neural circuits that make rhythmic patterns like walking possible. The arguments weren't convincing and, imho, lacked insight beyond cheap fearmongering.
Are you saying that the issue Schmidt notes is the danger of systems that "may approach intelligence"? Because it seemed to me that the issue remains the danger of systems that do not "approach intelligence" to a decent and reliable degree.