For screening, it depends on the false positive rate: a radiologist will have to check every positive prediction. That said, I believe AI has already been approved in Europe for use as a second reader.
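To see why the false positive rate dominates in screening, here's a quick back-of-the-envelope calculation with made-up but plausible numbers (90% sensitivity, 5% false positive rate, 0.5% disease prevalence):

```python
# Hypothetical screening model; all numbers are illustrative assumptions.
sensitivity = 0.90
false_positive_rate = 0.05
prevalence = 0.005

true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)

# Positive predictive value: what fraction of flagged scans actually have disease.
ppv = true_positives / (true_positives + false_positives)
print(f"PPV: {ppv:.1%}")  # roughly 8% -- over 90% of flagged scans are false alarms
```

At low prevalence, even a seemingly small false positive rate means the vast majority of positive predictions are false alarms, and every one of them lands on a radiologist's worklist.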
The article is simplified (a retrospective meta-study) and might not be indicative of real-life performance. Even reader studies (which would be more rigorous) skip much of what is crucial to actual deployment, integration into the clinical workflow being one such critical factor.
Clinical AI (which is currently regulated as a CAD medical device by the FDA) won't replace radiologists; it will be treated as an additional clinical vendor application integrated into existing software, similar to the speech recognition dictation that Nuance has provided for decades.
It's not an issue of resolution but of generalizability. Populations and scanners shift over time, and the biggest issue in clinical AI is the changing data distribution, such as data acquired at different times at different institutions. Medical devices (which AI software is considered to be) are also more regulated than self-driving cars.
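A minimal sketch of what monitoring for that kind of distribution shift can look like, using synthetic data and a simple mean-shift statistic (the feature, threshold, and numbers are all assumptions for illustration, not a real monitoring pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar feature (e.g., mean scan intensity) from the training
# institution vs. a new scanner/site where calibration differs.
train = rng.normal(loc=100.0, scale=10.0, size=5000)
new_site = rng.normal(loc=108.0, scale=10.0, size=5000)

def drift_score(reference, incoming):
    """Shift of the incoming mean, in units of the reference std."""
    return abs(incoming.mean() - reference.mean()) / reference.std()

score = drift_score(train, new_site)
print(f"drift score: {score:.2f}")
if score > 0.5:  # arbitrary illustrative threshold
    print("distribution shift detected -- flag for review")
```

Real deployments use more robust two-sample tests and monitor many features, but the point stands: a model validated on one site's data can silently degrade when the input distribution moves under it.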
Also, publications are not what determines whether AI gets deployed in clinical practice. That's the job of the FDA and the millions of dollars spent on validation, such as clinical trials and quality management systems.