It's quite the opposite: I've never read and re-read code as much as I do today. The new hires generate 50 times more code than they used to, and you _have_ to check it or you get compounding production issues (been there, done that). And the errors can now be anywhere. Before, you more or less knew what the person writing the code was thinking and could understand why certain mistakes were made; LLM errors can hide _anywhere_, so you have to check it all.
Isn't that a losing proposition? Or do you get 50 times the value out of it too? In my experience, the more verbose the code is, the less thought-out it is. Lots of changes? Cool, now go polish some more and come back when the change is below 100 lines, excluding tests and docs. I don't dare touch it before that.
I agree, but I'm shouting at the cloud. Stuff needs to get done and it seems to work at first, so either I just abandon quality and let things rot, or I read everything and call out every code smell, every time.
I too use AI, but mostly to generate scripts (the most useful use of AI is 100-200 line scripts imho), test _cases_ (I write the test itself; the data inside is generated, see the sketch below) and HTML/CSS/JS shenanigans (the logic I code myself; at presentation I'm inferior to any random guy on the internet, so I might as well use an AI). I also use it for stuff that never ends up in a repository: exploration, proofs of concept and out-of-scope tests (I like to understand how stuff works, and that helps), or to summarize PowerPoint presentations so I can do actual work during 60-person "meetings" and still get the point.
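To make the test _cases_ point concrete, here's a minimal sketch of the pattern (Python/pytest, with a made-up `normalize_phone` function purely for illustration): the test body is written by hand, and only the rows of the data table are what I'd let the model generate.

```python
import pytest

# Hypothetical function under test -- stands in for whatever is actually being tested.
def normalize_phone(raw: str) -> str:
    """Keep only digits, preserving a leading '+'."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return ("+" + digits) if raw.strip().startswith("+") else digits

# The test itself is hand-written; only these rows (edge cases, odd separators,
# extra whitespace, etc.) are the part I'd have an LLM generate.
CASES = [
    ("+1 (555) 010-4477",   "+15550104477"),
    ("555.010.4477",        "5550104477"),
    ("  +33 6 01 02 03 04", "+33601020304"),
    ("no digits here",      ""),
]

@pytest.mark.parametrize("raw,expected", CASES)
def test_normalize_phone(raw, expected):
    assert normalize_phone(raw) == expected
```

That way the logic of the test stays mine, and reviewing the generated part is just eyeballing a data table.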