I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while they are sipping a latte in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite useful for some of the trivial but time-consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your Android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm halfway there after 2-3 hours.
I think we really need to have a serious think about what "good quality" means in the age of coding agents. A lot of the effort we put into maintaining quality has to do with maintainability, readability etc. But is that relevant if the code isn't for humans? What is good for a human is not necessarily what is good for an AI (not to say there is no overlap). I think there are clearly measurable things we can agree still apply around bugs, security etc., but I think there are also going to be some things we need to just let go of.
The implication of your statement, it seems to me, is: "you'll never have to directly care about it yourself, so why do you care about it?" Unless you were talking about the codebase in a user-application relationship, in which case feel free to ignore the rest of my post.
I don't believe that the code will become an implementation detail, ever. When all you do is ship an MVP to demonstrate what you're building, then no one cares, before or after LLM assistance. But any codebase that lives more than a year and serves real users while generating revenue deserves to have engineers who know what's happening beyond authoring markdown instructions to multiple agents.
Your claim seems to push us towards a territory where externalizing our thought processes to a third party is the best possible outcome for all parties, because the models will only get better and stay just as affordable.
I will respond to that by pointing out that models which are ultimately flawless at code generation will be worth a fortune in terms of the value they add, and any corporation that wins the arms race would actually be killing itself by not raising the cost of access to its services by a metric ton. This is because there will be few LLM providers actually worth using by then, and because oligopoly is a thing.
So no. I don't expect that we'll ever reach a point where the average person will be "speaking forth" software the same way they post on Reddit, without paying cancer treatment levels of money.
But even if it's actually affordable... Why would I ever want to use your app instead of just asking an LLM to make me one from scratch? No one seems to think about that.
i've been building agent tooling for a while and this is the question i keep coming back to. the actual failure mode isn't messy code, agents produce reasonably clean, well-typed output these days. it's that the code confidently solves a different problem than what you intended. i've had an agent refactor an auth flow that passed every test but silently dropped a token refresh check because it "simplified" the logic. clean code, good types, tests green, security hole. so for me "quality" has shifted from cyclomatic complexity and readability scores to "does the output behaviour match the specification across edge cases, including the ones i didn't enumerate." that's fundamentally an evaluation problem, not a linting problem.
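a toy sketch of what that looks like (python, all names invented, not the actual code from that incident): the "simplified" version keeps the same signature and types, and the only existing test never exercises an expired token, so the suite stays green.

```python
import time
from dataclasses import dataclass


@dataclass
class Session:
    token: str
    expires_at: float


def refresh(session: Session) -> str:
    # stand-in for a real token refresh call
    return "refreshed-" + session.token


def auth_header(session: Session) -> dict:
    # original behaviour: refresh an expired token before using it
    if session.expires_at <= time.time():
        session.token = refresh(session)
    return {"Authorization": f"Bearer {session.token}"}


def auth_header_simplified(session: Session) -> dict:
    # the "cleaner" version: same signature, fewer branches,
    # silently sends stale tokens
    return {"Authorization": f"Bearer {session.token}"}


def test_auth_header():
    # the only existing test uses a token that never expires,
    # so both implementations pass it
    s = Session(token="abc", expires_at=time.time() + 3600)
    assert auth_header_simplified(s) == {"Authorization": "Bearer abc"}
```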
This is where I think it's going: it feels like in the end we will end up with an "LLM" language, one that is more suited to how an LLM works and less to how a human does.
You can’t drop anything as long as a programmer is expected to edit the source code directly. Good luck investigating a bug when the code is unclear semantically, or updating a piece correctly when you’re not really sure it’s the only instance.
I think that's the question. Is a programmer expected to ever touch the source code? Or will AI -- and AI alone -- update the code that it generated?
Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.
> Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.
We largely moved back away from "work in a graphical tool, then spit out HTML from it" because it wasn't robust at the pace of change and iteration. This wasn't exactly my domain, but IIRC there were especially a lot of problems around "small-looking changes are now surprisingly big changes in the generated output that have a large blast radius in terms of the other things (like interactivity) we've added in."
Any time you do a refactor that changes contract boundaries between functions/objects/models/whatever, and you have to update the tests to reflect this, you run a big risk of your new tests not covering exactly the same set of component interactions that your old tests did. LLMs don't change this. They can iterate until the tests are green, but certain changes will require changing the tests, and now "iterating until the tests are green" could be resolved by changing the tests in a way that subtly breaks surprising user-facing things.
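A hypothetical sketch of that failure mode (all names and numbers invented): the old test pins a user-facing rounding guarantee; after a refactor that moves the contract boundary, the suite gets back to green by loosening the assertion rather than by restoring the behaviour.

```python
def apply_discount(price: float, percent: float) -> float:
    # Original contract: result is rounded to whole cents.
    return round(price * (1 - percent / 100), 2)


def test_discount_old():
    # Old test pinned the user-facing rounding behaviour.
    assert apply_discount(19.99, 10) == 17.99


def apply_discount_refactored(price: float, percent: float) -> float:
    # After the refactor, rounding was supposed to move "elsewhere"
    # but quietly got lost along the way.
    return price * (1 - percent / 100)


def test_discount_new():
    # Rewritten test that makes the suite green again by loosening the
    # assertion; the cents-rounding guarantee is no longer covered.
    assert abs(apply_discount_refactored(19.99, 10) - 17.99) < 0.01
```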
The value of good design in software is having boundaries aligned with future desires (obviously this is never perfect foresight) to minimize that risk. And that's what scares me about not even reading the code.
So, just as we went from assembler to higher-level programming languages, we will now move to specifications for LLMs? Interesting thought... Maybe, once the "compilers" get good enough, but for mission-critical systems they are not nearly good enough yet.
Right. I work in aerospace software, and I do not know if this option would ever be on the table. It certainly isn't now.
So I think this question needs to be asked in the context of particular projects, not as an industry-wide yes or no answer. Does your particular project still need humans involved at the code level? Even just for review? If so, then you probably ought to retain human-oriented software design and coding techniques. If not, then, whatever. Doesn't matter. Aim for whatever efficiency metric you like.
It’s also pretty close to Steve Jobs’ initial vision of computing in the future (https://stevejobsarchive.com/stories/objects-of-our-life, 1983), but my point is that whatever it is we call AI now became reality so much faster than anyone really saw coming. Even if the pace slows down, and it hasn’t yet, things are improving so massively all the time that the world can’t change fast enough to accommodate it.
> I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while they are sipping a latte in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Also, we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar while a machine does your work? Use all your time to make money for me, or you're fired."
AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
> AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
That’s a bit too cynical for me. After all, yes, your boss is not paying you for sipping lattes, but for producing value for the company. If there is a tool that maximises your output, why wouldn’t he want you to use it to maximum effect?
Put differently, would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?
> why wouldn’t he want you to use it to maximum effect
Because I refuse to? It's not fun for me.
> would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?
Why not? If that makes enough money to keep going.
You might argue that in a theoretically ideal market, companies that don't utilize every possible trick to improve productivity (including AI) will lose to the competition, but let's be real: a lot of companies are horribly inefficient and that does not make them bankrupt. The world of producing software is complicated.
I know that I deliver. When I'm asked to write code, I deliver it and I am responsible for it. I enjoy the process and I can support this code. I can't deliver with AI. I don't know what it'll generate. I don't know how much time it would take to iterate to the result that I precisely want. So I can no longer be responsible for my own output. Or I'd spend more time babysitting the AI than it would take me to write the code. That's my position. Maybe I'm wrong, they'll fire me and I'll retire, who knows. The AI hype is real, and my boss often copy-pastes from ChatGPT, asking me to argue with it. That's super stupid and irritating.
I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.
If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.
I totally empathise as a fellow developer, but I doubt you realise what an incredibly privileged position it is to just refuse to work if you don't have fun doing it. And it doesn't really make for a convincing argument to keep you employed either.
> Why not? If that makes enough money to keep going.
If all other competing carpenters use power tools, you're going to lose contracts. We've had a few incredibly easy decades as software developers where market pressure wasn't really a thing, but that is about to change when the cost of producing code drops considerably.
> You might argue that in a theoretically ideal market, companies that don't utilize every possible trick to improve productivity (including AI) will lose to the competition […]
You're moving the goalposts here. We're not talking about wringing every last drop of efficiency out of employees. We're talking about businesses not tolerating paying for licenses for AI agents to enable developers to sip lattes while their computer does their job. That's a fundamentally different proposition.
> That’s a bit too cynical for me. After all, yes, your boss is not paying you for sipping lattes, but for producing value for the company. If there is a tool that maximises your output, why wouldn’t he want you to use it to maximum effect?
Sitting in a cafe enjoying a latte is not "producing value for the company." If having "5 agents to generate a new SaaS product" matches your non-AI capacity and gives you enough free time to relax in a cafe, he's going to want you to run 50 agents generating 5 new SaaS products, until you hit your capacity.
If he doesn't need 5 new SaaS products, just one, then he's going to fire you or other members of your team.
Think of it this way: you're a piece of equipment to your boss, and every moment he lets you sit idle (on the clock) is money lost. He wants to run that piece of equipment as hard as he can, to maximize his profit.