This data is pretty questionable. OpenAI employees have said on Twitter that it does not account for ChatGPT Enterprise, where most of their growth is, which is quote-only and not paid by credit card.
I would point out Anthropic isn't profitable either (yet); it's just that enterprise is where the money is. Now that all the AI companies are zeroing in on that market, becoming profitable will be even more challenging.
No, AI is truly useful in software engineering. I was a skeptic until I started using it. No, it isn’t going to solve every problem out there, but it’s a force multiplier.
You trade understanding for speed. How acceptable that trade is depends on you and the task in front of you. I cannot recommend it as a general solution.
This field doesn’t do well on long-term thinking. Even if all this turns out to be a net loss, it will be reinterpreted as a win and just an opportunity for even more of the same solution. There are numerous examples of this, e.g. the OOP craze. Tech is a stock market of ideas and HN is a trading floor. The “line goes up” logic applies - not merit.
You may not recall the crazy era of OOP where people would go bonkers with massive object trees trying to objectify everything and using operator overloading to do (dumb) things like adding a control to a window with +=.
That’s just false. I’ve spent a disproportionate amount of time “understanding” awful tooling like Gradle and npm. There’s no value in it if you’re not an infra engineer. It would take me a couple of days to manually restructure my hobby app; now I can just say “extract this into another workspace/subproject” and be done with it in minutes. And that’s just one example.
I agree with this sentiment. I just also see AI-driven development in core business logic, where truly understanding what is going on is essential and yet completely disregarded.
I liken it to VR. That was a big hype before AI and while I really love the tech (I have 5 headsets) I could have told anyone that the expectations were insane. The investors truly believed that in 2-3 years time everyone would be doing everything with a big headset on. It was dragged into lots of situations where it didn't belong.
Then of course the hype collapsed and now even the usecases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and visualising complex designs while 3D designing.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good for everything that it's being sold as. Just like the VR craze, they're dragging it by the hair into usecases where it has no business being. A lot of these products are begging to die.
For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not very great at meeting summaries especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear, but a realignment to the usecases where it actually adds value? Yes, I hope that happens soon.
> It's also not very great at meeting summaries especially those where many speakers are in a room on the same microphone.
It is astonishingly poor at this. My intuition was that it should be good at this (it is basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor. Not just mis-identifying speakers (frequently saying PersonX responded to PersonX), but drawing completely opposite conclusions from what was actually said.
I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that is stopping it being good.
I mean it is a tough problem, you'd really have to voiceprint each speaker. But I'm sure this is technically possible considering voice cloning is pretty commonplace now.
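The voiceprint idea above can be sketched in a few lines. This is a toy illustration only: it assumes you already have per-segment voice embeddings from some diarization or speaker-recognition model (real embeddings are typically hundreds of dimensions; the 3-d vectors, names, and threshold here are all made up), and just shows the nearest-enrolled-speaker matching step.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_speaker(segment_emb, enrolled, threshold=0.7):
    """Match a segment's voice embedding against enrolled speakers.

    enrolled: dict of name -> reference embedding.
    Returns the best-matching name, or "unknown" if nothing clears
    the (arbitrarily chosen) similarity threshold.
    """
    best_name, best_sim = "unknown", threshold
    for name, ref in enrolled.items():
        sim = cosine(segment_emb, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Fabricated 3-d "embeddings" purely for demonstration
enrolled = {
    "alice": np.array([1.0, 0.1, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.2]),
}
print(assign_speaker(np.array([0.9, 0.2, 0.0]), enrolled))  # near alice's voiceprint
```

The hard part in a shared-microphone room is upstream of this: segmenting overlapping speech and producing clean per-speaker embeddings in the first place, which is presumably where these products fall down.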
And yeah, the transcription quality also drops a lot in those conditions, where humans are still quite capable of reading it. Sometimes when I read the transcript I'm quite surprised it manages to make any intelligible minutes out of it at all.
I just don't understand how Microsoft positions this feature as a minute-taking replacement when it's not ready for really common use cases.
Thousands? Maybe not, but hundreds? Yeah, for my freelancer/contracting gigs, it's easily worth $200/month to be able to say "How come X is like that and what change led to Y being Z?", wait 20 minutes and then get an answer that jumpstarts understanding a completely new codebase. If AI/LLMs never evolved beyond their current skills and usefulness, I'd still be happy to pay $200/month for this.
However, I don't know a single developer who pays "thousands of dollars a month", not sure how you'd end up like that.
From my vantage point AI consumption is being led by tech leadership more so than actual in-the-weeds programmers themselves. HN just happens to include more folks at the intersection of leadership and individual code contributor.
The top down push for AI is in line with the age old traditions of replacing highly skilled and highly compensated trade workers with automation. The writing is on the wall if folks care to look; many just don't want to. This has happened 1000 times before and it'll keep happening in the name of "progress" in capitalist systems for as long as there are "inefficiencies" to "resolve." AI is meant as our replacement, not as an extension of our skill as it happens to align with today.
It's increasingly obvious that the next phase in the evolution of the average programmer role will be as technical requirements writer and machine-generated-output validator, leaving the actual implementation outsourced to the machine. Even in that new role, there is no secret sauce protecting this "programmer" from further automation. Technical product managers eventually fall to automation too, given enough time and money poured into translating fuzzy, under-specified ideas into concrete bulleted requirements. At that point they can simply review the listed output, make minor tweaks, and hit "send" to generate the list of Jira-like units of work to farm out to a fleet of agents wearing various hats (architect, programmer, validator, etc.)
The above is very much in progress already, and today I'm already spending the majority of my time reviewing the output of said AI "teams", and let me tell you: it gets closer and closer to "good enough" week by week. Last year's models are horse shit in comparison to what I'm using today with agentic teams of the latest frontier models (Opus 4.6 [1m] currently, with some Sonnet.)
Maybe we're at a plateau and the limitations inherent in GenAI tech will be insurmountable before we get to 100% replacement. But it literally won't matter in the end as "good enough" always prevails over the perfect, and human devs are far from perfect already.
I have been producing software (at FAANG scale) for several decades now, and I've been closely monitoring GenAI systems for coding specifically. Even just a few months ago I'd get a verbose, meandering sprawl of methods and logic scattered with the actual deliverables outlined in the prompt, sometimes even with clear disregard for the requirements laid out, or "cheating" on validation by disabling tests or writing ones that don't actually do anything useful. Today I'm getting none of that. I don't know what changed, but I somehow get automated code with good separation of concerns, following best practices and proven architectural patterns.

Sure, with a bunch of juniors let loose with AI you still get garbage, but that's simply a function of poor delegation of work units. Giving the individual developer and the AI too much leeway in the scope of changes is the bug there. Division of work into small enough units is the key, and always has been for the de-skilling step of automating skilled human labor away to machines. We're just watching Marxist theory on capitalist systems play out in real time in a field generally thought to be "safe." It certainly won't be the last.
Pretty bare bones setup: Claude Agent Teams with some wrappers to leverage our bedrock hosted models. Opus 4.6 [1m] as orchestrator, architect and reviewer, Sonnet 4.6 [1m] for investigation, data gathering and coding depending on task scope.
To be fair, LLMs are exceptional at coding and they very well could displace some jobs. But you'll always need people at the helm who know what they're doing too.
Yeah, they are called PMs and already exist. These people normally create the design documents, the flows, etc., and then have to wait for the dev team to implement them.
So a good PM running 1-3 teams will only need 1-3 agentic AI teams instead.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
> Any decently skilled human blows them out of the water
No, by far no. I’m by all accounts a “decently skilled human”, at least if we go by our org, and it blows anyone out of the water with some slight guidance.
And the most important part: it doesn’t get tired, it doesn’t have any mood swings, its performance isn’t affected by poor sleep, party yesterday or their SO having a bad day.
The thing is, LLMs produce better quality one-shots than any of the products that get returned from overseas ultra-budget contractors in India or SEA. I don't know what that means for Western devs, but I can tell you that the Fortune 500 I work for is dialing back on contracting and outsourcing because domestic teams can do higher-quality work faster.
>The thing is, LLMs produce better quality one-shots than any of the products that get returned from overseas ultra-budget contractors in India or SEA.
I have 20 years of experience and I don't handwrite any code anymore. Opus does everything, and it only needs a bit of steering occasionally. If you can give it guardrails (ie a pre-existing design system) and ways to verify its output (ie enforce TDD and use Chrome to visually verify) then it gets it right basically every time.
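The "ways to verify its output" idea above can be made concrete as an automated gate: run the test suite after every agent change and only accept the change if it is green. A minimal sketch, assuming your suite is invoked as a shell command (the `pytest -q` default here is an assumption; substitute your project's actual command):

```python
import subprocess
import sys

def tests_pass(cmd=("pytest", "-q")):
    """Run the project's test suite as a subprocess.

    Returns True only if the command exits 0, i.e. the suite is green.
    The default command is an assumption -- swap in whatever your
    project actually uses (npm test, cargo test, etc.).
    """
    result = subprocess.run(list(cmd), capture_output=True, text=True)
    return result.returncode == 0

# Demo with a trivially green "suite" (just this interpreter exiting 0);
# in a real loop you would revert or re-prompt the agent on failure.
if not tests_pass((sys.executable, "-c", "import sys; sys.exit(0)")):
    print("rejecting change: tests failed")
```

The same gate pattern extends to the visual check mentioned above (screenshot diffing in a browser), but that part depends entirely on your tooling.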
With the models I've been working with lately, providing them with small, actionable units of work that can easily fit within their context window (before compaction) seems to work well. If you can hit that sweet spot, you can get excellent output.
I don't tell the agents to "just go do it", as that tends to go off the rails for complex tasks. Emulating real-world, meatspace software development processes with your AI "team" seems to produce similar outcomes.
I usually start by having the agents construct a plan document which I iterate on and build up well before writing code. This is a living document, not a final design (yet.) If I run into context window issues I just shut them down and rebuild from the document. I farm out research and data gathering tasks to build it up. Once all the findings are in I have the architect take a stab at the technical system design before the break down and delegation work begins. By then the units of work are small and manageable.
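The "small units that fit within the context window" heuristic above can be sketched mechanically. This is a toy illustration under loud assumptions: the characters-per-token estimate is a crude rule of thumb (real tokenizers differ), and the budget number is arbitrary; it just shows greedy packing of task descriptions into batches so each agent invocation stays well under its window.

```python
def estimate_tokens(text):
    # crude heuristic: roughly 4 characters per token for English prose
    return max(1, len(text) // 4)

def batch_tasks(tasks, budget_tokens=2000):
    """Greedily pack task descriptions into batches under a token budget.

    A single oversized task still gets its own batch (it would need to be
    split further by hand or by a planning agent).
    """
    batches, current, used = [], [], 0
    for task in tasks:
        cost = estimate_tokens(task)
        if current and used + cost > budget_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(task)
        used += cost
    if current:
        batches.append(current)
    return batches

tasks = ["refactor auth module " * 50, "add logging " * 10, "fix typo"]
for i, batch in enumerate(batch_tasks(tasks, budget_tokens=250)):
    # the oversized first task lands in its own batch
    print(i, len(batch))
```

In practice the plan document plays the role of `tasks` here: each delegated unit of work is kept small enough that an agent can hold the whole thing, plus relevant code, in context without compaction.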
I’ve been a full stack developer for 10+ years now and I completely disagree.
Modern models like Opus / Gemini 3 are great coding companions; they are perfectly capable of building clean code given the right context and prompt.
At the end of the day it’s the same rule of garbage in -> garbage out: without the right context / skills / guidance, you can just as easily end up with bad code as with good code.
Step 1: make a coding product which is better on cost/quality/speed. Probably need to choose two, so redirecting compute from dumb ai videos to coding makes sense.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
Imagine all the money they can save on Sora, which surely costs them way more than regular LLM usage, and can now invest in suave Super Bowl ads trash-talking Claude.
I also wonder if they got the $1B from Disney. Was that even a paid-for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked, but if you dig deeper, all you find are words like proclaimed, announced, agreed upon.
Apparently the $1B is not coming anymore, because it was basically dependent upon Sora being an actual product that actual people can use, which isn't the case anymore.
Software engineers have spent the last 40 years automating away other people's jobs. The discomfort only seems to start when the automation points inward.
I want to make people’s jobs easier and more interesting, I never want to make them redundant.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
Haven't mechanical engineers done the same thing (steam engines, trains, ...)? The whole of applied science is about using knowledge to remove tediousness (and now adding it back). A lot of jobs have been removed.
Have they? I keep seeing this little snippet of wisdom thrown about everywhere in these AI discussions as a gotcha, but to me it seems like moving jobs to dirt-cheap third-world countries with slave labor has been a bigger culprit for job loss than any kind of automation from software.
If anything, software engineers have spawned uncountable numbers of jobs that never would've existed before, is what my intuition tells me.
maybe. that's a fair point. public opinion has moved away from israel so even the mass media in america might be a little less generous to israel, which would turn even more people away from israel.
Let me repeat: They are about to annex a sovereign nation while reducing the capital city to rubble. May or may not remind you of another country further north.
One may argue that Lebanon is already annexed by Iran using Hezbollah which has more power than the official Lebanon government, or at least had more power before attacks from Israel in recent years. Also I don't believe Israel is going to annex Lebanon, but they may create a buffer zone in the south of the country.
> Lebanon is already annexed by Iran using Hezbollah which has more power than the official Lebanon government
I invite you to argue it, despite the Lebanese army, in their own words, "happily" working with Hezbollah in fighting against Israeli invasion into Lebanon.
> Also I don't believe Israel is going to annex Lebanon but they may create a buffer zone in the south of the country.
It would remind me of that if Ukraine had attacked first... over and over again throughout the last decades... together with its allies in the region... occasionally abducting a few hundred Russian civilians... There is no parallel here.
Read accounts of former UN peacekeepers who've served at UNIFIL, or actual thoughts of Lebanese themselves. Israel has been longing to acquire South Lebanon since ages, and the only thing that has prevented them from doing so were the Hezbollah. Israeli troops would block roads, fire and shoot at UNIFIL positions, as well as carry out espionage and sabotage. Every peacekeeper will attest to Israeli troops being far more problematic and dangerous than Hezbollah attacks and rockets. Lebanese themselves will echo the above opinion, and further add that Israeli looting is pretty much the same as Russian looting in Ukraine - everybody stopped buying stuff because they would know that IDF troops would seize it from them within days during the previous occupation. Shops would rather remain shut and lose business than keep themselves open under Israeli watch. Even moderate Israeli media has been extremely hostile to the very idea of UNIFIL.
Hezbollah has always been a boogeyman excuse for the Israelis to expand into Lebanon. Well, Hezbollah's gone now and we already know what's happening: Lebanon is losing close to a fifth of its land.
Israel has been bombing (and conducting raids in?) Lebanon for years. They attacked Hezbollah's ally, Iran. And Hezbollah has been attacking Israel for years. It's not true that the conflict began with Hezbollah's recent actions.
> Unifil, the United Nations peacekeeping force in Lebanon that operates south of the Litani, says Israel has committed more than 10,000 air and ground violations during the ceasefire. According to the Lebanese health ministry, more than 330 people have been killed in Israeli attacks, including civilians.
I have about a hundred or more such incidents. The only effective "one weird trick" with Israel is to not exist near it.
I'm genuinely curious: in the face of overwhelming evidence of Israel being a monstrous force of death and destruction in this world, and popular opinion continuing to notice this and thus turn against Israel, why do you maintain the old rhetorical defenses? Do you personally genuinely believe Israel is just defending itself? Most Israelis I talk to have long abandoned that as obviously false, so I doubt you're motivated by national fervor as they were - they usually would toe into Islamophobia instead: "if we didn't do it to them first, they'd do it to us." "Why didn't they develop their land in the hundreds of years before Israel arrived? Now Israel settled territory is farmed and flourishing." Those sort of arguments.
What do you think the endgame is here in terms of popular support? IDF soldiers gleefully post their war crimes on Instagram and we all watch it, it's not like the truth can be spun anymore.
When it comes to Israel the truth will always be spun. And if someone “in politics” dares (or slips), she/he will ultimately be made to retract the truth (see California Governor just yesterday/today)
Lebanon is about UN Security Council Resolution 1701 and 20 years on it not being enforced. UNIFIL failed spectacularly, looks like Israel decided to enforce it themselves.
"For the first time, a country enamored of compromises, half measures and trickery is watching these options vanish, replaced by a brutal choice: confront Hezbollah and risk destruction, or ensure it by doing nothing."
It would be great if Israel also implemented UN Security Council Resolution 497 (1981) and gave up Golan Heights to Syria, etc. But they won't do that.
> confront Hezbollah and risk destruction, or ensure it by doing nothing."
This is ridiculous. When your nation's citizens are being wiped out of existence and your land occupied, will you support the invaders or the guys fighting the invaders? Hezbollah now has all of Lebanon as a recruitment pipeline. They have utterly no shortage of volunteers now.
The number of refurbished mac minis that are available in my country has suddenly dramatically increased ever since the Clawdbot tweet. People never learn.
They may not come after all the niche companies, but they definitely do come after the most successful markets, especially those with low-effort moats.
Same goes for relying on the Apple/Google app stores (ex - Apple literally got slapped in court for copying successful apps and then pushing their offering to the top of their stores... talk about wildly abusive behavior).
I may still choose to use AWS/GCP/Azure while trying to find product-market fit as an immature startup, but I'd look real, REAL hard at ditching them as soon as possible afterwards.
Unless you have particularly bursty workloads, they aren't even a good cost saving measure anymore.
This guy's account currently sits on negative karma with this post:
>It's kind of crazy when someone has an outlier experience and then tries to frame an entire country as being that way.
I've experienced a lot of cultures, countries, and environments. The United States is KNOWN for being a friendly country of people who will talk to you and smile at you for "no reason" other than because Americans are friendly.
Go to many countries in Europe, or even Russia, and you'll experience the exact opposite. If you smile at people or talk to a stranger, you will essentially be treated as if something is wrong with you.
Everyone knows this is true about the US. Your comment is clearly trying to portray the United States in a negative light with something that is entirely not true.
And then there's my experience: someone who has lived in the US for 30+ years.
Karma isn't the indicator by which one can be judged. There is an entire world outside of Hacker News. All you have to do is create an account and make a comment others disagree with and bam -- negative karma.
There's a saying in Spanish that says, "Don't make firewood out of a fallen tree." You could learn a lot from that saying.
Maybe some will call me a troll or weirdo, but there's one thing I will never be: someone who makes firewood out of a fallen tree.
God bless you, as a person. I know we hide behind these screennames, but if I were standing in front of you, I would extend my hand and from the bottom of my heart, ask God to bless you, as a person, in real life.
I'm not perfect, so I can't blame you for addressing me as a troll. But I speak from the heart, brother.
The field more than likely has peaked for software developers. Jevons paradox means there probably will not be enough agents anytime soon as ambition grows, but that doesn't mean the number of developers will grow with it. You will probably need fewer "real" pilots behind these agents as the tools around them improve, and as the agents themselves do, until they can essentially pilot themselves and work in parallel. Work may even increase for the developers who remain, where a single dev is expected to do the work of more than ten. But expecting the number of developers to increase with the number of agents seems unlikely: having more people will actually end up being a bottleneck compared to a lean team that can orchestrate a growing force of agents.
AI companies have set themselves up for this scenario. They won't solve themselves out of the equation, they will only solve you.