Low information density is a big AI tell. Sheer length is another. The LLM pukes out a bunch of text and the "author" either doesn't have the skills or the energy to edit it.
It should show in decreased revenue for the company you didn't buy the product from. It also should show up at your company either as increased profit margin, increased investment, increase in total employee wages, or increased dividend payout.
If this is happening on a widespread basis in the economy we should see evidence of it sometime this year and that's what investors are anticipating with SaaS stocks.
It's funny that so many people are using AI and it still hasn't really shown up in productivity numbers or product quality. I'm going to be really confused if this is still the case at the end of the year. A whole year of access to these latest agentic models has to produce visible economic changes, or something is wrong.
>funny that so many people are using AI and it still hasn't really shown up in productivity numbers or product quality.
That's because the threat is now not other businesses, but your own users who decide to vibe-code their own "Claw" product instead of using your company's vibeslop, so there are no buyers for your single-week product. All these new harness developers are engaging in resume-driven development to save their own asses. The only ones that are not naked when the tide recedes are the ones that are able to jump to the next layer of abstraction on the infinite staircase, until the next tide comes five seconds later.
I used to think this was a sign that AI code isn't really useful, but I've changed my tune (also I believe these numbers have changed in the last few months).
As an example: I was discussing one of my most promising projects with a friend, and we realized together that we could potentially use these tools to build a two-person agency with no need to hire anyone ever. If this were to work, it could theoretically make nice revenue, and it shouldn't show up in any metric anywhere.
Additionally, I've heard of countless teams cancelling their contracts with outsourced engineers because cheap but bad coders in India are worse than an LLM and still cost more. I'm not sure if there's a number around this activity, but again, these types of changes don't show up in the usual places.
My current belief is not that AI will replace traditional software engineering, but that it will replace a good chunk of the entire model of software.
>One of my most promising projects I was discussing with a friend and we realized together we could potentially use these tools to build a two person agency with no need to hire anyone ever...My current belief is not that AI will replace traditional software engineering, but that it will replace a good chunk of the entire model of software
You're not following your last line to its logical conclusion regarding your own prospects: no one is going to buy the vibeslop your two person agency is selling because they'd rather create and maintain their own vibeslop instead of dealing with yours.
If you follow some of your thoughts to their logical conclusion you'll realize the parent is right: there will be limited productivity that ends up fueling the economy when nobody is buying each other's vibeslop.
We're not selling vibe slop; the "vibe slop" tools, which work for one person, enable the automation of tasks for the services we sell. Whether or not we use AI behind the scenes is entirely irrelevant to the service we're providing, other than that it allows our margins to be higher and our speed of implementation to be faster.
I absolutely agree that it's not logical to think "oh we'll sell our AI stuff", that's the old model (which is just a variation on SaaS). I suspect a lot of HNers can't imagine a "product" that isn't code, but that's not at all what I'm describing.
The products that most people on HN have traditionally built are used by other companies to make money by allowing their processes to be scaled. AI, in many new cases, eliminates the need for a 'software' middleman. The case I'm describing is "I know how to make money doing X if only I could scale it up without hiring people", and my offering is "I can scale it up without hiring people".
This is increasingly where I think the future of work is headed, and it's more than fine if you aren't convinced.
> it allows our margins to be higher and our speed of implementation to be faster
Faster than what? You will be faster than your previous self, just like all of your competitors. Where’s the net gain here? Even if you somehow managed to capture more value for yourself, you’ve stopped providing value to 5-10x that many employees who are no longer employed.
When costs approach zero on a large scale, margins do not increase. Low costs = you’re not paying anyone = your competitors aren’t paying anyone = your customers no longer have money = your revenue follows your costs straight to zero.
Companies that provide physical services can’t scale without hiring. A one-man “crew” isn’t putting a roof on a data center.
I want to be wrong. Tell me why you think any of this is wrong.
I don't think you are wrong. I find that many tech people/founders excited by AI don't understand end-game economics in general. Like kids excited by a new toy, they start their new startup without seeing the end game if this all plays out; or they're hopeful that they'll be the lucky ones.
Generally, once industries become a cheap commodity, they're at best cost-based pricing. If you aren't charging at cost, I'll go to whoever is; especially in a saturated market.
Ironically, large corps, rather than tech companies, are probably where the SWE jobs of the future are: cost-based pricing in cost centres, creating their own software with domain knowledge rather than generic SaaS. Shared platforms will probably still have some value, but the value there isn't from the effort in the code - more things like network effects, physical control, regulation, etc. Not an industry to get into anymore IMO -> AI is destroying SWE.
Software was always a means to an end; albeit an expensive way to get there that often paid off anyway at scale. The means is getting cheaper; the end remains.
Correct me, but if two people create a SaaS that can replace a 50-person SaaS, compete on price, and the competitor is forced out of the market, wouldn't this show up as a reduction in GDP? Efficiency (GDP/time_worked) should be up, though, and AFAIK it isn't.
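A back-of-the-envelope sketch of that point, with entirely made-up revenue and headcount numbers, just to show why measured GDP could fall while GDP-per-hour rises:

```python
# Hypothetical numbers: a 2-person SaaS undercuts a 50-person incumbent
# on price and takes the market. All figures are illustrative.

incumbent_revenue = 10_000_000   # $/yr, 50 employees
challenger_revenue = 4_000_000   # $/yr, 2 people competing on price

hours_per_worker = 2_000         # hours/yr per person

# Before: the incumbent's output is what counts toward GDP.
gdp_before = incumbent_revenue
hours_before = 50 * hours_per_worker
efficiency_before = gdp_before / hours_before    # $/hour worked

# After: the incumbent exits; the challenger serves the same customers
# at a lower price, so measured output shrinks even though less labor
# produces it.
gdp_after = challenger_revenue
hours_after = 2 * hours_per_worker
efficiency_after = gdp_after / hours_after

print(gdp_after - gdp_before)    # GDP contribution falls by $6M
print(efficiency_before, "->", efficiency_after)  # $/hour: 100.0 -> 1000.0
```

So under these assumptions both things happen at once: the sector's GDP contribution drops, while output per hour worked goes up 10x.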
>One of my most promising projects I was discussing with a friend and we realized together we could potentially use these tools to build a two person agency with no need to hire anyone ever. If this were to work, it could theoretically make nice revenue and it shouldn't show up in any metric anywhere.
potentially...if this were to work...theoretically
shouldn't show up? I would worry that something with so many variables wouldn't show up.
My intuition from talking to people across different parts of the industry is that adoption at bigger companies is really limited, slow, or totally banned. Additionally, some developers are not seeing it help their specific roles all that much anyway. This is hard to square with the success other people are having, but software is a super broad discipline, which I think explains a lot of the mixed success stories.
It seems to depend a lot on the industry and niche you're in. Working at an agency, I get experience across many different projects and industries, and sometimes you are just at the edge of the AI's training and it can get very unhelpful. Given that many if not most companies are working on proprietary code for domain-specific problems, that isn't all that surprising either.
I wouldn't say it hasn't shown up. The number of ShowHNs per weekend has definitely gone up, and while that isn't rigorous scientific proof, I'd consider it a leading-edge indicator of something. Unfortunately, we as an industry have yet to agree on anything approaching a scientific measure of productivity, other than to collectively agree that Lines of Code is a terrible one. Thus even if someone was able to quantify that, say, they're having days where they generate 5000 LoC when previously they were getting O(500) LoC, that's not something we could agree upon as improved productivity.
So then the question is, is there anything other than feels to say productivity has or has not gone up? What would we accept as actual evidence one way or another? Commits-per-day is similarly not a good measure. Jira tickets and t-shirt sizes? We don't have a good measure, so while ShowHNs per weekend is equally dumb, it's also equally good in the bag of lies, damn lies, and statistics.
There was a post a few days ago about how the quality of ShowHN had gone down, with people asking how they could block this category of submissions - so I wouldn't be too quick to equate an increase in ShowHNs with anything positive.
I think if you're doing front-end development, AI is good. If you are reading a db and sending JSON to said webpage, AI is decent. If you are doing literally anything else, AI is next to useless.
This is actually an old syndrome with technology: it takes a long time for the effect to be reliably measured. Famously, it took many years for the internet itself to show up in significant productivity gains ("if the internet is actually useful, why don't the numbers show that?" was a common comment in the 1990s and 2000s). So it seems to me we're just seeing the usual dynamic here. Productivity in trillion-dollar economies does not turn on a dime.
>Famously, it took many years for the internet itself to show up in significant productivity gains
Yeah, but the actual productivity gains that the internet and software tools introduced have had diminishing returns after a while.
Like, are people more productive today when they use Outlook and Slack than they were 20 years ago when using IBM Lotus Notes and IBM Sametime? I'm not. Are people more productive with the Excel of today than with Excel 2003/2007? I'm not. Is Windows 11 and MacOS Tahoe making people more productive than Windows 7 and Snow Leopard? Not me. Are IDEs of today offering so much more productivity boost than what Visual Studio, CodeWarrior and Borland Delphi did back in the day? Don't think so.
To me it seems that at least on the productivity side, we've mostly been reinventing the wheel "but in Rust/Electron" for the last 15 or so years, and the biggest productivity gains came IMHO from increased compute power due to semiconductor advancement, so that the same tasks finished faster today than 20 years ago, but not that the SW or the internet got so much more capable since then.
I think the biggest productivity improvements in software development over the last ~20 years came from open source (NPM install X / pip install Y save so much time constantly reinventing wheels) and automated tests.
As a long time computer hobbyist who grew up in MSDOS and now resides in Linux I'm starting to wonder if I am not more connected to computing than a lot of people employed in the field.
I assume that if someone used an LLM to write for them, they must not be comfortably familiar with their subject. Writing about something you know well tends to come easily and usually is enjoyable. Why would you use an LLM for that, and how could you be okay with its output?
The Jevons paradox example of plummeting costs to create videos doesn't make sense to me. If people are already watching 7 hours of video a day, how much more time do they have to consume video? There are only 24 hours in a day. Earth is only so large and can only handle so much pollution. There are limits we need to talk about here. I guess Elon hand-waves it away by saying we'll go into space, but that remains to be seen.
I still watch the 7 hours of video every day, but the Instagram/TikTok algorithm can now find the perfect videos for me by choosing between 1000 hours of created video instead of the pre-AI 100 hours.
Everyone should have their own private evals for models. If I ask a question and a model flat-out gets it wrong, sometimes I will put it in my test question bank.
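A minimal sketch of what such a private question bank could look like. The `ask_model` callback is a hypothetical stand-in for whatever LLM API you actually use; the file name and pass criterion (expected answer appears as a substring) are my assumptions, not anything the comment above prescribes:

```python
# Minimal private eval bank: append questions a model got wrong,
# then replay them against any model via an ask_model(question) callback.
import json
from pathlib import Path

BANK = Path("my_evals.jsonl")  # one {"question": ..., "expected": ...} per line

def add_question(question: str, expected: str) -> None:
    """Save a question (e.g. one a model previously flubbed) to the bank."""
    with BANK.open("a") as f:
        f.write(json.dumps({"question": question, "expected": expected}) + "\n")

def run_evals(ask_model) -> float:
    """Ask every banked question; return the fraction answered correctly.

    A case passes if the expected answer appears (case-insensitively)
    in the model's response -- a crude criterion, but enough for a
    personal smoke test of a new model.
    """
    cases = [json.loads(line) for line in BANK.open()]
    passed = sum(
        1 for c in cases
        if c["expected"].lower() in ask_model(c["question"]).lower()
    )
    return passed / len(cases) if cases else 0.0
```

Swapping `ask_model` for different providers lets you compare models on exactly the failures you personally care about, which is the whole point of keeping the bank private.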
Is the progress of LLMs moving up abstraction layers inevitable as they gather more data from each layer? First we fed LLMs raw text and code, and now they are gathering our interactions with the LLM regarding generated code. It seems like you could then use those interactions to make an LLM that is good at prompting and fixing another LLM's generated code. Then it's on to the next abstraction layer.
What you described makes sense, and it's just one of the things to try. There are lots of other research directions: online learning, more efficient learning, better loss/reward functions, better world models from training on Youtube/VR simulations/robots acting in real world, better imitation learning, curriculum learning, etc. There will undoubtedly be architectural improvements, hardware improvements, longer context windows, insights from neuroscience, etc. There is still so much to research. And there are more AI researchers now than ever. Plus current AI models already make us (AI researchers) so much more productive. But even if absolutely no further progress is made in AI research, and foundational model development stops today, there's so much improvement to be made in the tooling around the models: agentic frameworks, external memory management, better online search, better user interactions, etc. The whole LLM field is barely 5 years old.