
I work for a large tech company, and our CTO has just released a memo with a new rubric for SDEs that includes "AI Fluency". We also have a dashboard of AI adoption per developer that is being used to surveil the teams lagging on the topic. All very depressing.

A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9700 created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause, all of which will have to be fixed in the future.

edit: typo




This is absolutely the norm across corporate America right now. Chief AI Czars enforcing AI usage metrics, with mandatory AI training for anyone who isn't complying.

People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.

It's deeply fascinating psychologically and I'm not sure where this ends.

I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom, before it peaked and was rationalized/rolled back to some degree. The common theme is the C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.


> I've never seen any tech theme pushed top down so hard in 20+ years working.

> The common theme is the C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.

I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.

Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.

Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.

What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.


And telling us how to do our jobs. As if they've ever compared the optimized output of clang and gcc on an example program to track down a performance regression at 2AM.
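
(For the curious, that exercise looks roughly like the following - a sketch, not a full workflow, with foo.c standing in for whatever file actually regressed:)

    # Emit optimized assembly from both compilers and eyeball where codegen diverges.
    gcc   -O2 -S -o foo_gcc.s   foo.c
    clang -O2 -S -o foo_clang.s foo.c
    diff -u foo_gcc.s foo_clang.s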

> FOMOing at the mouth

This is a great line - evocative, funny, and a bit o' wordplay.

I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.


I think one thing that I noticed this week in terms of "eye of the beholder" view on AI was the Goldman press release.

Apparently Anthropic has been in there for 6 months helping them with some back office streamlining and the outcome of that so far has been.. a press release announcing that they are working on it!

A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate.

I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take.

It may turn into another cottage industry like big data / cloud / whatever adoption, where "forward deployed / customer success engineers" are co-located by the 1000s for years at a time in order to move the needle.


I don't understand how all these companies issue these sorts of policies in lock-step with each other. The same happened with "Return To Office". All of a sudden every company decided to kill work from home within the same week or so. Is there some secret CEO cabal that meets on a remote island somewhere to coordinate what they're going to all make workers do next?

CEOs are ladder climbers. The main skill in ladder climbing is being in tune with what the people around them are thinking, and doing what pleases/maximizes others' approval of the job they are doing.

It's extremely human behavior. We all do it to some degree or another. The incentives work like this:

    - If all your peers are doing it and you do it and it doesn't work, it's not your fault, because all your peers were doing it too. "Who could have known? Everyone was doing it."
    - If all your peers _aren't_ doing it and you do it and it doesn't work, it's your fault alone, and your board and shareholders crucify you. "You idiot! What were you thinking? You should have just played it safe with our existing revenue streams."
And the one for what's happening with RTO, AI, etc.:

    - If all your peers are doing it and you _don't do it_ and it _works_, your board crucifies you for missing a plainly obvious sea change to the upside. "You idiot! How did you miss this? Everyone else was doing it!"
Non-founder/mercenary C-suites are incentivized to be fundamentally conservative by shareholders and boards. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines.

Imagine a CEO going to their board today and saying, "we're going to sit out on potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this, you're fired".


Agreed about peer-following conservatism, also re: RTO. What is interesting though is how many of these supposedly meritocratic winners fail to show any level-2 thinking.

CTO friend at what you might call a smaller tier 2/tier 3 shop told me about some recent C-suite debates. CEO/COO types concerned that "the office is too empty". One notes that "our compensation costs are very high" and "maybe if we made everyone RTO, productivity would be higher and we wouldn't need as many developers".

One could just as easily & more logically argue: you are a smaller shop that pays less than market, so your flexibility re: remote is a "free" benefit you can offer to land senior talent you might not otherwise be able to attract. Enforcing the same RTO as your higher-paying and larger competitors opens you up to adverse selection: your best talent trading up.

Of course telling your peers that your firm is a scrub is a different challenge.


I have wondered the exact same thing. It's uncanny how in-sync they all are. I can only suppose that the trend trickles down from the same few influential sources.

> Is there some secret CEO cabal that meets on a remote island somewhere

I mean.. recent FBI files of certain emails would imply.. probably, yes.


Probably yes? Definitely. See also articles like this one [1]. These guys all run in the same circles and the groupthink gets out of control.

https://www.semafor.com/article/04/27/2025/the-group-chats-t...


It is investor sentiment and FOMO. If your investors feel like AI is the answer you will need to start using AI.

I am not as negative on AI as the rest of the group here though. I think AI-first companies will outpace companies that never start building the AI muscle. From my perspective these memos mostly seem reasonable.


If AI is the answer, then there's no reason for a top-down mandate like this. People will just start using it as they see fit because it helps them do their jobs better. Having to force it on them doesn't sound much like AI is the answer investors thought it was.

No, because as discussed, AI also changes the nature of your job in a way that might be negative for a worker, even if it's more productive. I.e., it may be more fun to ride a horse to your friend's house, but it's not faster than a car. Or, as in the previous example, it may be more enjoyable to make a shoe by hand, but it's less productive than using an assembly line.

You might be misreading negative sentiment towards poor leadership as negative sentiment towards AI.

I agree that a lot of the current push is driven by investor sentiment and a degree of FOMO. If capital markets start to believe AI is table stakes, companies don’t really have the option to ignore it anymore. That said, I’m not bearish on AI either. I think there’s a meaningful difference between chasing AI for signaling purposes and deliberately building an “AI muscle” inside the organization. Companies that start learning how to use, govern, and integrate AI thoughtfully are likely to outpace those that never engage at all. From that perspective, most of these memos feel fairly reasonable to me. They’re less about declaring AI as a silver bullet and more about acknowledging that standing still carries its own risk.

At least they are consistently applying this to all roles instead of only making tech roles suffer through it like they do with interview processes

I'm so glad I'm nearer the end of my career than the beginning. Can't wait to leave this industry. I've got a stock cliff coming up late this summer, probably a good time to get out and find something better to do with my life.

Then, you might even tinker with some AI stuff on your own terms, you never know. :)

Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)


> Then, you might even tinker with some AI stuff on your own terms, you never know

Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is.


If you're finding they're a bad tool for most jobs you're using them for, you're probably being closed-minded and using them wrong. The trick with AI these days is to ask it to do something that you think is impossible, and it will usually do a pretty decent job at it, or at least get close enough for you to pick it up or guide it further.

I was a huge AI skeptic but since Jan 2025, I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years so I've seen both sides and there is nothing that is going to stop it.


Okay, I use OpenCode/Codex/Gemini daily (recently cancelled my personal CC plan given GPT 5.2/3 High/XHigh being a better value, but still have access to Opus 4.5/6 at work) and have found it can provide value in certain parts of my job and personal projects.

But the evangelist insistence that it literally cannot be a net negative in any context/workflow is just exhausting to read and is a massive turn-off. As is the refusal to accept that others may simply not benefit the same way from that different work style.

Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole I wouldn't be that surprised.

There are times when, after knocking request after request out of the park, I spend hours wrangling some dumb failures, or run into spaghetti code from the last "successful" session that massively slows down new development or requires painful refactoring, and I start to question whether this is a sustainable, true net multiplier in the long term. Plus the constant time investment of learning and maintaining new tools/rules/hooks/etc should be counted too.

But, I enjoy the work style personally so stick with it.

I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices.


I try them a few times a month, always to underwhelming results. They're always wrong. Maybe I'll find an interesting thing to do with them some day, I dunno. It's just not a fun or interesting tool for me to learn to use so I'm not motivated. I like deterministic & understandable systems that always function correctly; "smart" has always been a negative term in marketing to me. I'm more motivated to learn to drive a city bus or walk a postal route or something, so that's the direction I'm headed in.

> you're probably being closed minded and using it wrong

> I was a huge AI skeptic but since Jan 2025,

> I'm in my 50s and have been programming for 30 years

> there is nothing that is going to stop it.

I need to turn this into one of those checklists like the anti-spam one and just paste it every time we get the same 5 or 6 clichés


Maybe not everyone finds them as useful for their everyday tasks as you do? Software development is quite a broad term.

1. execs likely have spend commits and pressure from the board about their 'ai strategy', what better way to show we're making progress than stamping on some kpis like # of agents created?

2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))

3. there is some subset of ai detractors that refuse to use the tools for whatever reason

the metrics pushed by 1) rarely account for 2) and dont really serve 3)

i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo


The KPI problem is systemic and bigger than just Gen-AI, it’s in everything these days. Actual governance starts by being explicit about business value.

If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.

For some reason the last decade or so we have confused activity with productivity.

(and words/claims with company value - but that's another topic)


> (cc / codex / cursor / copilot (jk, nobody should be using copilot))

I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot?


copilot is a much worse harness, although recently improvements in base model intelligence have helped it a bit

if you can, use cc or codex through your ide instead, oai and anthropic train on their own harnesses, you get better performance


I'm currently using opus in Zed via copilot (I think that's what you're recommending?) and tbh couldn't be happier. It's hard to imagine what better would look like.

oh, i meant copilot as in microsoft copilot in vscode. i havent used zed so can't speak to it but if it works for you it works!

Leadership loves AI more than anything they have ever loved before. It's because, for them, a fawning, sycophantic, ego-stroking agent that cheerfully champions every dumb idea they have and helps them realize it with spectacular averageness is EXACTLY what they've always expected to receive from their employees.

I'm so happy I work at a sane company. We're pushing the limits of AI and everyone sees the value, but we also see the danger/risks.

I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.


This feels like a construction company demanding that everyone, from drywaller to admin assistant, go out and buy a drill.

Can I modify your example to:

Demanding that everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other color of drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).


Better, yeah.

Awesome, with that new policy we'll be sure to justify my purple drill evangelist role by showing that our average employee is dependent on purple drills for at least 1/8th of their workload. Who knew that our employees would so quickly embrace the new technology. Now the board can't cut me!

It's really cascaded down too.

Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.

Throughout the year, they must coordinate training & enforce attendance for the people in their department, per the drill training mandated by the Head of Drilling.

And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.

Drilling cannot fail, it can only be failed.


This is literally happening in non-tech finance firms where people in non-tech roles are being judged on their AI adoption.

Some companies swear by this. CP Rail is notorious for training everyone to drive a train.

That kind of makes sense philosophically if your business is trains, but I don't think that their business was AI agents. Although given they have a VP of AI, I have no idea. What a crazy title.

Reminds me of those little gadgets that move your mouse so that you show up as online on Slack.

I’d just add a cron job to burn some tokens.
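
Something like this would do it (a sketch - assumes the claude CLI's -p/print mode; the schedule and prompt here are made up):

    # Hypothetical crontab entry: burn a few tokens hourly during work hours
    # so the adoption dashboard registers "usage".
    0 9-17 * * 1-5 claude -p "Reply with OK and nothing else." > /dev/null 2>&1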


That sounds like a lot of work - maybe you could burn some tokens asking AI to write a cron to burn some tokens for you?

But then you'd have to code review that crap and write test harnesses and other shit.

Sounds like a lot more work tbh.


Assign the PR to the offshore team and then just forget about it when they never end up reviewing it?

Years ago I remember talking to someone who purchased a "mouseJiggler" for that very purpose. That was literally what he called it. Problem for him was we turned it into a meme, and he immediately regretted telling us.

> We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.

Enforced use means one of two things:

1. The tool sucks, so few will use it unless forced.

2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).


3. They discovered it's something they can measure so they made a metric about it.

4. They heard from their golf buddy who heard from his racquetball buddy that this other CTO at this other shop is saving lots of money with AI

I know you're speaking half in jest but the C-suite of my area actually used a tweet by an OpenAI executive as the agenda for an AI brainstorm meeting.

Well that's inspiring. If you're going to follow anyone right now be sure to follow someone from the company that has committed to spending a trillion dollars without ever having a profitable product. Those are the folks who know what good business is!

It'll never cease to amaze me how many powerful people can't tell advice from advertising.

I am at less than half jest here.

I have friends who are finance industry CTOs, and they have described it to me in realtime as CEO FOMO they need to manage ..

Remember, tech is sort of an odd duck in how open people are about things and the amount of cross-pollination. Many industries are far more secretive, so whatever people are hearing about competitors' AI usage is a 4th-hand hearsay telephone game.

edit: noteworthy that someone sent yet another firmwide email about AI today, which was just a link to some twitter thread by a VC AI booster thinkbro


Or it has an annoying learning curve.

One small company I worked for had a similar mandate come from their large clients - since offshoring was fashionable in the business journals, they had to offshore the next project for those clients. That company spent more time reworking the offshored software than it would have taken us to do the development in-house.

This is just another business fad, but because the execs want to seem to be cool and seem to be doing what their "peers" claim to be doing, well, then by gosh, all of the workers have to do the same fad.


I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting. If you think it's still just slop and errors when managed by experienced devs, you're already behind.

> I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting.

I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.

Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.

If you think anything that is happening with the amount of money and bullshit enveloping this LLM disaster is exciting, you should put the keyboard down for a while.


The obvious pulling ahead from early AI adopters/forcers will happen any moment now... any moment

It's not obvious because the multiplier effect of AI is being used to reduce head count more than to drastically increase the net output of a team. Which, yeah, is scary, but my point is: if you don't see any multiplier effect from using the latest AI tools, you are either doing a bad job of using them (or don't have the budget, can't blame anyone for that), or are maybe in some obscure niche coding world?

>the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team

This simply isn’t how economics works. There is always additional demand, especially in the software space. Every other productivity-boosting technology has resulted in an increase in jobs, not a decrease.


Well that's certainly and obviously how it's working at the moment in the software industry.

We're in the transition between traditional coding jobs and agentic managers (or something like that)


It's kind of inexplicable though, unless AI being the reason for layoffs is a lie, because it's true that historically there has always been way more demand for software than people who can make it (hence the decades of rising salaries relative to other professions).

It seems like too much of a coincidence that the AI got good enough to replace humans at exactly the same time that humans in general don't need as much software made.


I try these things a couple times a month. They're always underwhelming. Earlier this week I had the thing work tells me to use (claude code sonnet 4? something like that) generate some unit tests for a new function I wrote. I had a number of objections about the utility of the test cases it chose to write, but the largest problem was that it assigned the expected value to a test case struct field and then... didn't actually validate the retrieved value against it. If you didn't review the code, you wouldn't know that the test it wrote did literally nothing of value.

Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write and would have done the job correctly at ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
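
(For reference, the kind of one-liner I mean - OldField/NewField are placeholders, and this assumes GNU sed for in-place editing:)

    # grep lists every file containing the old name; sed rewrites it in place.
    grep -rl 'OldField' . | xargs sed -i 's/\bOldField\b/NewField/g'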

Trillions of dollars for this? Sigh... try again next week, I guess.


Twice now in this same story, different subthreads, I've seen AI dullards declaring that you, specifically, are holding it wrong. It's delightful, really.

I don't really care if other people want to be on or off the AI train (no hate to the gp poster), but if you are on the train and you read the above comment, it's hard not to think that this person might be holding it wrong.

Using sonnet 4, or even just not knowing which model they are using, is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is perhaps not uncommon, but I think a minority practice among people who are using the tools effectively. Also, a rename falls squarely in the realm of operations that will reliably work, in my experience.

This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.

Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.


Yeah, I'm definitely guilty of not being motivated to use these tools. I find them annoying and boring. But my company's screaming that we should be using them, so I have been trying to find ways to integrate it into my work. As I mentioned, it's mostly not been going very well. I'm just using the tool the company put in front of me and told me to use, I don't know or really care what it is.

The whole point of "AI" in the first place is that it just vibes and doesn't need an instruction manual!

If "learn to hold it not wrong" is your message, then the AI bubble will be popping very soon.


How is that the point of AI? The point is that it can chug through things that would take humans hours in a matter of seconds. You still have to work with it. But it reduces huge tasks into very small ones.

No, the point of AI is to fire your employees and replace them with "agents".

This implies that the managers managing your "agents" can be literal assclowns hired for pennies.


"Hey boss, I tried to replace my screwdriver with this thing you said I have to use? Milwaukee or something? When I used it, it rammed the screw in so tight that it cracked the wood."

^ If someone says that, they are definitely "holding it wrong", yes. If they used it more they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built.


No need to be mean. It's not living up to the marketing (no surprise), but I am trying to find a way to use these things that doesn't suck. Not there yet, but I'll keep trying.

Try Opus?

Eh, there's a new shiny thing every 2 months. I'm waiting for the tools to settle down rather than keep up with that treadmill. Or I'll just go find a new career that's more appealing.

It seems that the rate of change will only accelerate.

I dunno. At some point the people who make these tools will have to turn a profit, and I suspect we'll find out that 98% of the AI industry is swimming naked.

Yeah, I think it'll consolidate around one or two players. Most likely xAI, even though they're behind at the moment. No one can compete with the orbital infrastructure, if that works out. Big if. That's all a different topic.

But I feel you, part of me wants to quit too, but can't afford that yet.


I'm sorry but if you are taking orbital datacenters seriously in the same posts as boosting AI, it's hard not to discount your takes on AI severely.

In 4 to 5 years it'll be the dominant source of compute. If you're not taking it seriously... I don't know. But it's coming.

Power generation cannot be built quickly enough.


Launch costs are at best like $1000 per pound to reach LEO. Terrestrial data centers are becoming the size of small cities. On what planet does the $1000/lb headwind ever make this work? The only logic to orbital servers is that they're a libertarian dream of freedom from government regulation. Otherwise it is objectively more expensive and difficult to build and maintain, by orders of magnitude.

Fall behind what? Writing code is only one part of building a successful product and business. Speed of writing code is often not what bottlenecks success.

Yes, the execution part has become cheap, but planning and strategizing is not much easier. But devs and organizations that keep their head in the sand will fall behind on one leg of that stool.

Anyone with more than 2 years of professional software engineering experience can tell this is complete nonsense.

Well, 6 years of experience here, and I personally saved about 4 hours of work today using claude. My coworker also just solved a problem I had been looking into for a few days in about an hour with claude. So, I think maybe you are just a bit behind the curve.

Please stay on topic and focus on the "slop" and "fall behind" part.

What's in your comment has been said maybe 1000 times just over the past day, so I'm afraid that information is not particularly helpful.


I mean, I think I stayed exactly on topic. Used by experienced devs, it's not slop, and if you aren't using it, the people who are are probably going to outpace you.

That sounds awful... Thankfully our CTO is quite supportive of our team's anti-AI policy and is even supportive of posting our LLM ban on job postings. I honestly don't think that I could operate in an environment with any sort of AI mandate...

That seems just as bad but the opposite direction.

I guess time will tell, but so far none of the AI output we've seen is any good. We don't like to adopt technologies based on hype, so if it proves itself it will be adopted, but until then it's a toy.


