Hacker News — thepasch's comments

Stop advertising pi, people. It _somehow_ continued to fly somewhat under the radar after that whole OpenClaw nonsense. Don’t make Anthropic sic their bloodhounds on them like they did on OpenCode.

Interestingly, since OpenClaw, there has been ~one post about Pi every week. But practically no one upvoted any of them except this one.

pi is an officially accepted harness from either Anthropic or OpenAI. I forget which.

I feel like this misses the point of pi somewhat. The allure of pi is that it allows you to start from scratch and make it entirely your own; that it’s lightweight and uses only what you need. I go through the list of features in this and I think, okay, cool, but why should I use this over OpenCode if I just want a feature-packed (and honestly -bloated) ready-made harness?

The people they’re going to piss off the most with this are the exact people who are the least susceptible to their walled-garden play. If you’re using OpenCode, you’re not going to stop using it because Anthropic tells you to; you’re just going to think ‘fuck Anthropic’, press whatever you’ve bound “switch model” to, and continue using OpenCode. I think most power users have realized by now that Claude Code is sub-par software and probably actively holding back the models, because Anthropic thinks they can’t work right without 20,000 tokens worth of system prompt (my own system prompt is around 1,000 tokens and outperforms CC at every test I throw at it).

They’re losing the exact crowd they want in their corner, because it’s the crowd that’s far more likely to be making the decisions when companies start pivoting their workflows en masse. Keep pissing on them and they’ll remember the wet when the time comes to decide who gets a share of the potentially massive company’s potentially massive coffers.


> I’m only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.

I have a feeling Anthropic might be in for an extremely rude awakening when that happens, and I don’t think it’s a matter of “if” anymore.


> pi with Claude is as good as (even better, given the obvious care to context management in pi) Claude Code with Claude

And that’s out of the box. With how comically extensible pi is and how much control it gives you over every aspect of the pipeline, as soon as you start building extensions for your own, personal workflow, Claude Code legitimately feels like a trash app in comparison.

I don’t care what Anthropic does - I’ll keep using pi. If they think they need to ban me for that, then, oh well. I’ll just keep using pi - no longer with Claude models.


As a Claude Code user looking for alternatives, I am very intrigued by this statement.

Can you please share good resources I can learn from to extend pi?


Pi has specific instructions to extend itself.

You can just tell it to create an extension to connect to any AI API provider and it'll most likely one- or two-shot it for you.

IMO it's the most self-aware of all of the current harnesses.


I have an irrational anger toward people who can't keep their agents' antics confined. Do to your _own_ machine and data whatever the heck you want, and read/scrape/pull as much stuff as you want - just leave the public alone with this nonsense. Stop your spawn from mucking around in (F)OSS projects. Nobody wants your slop (which is what an unsupervised LLM with no guardrails _will_ inevitably produce), you're not original, and you're not special.

Irrational?

It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.


Sometimes it feels like the advent of LLMs is hyperboosting the undoing of decades of slow societal technical literacy gains that weren't even close to truly taking root yet. Though LLMs aren't the reason; they're just the latest symptom.

For a while it felt like people were getting more comfortable with and knowledgeable about tech, but in recent years, the exact opposite has been the case.


I think it’s generally thought (at least from what I read) that the advent of smartphones reversed the tech-literacy trend.


I think the real reason is that computers and technology shifted from being a tool (which would work symbiotically with the user’s tech literacy) to an advertising and scam delivery device (where tech literacy is seen as a problem, since you’d be wiser to scams and less likely to “engage”).


They’re definitely what started it, but LLMs seem to be accelerating it at a terrifying rate.

This is a tool that is basically vibecoded alpha software published on GitHub, and it uses API keys. It’s technical people taking risks on their own machines or VMs/servers with experimental software because the idea is interesting to them.

I remember when Android was new, it was full of apps that were spam and malware. Then it went through a long period of maturation with a focus on security.


> Is it a security risk? I hope not. (It's not.)

It very probably is, but if it's a personal project you're not planning on releasing anywhere, it doesn't matter much.

You should still be very cognizant that LLMs currently will, fairly reliably, introduce serious security vulnerabilities once a project grows beyond a certain size, though.


They can also identify and fix vulnerabilities when prompted. AI is being used heavily by security researchers for this purpose.

It’s really just a case of knowing how to use the tools. Put another way, the risk is being unaware of what the risks are. And awareness can help one break the bad habits that create real-world issues.


If an open-weights model is released that’s as capable at coding as Opus 4.5, then there’s very little reason not to offload the actual writing of code to open-weights subagents running locally and stick strictly to planning with Opus. That could get you masses more usage out of your plan (or cut down on API costs).

