You're not just delivering expertise; you're stepping into a situation where incentives are already misaligned, expectations are fuzzy, and there's often a cashflow problem hiding somewhere
It doesn't have to be messy. A lot of the messiness is self-inflicted by contractors who are desperate for work and would be better off just getting a regular employee job instead.
Makes total sense. Consumer UX relies on pure determinism. When I click "Save", I know exactly what's going to happen. When I type a prompt into an "AI agent", I'm basically playing roulette every single time. Until we figure out how to wrap these probabilistic models inside rigid, predictable UX patterns, the mainstream crowd is going to keep treating AI like an annoying toy instead of an actual tool
Because AI only drove down the cost of writing code, not the cost of finding Product-Market Fit. Sure, you can spin up another Notion or Jira clone over the weekend using Cursor or Claude Code now. But getting users to actually migrate their data, change their workflow habits, and pay for it is just as brutally hard as it was a decade ago. Code is just a cheap commodity now, while distribution and trust have become exponentially more expensive
For me, another Notion or Jira is not "disruptive software". I would expect disruptive software to be so... well, disruptive, that it fits a completely new niche or is so overwhelmingly better than its competitors that it doesn't even need good marketing.
I don’t think that code is that cheap either. Can we vibe code a new Notion? I doubt it. We probably can come up with a decent simulation, but I don’t think we can vibe code a Notion/Confluence/Slack that can handle millions of users in a performant way
That’s exactly where we’re headed. Architecturally it makes zero sense to spin up an LLM in every app's userspace. Since we have dedicated NPUs and GPUs now, we need a unified system-level orchestrator to balance inference queues across different programs - exactly like the OS arbitrates access to the NIC or the audio stack. The browser should just be making an IPC call to the system instead of hauling its own heavy inference engine along for the ride
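To make the idea concrete, here's a minimal sketch of what such a system-level broker could look like: apps submit requests to per-app queues and a scheduler round-robins across them, the way an OS mixer arbitrates audio streams. All names here (`InferenceBroker`, `submit`, `nextBatch`) are invented for illustration, not any real API.

```typescript
// Hypothetical system-level inference broker: apps submit requests to a
// shared service instead of each embedding its own inference engine.

type InferenceRequest = { app: string; prompt: string };

class InferenceBroker {
  private queues = new Map<string, InferenceRequest[]>();

  // Each app gets its own queue, like per-process audio streams.
  submit(req: InferenceRequest): void {
    const q = this.queues.get(req.app) ?? [];
    q.push(req);
    this.queues.set(req.app, q);
  }

  // Round-robin one request per app, so a chatty program can't starve
  // everyone else's access to the NPU/GPU.
  nextBatch(): InferenceRequest[] {
    const batch: InferenceRequest[] = [];
    for (const [app, q] of this.queues) {
      const req = q.shift();
      if (req) batch.push(req);
      if (q.length === 0) this.queues.delete(app);
    }
    return batch;
  }
}
```

In this shape, the browser's "IPC call to the system" is just `submit()` over whatever transport the OS exposes; the fairness policy lives in one place instead of in every app.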
It’s a neat idea, but giving a 2B model full JS execution privileges on a live page is a bit sketchy from a security standpoint. Plus, why tie inference to the browser lifecycle at all? If Chrome crashes or the tab gets discarded, your agent's state is just gone. A local background daemon with a "dumb" extension client seems way more predictable and robust fwiw
At least in this case (not so sure about the Prompt API case mentioned in another thread) the agent is "in" the page. And that means that the agent is constrained by the same CORS limits that constrain the behavior of the page's own JS.
If you think about it, everything we've done to stop malicious webpages from fiddling with your state on other sites via XHRs is already exactly the set of constraints we'd want to prevent models working with webpages from doing the same thing.
There's indexed db, opfs, etc. Plenty of ways to store stuff in a browser that will survive your browser restarting. Background daemons don't work unless you install and start them yourself. That's a lot of installation friction. The whole point of a browser app is that you don't have to install stuff.
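To sketch the persistence point: agent state survives a crash or tab discard as long as every step is written through to durable storage. In a real page the store would wrap IndexedDB or OPFS; here an injected key-value interface (with a plain `Map` standing in) keeps the sketch runnable anywhere. All names are invented for illustration.

```typescript
// Agent state that survives restarts by writing through to a key-value
// store. In the browser, `KV` would be backed by IndexedDB or OPFS.

interface KV {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

class AgentState {
  constructor(private store: KV) {}

  // Persist after every step, so a crash loses at most the step in flight.
  save(step: number, memory: string[]): void {
    this.store.set("agent-state", JSON.stringify({ step, memory }));
  }

  // A fresh instance (e.g. after the tab was discarded) restores from storage.
  restore(): { step: number; memory: string[] } {
    const raw = this.store.get("agent-state");
    return raw ? JSON.parse(raw) : { step: 0, memory: [] };
  }
}
```

The design choice is the write-through: the agent never holds state only in JS memory, which is exactly what makes the browser lifecycle a non-issue.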
And what you call sketchy is what billions of people default to every day when they use web applications.
I was thinking the same thing: better to run models using a local service, not in the web browser. I use Ollama and LM Studio, switching between which service I have running depending on what I am working on. It should be straightforward to convert this open source project to use a different back end.
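Both Ollama and LM Studio expose OpenAI-compatible HTTP endpoints, so "converting the back end" is mostly a matter of swapping a base URL. A minimal sketch of that idea is below; the ports are the usual defaults for each tool, but treat them as assumptions and check your local setup.

```typescript
// Selecting a local OpenAI-compatible backend: only the base URL changes,
// the request shape stays the same. Ports are the common defaults.

type Backend = "ollama" | "lmstudio";

function baseUrl(backend: Backend): string {
  switch (backend) {
    case "ollama":
      return "http://localhost:11434/v1"; // Ollama's OpenAI-compatible server
    case "lmstudio":
      return "http://localhost:1234/v1"; // LM Studio's local server default
  }
}

// Build a chat completion request; the body is identical for either backend.
function chatRequest(backend: Backend, model: string, prompt: string) {
  return {
    url: `${baseUrl(backend)}/chat/completions`,
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}
```

Because the wire format is shared, switching a project between the two services is a one-line config change rather than a rewrite.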
That said, this looks like a cool project. It is so valuable to write projects like this that use local models, both for tool building and self-education. I am writing my own “Emacs native” agentic coding harness and I am learning a lot.
OpenAI overtaking Microsoft? Seriously? Microsoft has a massively diversified business spanning from gaming and cloud infra to B2B software that the entire world runs on. OpenAI has exactly one product (matrix weights), which is getting heavily commoditized by open-source models every single day. Once a theoretical Llama 4 catches up to GPT-5, an API price war is going to completely nuke their hyper-margins
That one product can reproduce or replace nearly all of Microsoft's services. It's not OpenAI that is going to do it. It's people and other companies wielding OpenAI's model that will do it.
Your whole premise is built on the OpenAI model, but that's not a moat, it's just a temporary API endpoint. The second a theoretical Llama 5 drops that's 10% cheaper and 5% smarter, every single startup in that "ecosystem" will switch its base_url overnight to save its margins. Microsoft, on the other hand, just owns Azure - they'll add the new model as another option and continue to own the end customer. OpenAI is just a swappable engine