otabdeveloper4's comments

> Will your lifetimes work also be a mystery to future generations and will they shrug and say "all this computer code must be for religious sacrifice, we can see no other purposes for it"?

I doubt any computer code will survive for 1000 years.


Really? I feel absolutely assured that every ugly temporary code fix I put in place will persist for eternity...

Google's latest research shows AI coding increases speed by 3% while also increasing bugs by 9%. (I.e., a net negative.)

AI doesn't make you code faster, it just makes the boring stretches somewhat more exciting.


> deal with this avalanche of features

You mean avalanche of bugs and technical debt.


Technical debt is never a problem now since only AI reads code /s

> Don't review it, just make sure the feature is there!

Bad idea. Use another agent to do automatic review. (And a third agent writing tests.)

Don't forget the architecting and orchestrating agent too!


Multiple agents with different frontier models for best results. Claude code/codex shops don’t know what they’re missing if they never let Gemini roast their designs, code and formal models.

This.

Claude Code wrote a blog article for me documenting a Gemini interaction that I manually operated. I found it quite interesting: the difference in "personalities" is stark, and so is the quality gap between Claude's output and Gemini's.

But still, best to have two sets of eyes.


> It's not more work

It literally is. You're spending weeks of effort babysitting harnesses and evaluating models while shipping nothing at all.


That hasn't been my experience, as a "ship or die" solopreneur. It takes work to set up these new processes and procedures, but it's like building a factory; you're able to produce more once they're in place.

And you're able to play wider, which is why the small team is king. Roles are converging both in technologies and in functions. That leads to more software that's tailored to niche use cases.


> you're able to produce more once they're in place

Cool story, but unfortunately the proof is not in the pudding: none of this phantom 10x vibe-coded software actually works or can be downloaded and used by real people.

P.S. Compare to AI-generated music, which is actually a thing now and is everywhere on every streaming platform. If vibe coding were a real thing, by now we'd have 10 vibe-coded repos on GitHub for every real repo.


There's no need to be rude with comments like "cool story." I'm sharing my experience with you. I'm not an AI-hype influencer. I'm a SWE who runs a small SaaS business.

Where it sounds like we agree is that there's some obnoxious marketing hype around LLMs. And people who think they can vibe code without careful attention to detail are mistaken. I'm with you there.


> A stealth drop ship like this would have allowed that to happen.

Yeah, I saw that Agents of S.H.I.E.L.D. episode too.

Sadly, we might need some more intensive vibranium research before it becomes reality.


> The "in tune" notes are as much a function of culture as physics.

Huh? Pitch ratios are not a social construct; they're just arithmetic.


There are definitely physical underpinnings: most music systems have the concept of an octave, which maps nicely onto frequency-doubling, for example. But there's also culture. For a purely Western example, equal temperament is "in tune", but you'll hear different "beat patterns" for a given interval than you would on an instrument tuned for music in just one key.

And the choice of which ratios "sound good" is cultural, to some extent.
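To put numbers on the physics side of this: equal temperament deliberately detunes most intervals away from the simple integer ratios, and the size of the mismatch is easy to compute. A quick stdlib-only sketch:

```python
import math

# Compare equal-tempered intervals to the nearby just-intonation ratios.
# The small mismatches are what produce the audible "beat patterns".

just_ratios = {
    "octave": 2 / 1,
    "perfect fifth": 3 / 2,
    "major third": 5 / 4,
}
semitones = {"octave": 12, "perfect fifth": 7, "major third": 4}

for name, just in just_ratios.items():
    tempered = 2 ** (semitones[name] / 12)          # equal temperament step
    cents_off = 1200 * math.log2(tempered / just)   # 100 cents per semitone
    print(f"{name}: just {just:.4f}, tempered {tempered:.4f}, {cents_off:+.1f} cents")
```

The tempered fifth comes out about 2 cents flat of 3:2, and the tempered major third about 14 cents sharp of 5:4; that mismatch is exactly the beating described above.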

No it isn't. You can hear integer pitch ratios in the same way you can see, e.g., the difference between a square and a rectangle or between a circle and an oval.

But more than 90% of the written, composed music of western Europe and North America (at minimum) over the last several centuries does not use integer pitch ratios.

Yes, that's indeed what TFA is about.

MCP is a dead end, just ignore it and it will go away.

And yet without MCP these CLI generators wouldn't be possible.

It's building on top of them: MCP did address some issues (which arguably could've been solved better with CLIs to begin with, like adding proper help text to each command); it just also introduced new ones.

Some of those still won't be solved by switching back to CLIs.

The obvious one being authentication and privileges.

By default, I want the LLM to have full read-only access. That's straightforward to do with an MCP server, because the tools have specific names.

With a CLI it's not as straightforward, because the LLM will start piping commands together, and the same CLI is often used for both read and write access.

All solvable issues, but while I suspect CLIs are going to get a lot more traction over the next few months, they're still not the thing we'll settle on, unless the privileges situation can be solved without making me greenlight commands every two seconds (or without ignoring their tendency to occasionally go batshit insane and randomly wipe things out while running in yolo mode).
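For what it's worth, here's roughly what that tool-name-based allowlisting looks like in a Claude Code-style permissions file (the `github` server and tool names are hypothetical; exact syntax varies by harness):

```json
{
  "permissions": {
    "allow": [
      "mcp__github__get_issue",
      "mcp__github__search_code"
    ],
    "deny": [
      "mcp__github__delete_repo"
    ]
  }
}
```

The read-only tools get blanket approval, while anything destructive still triggers a prompt. A raw CLI invocation can't be carved up this cleanly, because the read/write distinction lives in its arguments.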


Exactly. Once you start looking at MCP as a protocol for accessing remote OAuth-protected resources, rather than an API for building agents, you realize the immense value.

Aside from consistent auth, that's what all APIs have done for decades.

It only takes two minutes for an agent to sort out auth on other APIs, so the consistent-auth piece isn't much of a selling point either.


Yes, the problem MCP addresses could've been solved differently, e.g. with an extension to the OpenAPI spec, at least from the perspective of REST APIs. But you're misunderstanding the selling point.

The issue is that granting the LLM access to the API needs something more granular than the choice between "I don't care, just keep doing whatever you wanna do" and getting prompted every two seconds so the LLM can ask permission to access something.

With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something - because you'll still get a prompt for that, as that hasn't been whitelisted.

This is once again solvable in different ways, and you could argue the current way is pretty suboptimal too, because I don't really need the LLM to ask permission to delete something it just created, for example; yet MCP only lets me whitelist whole actions, hence some still-unnecessary security prompts. But MCP adds a different layer: we can use it both to effectively remove authentication on the API we want the LLM to call and to greenlight actions for it to execute unattended.

Again, it's not a silver bullet, and I'm sure what we eventually settle on will be something different. As of today, though, MCP servers provide value to the LLM stack. Even if that value might be delivered better some other way, the current alternatives all come with their own trade-offs.

And all of this ignores the fact that not every MCP server is just a front for a REST API. Local permissions need to be solved too. The tool-use model is leaky, but better than nothing.


Of course they would be possible; we could just turn the REST API into a CLI.

It's not; they're a big unlock when using something like Cursor or Copilot. I think people who say this don't quite know what MCP is: it's just a thin wrapper around an API that describes its endpoints as tools. How is there not a ton of value in that?
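A rough sketch of how thin that wrapper really is, in plain Python (the shapes are illustrative, not the actual MCP SDK): take an endpoint description and re-emit it as a tool definition with a JSON Schema the model can read.

```python
# Minimal illustration of the "thin wrapper" idea: describe an API
# endpoint as a tool, with a JSON Schema for its arguments.
# (Shapes are illustrative; a real MCP server does more than this.)

def endpoint_as_tool(name, method, path, description, params):
    """Turn an endpoint description into a tool definition."""
    return {
        "name": name,
        "description": f"{description} ({method} {path})",
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": t} for p, t in params.items()},
            "required": list(params),
        },
    }

tool = endpoint_as_tool(
    name="get_issue",
    method="GET",
    path="/repos/{owner}/{repo}/issues/{number}",
    description="Fetch a single issue",
    params={"owner": "string", "repo": "string", "number": "integer"},
)
print(tool["name"], "->", tool["description"])
```

The agent never sees the HTTP details; it just sees a named tool with typed arguments, which is what makes name-based allowlisting and discovery work.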

MCP is the future in enterprise and teams.

It's as you said: people misunderstand MCP and what it delivers.

If you only use it as an API? Useless. If you use it on a small solo project? Useless.

But if you want to share skills across a fleet of repos? Deliver standard prompts to baseline developer output and productivity? Without having to sync them? And have it updated live? MCP prompts.

If you want to share canonical docs like standard guidance on security and performance? Always up to date and available in every project from the start? No need to sync and update? MCP resources.

If you want standard telemetry and observability of usage? MCP because now you can emit and capture OTEL from the server side.

If you want to wire execution into sandboxed environments? MCP.

MCP makes sense for org-level agent engineering but doesn't make sense for the solo vibe coder working on an isolated codebase locally with no need to sandbox execution.

People are using MCP for the wrong use cases and then declaring it excess baggage, when the real use case is standardizing remote delivery of skills and resources. Tool execution is secondary.


It sounds like what you actually like is skills, not MCP itself; skills are what encapsulate the behavior to be reused.

MCP is a protocol that may have been useful once, but it seems obsolete already. Agents are really good at discovering capabilities and using them: give one a list of CLI tools with a one-line description each, and it will probably read each tool's help output and learn everything it needs before using it. What benefit does MCP actually add?


So just to clarify, in your case you're running a centralized MCP server for the whole org, right?

Otherwise I don't understand how MCP vs CLI solves anything.


Correct.

A centralized MCP server over HTTP that enables standardized doc lookup across the org, standardized skills (as MCP prompts), MCP resources (virtual indexes of the docs, similar to how Vercel formatted their `AGENTS.md`), and a small set of tools.

We emit OTEL from the server and build dashboards to see how the agents and devs are using context and tools, and which documents are "high signal" (i.e., get hit frequently), so we know which docs to tune for more consistent output.

OAuth lets us see the users because every call has identity attached.


Sandboxing and auth is a problem solved at the agent ("harness") level. You don't need to reinvent OpenAPI badly.

    > Sandboxing and auth is a problem solved at the agent ("harness") level
That only holds if you run a homogeneous set of harnesses/runtimes. We don't: some folks are on Cursor, some on Codex, some on Claude, some on OpenCode, some on VS Code GHCP. The only thing that works across all of them? MCP.

Everything about local CLIs and skill files works great as long as 1) you're running in your own env, 2) you're working on a small, isolated codebase, 3) the environment is fully homogeneous, and 4) each repo only needs to know about itself and not about a broader ecosystem of services and capabilities.

Beyond that, some kind of protocol is necessary to standardize how information is shared across contexts.

That's why my OP said MCP is critical for orgs and enterprises: it alleviates some of the friction points of standardizing behavior across a fleet of repos and tools.

    > You don't need to reinvent OpenAPI badly
You are only latching onto one aspect of MCP servers: tools. But MCP delivers two other critical features, prompts and resources, and it is there that MCP provides contextual scaffolding over otherwise generic OpenAPI. Tools are perhaps the least interesting of MCP's features (though still useful in an enterprise context, because centralized tools allow for telemetry).

For prompts and resources to work, the industry has to agree on defined endpoints and request/response types. That's what MCP is.
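Concretely, that agreement is a set of JSON-RPC methods with fixed shapes. A trimmed-down sketch of a `prompts/list` exchange (the prompt itself is hypothetical; the method and field names follow the MCP spec):

```
Request:
{"jsonrpc": "2.0", "id": 1, "method": "prompts/list"}

Response:
{"jsonrpc": "2.0", "id": 1, "result": {"prompts": [
  {"name": "security-review",
   "description": "Org-standard security review checklist",
   "arguments": [{"name": "service", "required": true}]}
]}}
```

Because every client speaks this same exchange, one server can hand the same org-standard prompts to Cursor, Claude Code, Codex, and the rest without per-harness sync.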


Old news. Google "my dog vibecoded a game".

Just trust the vibe, bro. One trillion market cap cannot be wrong.
