Yeah. There's always such a lack of realpolitik in these discussions. They turn into endless bike shedding about what a manager is supposed to do according to some ideology of management, rather than the reality of the decisions managers are actually in control of and their actual tangible outputs.
I can't help but compare what happened with nuclear physics to what will happen with ASI/AGI. We could have used nuclear energy to provide abundant, clean energy. Instead we used it for warfare to kill people. All of the brightest minds and frontier technology were directed toward killing people.
We could use AI for medical advances and to create a communist utopia without serfdom. But it's already looking like we're getting killer robots and more oppression.
Hope I'm thinking about this wrong. I fear very soon the government will begin nationalizing AI resources and forcing AI researchers to direct their efforts towards weapons systems. Similar to what happened in physics. "We have to be first to have autonomous robot armies" basically.
It’s basically cron + LLMs + memory, connected to Discord or WhatsApp so you can control it remotely. A persistent personal agent that just does stuff for you. People have been running it on their own machines, letting the LLM access their shell, browser, whatever.
What I don’t get: If it’s just a workflow engine why even use LLM for anything but a natural language interface to workflows? In other words, if I can setup a Zapier/n8n workflow with natural language, why would I want to use OpenClaw?
Nondeterministic execution doesn’t sound great for stringing together tool calls.
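For what it's worth, the loop being described is tiny. A minimal sketch, assuming a hypothetical `llm_complete()` completion call and a `send_message()` chat bridge (neither is OpenClaw's actual API; this is just the "cron + LLM + memory" shape):

```python
# Hypothetical sketch of a "cron + LLM + memory" agent tick.
# llm_complete() stands in for any chat-completion API; send_message()
# for a Discord/WhatsApp bridge. Neither is a real OpenClaw interface.

MEMORY_FILE = "memory.log"

def load_memory():
    """Read everything the agent has remembered so far."""
    try:
        with open(MEMORY_FILE) as f:
            return f.read()
    except FileNotFoundError:
        return ""

def append_memory(entry):
    """Persist one result so the next scheduled run can see it."""
    with open(MEMORY_FILE, "a") as f:
        f.write(entry + "\n")

def tick(llm_complete, send_message, task):
    """One scheduled run: prompt with task + memory, act, remember."""
    memory = load_memory()
    prompt = f"Memory so far:\n{memory}\nTask: {task}"
    action = llm_complete(prompt)   # the nondeterministic step
    send_message(action)            # report back over the chat channel
    append_memory(action)
    return action
```

A real scheduler (cron, systemd timer) would just call `tick()` on an interval; the nondeterminism people worry about is entirely inside that one `llm_complete` call.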
Not going to last long though, at least not professionally. AI will do the spec and architecture too. The LLM will handle the entire pipeline from customer or market research to deployment. This is already pretty much possible with bug fixes, and with many features too, depending on the business.
If AI gets to that level generally, there won’t be a customer, a market research department, or a software company at all.
But if AI is capable of that it’s not a big step to being capable of doing any white collar job, and we’ll either reorganize our economy completely or collapse.
I don't know. LLMs are great at writing code, but you have to have the right ideas to get decent output.
I spend tons of time handholding LLMs--they're not a replacement for thinking. If you give them a closed-loop problem where it's easy to experiment and check for correctness, then sure. But many problems are open-loop where there's no clear benchmark.
LLMs are powerful if you have the right ideas: the quality of the input determines the quality of the output. Otherwise you get slop that breaks often and barely gets the job done, full of hallucinations and incorrect reasoning, because they can't think for you.
That has nothing to do with federation. A Gmail user can talk to an Outlook user. A Slack user cannot talk to a Teams user.
Email is only federated because it was used before it was commercialized. If companies could monopolize email they would. The solution here is government regulation requiring interconnection, like we got with phones and text messaging.
Yes, doomerism is a symptom of severe doomscrolling addiction. All the people who talk like this spend all day on X. They sound like delusional drug addicts TBH.