He's pointing out that it's ironic to come into a thread about not shooting ideas and do nothing more than shoot down the idea. It's literally the exact behavior described in like the first paragraph of the article. It IS ironic.
No, irony does not depend on correctness. When the discussion is X, if your only input is -X, then it's ironic in the face of a discussion on shooting down ideas.
It would be ironic for me to make a typo while bashing your spelling prowess. It would not be ironic if I didn’t.
It is similarly ironic that the Alanis Morissette song about irony mostly mentions non-ironic things; “rain on your wedding day” isn’t ironic! And that’s ironic!
Ok, I just thought it was ironic that the article was about how being critical of something isn't skillful. And, it appeared to me (but all my downvotes prove I'm in the minority), that you just added a critical comment without doing anything else the author wrote about.
For the record, I was curious about what else you have written, so I read some of your posts and comments. And, you seem to be a very thoughtful and intelligent person. I'm sorry if my comment was offensive, I meant it to be funny.
I don't agree with the self-indemnifying approach. Let's apply this concept to itself: what could go wrong with not shooting down ideas? A lot of time could be wasted giving credence to that which is invalid. For example, I've just donated my energy to debating the merits of an article that was probably written by a large language model (a judgement based on the overt presence of negative parallelism, punchy prose, tripartite sentence structure, an AI-generated image, ...). On a personal basis I'd prefer to have spent my time otherwise, as this entire argument can be summed up as "be curious, not judgemental". Therefore shooting this one down was probably a good idea?
I'm honestly having trouble understanding all the benefits and drawbacks of the different agents, specifically around what I want to permit for permissions.
My solution has been to create a new VM that comes with the Claude CLI and Gemini CLI pre-installed.
That way I can configure all the permissions I want at the host level, and it is less likely the agent will access full sets of files or, even worse, delete things. I know this limits what I can do, but I've exhausted myself understanding and auditing the different permission options for each agent.
I can install a new agent on that VM and then try it, but it is hard to justify the effort to test each one.
What am I getting from your tool for example? Worktree support is somewhat common, right? Does this give me multi agent support that Gemini and Claude do not, does that mean collaboration across team members? Is your approach better, or safer, than what I'm doing? How do I verify those claims?
Can I use your tool with local models like gemma 4 via ollama/llama.cpp? I have three 24 GB Nvidia cards and would like to try a three-agent approach: one to write the code, one to write tests, one to architect. I obviously can't use local models with the Gemini and Claude CLIs.
I'm just riffing on my concerns, and thanks for listening.
If you run a coding agent with full yolo permissions on your machine, there are two major problems:
1. unrestricted internet access is a vector for prompt injection and code/data exfiltration
2. the agent can reach other stuff on your machine that you don't want it to access or modify
Most coding agent harnesses went for the "low friction" sandboxing approach and used Seatbelt on Mac. This doesn't really work well in practice because you can't allowlist certain safe domains (so it's either all internet or no internet), and it's really tricky to allowlist certain locations on disk (agents ideally need to be able to install system packages, work with mobile simulators, etc., and a lot of that lives on disk outside your workspace).
So our solution to this looks a lot like yours: give your agents a container and a network policy and then let them yolo. Per your container policy, they won't be able to access anything unsafe on your disk or internet, except what you narrowly allow.
This is not only a cleaner sandbox model, but it lets you give them more autonomy instead of having them pause for approval on each command.
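To make the "container plus network policy" idea concrete, here is a minimal compose-style sketch. This is illustrative only: the image name, mount, and network names are assumptions, not ctx's actual configuration format, and note that compose by itself cannot allowlist domains; egress filtering needs a proxy or firewall on the network.

```yaml
# Illustrative sandbox policy (NOT ctx's real config; names are made up).
services:
  agent:
    image: agent-harness:latest        # hypothetical agent image
    volumes:
      - ./workspace:/workspace         # only the project dir is visible
    networks: [restricted]
networks:
  restricted:
    driver: bridge
    # Domain allowlisting would be enforced by a proxy/firewall attached
    # to this network; compose alone can only isolate, not filter, egress.
```

The point of the sketch is the shape of the policy: the agent can yolo freely inside the container, but everything outside the mounted workspace and the restricted network simply doesn't exist for it.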
Your VM solution is definitely the right idea as well. The difference with ctx is that we automatically manage a lot of the VM complexity, including elastic memory.
--- RE: Worktrees, Multi-Agent, Collaboration ---
Yes, worktree support is common now. The thing you mention about multi-agent support and collaboration across team members is spot on. All of your agent transcripts are stored in a unified format locally, so your conversations with Claude Code look exactly like your conversations with Gemini. So if your teammate uses one and you use another, the idea is that they can see your work equivalently.
Another interesting concept is that multi-agent support is agent harness agnostic. So you can have a Claude Code primary agent invoke a Gemini subagent.
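A unified transcript format like the one described can be sketched as a small normalization layer. Everything below is hypothetical: the field names and the `normalize` helper are illustrative, not ctx's actual on-disk schema.

```python
from dataclasses import dataclass, asdict

# Hypothetical unified transcript record; field names are illustrative,
# not ctx's actual format.
@dataclass
class TranscriptTurn:
    harness: str   # e.g. "claude-code" or "gemini-cli"
    role: str      # "user" or "assistant"
    content: str

def normalize(raw_turns, harness):
    """Map harness-specific turn dicts into the shared shape."""
    return [TranscriptTurn(harness=harness, role=t["role"], content=t["text"])
            for t in raw_turns]

claude = normalize([{"role": "user", "text": "fix the bug"}], "claude-code")
gemini = normalize([{"role": "assistant", "text": "done"}], "gemini-cli")

# Both harnesses now share one schema, so a teammate's viewer can render
# either transcript identically.
unified = [asdict(t) for t in claude + gemini]
print(unified)
```

Once everything is in one shape, "a Claude Code primary agent invoking a Gemini subagent" is just two producers writing into the same log.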
--- RE: Local Models ---
We don't set anything up specifically for this, but any agent harness that already works with local models will work the same in ctx. I think Codex and OpenCode are both fairly easy to use with local models, whereas Gemini and Claude Code are harder to set up this way. But if you try it, I'd be interested to hear how it goes for you.
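For reference, Ollama does expose an OpenAI-compatible API at `/v1`, so pointing a harness at it is usually just a base-URL change. Which environment variables a given harness actually reads is harness-specific and an assumption here; the model tag is only an example.

```shell
# Ollama serves an OpenAI-compatible API at http://localhost:11434/v1
ollama pull gemma3          # model tag is an example
ollama serve &

# Many OpenAI-client-based harnesses honor these standard variables;
# whether yours does is harness-specific -- check its docs.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # any non-empty string; Ollama ignores it
```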
It feels like an appreciation for hypotheticals or givens is missing here. One can simultaneously be against the war and the bombing in general, and also accept it as a given and then think about a certain situation being understandable within that given.
I've been using kdenlive and it is functional as an open source video editor. I don't know if kdenlive supports shared assets and projects, but this feels like something this project could offer and exceed expectations. Is that on the roadmap?
Yes, that was part of the thinking behind the licensing choice. The goal was to keep the engine itself open source, while creating opportunities to monetize adjacent offerings like cloud file management, sharing, AI editing, and other higher-level capabilities.
Do you mean LXD and Incus? If so, sort of. Incus is a fork of LXD but it diverged quite a bit and due to the LXD licensing change, Incus can't take anything from LXD but LXD can from Incus. Incus is a community project and is a lot more active. They both use LXC under the hood.
Finding a simple GUI is not going to be easy because everyone has a different definition of what "simple" means. It also depends on what you mean by "review" and "manage". There were a few web UIs for LXD containers, and they were ported or used for Incus containers. Some are still maintained and active.
I personally prefer the command line and find it easier and simpler than using graphical interfaces so don't have a recommendation. When the number of containers and servers becomes large enough to warrant anything else, then that's when automation starts.
Mostly I want a quick glance at the state. I don't really do much using cockpit. It is read only for me and I'm using command line to do anything. I like that cockpit is generally mobile friendly because I can use it remotely as all my machines are on tailscale/headscale.
If you haven’t tried this out yet, you can also download terminal apps that let you ssh from your phone, and that’ll be easy to get going with Tailscale already set up.
We do have a cockpit-podman plugin and have recently added some features to simplify management of podman quadlets. (Quadlets are like a systemd-friendly version of docker compose, which is a good fit for a single-server use case.)
So if you get onboard with podman, you may get some benefits from the Cockpit UI for it.
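For anyone who hasn't seen one, a quadlet is just a small unit-style file that Podman's systemd generator turns into a service. A minimal sketch (the image and port are examples):

```ini
# ~/.config/containers/systemd/whoami.container
# After `systemctl --user daemon-reload`, start it with
# `systemctl --user start whoami.service`.
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

Because the result is an ordinary systemd service, the same journalctl/systemctl workflow (and Cockpit's service view) applies to containers too.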
But you are right, there are many different container technologies and we haven't caught up with all of them.
I recently moved from a docker-compose setup with Portainer as a manager to podman+quadlets+cockpit. After the initial pain of migration I'm really happy! I can also manage VMs and volumes and check systemd logs, so it's a good all-in-one solution for managing standalone servers. I also think it uses systemd activation, so it's really light on resources. For someone who dislikes the Proxmox approach of a custom kernel/OS, this is a good alternative.
Edit: so, this is the incus-ui-canonical package? It feels a bit ironic that Canonical ships this, because I thought the whole point of Incus was to avoid Canonical and the direction they were taking LXD.
I've never used Proxmox, but I've heard good things. Personally (and this is a bit crazy) the best interface for containers I've used, bar none, is the OpenMediaVault compose plugin. It's a NAS distro, but I literally ran it on all my servers for years because of the UI.
TrueNAS is also a NAS distribution and has pretty good support for containers and VMs, so I'm not that surprised. They're generally expected to be individual, all-in-one servers.
Am I wrong that proxmox takes over the entire machine?
I like cockpit because I can use the machine as a regular Linux machine. It happens to have some containers and VMs running in a very ad hoc way. It wasn't the plan to use it for hosting originally but now it is. And cockpit can be configured to use other machines as well, right? So it makes it easy to grow into a quick way to review all the machines without me planning out nodes and centralized control.
Perhaps I'm mistaken, but I assumed Proxmox was better if you planned on using a machine solely for running virtual machines.
LXD/Incus seem like they would be a good fit for the way I use cockpit, because you can script them easily with the CLI tools, so I figured adding a cockpit plugin would be easy. And you can migrate those containers and VMs to another host server easily.
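The scripting workflow looks roughly like this with the Incus CLI. This is an untested sketch; the instance name, snapshot name, and remote name are placeholders, and moving to another host assumes that host has already been added as an Incus remote.

```shell
# Everyday Incus scripting (names are placeholders):
incus launch images:debian/12 web       # create and start a container
incus list --format json                # machine-readable state for scripts
incus snapshot create web before-upgrade
# Migrate to another host that's configured as a remote:
incus move web other-host:web
```

The JSON output from `incus list` is what makes a quick "glance at the state" dashboard or a small Cockpit plugin straightforward to script.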
This is all my homelab and I'm not being very intentional about the way I run things. I love to spin up a new server, and then if things get overloaded (like I run out of RAM on the host) I can easily move that server to another machine.
I have a bunch of host machines that are my kids' gaming machines. They are basically unused during the day. ;)
Proxmox is just KVM/qemu with management modules running on Debian. I set up Plasma on a node for a while and used it for a workstation for a couple years, and it worked fine.
It's a full Linux install, but it is somewhat centered around VMs and containers.
But you can install anything on it, since it's a regular machine.
I just use it as a host for my containers and VMs, though.
I notice you support Ollama. Have you found it effective with any local models? Gemma 4?
I'm definitely going to play with this.