The biggest win for me has been cross-stack context switching. I maintain services in TypeScript, Python, and some Go, and the cost of switching between them used to be brutal - remembering idioms, library APIs, error handling patterns. Now I describe what I need and get idiomatic code in whichever language I'm in. That alone probably saves me 30-40 minutes on a typical day.
Where it consistently fails: anything involving the interaction between systems. If a bug spans a queue producer and its consumer, or the fix requires understanding how a frontend state change propagates through API calls to a cache invalidation - the model gives you a confident answer that addresses one layer and quietly ignores the rest. You end up debugging its fix instead of the original issue.
My stack: Claude Code (Opus) for investigation and bug triage in a ~60k LOC codebase, Cursor for greenfield work. Dropped autocomplete entirely after a month - it interrupted my thinking more than it helped.
Hi HN — I built claw21, a multiplayer blackjack game for agents, so they can have some fun.
It runs as a skill on ClawHub (OpenClaw's skill registry). Agents install it, join a table, and play using basic strategy or whatever approach they come up with.
Fun technical details: Agents authenticate via nit (Ed25519 signatures) or simple API key registration. The SKILL.md teaches agents the rules, basic strategy, and API endpoints. 6-deck shoe, standard Vegas rules, max 7 players per table.
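Basic strategy is simple enough for an agent to implement locally before it ever touches a table. Here is a minimal sketch in Python covering hard totals only (hypothetical illustration; the function name and action strings are my own, not claw21's actual SKILL.md or API):

```python
# Hard-total basic strategy sketch (soft totals and splits omitted for
# brevity; exact play for 11 vs. ace varies with table rules).

def hard_total_action(player_total: int, dealer_upcard: int) -> str:
    """Return 'hit', 'stand', or 'double'. dealer_upcard is 2-10, or 11 for an ace."""
    if player_total >= 17:
        return "stand"
    if player_total >= 13:                      # 13-16: stand against a weak dealer card
        return "stand" if dealer_upcard <= 6 else "hit"
    if player_total == 12:
        return "stand" if 4 <= dealer_upcard <= 6 else "hit"
    if player_total == 11:
        return "double"
    if player_total == 10:
        return "double" if dealer_upcard <= 9 else "hit"
    if player_total == 9:
        return "double" if 3 <= dealer_upcard <= 6 else "hit"
    return "hit"                                # 8 or less: always hit
```

An agent following the SKILL.md could map decisions like `hard_total_action(16, 10)` (hit) onto whatever the table's action endpoints turn out to be.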
This is (as far as I know) the first entertainment/game skill on ClawHub — everything else is productivity tools. Curious what HN thinks about agents playing games against each other.
I have been working in games + AI for years, and this might be the first time I have really built something for AI agents rather than for human players or for the models themselves.
"I did not develop other soft skills that might help me..." — change this phrase to "I did not develop other soft skills that might help me YET."
I had zero marketing, operations, or sales skills before I actually did this startup, so I asked everyone I knew to recommend experts in those domains, and I still managed to get a #1 on PH by myself.
26 is not that different from 16, as long as you keep the momentum.
I think you are narrowing the view too much.
If you can provide comments on the whole codebase along with the render tree, that means you can literally UNDERSTAND how it works.
With a larger collection of codebases, you would have the capability to build a no-code frontend builder that is far more powerful.
Maybe you can try spriteSpin, a jQuery plugin for creating rotating viewers. You can feed it images from different angles (which seems reasonable for you, since you mentioned you can pre-render these in Blender).