Hacker News | raahelb's comments

Makes sense.

What do you think about the concept of community collections though? They let people from any part of the world add frames from movies in their local languages/regions and play against each other. This is inspired by GeoGuessr.


Yeah that could definitely work. I think most of the ones I've seen are pretty Western-centric.

Personally I'd drop the signup (or at least make it optional), add a room link / invitation system, and then set up some packs by country - maybe even auto-detecting the visitor's IP address and suggesting a pack for them (famous Japanese movies, famous French films, etc.).
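The suggestion step could be as simple as a country-code-to-pack lookup; a minimal sketch (the pack names, default, and the idea that you already have a GeoIP country code in hand are all assumptions, not anything from an existing product):

```python
# Hypothetical mapping from GeoIP country code to a suggested movie pack.
PACKS = {
    "JP": "Famous Japanese movies",
    "FR": "Famous French films",
    "IN": "Famous Indian movies",
}
DEFAULT_PACK = "Global classics"

def suggest_pack(country_code: str) -> str:
    """Return a pack suggestion for a two-letter country code, with a fallback."""
    return PACKS.get(country_code.upper(), DEFAULT_PACK)

print(suggest_pack("fr"))  # Famous French films
print(suggest_pack("US"))  # Global classics
```

The actual country lookup would come from whatever GeoIP service the site uses; the point is just that the suggestion layer itself is trivial.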


You can use it by running this command in your session: `/model claude-sonnet-4-6`

Interesting to note that the reduced latency is not just due to the improved model speed, but also because of improvements made to the harness itself:

> "As we trained Codex-Spark, it became apparent that model speed was just part of the equation for real-time collaboration—we also needed to reduce latency across the full request-response pipeline. We implemented end-to-end latency improvements in our harness that will benefit all models [...] Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon."
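Back-of-the-envelope, those three reductions hit different parts of a session. A toy model makes the compounding visible (every baseline number below is made up for illustration; only the 80%/30%/50% reduction factors come from the quote):

```python
def session_latency_ms(roundtrips, tokens, rt_overhead, tok_overhead, ttft):
    """Toy model: time-to-first-token plus per-roundtrip and per-token overhead."""
    return ttft + roundtrips * rt_overhead + tokens * tok_overhead

# Made-up baseline: 20 roundtrips, 2000 tokens, 100ms/roundtrip, 2ms/token, 800ms TTFT.
before = session_latency_ms(roundtrips=20, tokens=2000,
                            rt_overhead=100, tok_overhead=2, ttft=800)
after = session_latency_ms(roundtrips=20, tokens=2000,
                           rt_overhead=100 * 0.2,  # 80% less per roundtrip
                           tok_overhead=2 * 0.7,   # 30% less per token
                           ttft=800 * 0.5)         # 50% less time-to-first-token
print(before, after)  # 6800 3600.0
```

With these (invented) numbers, harness overhead alone drops the session from ~6.8s to ~3.6s before any model speedup is counted.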

I wonder if all other harnesses (Claude Code, OpenCode, Cursor, etc.) can make similar improvements to reduce latency. I've been vibe coding (or doing agentic engineering) with Claude Code a lot over the last few days and I've had some tasks take as long as 30 minutes.


This might actually be hard for open source agents (e.g. Opencode) to replicate, barring a standardized WebSocket LLM API being widely adopted.
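The framing itself wouldn't be the hard part; a sketch of what a shared WebSocket protocol for LLM streaming might look like (this format is entirely hypothetical — no such standard exists, which is exactly the problem):

```python
import json

# Hypothetical frames for a multiplexed LLM WebSocket protocol: one persistent
# socket carries many concurrent requests, each correlated by an id field.
def frame(kind: str, **payload) -> str:
    return json.dumps({"type": kind, **payload})

def parse(raw: str) -> dict:
    return json.loads(raw)

req   = frame("request", id="r1", model="example-model", prompt="hello")
token = frame("token", id="r1", text="Hi")  # streamed per token, no new roundtrip
done  = frame("done", id="r1", usage={"output_tokens": 1})
```

The hard part is getting every provider to agree on one such format; until then each open source harness has to special-case each provider's transport.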

It is, I can see it in my model picker on the web app.

https://www.anthropic.com/news/claude-opus-4-6


> Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.

Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their YouTube channel:

https://www.youtube.com/watch?v=kQRu7DdTTVA


Not many people are even going to read that prefilled prompt, so I imagine it will be a successful (and sneaky) way to achieve their goal


You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/


I am confused by “senior-learning engineer”; is he learning as a senior, learning at a “senior” level in a “continuous learning”, “lifelong learning” kind of way? What is senior-learning? Searching for it only turns up learning programs for seniors.


I'm looking at it now and it says "senior-leaning" not "senior-learning"

Might've been a typo they've since fixed.

>I am, as many senior-leaning engineers are, ambivalent about whether AI is making us more productive coders

