I'm actually waiting for something different - a "good enough" level for programming LLMs:
1. Where they can be used as autocompletion in an IDE at speeds comparable to IntelliSense
2. And where they're good enough to generate most code reliably, while using a local LLM
3. While running on hardware costing at most €2000 in total
4. And definitely with just a few "standard" pre-configured open-source/open-weights LLMs, so I don't have to become an LLM engineer to figure out the million knobs
I have no clue how Intellisense works behind the scenes, yet I use it every day. Same story here.
“Good enough” will be like programming languages: an evolving frontier with many choices. New developments will make your previous “good enough” look inadequate.
Given how much better the bleeding-edge models are now than six months ago, as long as any model keeps getting smarter I don’t see stagnation as a possibility. If Gemini starts being better at coding than Claude, you’re gonna switch over if your livelihood depends on it.