While interesting, this still can't account for domain expertise and system design decisions - you can't assume every character / line / function / method typed is just "correct" and exactly what you'll need. There are thousands of ways to do both the right and the wrong thing in software.

The real problem always comes back to the fact that the LLM can't just make code appear out of nowhere: it needs _your_ prompt (or at least code in the context window) to know what code to write. If you can't exactly describe the requirements - or, what is increasingly happening, don't _know_ the actual technical terms for what you're trying to accomplish - it's kind of like having a giant hammer with no nail to hit.

I'm worried about a future where we program ourselves into a circle, all programs starting to look the same, simply because the original "hardcore" or "forgotten" patterns and strategies of software design "just don't need to be taught anymore". In other words, people getting things to work while having no idea how they work. Yes, I get the whole "most people don't know how cars work but use them anyway" argument, but as a software engineer, not really knowing how your own source code works? It feels strange, and probably ultimately the wrong direction.

I also think the entire idea of a fully automated feature build / test / deploy AI system is just impossible... the complexity of such a landscape is far too large to automate with some sort of token generator. AGI could do it, of course, but LLMs are so far from AGI it's laughable.


