I don't encounter a lot of "small", one-copy-paste-sized problems in my daily work that I couldn't quickly solve myself, so I haven't found much use for ChatGPT while coding yet. (I reckon this is changing, though.)

That said, a few times there has been mechanical, refactoring-style grunt work I've been delighted to hand off to ChatGPT. However, the rate at which ChatGPT gives me subtly wrong results is just high enough that I end up cross-checking everything, and then it takes a bit more time than it would have otherwise. Give it a year or two, maybe?



> Give it a year or two, maybe?

Maybe? It’s not clear what would bring a qualitative improvement, barring massive amounts of new training data.


Based on the current front page of Hacker News, it's clear that quantitative improvement is happening all the time. I think that enough incremental progress will eventually feel like qualitative improvement. And it doesn't even need to be smarter models; improving the prompts, I/O systems, etc. around the model might help. For example, I said I don't feel the need for help with copy-paste-sized problems, but inputting the whole repo seems achievable with some clever scaffolding around the model's I/O.
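To make the "scaffolding around the model" idea concrete, here is a minimal sketch of one naive approach: walking a repository, concatenating its source files into a single prompt-sized string, and stopping at a character budget that stands in for a model's context limit. The function name, file-header format, and budget are all made up for illustration; real tools do smarter chunking and retrieval.

```python
import os

def collect_repo_context(root, extensions=(".py",), max_chars=8000):
    """Concatenate source files under `root` into one prompt-sized string.

    Files are included whole, in sorted path order, each preceded by a
    '# file: <relative path>' header, until adding the next file would
    exceed the character budget (a crude stand-in for a context window).
    """
    parts = []
    used = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            chunk = f"# file: {os.path.relpath(path, root)}\n{text}\n"
            if used + len(chunk) > max_chars:
                return "".join(parts)  # budget hit: stop adding files
            parts.append(chunk)
            used += len(chunk)
    return "".join(parts)
```

The returned string would then be prepended to the user's actual question before sending it to the model, which is roughly what "scaffolding with the I/O" means here.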



