
Instead of training an LLM to write comments, train it to generate test cases and then guess which line of code breaks the test.
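
Roughly, that pipeline could look like the sketch below. This is only an illustration: call_llm is a made-up stub standing in for whatever model API you'd actually use, and buggy_mean and the prompts are invented for the example.

    import inspect

    def call_llm(prompt: str) -> str:
        # Hypothetical stub: swap in a real model call here.
        return "<model output for: " + prompt[:40] + "...>"

    def buggy_mean(xs):
        total = 0
        for x in xs:
            total += x
        return total / (len(xs) - 1)  # bug: should divide by len(xs)

    source = inspect.getsource(buggy_mean)

    # Step 1: ask the model for a test case that should expose the bug.
    test = call_llm(
        "Write one pytest test for this function that fails if it is buggy:\n"
        + source
    )

    # Step 2: ask the model to guess which line breaks that test.
    suspect_line = call_llm(
        "Function:\n" + source + "\nFailing test:\n" + test
        + "\nWhich single line of the function causes the failure?"
    )
    print(suspect_line)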


You might enjoy @goodside on Twitter. He’s a prompt engineer at Scale AI and a lot of his observations and techniques are fascinating.



