
We are already there.

100% accuracy on up to 13-digit addition can be taught to GPT-3.5 as is.

https://arxiv.org/abs/2211.09066

And GPT-4 has little need for such teaching out of the box.



> in this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools.

So it's not an emergent property of the LLM but four new capability trainings. No one is saying you can't teach these things to an agent, just that these are not emergent abilities of LLM training. By default an LLM can only match token proximity; all LLM training improves the proximity matching (clustering) of tokens, but it does not teach algorithmic reasoning. That has to be bolted on as an add-on.
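For what it's worth, the "formulating algorithms as skills" step in that paper amounts to spelling the algorithm out in the prompt as an explicit digit-by-digit trace, so the model imitates the procedure instead of pattern-matching the answer. A minimal sketch of what such a scratchpad might look like (my own illustration, not the paper's exact prompt format):

```python
def addition_scratchpad(a: int, b: int) -> str:
    """Spell out grade-school addition digit by digit with explicit carries,
    the style of intermediate trace that algorithmic prompting provides
    in-context so the model can follow the algorithm step by step."""
    xs, ys = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    carry, out_digits, lines = 0, [], [f"{a} + {b}:"]
    for i in range(max(len(xs), len(ys))):
        d1 = int(xs[i]) if i < len(xs) else 0
        d2 = int(ys[i]) if i < len(ys) else 0
        total = d1 + d2 + carry
        digit, carry = total % 10, total // 10
        lines.append(f"position {i}: {d1} + {d2} + carry -> digit {digit}, carry {carry}")
        out_digits.append(str(digit))
    if carry:
        out_digits.append(str(carry))
    lines.append("answer: " + "".join(reversed(out_digits)))
    return "\n".join(lines)
```

A few worked traces like this in the context window are what the paper means by teaching the skill; the model then applies the same trace format to unseen operands.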


No, it doesn't need to be bolted on. GPT-4 can add straight out of the box, no need for any education. Where GPT-3.5 hadn't implicitly figured out the algorithm of addition, GPT-4 has.
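This is easy to check empirically. A minimal harness for measuring out-of-the-box accuracy on random n-digit addition (the `model` callable is a placeholder; a real LLM client would be plugged in there):

```python
import random

def addition_accuracy(model, n_digits: int, trials: int = 100, seed: int = 0) -> float:
    """Fraction of random n-digit addition problems the model answers exactly.
    `model` is any callable mapping a prompt string to an answer string."""
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    correct = 0
    for _ in range(trials):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        reply = model(f"What is {a} + {b}? Answer with digits only.")
        correct += reply.strip() == str(a + b)
    return correct / trials
```

Run it with and without the algorithmic prompt prepended and the gap between 3.5 and 4 shows up directly as a difference in the returned fraction.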



