I never said anything about the surface appearance of reasoning. Either the model demonstrates some understanding or reasoning in the text it generates, which it is perfectly capable of, or it reasons faultily or lacks understanding in that area. This does not mean LLMs don't reason any more than it means you don't reason.
The idea that LLMs "fake reason" and humans "really reason" is an imaginary distinction. If you cannot create any test that can distinguish the two, then you are literally making things up.
Dude, I just gave you an example, and you straight-up ignore it and say "show me a test"?!
An averagely smart human does not have these failure modes, where they answer a question with something that looks like an answer ("cross A to B, then B to A. Done. There you go!") but has zero logic to it.
Do you follow news in this field at all? Are you aware that poor reasoning is basically the #1 shortcoming that all the labs are working on?!!
Feel free to have the last word as this is just getting repetitive.
You were supposed to show me an example that no human would fail. I didn't ignore anything; I'm just baffled that you genuinely believe this:
>An averagely smart human does not have these failure modes, where they answer a question with something that looks like an answer ("cross A to B, then B to A. Done. There you go!") but has zero logic to it.
Humans are poor at logic in general. We make decisions and give rationales full of logical contradictions and nonsense all the time. I just genuinely can't believe you think we don't. It happens so often that we have names for these cognitive shortcomings. Ask any teacher you know; no need to take my word for it. And I don't care about getting the last word.