It depends on what you mean by "reason" exactly. The "thinking" parts of the model work with embeddings internally, not tokens. Or at least that's what they get as input; who knows what it becomes inside eventually.
OTOH, the not-really-internal monologue you get when you tell it to "think out loud", which also drastically improves the quality of the final answer, is tokens: it has to be marshalled through the context window before it can influence the next inferred token.
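A toy sketch of that distinction (all names and shapes are made up for illustration, nothing like a real model): the layers operate on continuous embedding vectors, but the only thing that survives into the next generation step is a discrete token id, so any "out loud" reasoning is necessarily token-shaped.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "<think>", "step", "one", "</think>"]
D_MODEL = 8  # toy embedding width

# Embedding table: token id -> continuous vector (what the internal layers see).
embed = rng.normal(size=(len(VOCAB), D_MODEL))

def forward(context_ids):
    """Toy stand-in for the transformer stack: embeddings in, logits out."""
    hidden = embed[context_ids].mean(axis=0)  # continuous, internal-only state
    return hidden @ embed.T                   # logits over the vocabulary

context = [VOCAB.index("<think>")]
for _ in range(3):
    logits = forward(context)
    next_id = int(np.argmax(logits))  # discretization: rich vector -> one token id
    context.append(next_id)           # only the id goes back into the context

# The "monologue" is exactly these ids, detokenized:
print(" ".join(VOCAB[i] for i in context))
```

The `hidden` vector carries far more information than `next_id`, but it is thrown away each step; whatever the model wants to "remember" across steps has to squeeze through that token bottleneck.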