>Interesting to think about what structures human intelligence has that these models don't.
Kant's Critique of Pure Reason has been a very influential treatment of this kind of epistemology. He argued that our ability to reason about objects arises from apprehending sensory input over time, schematizing those apprehensions into an understanding of objects, and finally, through reason (by way of the categories), arriving at synthetic a priori knowledge (conclusions grounded in reason rather than empiricism).
If we look at this question in that sense, LLMs are good at symbolic manipulation that mimics our sensibility, as well as at combining different encounters with concepts into an understanding of what those objects are relative to other sensed objects. What they lack is the transcendental reasoning that can form novel and well-grounded conclusions.
A system that could do this might consist of an LLM layer that translates sensory input (in an LLM's case, language) into a representation usable by a logical system (of the kind that was popular in AI's first big boom), whose conclusions are then fed back out.
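To make that pipeline concrete, here is a rough sketch under heavy assumptions: `llm_parse` is a hypothetical stand-in for the LLM layer (hard-coded here rather than calling a real model), and the "logical system" is reduced to a toy forward-chaining engine of the GOFAI sort.

```python
# Sketch of the hybrid pipeline: LLM front end -> symbolic representation
# -> classical inference -> result fed back out. Not a real implementation.

def llm_parse(text: str):
    """Hypothetical LLM layer: language -> (facts, rules).

    Hard-coded for illustration; in practice this would be a
    constrained-generation call against a real (multimodal) model.
    """
    facts = {"raining"}
    rules = [
        (frozenset({"raining"}), "ground_wet"),   # raining -> ground_wet
        (frozenset({"ground_wet"}), "slippery"),  # ground_wet -> slippery
    ]
    return facts, rules

def forward_chain(facts, rules):
    """Classical forward chaining: apply rules until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts, rules = llm_parse("It is raining outside.")
print(forward_chain(facts, rules))  # {'raining', 'ground_wet', 'slippery'}
```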
>A system that could do this might consist of an LLM layer that translates sensory input (in an LLM's case, language) into a representation usable by a logical system (of the kind that was popular in AI's first big boom), whose conclusions are then fed back out.
This just runs back into the problems of that AI winter, though. First-order logic isn't expressive enough to model the real world, while second-order logic doesn't have a complete proof system with which to verify all of its statements, and is too complex and unwieldy for practical use. I would also imagine that very few people are working on such problems; this isn't engineering so much as analytic philosophy and mathematics.
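As a concrete instance of that expressiveness gap: the induction principle quantifies over properties themselves, which first-order logic can only approximate with an infinite axiom schema:

∀P [ (P(0) ∧ ∀n (P(n) → P(n+1))) → ∀n P(n) ]

And by Gödel's results, full second-order logic under standard semantics admits no sound, complete, and effective proof calculus.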
Kant predates analytic philosophy and some of its failures (the logical positivism you are referring to). The idea here is that first-order logic doesn't need to be expressive enough to model the world, only that some logic system is capable of modeling the understanding of a representation of the world mediated by perception (via the current multimodal generative AI models). And finally, it does not need to be complete or correct, just equivalent to or better than how our minds do the same.