> "vectorized linear algebra" is at the root of most of modern Physics.
the "vectorized" adjective was meant to imply implementing linear algebra in digital computers that can operate concurrently on large-dimensional vectors/tensors. In this sense (and despite Wolfram's diligence and dearest wishes) modern physics theories have exactly 0% digital underpinning :-)
> It's not all that surprising to me that "intelligence" is represented by similar math.
yes, the state of the art of our modeling ability in pretty much any domain is to conceive of a non-linear system description and "solve it" by linearization. Methinks this is the primary reason we haven't really cracked "complexity": we can only solve the problems we have the proverbial hammer to apply to.
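A minimal sketch of that hammer in action, using a toy pendulum (the step size and initial angle here are arbitrary choices, not from anything above): the non-linear equation θ'' = -sin θ gets "solved" by linearizing sin θ ≈ θ near equilibrium, and for small angles the linear answer barely differs from integrating the non-linear system.

```python
import numpy as np

# Non-linear pendulum: theta'' = -sin(theta). Integrate it directly
# with semi-implicit Euler, then compare against the linearized
# solution theta(t) = theta0 * cos(t), which is exact for theta'' = -theta.
theta0 = 0.1          # small initial angle (radians), arbitrary
dt, steps = 1e-4, 10000

theta, omega = theta0, 0.0
for _ in range(steps):
    omega -= np.sin(theta) * dt   # full non-linear force
    theta += omega * dt

t = dt * steps                    # total elapsed time = 1.0
theta_lin = theta0 * np.cos(t)    # the linearized "solution"

print(abs(theta - theta_lin))     # tiny for small angles
```

For θ₀ = 0.1 the two agree to a few decimal places; crank θ₀ up toward π and the linearization falls apart, which is the hammer's limit.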
> AI fundamentally comes with the potential to do anything a human can do
That goes into wild speculation territory. In any case the economy is always about organizing human relationships. Technology artifacts only change the decor, not the substance of our social relations. Unless we completely cease to have dependencies on each other (what a dystopic world!) there will always be the question of an individual's ability to provide others with something of value.
> modern physics theories have exactly 0% digital underpinning
I don't think the "digital" part matters at all. Floating point tends to be close enough to Real (analog) numbers. The point is that at each point of space-time, the math "used" by Physics locally is linear algebra.
(EDIT): If your main point was the "vectorized" part, not the digital part, and the specifics of how that is computed in a GPU, then that's more or less directly analogous to how the laws of physics work. Physical state is generally represented by vectors (or vector fields), while the laws of physics are represented by tensor operations on those vectors (or fields).
Specifically, sending input as vectors through a sequence of tensors in a neural net closely resembles (at an abstract level) how the world state in and around a point in space-time is sent through the tensors that are the laws of physics, to calculate what the local world state in the next time "frame" will be.
(END OF EDIT)
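The analogy above can be sketched in a few lines. This is a toy illustration only: the state size, the number of "laws", and the random weights are all arbitrary choices, not any real physics or any real network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "local world state" as a vector, and a sequence of tensor
# operations standing in for the laws that map one time "frame"
# to the next. The 0.1 scale just keeps values tame.
state = rng.standard_normal(8)
laws = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]

for W in laws:                    # one "frame" per tensor application
    state = np.tanh(W @ state)    # linear map plus a local nonlinearity

print(state.shape)                # still an 8-dimensional state vector
```

The structural point is just that both pictures are "vector in, tensor contraction, vector out", applied step by step.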
> yes, the state of the art of our modeling ability in pretty much any domain is to conceive of a non-linear system description and "solve it" by linearization
True, though neural nets are NOT linearizations, I think. They can fit any function. Even if each neuron's core operation is linear, the non-linear activations between layers make the network as a whole (depending on the architecture) quite adept at describing highly non-linear shapes in spaces of extreme dimensionality.
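A minimal illustration of that point, with weights chosen by hand rather than learned: two ReLU units compose two linear maps into |x|, a function no single linear map can represent (any linear f has f(-x) = -f(x)).

```python
import numpy as np

def relu(v):
    # elementwise max(v, 0): the only non-linear ingredient
    return np.maximum(v, 0.0)

# Hand-picked weights: hidden layer computes (x, -x),
# output layer sums relu(x) + relu(-x) = |x|.
W1 = np.array([[1.0], [-1.0]])
W2 = np.array([[1.0, 1.0]])

def net(x):
    return (W2 @ relu(W1 @ np.array([x])))[0]

print(net(-3.0), net(2.0))   # 3.0 2.0 — matches |x|
```

Each layer by itself is just a matrix multiply; it's the activation in between that breaks linearity for the composition.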
> Me thinks this is primary reason we haven't really cracked "complexity"
I'm not sure it's even possible for human brains to "crack" "complexity". Wolfram may very well be right that the complexity is irreducible. But for the levels of complexity that we ARE able to comprehend, I think both human brains and neural nets do it by finding patterns/shapes in spaces with a near-infinite number of degrees of freedom.
My understanding is that neural nets fit the data in a way conceptually similar to linear regression, but where the topology of the network implicitly allows it to find symmetries such as those represented by Lie groups. In part this may be related to the "locality" of the network, just as it is in Physics. Of all possible patterns, most will be locally non-linear and also non-local.
But nets of tensors impose local linearity and locality (or something similar), just as Physics does.
And since this is how the real world operates, it makes sense to me that the data that neural nets are trained on have similar structures.
Or maybe more specifically: It makes sense to me that animal brains developed with such an architecture, and so when we try to replicate it in machines, it carries over.
>> AI fundamentally comes with the potential to do anything a human can do
> That goes into wild speculation territory.
It does. In fact, it has this in common with most factors involved in pricing stocks. I think the current pricing of AI businesses reflects that a sufficiently large fraction of shareholders think it's a possible (potential) future that AI can replace all or most human work.
> In any case the economy is always about organizing human relationships.
"The economy" can have many different meanings. The topic here was (I believe) who would derive monetary profit from AI and AI businesses.
I definitely agree that a world where the need for human input is either eliminated or extremely diminished is dystopian. That's another topic, though.