My guess is that whoever develops superintelligence first will not release it to the public, but rather use it for their own purposes to gain an edge.
They may still release AI products to the public that are good enough and cheap enough to keep competitors from being profitable or receiving funding (to prevent them from catching up), but that's not where the value would come from.
Just as an example, let's say xAI is first. Instead of releasing the full capability as Grok 7, they would use the ASI to create a perfected version of their self-driving software and to power their Optimus robots.
And to speed up the development of future manufactured products (including, but not limited to, cars and humanoid robots).
And since such winners may be challenged by antitrust regulation, the ASI may also be utilized to gain leverage over the political system. Twitter/X could be one arena that would allow this.
Eventually, Tesla robots might even be used to replace police officers and military personnel. If so, the company might be a single software update away from total control.
My guess is that whoever develops superintelligence first will have a big number in their bank account while their body is disassembled to make solar panels and data centers
> We have no evidence that superintelligence will be developed.
Fundamentally, we have no evidence of anything that will happen in the future. All we can do is extrapolate from the past through the present, typically using some kind of theory of how the world operates.
The belief that we will eventually get there (whether it's this year or in 1000+ years) really only hinges on the following 3 assumptions:
1) The human brain is fully material (no divine soul is necessary for intelligence)
2) The human brain does not represent a global optimum for how intelligent a material intelligence-having-object can be.
3) We will eventually have the ability to build intelligence-having-objects (similar to or different from the brain) that can not only do the same as a brain (that would be mere AGI), but also surpass it in many ways.
Assumptions 1 and 2 have a lot of support in the current scientific consensus. Those who reject them either do not know the science or have a belief system that would be invalidated if one of those assumptions were true. (That could be anything from a Christian belief in the soul to an ideological reliance on a "Tabula Rasa" brain.)
Assumption 3 is mostly techno-optimism, or an extrapolation of the trend that we are able to build ever more advanced devices.
As for WHEN we get there, there is a fourth assumption required for it to happen soon:
4) For intelligence-having-objects to do their thing, they don't need some exotic mechanism we don't yet know how to build. For instance, there is no need to build large quantum computers for this.
This assumption is mostly about belief, and we really don't know.
Yet, given the current rate of progress, and if we accept assumptions 1-3, I don't think assumption 4 is unreasonable.
If so, it's not unreasonable to assume that our synthetic brains will reach roughly human-level intelligence when their size/complexity becomes similar to that of human brains.
Human brains have ~200 trillion synapses. That's about 100x-1000x more than the latest neural nets that we're building.
Based only on scale, current nets (GPT-4 generation) should have total capabilities similar to or slightly better than a rat's. I think that's not very far off from what we're seeing, even if the nets tend to have those capabilities linked to text/images rather than the physical world that a rat navigates.
In other words, I think we DO have SOME evidence (not conclusive) that the capabilities of a neural net can reach similar "intelligence" to animals with a similar number of synapses.
So IF that hypothesis holds true, and given assumptions 1-3 above, there is a fair possibility that human level intelligence will be reached when we scale up to about 200 trillion weights (and have the ability to train such nets).
And currently, several of the largest and most valuable companies in the world are making a huge gamble on this being the case, with plans to scale up nets by 100x over the next few years, which will be enough to get very close to human brain sized nets.
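The scaling comparison above can be sketched as a back-of-envelope calculation. All the counts below are rough, commonly cited estimates (the parameter count for current frontier nets is an assumption, since real figures are undisclosed):

```python
import math

# Rough, order-of-magnitude estimates (not precise measurements).
HUMAN_SYNAPSES = 2e14   # ~200 trillion synapses in a human brain
CURRENT_PARAMS = 1e12   # assumed ~1T weights for the largest current nets
RAT_SYNAPSES = 4.5e11   # ~450 billion synapses in a rat brain (rough estimate)

def oom_gap(a, b):
    """Orders of magnitude separating two counts."""
    return math.log10(a / b)

print(f"human brain vs current nets: {oom_gap(HUMAN_SYNAPSES, CURRENT_PARAMS):.1f} OOM")
print(f"current nets vs rat brain:   {oom_gap(CURRENT_PARAMS, RAT_SYNAPSES):.1f} OOM")
```

Under these assumed figures, current nets sit a bit above rat scale and roughly 2-3 orders of magnitude below human scale, which is consistent with the "100x-1000x" gap stated above.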
> Assumption 3 is mostly techno-optimism, or an extrapolation of the trend that we are able to build ever more advanced devices.
This is your weak link. I don't see why progress will be a straight line and not a sloping-off curve. You shouldn't see the progress we've made in vehicle speed and assume we can hit the speed of light.
> I don't see why progress will be a straight line and not a sloping-off curve.
Technological progress often appears linear in the short term, but zooming out reveals an exponential curve, similar to compound interest.
> You shouldn't see the progress we've made in vehicle speed and assume we can hit the speed of light.
Consider the trajectory of maximum speeds over millennia, not just recent history. We've achieved speeds unimaginable to our ancestors, mostly in space—a realm they couldn't conceive. While reaching light speed is challenging, we're exploring novel concepts like light-propelled nano-vehicles. If consciousness is information-based, could light itself become a "vehicle"?
Reaching light speed isn't just an engineering problem—it's a fundamental issue in physics. The laws of physics (as we know them) prevent any object with mass from reaching light speed.
Notice however that our minds, like the instructions in DNA and RNA, are built from atoms, but they aren't the atoms themselves. They're the information in how those atoms are arranged. Once we can fully read and write this information—like we're starting to do with DNA and RNA—light itself could become our vehicle.
If even a single electron moved at the speed of light, it would tear apart the universe; at least that's what both special and general relativity would predict.
(It would have infinite energy meaning infinite relativistic mass, and would form a black hole whose event horizon would spread into space at the speed of light).
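The divergence described in the parenthetical follows directly from the relativistic energy formula:

```latex
E = \gamma m c^2 = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
\;\longrightarrow\; \infty \quad \text{as } v \to c, \text{ for any } m > 0
```

The denominator goes to zero as \(v \to c\), so the energy (and relativistic mass) of any massive particle grows without bound.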
I don't think so at all. I'm personally convinced that humanity EVENTUALLY will build something more "intelligent" than human brains.
> I don't see
I see
> You shouldn't see the progress we've made in vehicle speed and assume we can hit the speed of light
There are laws of Physics that prevent us from moving faster than the speed of light. There IS a corresponding limit for computation [1], but it's about as far from the human brain's ability as the speed of light is from human running speed.
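As one illustration of such a physical bound on computation (not necessarily the limit cited in [1]), Landauer's principle gives the minimum energy needed to erase one bit at temperature \(T\):

```latex
E_{\min} = k_B T \ln 2
\approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(\ln 2)
\approx 2.9\times10^{-21}\,\mathrm{J}
```

At the brain's roughly 20 W power budget, that permits on the order of \(20 / 2.9\times10^{-21} \approx 7\times10^{21}\) irreversible bit operations per second, vastly more than any estimate of what the brain actually performs; that is the sense in which the physical limit is about as far away as light speed is from running speed.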
I'm sure some people who saw the first cars thought they could never possibly become faster than a horse.
There is no more reason for ASI to be impossible than for building something faster than the fastest animal, or (stretching it) something faster than the speed of sound (which was once supposed to be impossible).
There is simply no reason to think that the human brain is at a global maximum when it comes to intelligence.
Evolutionary history points towards brain size being limited by a combination of what is safe for female hip width and what energy cost can be justified by increasing the size of the brain.
Those who really think that humans have reached the peak, like David Deutsch, tend to think that the brain operates as a Turing Machine. And while a human brain CAN act like a very underpowered Turing Machine if given huge/infinite amounts of paper and time, that's not how most of our actual thought processes function in reality.
Since our ACTUAL thinking generally does NOT use Turing Complete computational facilities but rather relies on most information being stored in the actual neural net, the size of that net is a limiting factor for what mental operations a human can perform.
I would claim that ONE way to create an object significantly more intelligent than current humans would be through genetic manipulation that would produce a "human" with a neocortex several times the size of what regular humans have.
> Evolutionary history points towards brain size being limited by a combination of what is safe for the female hip width and also what amount of energy cost can be justified by increasing the size of the brain.
If bigger brains lead to higher intelligence, why do many highly intelligent people have average-sized heads? And do they need to eat much more to fuel their high IQs? If larger brains were always better, wouldn’t female hips have evolved to accommodate them? I think human IQ might be where it is because extremely high intelligence (vs. what we on average have now) often leads to fewer descendants. Less awareness of reality can lead to more "reproductive bliss."
> If bigger brains lead to higher intelligence, why do many highly intelligent people have average-sized heads?
There IS a correlation between intelligence and brain size (of about 0.3). But the human brain does a lot of things apart from what we measure as "IQ". What shows up in IQ tests tends to be mostly related to variation in the thickness of certain areas of the cortex [1].
The rest of the brain is, however, responsible for a lot of the functions that separate GPT-4 or Tesla's self driving from a human. Those are things we tend to take for granted in healthy humans, or that can show up as talents we don't think of as "intelligence".
Also, the variation in the size of human brains is relatively small, so the specifics of how a given brain is organized probably contribute more to the total variance than absolute size does.
That being said, a chimp brain is not likely to produce (adult, healthy) human level intelligence.
> And do they need to eat much more to fuel their high IQs?
That depends primarily on the size of the brain. Human brains consume significantly more calories than chimp brains.
> If larger brains were always better, wouldn’t female hips have evolved to accommodate them?
They did, and significantly so. In particular the part around the birth canal.
> I think human IQ might be where it is because extremely high intelligence vs. what we on average have now) often leads to fewer descendants. Less awareness of reality can lead to more "reproductive bliss."
I believe this is more of a modern phenomenon, mostly affecting women from the 20th century on. There may have been similar situations at times in the past, too. But generally, over the last several million years, human intelligence has been rising sharply.
It also explains that a correlation of r = 0.3 means only about 9% of the variability in one variable is explained by the other. This makes me wonder: can intelligence really be estimated within 10% accuracy? I doubt it, especially considering how IQ test results can vary even for the same person over time.
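The arithmetic behind that 9% figure is just the coefficient of determination, the square of the correlation coefficient:

```python
# Shared variance implied by a correlation coefficient:
# r^2 gives the fraction of variance in one variable
# that is statistically explained by the other.
r = 0.3
shared_variance = r ** 2
print(f"r = {r} -> r^2 = {shared_variance:.2f} "
      f"({shared_variance:.0%} of variance explained)")
```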
Kids and teens have smaller brains, but their intelligence increases as they experience more mental stimulation. It’s not brain size that limits them but how their brains develop with use, much like how muscles grow with exercise.
> a chimp brain is not likely to produce (adult, healthy) human-level intelligence.
> Human brains consume significantly more calories than chimp brains.
If brain size and calorie consumption directly drove intelligence, we’d expect whales, with brains five times larger than humans, to be vastly more intelligent. Yet, they aren’t. Whales’ large brains are likely tied to their large bodies, which evolved to cover great distances in water.
Brains can be large like arms can be large, but big arms do not necessarily make you strong -- they may be large due to fat.
> But generally, over the last several million years, human intelligence has been rising sharply.
Yes, smaller-brained animals are generally less intelligent, but exceptions like whales and crows suggest that intelligence evolves alongside an animal’s ecological niche. Predators often need more intelligence to outsmart their prey, and this arms race likely shaped human intelligence.
As humans began living in larger communities, competing and cooperating with each other, intelligence became more important for survival and reproduction. But this has limits. High intelligence can lead to emotional challenges like overthinking, isolation, or an awareness of life’s difficulties. Highly intelligent individuals can also be unpredictable and harder to control, which may not always align with societal or biological goals.
As I see it, ecological niche drives intelligence, and factors like brain size follow from that. The relationship is dynamic, with feedback loops as the environment changes.
> As I see it, ecological niche drives intelligence
For this, you're perfectly correct.
> It’s not brain size that limits them but how their brains develop with use, much like how muscles grow with exercise.
Here, the answer is yes, but as with muscles, biology creates constraints. If you're male, you may be able to bench 200kg, but probably not 500kg unless your biology allows it.
> If brain size and calorie consumption directly drove intelligence, we’d expect whales
As you wrote later, there are costs to developing large brains. The benefits would not justify the costs, over evolutionary history.
> Brains can large like arms can be large but big arms do not necessarily make you strong -- they may be large due to fat.
A chimp has large arms. Try wrestling it.
> and factors like brain size follow from that
Large brains come with a significant metabolic cost. They would only have evolved if they provided a benefit that would outweigh those costs.
And in today's world, most mammal tissue is either part of a Homo sapiens or part of the body of an animal used as livestock by Homo sapiens.
> biology will create constraints. If you're male, you may be able to bench 200kg, but probably not 500kg unless your biology allows it.
On evolutionary timeframes, what biology allows can evolve, and the hard limits are due to chemistry and physics.
> there are costs to developing large brains. The benefits would not justify the costs, over evolutionary history.
> Large brains come with a significant metabolic cost. They would only have evolved if they provided a benefit that would outweigh those costs
Google "evolutionary spandrels" and you will learn there can be body features (large brains of whales) that are simply a byproduct of other evolutionary pressures rather than direct adaptation.
> Google "evolutionary spandrels" and you will learn there can be body features (large brains of whales) that are simply a byproduct of other evolutionary pressures rather than direct adaptation.
If you're a 10-150 ton whale, a 2-10 kg brain isn't a significant cost.
But if you're a 50kg primate, a brain of more than 1kg IS.
For hominids over the past 10 million years, there have been very active evolutionary pressures to minimize brain size. Still, the brain grew to maybe 2-4 times its size over this period.
This growth came at a huge cost, and the benefits must have justified those costs.
> On evolutionary timeframes what biology allows can evolve and the hard limits are due to chemistry and physics.
It's not about there being hard limits. Brain size or muscle size or density is about tradeoffs. Most large apes are 2-4 times stronger than humans, even when accounting for size, but human physiology has other advantages that make up for that.
For instance, our lower density muscles allow us to float/swim in water with relative ease.
Also, lighter bodies (relative to size) make us (in our natural form) extremely capable long-distance runners. Some humans can chase a horse on foot until it dies from exhaustion.
I'm sure a lot of other species could have developed human level intelligence if the evolutionary pressures had been there for them. It just happens to be that it was humans that first entered an ecological niche where evolving this level of intelligence was worth the costs.
> Also, lighter bodies (relative to size) make us (in our natural form) extremely capable long-distance runners.
Humans' ability to run long distances effectively is due to a combination of factors, with the ability to sweat being one of the most crucial. Here are the key adaptations that make humans good endurance runners:
a) Efficient sweating: Humans have a high density of sweat glands, allowing for effective thermoregulation during prolonged exercise.
b) Bipedalism: Our two-legged gait is energy-efficient for long-distance movement.
c) Lack of fur: This helps with heat dissipation.
d) Breathing independence from gait: Unlike quadrupeds, our breathing isn't tied to our running stride, allowing for better oxygen intake.
Lighter bodies (relative to size) play a role, but there are plenty of creatures with light bodies relative to size that are not great at long-distance running.
> Still, the brain grew to maybe 2-4 times the size over this period.
I read somewhere that the human brain makes up only about 2% of body weight but uses 20% of the body’s energy. While brain size has increased over time, brain size does not determine intelligence. The brain’s high energy use, constant activity, and complex processes are more important. Its metabolic activity, continuous glucose and oxygen consumption, neurotransmitter dynamics, and synaptic plasticity all play major roles in cognitive function. Intelligence is shaped by the brain’s efficiency, how well it forms and adjusts neural connections, and the energy it invests in processing information. Intelligence depends far more on how the brain works than on its size.
Rats can navigate the physical world, though. In terms of total capabilities, I think it's not unreasonable to rate what GPT-4 is doing at or above the total capabilities of a rat, even if they manifest in different ways.
As we continue to make models larger, and assuming that model capabilities keep up with brains that have synapse counts similar to the weights in the models, we're now 2-3 OOM from human level (possibly less).
>> Fundamentally, we have no evidence of anything that will happen in the future.
Yeah, by this line of thought Jesus will descend from Heaven and save us all.
By the same line of fantasy, "give us billions to bring AGI", why not "gimme a billion to bring Jesus. I'll pray really hard, I promise!"
It's all become a disgusting scam, effectively just religious. Believe in AGI that's all there is to it. In practice it's just as (un) likely as scientists spontaneously creating life out of primordial soup concoctions.
This reply seems eerily similar to what folks said months/years before the Wright brothers proved flight was indeed possible.
All the building evidence was there but people just refused to believe it was possible.
I am not buying that AI right now is going to displace every job or change the world in the next 5 years, but I wouldn't bet against world impacts in that timeframe. The writing is on the wall. I am old enough to remember AI efforts in the late 80s and early 90s. We saw how very little progress was made.
The progress made in the past 10 years is pretty insane.
Precisely. I remember back in the late 90s, some Particle Physics papers were published that used neural nets to replace hand crafted statistical features.
While the power was not amazing, I've kind of assumed since then that scale was what would be needed.
I then half-way forgot about this, until I saw the results from Alexnet.
Since then, the capabilities of the models have generally been keeping up with how they were scaled, at least within about 1 OOM.
If that continues, the next 5-20 years are going to be perhaps the most significant in history.