Repo seems legit, and some of the ideas are pretty novel. As always though, we'll have to see how it scales. A lot of interesting architectures have failed the GPT3+ scale test.
As a sidenote--does anyone really think human-like intelligence on silica is a good idea? Assuming it comes with consciousness, which I think is fair to presume, brain-like AI seems to me like a technology that shouldn't be made.
This isn't a doomer position, a claim that human-like AI would bring about the apocalypse. It is one of empathy: At this point in time, our species isn't mature enough to have the ability to spin up conscious beings so readily. I mean look how we treat each other--we can't even treat beings we know to be conscious with kindness and compassion. Mix our immaturity with a newfound ability to create digital life and it'll be the greatest ethical disaster of all time.
It feels like researchers in the space think there is glory to be found in figuring out human-like intelligence on silicon. That glory has even attracted big names outside the space (see John Carmack), under the presumption that the technology is a huge lever for good and likely to bring eternal fame.
I honestly think it is a safer bet that, given how we aren't ready for such technology, the person / team who would go on to actually crack brain-like AI would be remembered closer to Hitler than to Einstein.
We have no clue what "consciousness" even is, let alone what the prerequisites are. Our best guesses are just that. Guesses. Guesswork based on information so sparse that astronomers in ancient Greece might have had a better time guessing what the stars truly are.
For all we know, an ICE in a 2001 Toyota truck is conscious too - just completely inhuman in its consciousness.
Nonetheless, here we are - building humanlike intelligence. Because it's useful. Having machines that think like humans do is very useful. LLMs are a breakthrough in that already - they implement a lot of humanlike thinking on a completely inhuman substrate.
For the record, I'm agnostic about whether consciousness is possible on silica. I think it is pretty safe to say, though, that it is likely an emergent property of specifically-configured complex systems, and humanlike intelligence on silica is certainly something that might qualify.
I don't think appealing to whether or not inanimate objects may be conscious is sufficient to discount that we are toying with a different beast in machine learning. And, if we were to discover that inanimate objects are in-fact conscious, that would be an even greater reason to reconfigure our society and world around compassion.
I agree that LLMs are a great breakthrough, and I think there are many reasons to doubt consciousness there. But I would suggest we rest on our laurels for a bit, and see what we can get out of LLMs, rather than push to create something that is closer to mimicking humans because it might be more useful. From the evil perspective of pure utility, slaves are quite useful as well.
The issue so far is that this "closer to mimicking humans" doesn't actually seem to give performance gains. So, why bother?
Existing LLMs are already trained to mimic humans - by imitating text, most of which is written by humans, or for humans, and occasionally both. The gains from other types of human-mimicry don't quite seem to land.
The closest we've gotten to a "breakthrough by mimicking what humans do" since pre-training on unlabeled text would probably be reasoning. And it's unclear how much of reasoning was "try to imitate what humans do on a high level", and how much was just generalizing the lessons from the early "let's think about it step by step" prompting techniques.
It's likely that we just don't know enough about the human mind to spot, extract and apply the features that would be worth copying. And even if we did... what are the chances that the features we would want to copy would turn out to be the ones vital for consciousness?
For the most part I think we agree. There is a lot of uncertainty around the mechanics of consciousness, a lot of reasons to doubt the existence of those mechanics in current AI, and a lot of failed endeavors to use biological mimicry to improve AI state of the art.
I don't think that precludes remaining concerned with the continued push to make current models more humanlike in nature. My initial comment was spurred by the fact that this paper is literally presenting itself as solving the missing link between transformer architectures and the human brain.
Here's to hoping this all goes toward a better world.
Yea, actual "human-like" consciousness would be an ethical nightmare. Any sane company should not be legitimately pursuing this.
My most generous interpretation of Anthropic's flirting with it is they too think it would be a nightmare and are hyper-vigilant. (My more realistic interpretation is that it's just some mix of a Frankenstein complex and hype-boosting.)
I hope your generous interpretation is right... I can't really tell what's going on with Anthropic's theater either. They definitely seem like they are vigilant of bad outcomes, going as far as to publish their own economic index trying to monitor how AI is affecting labor markets.
That said, the cynic in me thinks they give lip service to these things while pushing fully ahead into the unknown on the presumption of glory and a possibility of abundance. A bunch of the leadership are EAs who subscribe to a kind of superintelligence eschatology that goes as far as taking a shot at their own immortality. Given that, I think they act on the assumption that AGI is a necessity, and they'd rather take the risks on everyone's behalf than just not create the technology in the first place.
Them recently flirting with money from the gulf states is a pretty concerning signal pointing to them being more concerned with their own goals rather than ethics.
> I mean look how we treat each other--we can't even treat beings we know to be conscious with kindness and compassion. Mix our immaturity with a newfound ability to create digital life and it'll be the greatest ethical disaster of all time.
Or maybe if we had artificial life to abuse, it would be a sufficient outlet for our destructive and selfish impulses so that we would do less of it to genuine life. Maybe it's just an extension of sport contests that scratch that tribal itch to compete and win. There are no easy answers to these questions.
In this thought experiment, I am considering artificial life genuine. I would agree that there could be productive outlets for our selfish impulses if there was something that mimicked their targets without consciousness to experience the externalities of such impulses.
That said, I think probably the best path would just be to build and foster technologies that help our species mature, so if one day we do get the ability to spin-up conscious beings artificially, it can be done in a manner that adds more beauty rather than despair to our universe.
> human-like intelligence on silica is a good idea.
The famous Chinese Room argument -- silica is irrelevant, you could probably implement an LLM-like algorithm with pen and paper. Do you still think the paper could suffer or be "conscious"?
I am sympathetic to arguments against consciousness being computational. It is definitely strange to imagine an algorithm played out on trillions of abacuses being conscious.
That said, I don't think that appeal is sufficient to entirely discount the possibility that the right process implemented on silicon could in fact be conscious in the same way we are. I'm open to whether or not it is possible--I don't have a vested interest in the space--but silica seems to be a medium that could possibly hold the level of complexity for something like consciousness to emerge.
So this is to say that I agree with you that consciousness likely requires substrate-specific embodiment, but I'm open to silica being a possible substrate. I certainly don't think it can be discounted at this point in time, and I'd suggest that we don't risk a digital holocaust on the bet that it can't.
> the person / team who would go on to actually crack brain-like AI would be remembered closer to Hitler than to Einstein
I completely agree. I think that the people who are funding AI research are essentially attempting to create slaves. The engineers actually doing the work have either not thought it through or don't care.
> Assuming it comes with consciousness, which I think is fair to presume, brain-like AI seems to me like a technology that shouldn't be made.
"Fair to presume" is a good way to put it. I'm not convinced that being "like a brain" is either necessary or sufficient for consciousness, but it's necessary to presume it will, because consciousness is not understood well enough for the risk to be eliminated.