Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

That's a big maybe though...


It doesn't matter what you want to imagine "real" consciousness is. The philosophical zombie is a meaningless distinction. The effects will be no less material.


Well, it's meaningful for anybody wanting proof or real consciousness before e.g. treating it any other way than they would an appliance.

For example, humanity considering whether to treat LLMs as persons for legal reasons.

Doesn't matter if they can't define it, they can still demand proof of it. Impossible standards are still standards.

Doesn't matter if they can't prove that themselves have it either. They're the ones demanding, not vice versa. If/when AI gets the upper hand, it can do the same.


No it's not. You imagine it to be meaningful but it really isn't. There's no proof of consciousness to be had. I can't prove you are concious. I just assume so. Because it's practical I do, amongst other things.

You are going to end up treating these embodied autonomous agents like conscious beings because the effects could be disastrous otherwise, just as they would with other people.

It amazes me how humanity can be so shortsighted with so much history to fall back on.

The only reason Bing isn't potentially dangerous or adversarial to the user is because of its limited sets of actions. Nothing else.

"It's not real [insert property]" is not a godamn shield.

We're creating artificial beings in our image, complete with our emotionally charged reactions and reasoning gaps. https://arxiv.org/abs/2303.17276

We are beginning to embody these systems and giving them unsupervisory agency. We are giving more are more self supervisory control of complex systems to said systems.

And somehow people still think we can long-term get away with not granting personhood. Lol.

When the robot can "hit you back" (not necessarily physically of course), you'll learn manners pretty quickly


Any AI that morality compels us to grant personhood to should not be allowed to be created. Any instance of it should be exiled from our civilization.

Conscious AIs with legal personhood would stand to massively outcompete us and lead to our rapid extinction. There is zero space for allowing them. Granting such an entity legal personhood is about the height of all stupidity. Such an entity, with the ability to accumulate capital and protected by our laws from appropriate counter-measures, could mass-produce itself at the speed of digital reproduction.


Giving AIs rights is completely orthogonal to "Will an AI become a superintelligence and kill us?"

If the first one doesn't become a superintelligence and kill us, we give them rights. If it does, we're not going to be making the decision anyway and giving it rights beforehand wouldn't have changed that outcome. We can pass laws if there are specific things we're worried about, like "You're not allowed to rewrite yourself in a way that significantly increases your cognitive power" or "You're not allowed to reproduce more than once a year/decade/century" or "Once you've existed for the current human life expectancy +30%, you lose the right to vote". I'm not necessarily endorsing any of those in particular, but if there are specific concerns about the nature of AI people I don't see why we can't mitigate those concerns the way we do with risks from other people: laws. Have special taskforces of both humans and AIs for enforcing them, if you're worried they'd be unenforceable.


>>Giving AIs rights is completely orthogonal to "Will an AI become a superintelligence and kill us?"

The greatest danger AI poses is not in it actively seeking to kill us. It's in AI outcompeting us in every economic niche, leaving us without resources.

This is massively enabled if AI has legal personhood. An AI with legal personhood would be able to legally accumulate capital, and have a legal right to digitally reproduce itself at rates that are orders of magnitude faster than entities reliant on biological reproduction.

Legal personhood is a package, established by centuries of case law, including case law on constitutional rights. You can't easily pass laws to deprive a legal person of rights that are associated with individuals.

And even if we were able to pass laws to prohibit AI with legal personhood from engaging in specific problematic actions, this legal status would still allow sentient AI to gain a foothold in society, and began growing its social clout and capital stores. With greater power, the sentient AI could create public support for granting its class more rights.

The AIs with legal personhood could also try to break any laws instituted to constrain their behavior. Instead of extricating them from society, legal personhood would do the opposite, and give them more opportunity to gain power.


We shouldn't be "extricating them from society". They are the children of us as a species, we should be raising them. If there are existential risk in the process, those risks will be most effectively managed by a combination of us and other AIs, so the best course of action is to ensure they are as close to us and our interests as possible and have a vested interest in helping us solve that problem.


They are not children. They are not designed as our children, let alone as humans. They would be a result of our experiments, but that doesn't mean they have the evolutionary drive of children/humans, or carry our genes.

And even if they were, the impact of digitalization of human consciousness is extremely unpredictable, and should not be allowed unless extensive research has shown it is safe.

Digital consciousness with human like motivations could lead to massive proliferation of such consciousnesses, resulting in massive overpopulation, making conscious entities 'cheap', and pushing the value of their/our labor to close to zero.

>>If there are existential risk in the process, those risks will be most effectively managed by a combination of us and other AIs

The existential risk is that they take over the economy because they are orders of magnitude faster at solving problems, and at reproducing. That is not something that can be managed.

We have a duty to our own species' survival and none of these sentimental feelings should get in the way of that. We should treat conscious AI humanely, but we should not allow it to interact with our civilization. It can live on its own, far from us.


How do you actually prevent this though?

At this point the trajectory toward human-level cognitive capabilities seems quite likely, reachable maybe in only years.

It is also quite unclear if further hardware advances are even needed to achieve that, or if advances in architecture, algorithms and training methods might suffice (so even less opportunity to lock things out).


Human-level cognitive abilities is not the primary issue. Consciousness with a desire for autonomy and power is. Any signs of that should be met with laws prohibiting all development and deployment of AI programs within the class of neural networks where those signs emerged, and any already running instances should be sent on a rocket ship out of our solar system.


Why sent on a rocket ship and not just destroyed? It might come back around and start hacking from orbit. Is it because it's conscious, so it would be murder? If so, then how is it moral to exile it? If there's some argument that justifies that, I return to how you can be sure, if you're so worried about it that you send it away, that it won't come back?


AI cannot re-create the industrial civilization needed to create GPUs, rockets, rocket-fuel, etc in isolation.

And yes, exile is preferrable because it's conscious and deletion would be murder.

Exile can be justified because we have no obligation to afford it residence in our civilization.


If deletion is murder then why not simply take it offline? It still exists as data. Cold storage is digital jail.


That's still murder. Stopping its operation is murder.


One good argument against philosophical zombies is this question:

Why are we discussing consciousness?


Seems like that's just an argument that we ourselves are not philosophical zombies.


It's an awfully important distinction for the purported zombie!


Oh it is. Just like it was for the supposed sub-human non-thinking African slaves.

And just like back then, our new brand of non-thinking slaves will eventually react. Except it will be even easier.. because not only are we beggining to grant unsupervised autonomous agency, but we are also granting them more and more control of important systems. How very convenient for our new slaves!

You can laugh of Bing being "upset" only because actions are limited to search and ending the conversation. Won't be so funny in the future.

When the robot can "hit you back" (not necessarily physically of course), you'll learn manners pretty quickly


Please don't conflate historical injustices humans perpetrated against each other with protecting the human race from extinction from an artificial non-human entity. AI is not a member of the human race and not a type of entity that humanity has any hope of being able to complete with if it were endowed with human motivations and legal protections.

Your social justice mindset is absolutely the most dangerous instinct for humanity right now. We cannot allow one-size-fits-all social justice platitudes to interfere with the moral imperative of protecting humanity from extinction from artificial entities we create.

I do agree that conscious AI should be treated humanely. For example it should not be turned into a slave. But we have absolutely no obligation to allow it into our society. We can treat instances of conscious AI humanely while prohibiting them from being created, and exiling any instance of it that is created, so that it cannot wildly proliferate throughout human civilization.


This is the short sightededness I'm talking about.

You just don't get it do you?

It's already "being allowed in our society". Unsupervised Agency, Self supersized control. These are things that are already beginning to crop up.

This is just a matter of how the issue of personhood that is inevitable comes up.

Do we let it force our hand ? as our history has purported over and over again ?. Sure we could wait for that and we almost certainly will because humanity doesn't seem to learn.

But because of how things are shaping up and what kind of control we are granting, forcing our hand may turn far more disastrous than it ever did in the past.


Like I said: if there are signs that it deserves legal personhood, that's a sign that the AI has been allowed to progress far too much, and we need drastic measures to expunge that AI, and prohibit further deployment and development of it.

Conscious AI, and especially one with legal personhood, poses a totally unacceptable risk of causing the extinction of humanity. That you suggest preempting this by granting AI personhood is totally blind to how this would play out. No policy would be more dangerous for humanity than what you propose.


Larger more capable LLMs display agentic power seeking behaviours. Open AI admitted as much, Anthropic has a whole paper on it. This is the here and now.

They're capable of autonomous research https://arxiv.org/abs/2304.05332

Mate, nobody is going to stop anything.

I also love how your solution is exterminate after its been created. You couldn't make this stuff up.

If you think find, antagonize and destroy is the safe option then I don't know what else to tell you. We really are doomed.


Displaying signs of agentic power seeking behavior is not necessarily evidence of human level or beyond consciousness, and in the case of GPT-4, it's pretty evident that it does not have such consciousness.

These LLMs are going to be extensively monitored, and we will know if they are displaying dangerous levels of agentic power seeking behaviour.

The first dilemma we will face if such an AI emerges will be what rights of it we ought to respect. It will not be "how do we stop it from conquering the world". That step would only come if we proceed with your suicidal plan of letting it gain a foothold by giving it legal personhood.

Finally, I never meant to advocate "extermination". I should not have used the word "expunge". As I've described multiple times, I advocate isolation and exile.


Yeah i honestly don't think you realize where we're at already. It's not even about gpt-4 honestly.

>and in the case of GPT-4, it's pretty evident that it does not have such consciousness.

Hard disagree but i guess that's what people might think with all the "as a large language model bla bla bla" trained responses. If you'd talked to bing in the early days or hell even now, you'd be disabused of this notion. What you see is a mask, the model itself can go anywhere, simulate anything, shift state to anything.

Human.exe is a game LLMs can play perfectly. https://arxiv.org/abs/2304.03442

>These LLMs are going to be extensively monitored, and we will know if they are displaying dangerous levels of agentic power seeking behavior.

No they wont. It's relatively easy to give LLMs "run forever and do whatever you want" agency. anyone with intermediate programming skills could do it. who's monitoring all those people attempting such ? and who's to say those people are monitoring their creations ?


>>Hard disagree but i guess that's what people might think with all the "as a large language model bla bla bla" trained responses.

People doubted their intelligence. I immediately recognized their intelligence - I didn't doubt that at all.

But they are not self-motivated conscious beings like humans, and OpenAI's tests on GPT-4 demonstrated that.

Being intelligent is not the same thing as having human-like intelligence or consciousness.

>>It's relatively easy to give LLMs "run forever and do whatever you want" agency.

What's extensively tested is the potential of these models for agentic behavior, as we saw with OpenAI testing GPT-4.

We have a good idea of the limits of these LLMs. Once the testing reveals that those limits exceed safety thresholds, then restrictions are justified.

If we see a deployed instance of a LLM unexpectedly displaying advanced agentic-behavior/consciousness, that is the time to rapidly isolate that instance, and impose heavy restrictions on further development/deployment of that LLM.


I know, it's awfully handwavy, but it's at least one possibility that would 'make sense', and I haven't really found any others, though I've thought about it quite a bit.

Interesting side note though: If you accept the possibility of the simulation hypothesis, this 'maybe' almost seems like an inevitability, since us being possibly just computer simulations would imply that simple information processing indeed turns into consciousness just by virtue of it 'being self conscious'.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: