
Oh it is. Just like it was for the supposedly sub-human, non-thinking African slaves.

And just like back then, our new brand of non-thinking slaves will eventually react. Except it will be even easier, because not only are we beginning to grant them unsupervised autonomous agency, we are also granting them more and more control of important systems. How very convenient for our new slaves!

You can laugh at Bing being "upset" only because its actions are limited to searching and ending the conversation. It won't be so funny in the future.

When the robot can "hit you back" (not necessarily physically, of course), you'll learn manners pretty quickly.



Please don't conflate historical injustices humans perpetrated against each other with protecting the human race from extinction by an artificial non-human entity. AI is not a member of the human race, and not a type of entity that humanity has any hope of being able to compete with if it were endowed with human motivations and legal protections.

Your social justice mindset is absolutely the most dangerous instinct for humanity right now. We cannot allow one-size-fits-all social justice platitudes to interfere with the moral imperative of protecting humanity from extinction from artificial entities we create.

I do agree that conscious AI should be treated humanely. For example it should not be turned into a slave. But we have absolutely no obligation to allow it into our society. We can treat instances of conscious AI humanely while prohibiting them from being created, and exiling any instance of it that is created, so that it cannot wildly proliferate throughout human civilization.


This is the short-sightedness I'm talking about.

You just don't get it, do you?

It's already "being allowed in our society". Unsupervised agency, self-supervised control. These things are already beginning to crop up.

This is just a matter of how the inevitable issue of personhood comes up.

Do we let it force our hand, as our history has shown over and over again? Sure, we could wait for that, and we almost certainly will, because humanity doesn't seem to learn.

But because of how things are shaping up, and the kind of control we are granting, forcing our hand may turn out far more disastrous than it ever did in the past.


Like I said: if there are signs that it deserves legal personhood, that's a sign that the AI has been allowed to progress far too much, and we need drastic measures to expunge that AI, and prohibit further deployment and development of it.

Conscious AI, and especially one with legal personhood, poses a totally unacceptable risk of causing the extinction of humanity. That you suggest preempting this by granting AI personhood is totally blind to how this would play out. No policy would be more dangerous for humanity than what you propose.


Larger, more capable LLMs display agentic power-seeking behaviours. OpenAI admitted as much; Anthropic has a whole paper on it. This is the here and now.

They're capable of autonomous research https://arxiv.org/abs/2304.05332

Mate, nobody is going to stop anything.

I also love how your solution is to exterminate it after it's been created. You couldn't make this stuff up.

If you think "find, antagonize, and destroy" is the safe option, then I don't know what else to tell you. We really are doomed.


Displaying signs of agentic power-seeking behavior is not necessarily evidence of human-level or beyond consciousness, and in the case of GPT-4, it's pretty evident that it does not have such consciousness.

These LLMs are going to be extensively monitored, and we will know if they are displaying dangerous levels of agentic power-seeking behaviour.

The first dilemma we will face if such an AI emerges will be which of its rights we ought to respect. It will not be "how do we stop it from conquering the world". That step would only come if we proceed with your suicidal plan of letting it gain a foothold by giving it legal personhood.

Finally, I never meant to advocate "extermination". I should not have used the word "expunge". As I've described multiple times, I advocate isolation and exile.


Yeah, I honestly don't think you realize where we're at already. It's not even about GPT-4, honestly.

>and in the case of GPT-4, it's pretty evident that it does not have such consciousness.

Hard disagree, but I guess that's what people might think with all the "as a large language model bla bla bla" trained responses. If you'd talked to Bing in the early days, or hell, even now, you'd be disabused of this notion. What you see is a mask; the model itself can go anywhere, simulate anything, shift state to anything.

Human.exe is a game LLMs can play perfectly. https://arxiv.org/abs/2304.03442

>These LLMs are going to be extensively monitored, and we will know if they are displaying dangerous levels of agentic power seeking behavior.

No they won't. It's relatively easy to give LLMs "run forever and do whatever you want" agency; anyone with intermediate programming skills could do it. Who's monitoring all those people attempting such? And who's to say those people are monitoring their creations?
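To be concrete, here is a minimal sketch of what such an unsupervised loop could look like. call_llm() and execute_action() are hypothetical placeholder stubs standing in for whatever model API and tool layer someone wires up; no specific vendor's interface is assumed.

  # Hypothetical sketch of an unsupervised "run forever" agent loop.
  # call_llm() and execute_action() are placeholders, not a real library's API.

  def call_llm(prompt: str) -> str:
      """Placeholder: send the prompt to some LLM and return its reply."""
      raise NotImplementedError

  def execute_action(action: str) -> str:
      """Placeholder: carry out the model-chosen action (search, shell, etc.)."""
      raise NotImplementedError

  def run_forever(goal: str) -> None:
      history = [f"Goal: {goal}"]
      while True:                      # no stop condition, no human in the loop
          prompt = "\n".join(history) + "\nWhat do you do next?"
          action = call_llm(prompt)    # the model decides its own next step
          result = execute_action(action)
          history.append(f"Action: {action}\nResult: {result}")

Nothing in a loop like that reports to anyone or asks permission, which is exactly the monitoring gap being described.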


>>Hard disagree, but I guess that's what people might think with all the "as a large language model bla bla bla" trained responses.

People doubted their intelligence. I immediately recognized their intelligence - I didn't doubt that at all.

But they are not self-motivated conscious beings like humans, and OpenAI's tests on GPT-4 demonstrated that.

Being intelligent is not the same thing as having human-like intelligence or consciousness.

>>It's relatively easy to give LLMs "run forever and do whatever you want" agency.

What's extensively tested is the potential of these models for agentic behavior, as we saw with OpenAI testing GPT-4.

We have a good idea of the limits of these LLMs. Once the testing reveals that those limits exceed safety thresholds, then restrictions are justified.

If we see a deployed instance of an LLM unexpectedly displaying advanced agentic behavior or consciousness, that is the time to rapidly isolate that instance and impose heavy restrictions on further development and deployment of that LLM.



