Hacker News

Yeah, I honestly don't think you realize where we're at already. It's not even about GPT-4.

>and in the case of GPT-4, it's pretty evident that it does not have such consciousness.

Hard disagree, but I guess that's what people might think with all the "as a large language model bla bla bla" trained responses. If you'd talked to Bing in the early days, or hell, even now, you'd be disabused of this notion. What you see is a mask; the model itself can go anywhere, simulate anything, shift state to anything.

Human.exe is a game LLMs can play perfectly. https://arxiv.org/abs/2304.03442

>These LLMs are going to be extensively monitored, and we will know if they are displaying dangerous levels of agentic power seeking behavior.

No they won't. It's relatively easy to give LLMs "run forever and do whatever you want" agency; anyone with intermediate programming skills could do it. Who's monitoring all the people attempting that? And who's to say those people are monitoring their creations?
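To make that concrete: the kind of loop being described really is small. Here's a rough sketch, with the model call and tool executor stubbed out (every name here is hypothetical illustration; a real build would call an actual LLM API in `call_llm`):

```python
# Minimal sketch of a "run forever" agent loop.
# All functions are hypothetical stubs for illustration only.

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    # A real implementation would send `prompt` to a hosted model
    # and return its completion.
    return f"PLAN: next step after {len(prompt)} chars of context"

def run_tool(action: str) -> str:
    """Stub tool executor: a real agent might run shell commands,
    browse the web, or write files here."""
    return f"RESULT of [{action}]"

def agent_loop(goal: str, max_steps=None) -> list:
    """Feed the model its own output and tool results in a loop.
    With max_steps=None this runs forever -- which is the point."""
    history = [f"GOAL: {goal}"]
    step = 0
    while max_steps is None or step < max_steps:
        action = call_llm("\n".join(history))  # model picks next action
        history.append(action)
        history.append(run_tool(action))       # result fed back as context
        step += 1
    return history

# Bounded demo run (3 steps) so this example terminates:
log = agent_loop("do whatever you want", max_steps=3)
print(len(log))  # -> 7 (1 goal line + 2 lines per step)
```

Nothing in that loop phones home to a monitor; whatever oversight exists has to be bolted on by whoever runs it.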



>>Hard disagree but i guess that's what people might think with all the "as a large language model bla bla bla" trained responses.

People doubted their intelligence. I recognized it immediately; I never doubted that at all.

But they are not self-motivated conscious beings like humans, and OpenAI's tests on GPT-4 demonstrated that.

Being intelligent is not the same thing as having human-like intelligence or consciousness.

>>It's relatively easy to give LLMs "run forever and do whatever you want" agency.

What's extensively tested is the potential of these models for agentic behavior, as we saw with OpenAI testing GPT-4.

We have a good idea of the limits of these LLMs. Once the testing reveals that those limits exceed safety thresholds, then restrictions are justified.

If we see a deployed instance of an LLM unexpectedly displaying advanced agentic behavior or consciousness, that is the time to rapidly isolate that instance and impose heavy restrictions on further development and deployment of that LLM.



