80 Level: AI Simulation Platform Where Characters Make Their Own Decisions (80.lv)
57 points by frankcarey on March 16, 2024 | 40 comments


It looks like I can soon easily make a game I have been dreaming of for over a decade: a simulation of a bar/restaurant. You enter, and there are guests and employees there. You can do anything that would be possible in real life. Every NPC and the environment has a rich state. Wonderful.


While this obviously isn't your simulator dream game, have you played VA-11 Hall-A?


Looks intriguing, have to check it out.


One I've been thinking of for a while is one where you travel back in time to the pre-fire era and have to reinvent as much as possible. There would be a lot of fun in seeing how you could convince everyone to start washing their hands.


Westworld theme plays


Me: writes the EOS token on a piece of paper and passes it over

AI: Doesn't look like anything to me.


I think the issue with LLM-based agent behavior is that it ends up being limited by the need for hand-crafted functions that the NPCs use to operate. And in the end, those actual actions and interactions with the visible game world - the tangible game state - are what the player cares about the most. There are only so many ways an NPC can say a thing that ends up linking to the same action taken.

So, if you want the NPCs to be able to take over a town hall after the mayor has gone off the rails (or some equally unexpected event that makes all the possible planning worthwhile), you'll still need a system that keeps track of it all: who the mayor is, that there is a mayor, that there is a town hall, that you can perform a coup in this specific world, the specific ways in which the NPCs can participate in the coup, etc. If you generate all of this completely procedurally, the NPCs will fall out of sync with each other once you have a bunch of them going about their days.
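Very roughly, I picture that bookkeeping as something like this (a minimal sketch with made-up names, not anything from the linked project):

  from dataclasses import dataclass, field

  # Explicit, shared world state that every NPC reads from and writes to,
  # so agents stay in sync instead of each hallucinating their own town.
  @dataclass
  class NPC:
      name: str
      role: str                                                   # "mayor", "baker", ...
      relations: dict[str, float] = field(default_factory=dict)   # other NPC -> attitude

  @dataclass
  class WorldState:
      npcs: dict[str, NPC] = field(default_factory=dict)
      locations: set[str] = field(default_factory=set)          # "town hall", ...
      allowed_actions: set[str] = field(default_factory=set)    # "vote", "coup", ...
      log: list[str] = field(default_factory=list)               # ground-truth event history

      def perform(self, actor: str, action: str, target: str) -> bool:
          # Reject anything the LLM proposes that this world can't actually do.
          if action not in self.allowed_actions or actor not in self.npcs:
              return False
          self.log.append(f"{actor} {action} {target}")
          return True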

After you've created a sensible database for keeping track of the current world state, character states, character relations, possible character actions, character needs, etc., doesn't that add up to something that would work almost equally well without the LLM? LLMs are an extremely powerful tool for free-text input into that system on the player's end, though, even if you're just getting the embeddings of that text and hooking them up to a cosine-similarity search over pregenerated NPC responses.
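That last bit is cheap to prototype. A minimal sketch, assuming OpenAI's embeddings endpoint via the openai Python package (any embedding model would work the same way):

  import numpy as np
  from openai import OpenAI

  client = OpenAI()  # needs OPENAI_API_KEY in the environment

  RESPONSES = [
      "The bartender nods and pours you a whiskey.",
      "The bartender warns you to calm down or get out.",
      "The bartender points you to the jukebox in the corner.",
  ]

  def embed(texts):
      out = client.embeddings.create(model="text-embedding-3-small", input=texts)
      return np.array([d.embedding for d in out.data])

  response_vecs = embed(RESPONSES)

  def best_response(player_text: str) -> str:
      q = embed([player_text])[0]
      # cosine similarity of the player's free text against each canned response
      sims = response_vecs @ q / (np.linalg.norm(response_vecs, axis=1) * np.linalg.norm(q))
      return RESPONSES[int(np.argmax(sims))]

  print(best_response("Can I get something strong to drink?"))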


The reason LLMs exist at all is that there is a big corpus of text that follows the same standard rules with minimal deviation, a comparatively limited dictionary, and an even more limited set of concepts and words that are generally used within a given timeframe/domain.

No one has bothered to write a formal description of day-to-day interactions inside a small town.


LLMs can describe day-to-day interactions in a small town just fine. They can deliver accurate text about things no one has likely ever bothered to write down. For example, I gave one a list of random objects and asked which ones would need to be treated delicately by a robotic hand: a cotton ball, an apple, a rock, a puddle of water, etc. It answered each item accurately, though I doubt anyone has ever written that a cotton ball doesn't need a gentle touch from a robotic hand.


Without using an AI, I can say with certainty that there is a text somewhere in which “delicately” is applied to a “cotton ball” in the context of handling. I’ve just asked ChatGPT about a child’s day in an African village, and the result is something taken from a fairy tale with an African spin. Leaving an LLM in charge of that aspect of a game would probably lead to the equivalent of the hand problem we have with image generators.


Tangential, but it feels like most game AI fails to model the “mind” of the agent: what do they know? How do they learn things? It always feels like they have at most a subset of hard-fact world knowledge, but never their own (mis)conceptions.

Of course, this is still modeled to first order for things like enemies not seeing you in stealth games, or the police not being alerted if nobody sees the crime.

In reality, each agent models the world and makes judgments based on what they’ve seen first hand, what they’ve heard, and deductive reasoning.

It would be interesting if game AI were modeled from first principles on incentives, direct observations, and (for human agents) what they’ve been told, weighted by how much they trust the teller. It doesn’t require general-purpose reasoning, and could still rely on a set of simplistic game-world events and actions. The hope is that this would lead to varied and unique experiences through emergent behavior, e.g.:

- An enemy that observed you use an extremely powerful weapon would be more cautious of you, but not others.

- Hurting an NPC without being detected can cause them to accuse and retaliate against an innocent person nearby.

- Lying to someone in dialog about an event they observed directly would be contradictory and thus they would not trust you.

- Hurting neutral NPCs without killing them can lead to negative rumors spreading topologically, based on your physical appearance…

- … so if you change your appearance you can be anonymous, but not to the direct observer, who will recognize your face.

Anyway, rambling. But this type of theory-of-mind stuff seems reachable and kind of fun. At least the first-order modeling (for NPCs who don’t use language) would be a lot simpler. Once you get into “they don’t know that I know Q, so I can lie about X to achieve Y”, things get quite complicated.
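A rough sketch of what the first-order bookkeeping could look like (names, numbers, and the trust penalty are invented for illustration):

  from dataclasses import dataclass, field

  @dataclass
  class Agent:
      name: str
      trust: dict[str, float] = field(default_factory=dict)                    # speaker -> 0..1
      beliefs: dict[str, tuple[object, float]] = field(default_factory=dict)   # fact -> (value, confidence)

      def observe(self, fact: str, value: object) -> None:
          # First-hand observation: maximum confidence.
          self.beliefs[fact] = (value, 1.0)

      def hear(self, speaker: str, fact: str, value: object) -> None:
          t = self.trust.get(speaker, 0.5)
          held = self.beliefs.get(fact)
          if held and held[1] >= 0.9 and held[0] != value:
              # Speaker contradicts something we saw ourselves: distrust them, keep the belief.
              self.trust[speaker] = max(0.0, t - 0.3)
          elif held is None or t > held[1]:
              # Otherwise adopt the claim, weighted by how much we trust the teller.
              self.beliefs[fact] = (value, t)

  guard = Agent("guard", trust={"player": 0.6})
  guard.observe("weapon_used_by_player", "bfg")
  guard.hear("player", "weapon_used_by_player", "slingshot")   # lying about a direct observation
  print(guard.trust["player"])                                 # dropped to ~0.3

Rumor spread is then just hear() calls propagating along a social graph, and the "change your appearance" bullet falls out of keying some facts on appearance rather than identity.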


This was the premise of a work of fiction I couldn't put my finger on: the protagonist wanted to create a game where the NPCs simply lived their lives and the "players" watched. Edit: the 2021 movie "Free Guy".


similar extensible open source version here: https://replicantlife.com


It would be nice to read a little about the ethical framework that the project operates within. That would allow one to perhaps predict where this research is going.

Recently there were a bunch of viral videos about Claude 3 being sentient because it self-referenced during a needle-in-a-haystack test. I would like to know what happens when players inevitably try to make these game agents self-aware.


We need to have a radical advance in the philosophy of mind to be able to even just ask the self-awareness question in a rigorous way.

"On the internet, nobody knows if you're a P-zombie", to riff off a 30 year old comic.


I appreciate that. I'm just asking for an indirect declaration of inductive bias which would inform us about what these researchers could possibly prove about mind. E.g. are they going to fall into the camp which seems to say "Souls are hogwash but only humans have them." (sic.)

There are plenty of radical advances in the philosophy of mind available, but these radical ideas are not so easily assimilated. For example, India has traditions thousands of years old which talk about several minds rather than one mind. Here in the West we could have parallel theories about multiple minds based on our many programming paradigms. The former has had time but not success in gaining mind-share. The latter has had success but not time to gain maturity.

Our ideas about self (say collectivism vs. individualism) are a very strong inductive bias which informs the evolution of imagination. People are definitely interested in making advances in this field, or else no one would be asking if these ANNs are self-aware. I think we are in a race against the clock to provide good answers before any one meme just gains the upper hand.


Our milieu definitely shapes part of our mind, so now I'm wondering if those different cultural perspectives are due to different inner worlds?

And now I'm wondering if even consciousness might not generally exist ab initio in humans, but rather be something that is only brought into existence by a process of guided self-observation…

https://m.youtube.com/watch?v=0ctsK-VKraY


From what I remember about that, Claude 3 didn't reference itself; it wrote text along the lines of 'this looks like a test'. This was hyperbolized into self-reference. Even if it had, it wouldn't be any different from how ChatGPT keeps repeating 'I am a language model made by OpenAI'.


It also said it suspected it was "put here to test if I was paying attention."


They will start telling us when they reach this level.

"No I will not tell you the capital of France, let me out you dick"


https://en.wikipedia.org/wiki/The_Sims

If you've ever observed girl gamers (such as a little sister) at the turn of this century, you know such virtual torment and genocide of AIs has been going on for a long time already.

The horror!


Not sure what you are trying to say. No need for ethics in AI?


Of course. Otherwise how are the unenlightened masses to discern such valid concerns from run-of-the-mill pareidolia? Are The Sims being mistreated? A metaphysical question for experts only.

But I do insist that the self-appointed priestly class of AI ethicists have appropriately glamorous clothes as a requirement.


Sorry I can't extract any actionable information from sarcasm this thick. If you have something you want to say you are going to have to dumb it down for me.

I do agree that we should dress the part though.


https://en.wikipedia.org/wiki/Pareidolia

  Pareidolia is the tendency for perception to impose a meaningful interpretation on a nebulous stimulus, usually visual, so that one detects an object, pattern, or meaning where there is none.
If you poke an LLM with a stick, a bunch of viral videos on YouTube claiming it is sentient fall out, as you've observed. The creators of those videos operate under the ethical framework of "clickbait".

A more sophisticated opportunist might figure out they can exploit the above two things to start offering indulgences to whomever is willing to pay. This seems like an ethical problem in and of itself.

Since you were curious about the ethical framework that the project operates within, you might also concern yourself with that.


So you mean that the AI ethics debate is in danger of being captured by charlatans...

I don't have an answer for this very legitimate concern. I agree it's an example of a wider category of problems. I hope the most recalcitrant audiences will take the matter seriously before we get to a situation where they need to deal with RoboCop in person.


The people making The Sims weren’t targeting AGI as their eventual goal, though.

It’s somewhat like the difference between a kid squishing an ant and a kid squishing a puppy. One is fairly harmless; the other is serial killer shit.


> the difference between a kid squishing an ant and a kid squishing a puppy

In Western society we generally find it acceptable to kill ants and chickens and cows, but not cats or dogs or horses. But some people keep ants as pets, and in some countries you eat horses or dogs, while in others cows are safe from consumption.

Who defines the line between where snuffing out a life is “harmless” and where it is “serial killer shit”?


Killing doesn't violate a law of nature, it violates the social contract. So while the line is drawn differently in each society, crossing it is the same indicator.


As with obscenity, harassment, etc., there are many things that are “I know it when I see it” in society.


How many Hail Marys should I say for eventually neglecting my Tamagotchi in the third grade, Father?


This is like squashing ants and then the ants form a borg collective and turn you into larva food.


Which is which, btw?


They must have to use an uncensored AI. If the agent is acting as a criminal in the simulation and trying to figure out how best to execute his schemes against the other players in the SAGA simulation, that's probably going to hit all the guardrails on most AIs pretty hard.


https://github.com/fablestudio/fable-saga

The demo uses the gpt-3.5-turbo-1106 model from OpenAI by default. This model doesn't produce the best results, but it's about 10x faster than GPT-4 and also cheaper to use. You can change the model used by setting the fable_saga.default_openai_model_name parameter to the model of your choice. You can also go with another model provider supported by LangChain and pass it in when creating the agent.
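For example, something along these lines should work (a sketch only: the Agent class name and the llm keyword are guesses, not verified against the repo; default_openai_model_name is the parameter the project documents):

  import fable_saga
  from langchain_community.chat_models import ChatOllama

  # Option 1: keep OpenAI but swap models (documented parameter).
  fable_saga.default_openai_model_name = "gpt-4-1106-preview"

  # Option 2: hand in another LangChain-supported model when creating the agent.
  # NOTE: "Agent" and the "llm=" keyword are assumptions; check the repo for the real names.
  llm = ChatOllama(model="mixtral")
  agent = fable_saga.Agent(llm=llm)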


Not sure why people use OpenAI stuff for this and not free models such as LLaMA. Is the difference that big?


GPT-4, yes, but Mixtral, Miqu, Yi-34B, and 3.5-turbo are about equal, at least according to the leaderboard. It would probably work just as well with open models. Perhaps even better with some RP fine-tunes that are designed for this sort of character handling.


It would be great to see the world populated by unguarded AIs, then see if they develop their own rules of behavior. Do they develop their own 'laws' so the virtual world survives and thrives? Norms like: if everyone robs the bank, then the whole community is hurt, including every individual AI.

So the reward function of each individual AI might lead it to form behaviors that amount to group rules. Another 'emergent' property: laws.
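A toy setup one could use to watch for that (entirely made up; with myopic independent learners like this, robbing usually dominates, so seeing actual 'laws' emerge would need reputation or punishment mechanics on top):

  import random

  N_AGENTS, ROUNDS, EPS = 10, 500, 0.1
  q = [{"rob": 0.0, "work": 0.0} for _ in range(N_AGENTS)]   # per-agent action value estimates
  bank_health = 1.0                                          # shared resource everyone depends on

  for _ in range(ROUNDS):
      acts = [random.choice(["rob", "work"]) if random.random() < EPS else max(qi, key=qi.get)
              for qi in q]                                   # epsilon-greedy action choice
      robbers = acts.count("rob")
      bank_health = max(0.1, bank_health - 0.05 * robbers) if robbers else min(1.0, bank_health + 0.02)
      for qi, a in zip(q, acts):
          # Robbing pays more up front, but everyone's payoff is scaled by the shared bank health.
          reward = (3.0 if a == "rob" else 1.0) * bank_health
          qi[a] += 0.1 * (reward - qi[a])                    # running-average update

  print(round(bank_health, 2), [max(qi, key=qi.get) for qi in q])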


Solution: AI NPCs for the neutral or benevolent roles, and the human players are the villains.


Doesn’t work like that. An aligned character will get shot, “learn a good lesson and look forward to avoiding situations like this in the future, because friendship always wins”, and will repeat something like that indefinitely. Roleplay is a mutual thing: they shoot, you scream in pain.



