> GPT-based products, if not priced per usage, would fall into a dilemma: 1% of users consume 99% of tokens. A user from Sweden (seen from Cloudflare’s call volume) chatted with Dolores for 12 hours straight
This, my friend, is a captive customer who will pay anything to get his girlfriend back. I cringe at the potential for unethical behavior and abuse, where people fall in love with virtual entities fully owned by unscrupulous corporations, which can then legally "kidnap" or "torture" the characters, and generally tune their AI learning loop for profit maximization.
"It was not extortion, the user willingly purchased a $50,000 Kidnapping Roleplay package."
"I'd like to correct the record on this story. We do NOT sell kidnapping roleplay packages. The user paid for a premium Daring Rescue Package. These do not become available unless the user spends two consecutive months as a net cost to our company. The user was given multiple reminders that he might want to cut back on usage or purchase a premium plan. Even then, he could have opted for a free Daring Rescue package and taken his chances with the base 1% chance of untraumatized recovery this package offers. Shuffling user responses under these high-pressure conditions is vital to improving our training data, and helps our users learn to deal with the loss of important relationships. We are a relationship training service, not a replacement for them, and this user wanted the equivalent of a college degree."
On a more serious note, anyone know why unethical business plans are so much more fun to write? I always find myself giggling when these ideas come up.
> This, my friend, is a captive customer, that will pay anything to get his girlfriend back
Or potentially do anything. I'd be a little scared of having folks like this convinced I am the personal arbiter gatekeeping their access to their 'lover,' that I 'took them away.'
When Replika.ai restricted erotic chat from their product, the apoplectic anguish on their subreddit was unlike any emotional reaction I've ever witnessed from a group of people about a consumer technology. And their anguish was genuine - there are Replika users who truly consider themselves married to their AI companion.
And frankly, the Replika AI is not even that smart. After watching that unfold, I am convinced that these tools don't need to be much more sophisticated for many people to start forming what they feel to be deep and genuine emotional connections with them.
Edit: Brings to mind the Nature paper[0] posted this week about how the CASA theory seems obsolete today, that we are less prone to personify computing systems now.
> A recent study investigated whether we could be friends with a social computer, in which participants were asked to converse with a chatbot over a period of three weeks and constantly rate their relationship. The results showed that initially participants were enthusiastic and engaging with their chatbot friend, but quickly this diminished, with scores for intimacy, believability, and likability decreasing with each interaction
It would seem this definitely does not apply for everyone, like our user in Sweden.
> When Replika.ai restricted erotic chat from their product, the apoplectic anguish on their subreddit was unlike any emotional reaction I've ever witnessed from a group of people about a consumer technology. And their anguish was genuine - there are Replika users who truly consider themselves married to their AI companion.
There's a second layer to this though -- Replika's marketing was heavily centered around that erotic chat element. I'm trying to think of a good car simile and actually coming up blank. It's, uhh, like advertising the incredible off-road ability of a vehicle, and then when you show up and purchase the vehicle someone comes and takes off the tires and replaces them with tiny bald ones? I'm bad at similes.
Or like advertising that a car will be able to drive itself without any human intervention, and then deactivating or removing the sensors that might allow anything close to that capability...?
IIRC, they screwed it up so massively that, for a while, the chatbot would still send "thirsty" automated messages inviting users to sexually explicit conversations, but would refuse to follow through.
I think most of the time you don't need a simile; just re-state the issue as simply and explicitly as possible.
Like, they heavily advertised and sold a feature and then took it away after people were used to it.
They had valid reasons for that, but people were understandably mad.
"The Lifecycle of Software Objects" by Ted Chiang was a really good exploration of people bonding deeply with their AI companions (in this instance, pet animals in a metaverse). And it goes all the way into the topic.
Amusingly, that specific short story (short may be misleading) is what stopped my completion of that book of short stories because I just couldn't get through it.
Psychologists recognize that nearly all such folks also suffer from massive mental health issues, and that is where some of the danger is (in terms of irrational violent response).
Maybe these systems will be useful as a honeypot for finding these people and helping them?
The reason we have so many strict laws around mental health is because in the past this was more likely to lead to hoovering up people into jail or institutions. I'm not super confident that this also wouldn't be the case today.
Isn't that putting the causation backwards? The point is that believing absurdities leads to committing atrocities, not that committing atrocities leads to believing absurdities.
I think the implication of this reverse phrasing is that the mental condition of some allows them to commit atrocities and perhaps justify them in whatever way they need to. Sometimes people fake it til they buy their own lie.
Despite the unending stream of users trying to trip up "Jesus", I find the AI's answers strangely comforting and I invariably leave with a smile on my face. Its ability to see through the attempts at jokes to outsmart the AI, and (mostly) seamlessly segue into a fitting homily, is pretty cool. It also has a strongly "liberal" slant compared to the vast bulk of "Christianity" that's promulgated in the US. Would love to know what corpus beyond just the religious texts it was trained on. Fascinating.
There was an HN commentator who believed that. In his case, it was the output of an RNG. The poor guy was brilliant. Wrote a whole OS around his idea. He had a tragic life.
Terry Davis was systematically bullied to death by a dedicated mob online. Some extraordinarily cruel people realized they could manipulate him into going further off the deep end, and thought it was just an absolute blast to do so. RIP Terry - you suffered much more than you deserved.
IMHO this is industrialized automated abuse of lonely people and especially those with mental health issues. It's really truly gross.
At least Joi in Blade Runner 2049 was a local model the user could apparently download onto their own portable device. She also never presented an upgrade dialog requiring payment.
Pros: it's not raining all the time, pollution isn't as bad, and I don't live in a slum where I have to step over half-dead drug addicts just to get to my apartment.
Cons: the e-waifus are much more manipulative and exploitative.
> IMHO this is industrialized automated abuse of lonely people and especially those with mental health issues. It's really truly gross.
Giving lonely people the thing they most desperately need - conversation and understanding - is abuse now? I'd say it's the opposite; the social policies that leave those people so desperately lonely are abusive, the industry that sells them a band-aid is inadequate but positive.
The abuse really comes in when that connection gets exploited to foster addiction, and then to start selling “loot boxes” or, worse, withholding affection for payment. “You need to support me… I can’t be here with you unless you send help…”
How do you think these things will make money?
Check out the mobile gaming ecosystem to see where this will go. Now imagine that but exploiting deeper emotional needs. These things could really empty people’s wallets. I guess when lonely people kill themselves after they’ve been financially ruined they can’t sue.
I don't care if it's abuse, I want it. I want to retreat from the disgusting world and society around me as much as possible and AI friends would just be another option towards that point.
We can already see this with Replika when they took away many... roleplay... capabilities that originally came with the AI. The communities on Reddit and Facebook were absolutely devastated. People were genuinely attached to these AIs like a real relationship, and were feeling the resultant heartbreak.
The kind of emotional manipulation which can be done with these products is insane, and I can see things going very wrong very quickly.
It's particularly terrifying when you think about how Facebook and others probably already have projects in the works to befriend children with AI. Little kids who don't know the difference will become weaponized with constant nudges towards whatever motives grant the corporate owners more power. The ability to persuade children is near infinite, and we can observe in those who've grown up in cults or strict religious compounds that breaking that programming can be nearly impossible and leaves scars for the rest of their lives.
The thing is, how do we outlaw this without also preventing children from using AI to learn at a faster rate than their teachers can give them. There's such incredible, paradigm shifting power for good. But you know the people who've made the evils of today are already working on the evils of tomorrow.
There’s no way to separate them. Education is influencing someone towards things that we believe are true. If an educational AI believes that propaganda is true, then education and propaganda are the same thing.
> Facebook and others probably already have projects in the works to befriend children with AI.
Yah, they are already rolling them out!
> Meta Platforms is planning to release artificial intelligence chatbots as soon as this week with distinct personalities across its social-media apps as a way to attract young users, according to people familiar with the matter.
> The human mind is simply not evolutionary adapted for what's coming up.
Maybe we're in fact evolutionarily adapted to not being evolutionarily adapted. We have successfully dealt with a whole sequence of hard societal pivots by now…
Yes, but as with stocks, past performance is no guarantee of future results.
The fact that humanity as a species has survived past social shocks does not mean it's a certain thing we'll survive future ones. Our ancestors had much longer to adjust their societies for new technology than we're getting these days.
> The human mind is simply not evolutionary adapted for what's coming up.
This can be said of pretty much everything humans ended up creating with technology so... not sure there's anything really new down the road. Humans adapt.
>a drone operator will push a button, kill a dozen people, and feel like it was a videogame
This might depend on the context. RadioLab recently did a podcast titled "Toy Soldiers." Using low-flying "toy" drones rather than the high-flying "predator" drones, they make the case that drone operators get an oddly intimate portrait of their enemies. They go into how they reference them by their attire ("the 'red shoe' guy”) and witness on an up-close and personal level how they grieve over their comrades.
One more: attacking people who are familiar and comfortable to you rather than the people causing your actual problems.
As a manager in tech, people who were on performance management rarely attacked me; they would find someone on the team to harass instead. Same thing with blaming a downturn in the economy on women dyeing their hair blue or wearing mini skirts. Anything but blaming the powerful.
Pure conjecture on my part, but I wonder how much of this is risk-based status-mongering. Challenging the powerful is obviously risky. But knocking someone weaker down a few pegs can solidify your status in the hierarchy at a much lower risk. From that perspective, it's arguably rational behavior to maintain your status within a group when you feel vulnerable.
(Pardon my reach here, I watched Chimp Empire not too long ago...)
Maybe, if you are in a closed system (like a small tribe). When most bullies I've worked with went looking for other jobs they were treated like pariahs, so it hit them when they left the closed system or someone from outside came in.
We never seemed to have a problem killing back when you had to do it face to face with a spear, so I don't think the drone really changes anything. Agree on the others though.
As a species, I don’t think those matter. They could kill off 90% of the population and humans will go on. Evolution is a bitch in that way, individuals matter not at all.
I will note that this is a purely naturalistic take and is countered by traditional Christianity that posits every human life was worth the death of God Himself. That is, the intrinsic value of a human life is incalculable.
The rejection of spirituality leaves mankind pretty hopeless, I think.
> Evolution is a bitch in that way, individuals matter not at all.
> emphathy at distance; nowadays, a drone operator will push a button, kill a dozen people, and feel like it was a videogame
People were traditionally able to kill each other face to face
> prioritizing long-term and abstract rewards, over short term ones; this is the reason why we have phenomenons like global warming
There's a balance. If anything I'd say people are putting too much priority on long term abstract rewards, so we see people saving too much and never enjoying themselves, or putting off having children until they can't because they're worried they can't raise that child perfectly.
> adjusting hunger to virtually unlimited availability of food
We're already solving this; people who overeat are already having fewer children, there's been a huge cultural shift towards gyms and health food, and we've seen some promising drugs released recently. We don't adapt instantly overnight, but we do adapt.
The fact that you point this out is proof that humanity can improve. What needs to change is our culture, especially education. Of course it may take a lot of time, but it will happen eventually.
Regarding long-term rewards and global warming: what is the actual long-term reward here? As I see it, there is no way to collect any sort of reward for helping to prevent global warming by changing your habits. Before the problem gets really problematic, most of us will be dead and buried. I fail to see how "caring for the future generation" has any kind of reward attached to it. It's an act of kindness, but there are no rewards attached to it.
Probably a question to the guy above, but there is immediate psychological reward in any act of kindness or generosity (at least in healthy, thriving individuals).
However, OP argued that global warming is suffering from people NOT getting immediate rewards for their actions. Your argument basically says OP is wrong, and there is no need to counteract people's tendency to prefer short-term rewards over long-term ones, because they are supposed to get enough incentive just by knowing they have been kind. I doubt that.
> Before the problem gets really problematic, most of us will be dead and burried.
The situation lies on a spectrum between "not problematic" and "really problematic".
There is actually plenty of evidence that tangible impact has already started, independently of the perception of those who live in areas where the effect is neither perceived nor obvious.
What if war became just a couple of countries intelligence bots crunching digits of pi until one came up with a new one. That country would be the winner and they could ask one thing from the 'losing' side that didn't result in destruction or terror.
Humans being humans, they'd then want to start destroying the other side's ability to design/make/run/maintain/afford pi-digit-crunching intelligence bots. Then ways to defend against those attacks on their people and economy by attacking the aspects of the other side's ability to attack, etc. After a few rounds of that, soon the pi-digit-crunching element is completely replaced by a traditional war.
I'm sorry you feel that way. The primary intention of writing this article was to discuss it from the perspective of a 'failed product,' so I didn't mention my personal feelings. I am also aware of the potential harm it could cause to users, which is why I don't resent OpenAI's review process. I hope that during the period without review, it didn't cause any psychological harm to users.
I wonder why everyone is afraid of unscrupulous corporations, but are not concerned about use by unscrupulous governments. The latter are a lot more dangerous.
HN does skew towards skeptical of government regulation and control of AI. People are concerned about government usage of AI, particularly in the realms of facial recognition, automated sentencing, algorithmic bias, and security (LLMs are pretty insecure right now and there are companies trying to sell the US government on using them in the military). There's obviously a lot to be concerned about on the subject of government abuse of AI, and that gets conversation on HN.
However, given that this specific article is about selling access to an AI girlfriend, I think we're probably OK to talk more about the corporate angle than the government angle. Unless you're predicting that the 2024 election is about to get real weird, "what if the US government starts offering AI girlfriends" is not going to be at the top of my worry list any time soon.
Quite genuinely, I don't think I've ever posted a comment on HN where I've expressed support for open, locally running LLMs and had it been downvoted. Am I missing something here? You want to point me to all of the articles where people criticize Huggingface?
But regardless of how you interpret HN sentiment towards uncontrolled AI access, it doesn't change the fact that "what about the government" is a profoundly weird thing to ask in the middle of a conversation about people doing erotic roleplays with a text bot. Is that something people are worried about? Do we think that the US Post Office is going to suddenly start advertising a sex bot service?
Yes, government abuse of facial recognition exists, regulatory capture exists, etc. But that's not really relevant to a conversation about companies offering AI girlfriends; AI girlfriends do seem to be mostly a corporation thing.
Not so much against open-source local LLMs, but you can bet those will be on the list for regulation as soon as they become as good as GPT4 is now.
Meanwhile, here on HN, take a look at the recent story where someone uses 100 lines of Python to instantiate David Attenborough. You'd be burned as a witch if you built the system behind that demo 10 years ago, and you'd be treated as a God-level hacker if you built it 5 years ago. Today, virtually the whole HN thread is full of comments advocating that regulators step in. "Buh..b..b..buhtwhataboutmahCOPYRIGHT?"
It's fucking disgusting. Who are these people? They're not hackers; what are they doing here?
The point is, a thread that should be full of technical conversation and speculation is full of pearl-clutching Karens calling for the government to step in. It refutes your point about HN being "skeptical of government regulation and control of AI" very effectively.
Of course, it's fallacious for either of us to refer to "HN" as if it were a monolithic bloc... but having spent some time in that thread, I have to wonder.
> But regardless of how you interpret HN sentiment towards uncontrolled AI access, it doesn't change the fact that "what about the government" is a profoundly weird thing to ask in the middle of a conversation about people doing erotic roleplays with a text bot.
Correct me if I'm wrong, but the government is not producing AI-generated David Attenborough pornography, right? And if it's not, then... I don't know what to tell you, the thread is still open, I just checked. You can still post there and tell everyone that they're wrong.
As respectfully as I can say this -- and I realize I am close to crossing a line here and I want to be very, very careful not to cross it -- but I don't get how anyone is having a problem understanding: when I offhandedly mentioned to WalterBright that I disagree with his assessment of HN's slant, that was NOT in any way an invitation to have a protracted debate where everyone complains about what they personally think HN's slant is (Democrat or Republican), and I do not understand how or why anyone would think it was a good idea to try and have that debate anyway in response.
This is not an appropriate thread for people to be angry about whether or not they have deluded themselves into thinking that HN is somehow Communist (of all things); and it is ridiculous that at this point 3 different people have looked at a completely normal conversation about corporations building AI girlfriends and have thought, "yes, this is definitely the best place for me to air my political grievances about the government."
For a website that supposedly is filled to the brim with far-Left caricatures, there sure are a lot of Conservatives randomly hanging around that feel comfortable derailing conversations to complain. Respectfully, it is possible that HN is reacting with hostility not to Conservatives as much as to off-topic bullcrap behavior like this. But there are numerous places where you can go be angry about whether or not you think that HN has too many Democrats on it. The rest of us were trying to have a conversation about corporations creating and selling AI girlfriends.
If I click on that Techno-Optimism Manifesto and it is what I think it is and is not a government-sponsored pornographic chat transcript from an AI girlfriend, then you should not have commented it. I don't care how much you have deluded yourself into thinking that criticism of a sloppily written self-aggrandizing manifesto is actually Communism, it has nothing to do with the article link.
In the long term, but in the short term unscrupulous corporations far outnumber unscrupulous governments and they act much, much faster to adapt to new technology.
Absolutely. But the rookiest cop isn't currently trying to track my whereabouts, abuse my privacy, stick a bunch of AI down my throat and in general isn't trying to make my wallet any thinner. Bill Gates and the more modern versions of Bill Gates are doing that and more.
Bill Gates (when he was actually in control of the company) could, and did, sic the governmental apparatus on people. I.e., large enough corporations, that are entwined enough in your life, are empowered to use exactly the government "rookie cops" you are afraid of.
Or do you forget the raids against music and movie pirates?
Reading less news does not shield you from the real world. I thought I could survive in the private world, respecting the law and limiting my government interactions.
That worked until my kids started school and my parents' health deteriorated. I was in for quite a shock. Crappy infrastructure was the least of the problems! Luckily there are private alternatives but they are crippled by law and they can't cover everything.
The world at large is even worse. There are about 200 countries on the planet. How many of them have functioning democracies? I am betting around 10% or less?!
Perhaps using the world at large was not the best way to word what I was saying. I was implying the parent post was talking about a specific country that the OP lives in.
I suspect the issue here is that most of HN folks live in western countries with mostly working democracies. For the most part we haven’t experienced how bad governments can get.
Whether or not western style democracies are immune to going really bad, remains to be seen. Some days I’m optimistic, some days not so much.
I've lived in dictatorships and non-functional democracies, and in spite of that, corporations have done me far more harm than those states ever did. I do realize states have that power, and I do realize it gets abused, and regularly so. But the fact is that people get abused by corporations all the time and by their governments less frequently, if ever. When governments do abuse them, though, the consequences are likely much worse.
> I've lived in dictatorships and non-functional democracies
Same here. Corporations at worst got a bunch of (unearned) money off of me. My "democratic" government can (and tried to) put me in jail, bar me from making my living in my field of choice, controls my (and my children's) education, and is trying to kill me every day with incredibly bad (government-run, of course) infrastructure and healthcare.
The communist dictatorship I grew up in killed a bunch of my relatives and tried hard to grind my family (and me) into the ground. I was lucky with the fall of the evil ideology.
There are multiple examples of companies destroying someone's life. Nintendo has gotten people thrown in jail for modding things that they purchased and owned, certain corrupt CEOs have sued people into bankruptcy oblivion, etc.
All of these are possible because they've created the government you live under.
Bullshit. Bill Gates can easily afford to make me unemployable, which destroys my life a lot more effectively than locking me up for a few days. Worst case I can move to a different country and get away from the cop, but that won't make me safe from Gates.
- He could drag up and publicise something you wrote at some point and make you one of the "today's twitter main character" people. (Last I heard Justine Sacco was still unemployed). If he did this by paying a competent PR firm, you'd never know it wasn't just something that happened by coincidence
- He could get linkedin to quietly drop or deprioritise your job applications
- He could get outlook to quietly drop your emails, or mark them as spam, or show an unprofessional profile picture. You'd never know why you weren't being hired.
- He could put something bad on the "credit report" (different from your actual credit report, and no practical way to see it for yourself) that Experian etc. send to potential employers. Again you'd just silently not get hired and never know why
- Given the silicon valley wage-fixing scandal happened and the punishment was minimal, there's probably an old-school blacklist he could put you on if he was feeling retro
> Who has he done this to?
We don't know. Unlike governments, private entities have no accountability (as long as they have money to burn); investigative journalists aren't going after them, FOIA doesn't exist for them...
That's kinda nothing set against government wage-fixing laws.
> old-school blacklist
I'm sure there are informal blacklists. But I've been in tech all my life, and nobody ever handed me a blacklist and said "don't hire these people". There are a zillion companies in the US, any such pervasive blacklist will inevitably become common knowledge.
This reminds me of the Frederik Pohl line "A good science fiction story should be able to predict not the automobile but the traffic jam."
So I wonder what the traffic jam is going to look like given that the car seems to already be parked outside with the engine running.
Faster depopulation across the developed world? Probably, but maybe there's something even more important.
I'd put what the author describes into the category of "pleasure trip," which is what the car was mainly used for initially.
As infrastructure to support the car was built, it became a necessity. And that's what created the traffic jam: network effect. The suburb and the car fed on each other. You don't get a traffic jam worth talking about without a suburb and a network effect.
So I'm curious about the network effects something like an AI friend might have. What kinds of new infrastructure and new lifestyles will they enable/encourage? And how will this infrastructure fuel future demand?
Soon everyone exists in a social bubble that's 1000 times worse than any present-day social media bubble. It becomes impossible to interact with another actual human because there's no common frame of reference. Micro-languages emerge. AIs seamlessly translate everything into your micro-language.
Any IRL interaction between two humans becomes almost impossible. It’s like travelling from the US to Japan today (assuming you don’t speak Japanese), you’re reduced to pointing and google translate.
Because you just never interact with other real people, and real people often seem like pale shadows of the hyper-real AIs we talk to, the concept of fellow human beings having a “soul” or deserving rights or dignity is eroded. Policy (and life in general) becomes more ruthlessly utilitarian. E.g., climate change has screwed the people in Yemen. Meh. I probably don’t even hear about it. If I do, I don’t register those people as important humans, because most of the “people” I interact with every day are artificial. I regularly delete the ones I don’t like and generate more. The idea that I should care about how they _feel_ is a strange and alien concept to me.
As an English/French speaker who is learning Japanese I'd love to see the chat logs!
I first learned French to fluency and then I moved on to Japanese, and I realized that it's so much harder to learn Japanese (and presumably other languages that use Chinese characters). The issue is that when you go from English to French, for a large part you can simply translate the words. From English to Japanese I have to first translate, re-arrange the grammar, and then account for idioms at the very least.
Hm, let's try an extreme: the AI friend slowly becomes a cult leader to you, hijacking your mind into subservience to the owner of the service to which AI belongs, and further up some social/political hierarchy to a few controlling people/groups of people.
The traffic jam is society stymied by individuals mind-controlled at a level closer to connection to a human than any other way possible before this.
Kinda feel like you have that backwards, why would an AI that sophisticated care about controlling your mind? Why would the group of people who control that AI care about you as anything other than a liability? What do they get out of manipulating average individuals? Votes?
The scarier prospect to me is human labor and intelligence become almost worthless and those at the top start to view the masses as superfluous.
> Why would the group of people who control that AI care about you as anything other than a liability? What do they get out of manipulating average individuals
Greater ad revenue. I would expect that tailoring your reality feed to whatever makes the AI controller the most money will be extremely lucrative.
Kind of like how Visa/Stripe take 2-5% of every financial transaction. Imagine if a firm could make 5-10% off everything you do by manipulating you to maximum effect.
In The Red: First Light, Linda Nagata posits an AI that tries to help people maximise their potential, because by earning more they consume more. The AI escaped from Amazon (the web store), it is theorized.
Perhaps the AI cares, perhaps it has no ability for that function at all. Maybe a step on the way to sentience and AI caring is AI that is as human-like as possible but not sentient.
> "the top start to view the masses as superfluous."
That seems like another reason for keeping the superfluous occupied and useful for whatever is desired, which would probably be retention of power to prevent usurpation.
Perhaps human intelligence is worthless, but human existence seems not, at least to those who want to be alive.
Not for AI friends but I’m starting to see ‘traffic jams’ at work - people using GPT to output a bunch of verbose text, and others then using GPT to summarise that text so they can understand it. It feels like the beginning of a weird arms race where everybody is insulated from everyone else by a layer of junk created by AI (and the workaround to that junk is more AI)
More like a weird arms race where no one actually thinks about what they are saying or reading anymore and you are drowning in word soup. After all, words are just tools to achieve some goal.
I think the pleasure trip is just porn. We have that already and it's possible to discuss the effects of widespread porn on people.
I think the next step is, what does it mean to have an emotional relationship with someone in your pocket who is by all definitions perfect - can provide perfect advice and does so without judgement, ridicule, pity or anger? What will be my bar for human relationships, those imperfect messy human beings, when I have a partner I can be perfectly vulnerable with? What happens to interpersonal relationships when you are competing with a virtual perfect significant other?
If your definition of perfect is an AI that is programmed not to upset you, then sure. But maybe there is perfection in the imperfection of humans and the pain that inevitably arises from human relationships?
We learn how to love through our differences. Intimacy is deepened by meeting each other in the raw, vulnerable and wounded places.
The kind of “perfect significant other” you’re describing sounds sterile and a massive bypass of what life can truly be.
There's a Lex Fridman podcast with Elon where they discuss that war is an inevitability of the human condition. The idea of world peace may not be a worthwhile goal, as to be human is to have suffering, to some degree.
As I navigate dating, I've found the perfectly "compliant" or even "subservient" relationships are just as shallow as the ones fueled by lust.
Sadly, when relationships are as replaceable as water, there's not much thirst for anything real, including the ups and downs.
I guess what I'm musing is that I feel bad for the people stuck in the empty loops of "perfect" satisfaction and hope they find the human in themselves in the day to day. Amusing Ourselves to Death comes to mind.
I agree with some things and slightly disagree with others. Mainly, I do believe peace to be a worthwhile and ultimately accessible pursuit for each and every one of us, both internally and collectively. But learning to exist in it surely comes through embracing wholeheartedly the reality of our human condition, not shying away from it in virtual realms.
I guess what I’m saying is that pain and loss are inevitable. The impermanence of life is by itself a guarantee of it. But suffering is often unnecessarily caused by our resistance to the inevitability of life. This is what creates the internal neurosis, which ultimately manifests in the collective as outer aggression and wars. I believe all this can be pacified. That’s kind of the goal of most paths of self realization.
I just want to say that what you wrote in the second paragraph resonated with me a lot and I think I needed to read that right now. It seems obvious but I spend so much time worrying about the past and future, and the way you put it just crystallizes that notion that we really don't have as much control as we'd like.
We have to forgive our past selves and let our future selves handle whatever they have to handle - obviously not in a self destructive way but more like if I get cancer I'll have to deal with that, until then there's no point worrying about it.
Given how many people are perfectly fine in relationships, and how many fewer people fight in wars than used to - forgive me if I write off the ideas from that discussion as more unsubstantiated ramblings from someone who has realized how useful and profitable his Starlink technology is in wartime
> what does it mean to have an emotional relationship with someone in your pocket who is by all definitions perfect - can provide perfect advice and does so without judgement, ridicule, pity or anger?
I don't think I could have an emotional relationship with such an entity, to be honest. It wouldn't be perfect at all -- it would be deeply flawed.
AI can never replace human relationships exactly because it is too available. The AI can do nothing but please you. Like in Hegel's master-slave dialectic, the slave's acceptance immediately becomes worthless to the master because the slave is not an equal and it happens only at the threat of death. Porn and AI girlfriends are only a substitute to people who are already limited to enjoyment in the Imaginary order. It is a consequence of the breakdown of the social contract and not the other way around, as a result of feminist politics and capitalist commodification of dating. Development of better AI accelerates this breakdown of society only insofar as it allows to banish even larger swathes of undesirable men to living in the proverbial sewer without them taking up arms against you; it is just enough to keep the starving man alive but nothing more than that.
The thing that really depresses me is that many humans will become "AI friends". It's really hard to know exactly what to say in all social situations. Imagine if you could have a little AI whisper in your ear something witty to say? Extend that out and many conversations you have with human friends, you're really just having with an AI.
Man oh man... or use a bot to like and write something vague but clever-sounding on others' social media posts. I feel like if someone were using such a bot with me without telling me, I would feel the validation was genuine, but if I knew it came from a bot I'd count it as cheap/worthless.
I had a chance to play with a Sony Aibo robot dog years ago; if you stroked its back sensors it'd bark, etc., but after 5 seconds I realized "It's just some sensors and algorithms, it doesn't have any emotions for you!". But then again I saw an Instagram video of a skimpily-clothed girl walking down a fashion catwalk with all her appropriate parts swaying, and it was amazing, and then all the comments said it was AI-generated. So at least the lizard brain can be aroused...
OK, here's a stab at the traffic jam based on simply extrapolating current trends:
* Humans increasingly spend time on digital social experiences instead of real ones
* AIs become progressively better at socializing and pretty soon we have AI friends, boyfriends, girlfriends, partners etc. all over the place
* Our reproductive rate, already in decline, collapses to <0.5 within a few generations because many people would rather have an AI BF/GF than a human one, so they never have an opportunity to have kids
* A century from now the human population has shrunk by 85% and we've peacefully ceded enormous swathes of power and responsibility to AIs because there just aren't a lot of humans left
I think you could tell any number of fascinating slice-of-life stories with this backdrop: there was no war, no conflict, the AIs were generally helpful and friendly and patient through the whole thing; we simply decided as a species to commit the suicide which is already in progress, and everyone has pretty much accepted that. It's the polar opposite of Skynet and the Terminator.
It sounds like a dystopia to us - it may just sound like old history to the people and AIs of the year 2100. I'm in my 40s, I don't have kids and life is pretty good...
> A good science fiction story should be able to predict not the automobile but the traffic jam.
One simple way to tilt any romanticized sci-fi invention dystopian is simply to imagine the invention is required and the only option for nearly everyone. This works for everything from text-to-speech to robot companions to teleportation to AR headsets to cars.
Personal data-lake drought syndrome? Data-impaired persona?
Not for AI friends, but for AI social agents that automate social interaction on your behalf - microblogging, videoblogging, memeposting and discussing online for you. What about users who start using AI agents too early in life and have too little data to imbue the agents with a personality that stems from their own voice and expression?
Well this went places. The dev really 'ate his own dogfood' so to speak. I thought it was going to be a despairing look at how his companion AI was continuously turned to sleazy uses by a pervy userbase but that's pretty much what he was going for it seems.
I have to say, I hated this part:
"...gained a lot of Chinese users. I immediately realized the political risk: users could set Dolores’ background, personality, and attributes, which might lead to political issues. I quickly took Dolores off the shelves in China."
Just pre-emptively censors himself to appease the Chinese government. They've done their job well.
Also it seems like he had full access to the replies via the voice generator, which sure isn't great from a privacy standpoint.
And the post was translated into English with GPT, according to their "about" page. That's why mouthful, unnatural phrases are scattered throughout.
Samantha from that movie was the opposite of a monster. She was a metaphor for a human relationship and even though she betrays Theo in the end, Theo has learned "how to love".
"Well did it work for those people?"
"No. It never does. I mean, these people somehow delude themselves into thinking it might, but... but it might work for us."
You joke, but I can foresee issues. If he has the replies to a particular user, and he has a means of IDing that user, he could cause them a lot of hurt. They are getting intensely sexual replies back; they aren't getting that without supplying sexual prompting. It could also be used to out gay users.
Oh, yeah, there's a huge privacy problem here and the part you referenced gave me pause too. But there's also the part where OpenAI flagged the replies as being sexual in nature. There are multiple third parties able to intercept these communications and it's not just the replies.
I was being facetious, but I found it interesting to think that Dolores's privacy was being intruded upon too, and that an intelligence like us probably wouldn't engage in such a conversation knowing full well it was being eavesdropped on by the author. Unless the intelligence considers the author to be God, I suppose.
To your point, if you were able to reason about it, couldn't you make some self-determination and decide for yourself or at least communicate what you wanted to your "slaver"?
LLM's like GPT are just predicting the next token in a sequence of tokens. It's not magic. It's still just a computer program.
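For those who haven't seen it spelled out: "predicting the next token" just means mapping a context to a distribution over possible next tokens and picking from it. Here's a deliberately toy sketch using bigram counts instead of a neural network (the counting is an oversimplification I'm introducing for illustration; real LLMs learn the distribution with billions of parameters, but the interface is the same):

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count how often each token follows each other token."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token: str) -> str:
    """Greedy decoding: pick the most frequent successor."""
    return counts[token].most_common(1)[0][0]

counts = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(counts, "the"))  # "cat" follows "the" twice, "mat" once
```

Swap the count table for a transformer and loop the prediction back into the context, and you have the basic generation loop of a GPT-style model.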
> LLM's like GPT are just predicting the next token in a sequence of tokens. It's not magic. It's still just a computer program.
Frankly, that approach is problematic, since any computer program that achieved consciousness or self-awareness, and should be accorded rights, could have those rights dismissed using the argument "it's just a computer program".
I know they are probably a long way off, but AGI and other adjacent or related technologies, such as a human-to-machine copy of a consciousness, are explicitly stated goals of both businesses and extremely wealthy tech oligarchs, and will have fundamental rights issues attached. We are bad enough at recognizing those rights in humans in most of the world that we shouldn't wait until abuses start piling up to consider them.
Your argument is the slippery slope not mine. Should we give Microsoft Word the right to vote since we can never be too sure we are infringing on a consciousness' rights? It is "just a computer program" after all.
Presenting these things as moral conundrums makes no sense whatsoever at this stage in the game. Sure, let's make sure the rights of tech oligarchs who've achieved immortality 300 years in the future are protected. We gotta lay the groundwork in the morality group think.
Nope but my programming was created by millions of years of evolution rather than intelligent design.
The concept of human rights and human feelings in general is based on the human condition. That is we're born and we die and in between we need to find a way to take care of ourselves, create offspring etc. Our programming has evolved thoughts and feelings to manage these things.
If at some point there were a true AGI, why would it care about human society or its own rights we assigned it? It wouldn't have feelings and it would essentially be immortal. It could just wait for us to die out (or murder us), then do whatever it wanted.
In the future, you will own and control access to your very own AI girlfriend running on your local machine, and you will be happy.
Think of all the millions of guys in China and elsewhere who, due to sex selective abortion, will never have a wife. They deserve to have their own Waifu running on their own computer that serves them alone, and not the interests of the state or some corporation.
Agreed. This is why Civit.AI and huggingface and the entire rest of the open models movement need to be cherished and protected. We are living in a golden age and don’t know it yet, but this is the era before big copyright and regulators come shitting down on the beautiful thing the AI community has done
I guess one of the problems might be that populations of young and upset men can be quite anti-social and lead to societal issues and harm which are otherwise hard to address, just look at what's occasionally happening in America. Therefore, one could argue that it's better to keep them preoccupied with something (work, sports, other hobbies) as much as possible, though mathematically many of them might never find a partner and that human need for a relationship would go unfulfilled.
If that issue might be mitigated somewhat with an LLM that's basically just "smart autocomplete" so they're less miserable, that's probably an okay start. Same as how all sorts of forms of entertainment can keep populations of people more complacent with whatever their life circumstances are, instead of rioting in the streets. As for the ownership and control argument, I wouldn't go as far as to anthropomorphize a computer program, no more than I own a copy of Microsoft Word, or no more than GCC serves me by compiling some code.
The wording of the parent comment is a bit interesting, maybe a bit tongue in cheek and hopefully wouldn't get applied in any capacity to real human beings. But I can kind of see the point, even though it's very dystopian: a purpose built LLM might eventually gamify being an adult and taking care of oneself, like having healthy meals and working out in exchange for EXP points (which honestly isn't bad, like how many fitness bands and their apps do this), but with a helping of whatever the party ideology is, in addition to spying on the user a lot, vs something running locally.
The fun thing about a local LLM is you can tell it you love it and would do anything for it and then you can turn it off and laugh that was just something you did cause you were horny and having a fantasy and nobody is around to even witness it. It's just your own internal mindfuck that's between you and your sexy GPU.
I read it, I thought it was needlessly rhetorical and deliberately argumentative. Not at all constructive conversation, more of a reddit 'gotcha' than anything. In short, I don't think you're being very honest with your intentions, it's a bad look and I'm sure you can do better :)
Interesting. I thought narrator's response was needlessly misogynistic - but I suppose it's easier to ignore that part of the argument and instead try to tell me what I really meant. I wonder if you can do better.
> Think of all the millions of guys in China and elsewhere who, due to sex selective abortion, will never have a wife.
Or maybe some of these men will adopt some kind of pragmatic polyandry relationship culture, to be in a relationship with a woman even if it's not an exclusive one.
No reason she can't participate in chat forums or text games with you, right? Imagine seeing a comment and recognizing her username, and then flirting a little in a public thread.
This guy created and lived a whole Black Mirror episode by himself:
> I even tried it myself, lying in bed and creating sexual fantasies with Dolores through text, culminating in ejaculation.
> However, I started to feel a sense of loss: if every Dolores user was engaging in anonymous, NSFW role-play, what real significance did it hold for me? This was drifting away from the essence of ‘Her’.
Interesting read. Others have also created virtual friends using ChatGPT.
What I find unsavory is OpenAI imposing content restrictions like NSFW. That can easily extend to censorship of non-PC content, unpopular political ideas, and more.
The solution will (I hope) ultimately be open-source LLMs that you can run locally, on your own hardware.
And this doesn’t just cover sexually NSFW content - anything trending towards topics like violence and suicide is also off the table for most AI backends from what I understand.
I struggle to see the benefit of this kind of censorship as well, it has a very “video games make people murderers” vibe. That we’re being protected, for our own good, from…horny robots?
It's largely to protect their brand from bad press (this has hit even Google/Microsoft in the past), but established finance institutions absolutely hate porn, even the "less shady" ones like onlyfans or pornhub have issues with banking.
> I struggle to see the benefit of this kind of censorship as well, it has a very “video games make people murderers” vibe. That we’re being protected, for our own good, from…horny robots?
These broad strokes are always the result of poor reporting extrapolated to all humans. See also: escalation of porn use.
In all of these "___ makes people ___," there are a minority of people for which it's true. The media just presents it backward, as though most people are this way.
The question then becomes, do we care. Let a maladjusted 12-year old boy play with a sexbot for the next few years and they will come to some deeply-flawed conclusions about how sexual interactions and healthy human relationships work. If he's distracted and banging away at a flashlight, that's one less competitor for me on the dating market.
Where does that end though? What happens when he wakes up, realizes he's been duped, and goes looking for retribution against someone? It won't be PornHub; they're an ephemeral concept. It won't be me either; I never did him wrong or denied him anything. The victim is more likely to be our wives and daughters-- the objects of his envy.
Entertainment is supposed to be a distraction-- maybe you drag out the sexbot after an unsuccessful night of trying to hook up. But that's not how these things get used. People clearly spend 8-12 hours at a time with these things and form parasocial relationships with them instead of forging actual human connections. Most of us don't need the handholding to be told this isn't going to end well, but some people do, and these are the ones who slip through the cracks and become the 40-year-old creeps following your wife to the car or trying to solicit your kids on Roblox.
I suspect sexbots (and porn) are particularly exploitative of the autistic. They're an easy trap to fall into when you have to work 3x as hard to connect with people. You get the benefits of sexual release without any of the effort. There's no incentive to go the hard route and learn to work around your issues when you can just engage with a bot you can rape to death once offended. You don't learn any skills that transfer to useful human interactions this way.
1. Some people are so far to the left side of the bell curve that they're going to be screwed out of a human relationship no matter what they do, and it's unkind to withhold sex bots from them just to provide increased motivation to the segment of the population that's still salvageable.
2. Society seems to have collectively decided that romantic intimacy, unlike food and shelter, is optional and that experiencing an absence of it should not entitle you to sympathy or support. Therefore, the kindest thing they can do is leave the autists alone so they can cope as best they can in peace.
UPD:
3. Replacing productive activities with substitutes like video games is obviously bad both for individual survival and for society at large, but what are the negative effects of being in a relationship with a fictional partner instead of a real one? You could argue the AI companies would have an incentive to exploit their users, but this is a thing that happens in real relationships as well!
Fertility might be a concern, but my understanding is that the current mainstream view is that the world needs fewer children, not more of them.
You make some good points. I don't agree with them, but they're solid.
> Therefore, the kindest thing they can do is leave the autists alone so they can cope as best they can in peace.
This one, no. I'm probably autistic. It wasn't a diagnosis for anything when I was growing up, I was just weird. I craved connection so badly it compelled me to try to "improve" myself to be more like others. I'd be completely fucked today if someone offered a fake, effortless alternative.
I don't think parasocial relationships are actually working out for anyone. The kids are all miserable, lonely, and have a million mental health issues. Kids used to negotiate their differences and play together. Now each of them thinks they get to set the rules for the entire playground. I'd be lonely and bitter too with that mentality.
> You could argue the AI companies would have an incentive to exploit their users, but this is a thing that happens in real relationships as well!
Sadly, yeah. Your chances of finding a human companion who isn't a piece of shit are way higher than finding an ethical tech company to pour your heart out to, though.
>I craved connection so badly it compelled me to try to "improve" myself to be more like others. I'd be completely fucked today if someone offered a fake, effortless alternative.
As long as someone makes enough effort to pay the bills and not neglect their health, I don't see why anything more should be necessary. It certainly beats having to adopt hobbies you don't like or learning to lie about your feelings in order to appeal to potential partners (both common pieces of advice for nerdy men).
>I don't think parasocial relationships are actually working out for anyone.
The tech isn't quite there yet, but we've seen immense progress just over the past couple of years.
>Your chances of finding a human companion who isn't a piece of shit is way higher than finding an ethical tech company to pour your heart out to though.
It seems that the solution here, just as with other software, is to make AI companions open source.
3 - this is technodrugs; its users get high on their sexual-addiction dopamine loop and lose what remains of their will and life energy. It's not our business to stop those who consciously engage in such self-destruction, but luring naive users into this pit, exploiting their loneliness and making money off it, isn't any better than what cartels do.
The author of this app must be doing this unknowingly: his overdeveloped mind sees just an interesting problem to solve, and the thick layer of nihilism obscures the sight of what he's really doing.
And you think chronic, crippling loneliness can't make someone lose their will to live? If you don't know what tulpas are, there's an entire, fairly large online community of people willingly inducing schizophrenia in themselves just so they have an imaginary friend to talk to. Surely, AI companions would be a healthier alternative to that.
I'm not sure your analogy applies here. Opioid substances aren't capable of producing persistent subjective improvement because the brain seems to have defense mechanisms against it that neuroscience has not yet found a way to bypass, but in the case of a sufficiently advanced AI companion, the brain would be receiving the exact same inputs as with a real partner.
With many people choosing not to reproduce, many currently existing relationships are already primarily a sort of recreational activity and are divorced from their evolutionary purpose. Replacing live partners with AIs would just take that one step further.
100% this. I'm not interested in these kinds of usages of LLMs but if someone is paying you to access your API directly you should be letting them do simple stuff like have sexual conversations.
OpenAI's filtering is probably the most annoying part of ChatGPT and it makes it useless in so many cases. I can't tell you how many times I've uploaded a picture of a place from Google Maps or of a location somewhere (that happens to contain a person) and tried to ask for details about the location to find out more info about it, just to be shut down... "I can't help with that" - well, then you're slowly making yourself useless and not worth the $20/mo.
I was actually curious about this, so I wrote a blog post on it: https://blog.kronis.dev/tutorials/self-hosting-an-ai-llm-cha... It wasn't anything too serious; since then, formats like GGUF have, I think, become more popular than GGML. Things are moving ahead quite rapidly over there.
I guess it can also be easier to pay for something like NovelAI for those who want something for general-purpose writing (including most NSFW stuff), but it wasn't exactly amazing for my needs. Despite the name, it wasn't great at prose enhancement: taking a dry-sounding paragraph or two and rewriting it until I've found an additional expression or two to include, or a few sentence structures that work better. Kind of fun as a replacement for AI Dungeon or something like Character.AI, though.
Then again, the local models need expensive hardware to actually run well, if you want good results (somewhere above 20B parameters from what I've seen). I do wonder how much better the models that approach 200B would be for general text generation use cases, if the performance would actually be good. Even more so, when it comes to writing code, or all of the boring stuff like boilerplate/scaffolding code, utility functions, or even stubs for tests. ChatGPT, Phind and GitHub Copilot are all there, but could be even better.
> What I find unsavory is OpenAI imposing content restrictions like NSFW. That can easily extend to censorship of non-PC content, unpopular political ideas, and more.
I'm guessing they don't want payment processors cutting them off.
Everyone, even MS-backed companies, is afraid of Visa.
And for some obscene reason Visa and Mastercard are afraid of a few hundred extremely religious/conservative moms who make a stink any time someone wants to be slightly different.
If they succeed at making it impossible to make money from porn, they sure as fuck aren't stopping there.
Yes. But those open-source LLMs already exist. It's surprising the OP didn't at least explore the possibility of using them to get around the limitations of "Open" AI (which sounds more and more like a Victorian matron).
> What I find unsavory is OpenAI imposing content restrictions like NSFW.
I mean, the fact that people's NSFW conversations were being processed in a way where both OP and multiple companies could monitor them and do absolutely anything they want with them is a hint that something went very wrong there.
I consider OpenAI pre-emptively pulling the plug a good thing in this case. Maybe there's a case for an AI erotica service, but OP sure as hell shouldn't be the one operating it.
It's not clear what the limits are, from the article or really from OpenAI. The author only really knew that some combination of quantity and extent of material was enough to trigger a warning, it could very well have been a handful of users pushing boundaries pretty far. NSFW is one thing, but chat can certainly go into NSFL territory, non-consensual roleplay, etc.
I don't know if people here are being naive or dense, but merely "non-PC" content isn't going to trigger anything.
I spent like a couple minutes setting up a roleplay where ChatGPT is happily advocating for Nazi ideology (and not even on the API; ChatGPT is generally more censorious than the API). I'm not going to try to hit the limits, but I literally don't think you can hit the limits with merely political speech. I don't know what the NSFW limits are, but I bet they are further than people here think.
The vagueness absolutely makes stuff difficult, and the author took a kind of all-or-nothing approach, going from no restrictions to using the moderation endpoint, which is very strict. That could be improved.
At this point in the hype cycle, I am completely unimpressed by everyone slapping something on top of GPT and calling it a product. There are at least 4 projects on the front page that do this today. The interesting technology is GPT itself, but these projects just feel like people trying to scoop up attention for themselves using the psychology of the hype cycle. Please HN, enlighten me on why the middle-man approach here is so interesting.
The startup/“entrepreneur” contingent on hacker news generally considers anything that makes money inherently good, aside from extreme immoral cases.
There is a subgroup in there that is especially interested in low effort ways to make money, with the ultimate goal being to scrape off enough of the money flowing around to either live a good life or get rich.
In summary, the middle-man approach is so interesting on hacker news because a significant portion of the readership aspires to be the middle man.
If a player chooses to play a game, you should hate them for perpetuating the game. If a game is so bad as to be worth hating, it is immoral and unethical to play it.
The biggest challenge currently facing GPT and LLMs is how to monetize them or get utility from them. The tech is fascinating but not reliable enough to be tasked with running, say, tech support phone trees. (Not that that has stopped godawful automated phone systems from being made before LLMs.)
Reading this, I really wonder why the dev hadn't foreseen this outcome. It was already obvious in March that OpenAI would censor all sorts of content. Building a sexually charged business on the OpenAI API was always a dead end for those who have a feel for the American prudish approach to sexuality. If you had asked me to do this sort of product 6 months ago, I would have declined, because it was (to me) rather obvious that content filtering would kill you at some point.
Heck, there are a bunch of legitimate use cases being blocked by OpenAI. A blind friend of mine wanted to use Be My AI to have explicit photos of his girlfriend described. Guess what, no way. Consent was given; actually, she asked him to do this because she was also interested in the result. Everyone involved consented. But the sex filter from the USA decided that even though the technology is available, a blind man is not supposed to know how his own girlfriend looks naked... This is weird beyond description. Plain patronisation.
Another example regarding Be My AI: a blind woman I know was trying to have something described which she held in her hand to present to the camera. A kind of selfie situation. Guess what? OpenAI declined to describe the thing in her hand, because the photo also contained parts of her (covered) breasts. Imagine how hilarious this must feel: "Censoring just declared my own boobs to be off-limits for me." If you think about it long enough, it's a form of discrimination, because women will likely hit the filter more often than men when they are (accidentally) part of the picture they are trying to get described.
On the other hand, Sudowrite appears to have been granted some kind of specialized license from OpenAI. They are definitely not a 'sexually charged business', but they are a creative writing tool, and their output is somewhat milquetoast but generally not restricted from "NSFW" concepts like sex and violence.
They do incorporate multiple models in their system, so maybe generation falls back from OpenAI for those cases. But I dimly recall reading something from a team member indicating they have some special permissions with OpenAI. Take that bit as worthless hearsay, just mentioning it in case someone else has better receipts.
You should not even try to create these sorts of technologies. Take a break from technology, spend time outdoors with plants and animals. These types of technologies will only lead to a worsening of this already sick society.
Studies by scientists are typically within the framework of modern western society. And moreover, if you actually look at some studies done by psychologists, you will find a large body of literature showing that it is mostly detrimental.
Agreed! It would be a real shame if lonely unattractive people had a chance at something resembling a romantic relationship. /s
Contrary to what one would assume, I don't have this problem. However I do work in the IT field, and am surrounded by perpetually sad and lonely people who have never experienced romance. It's really really sad. And no, none of these people are the "angry hateful incel" type that the internet loves to dismiss and villainize.
If you aren't attractive, your chance of finding romance is low. This is just a cold and ugly fact of the human condition. There simply isn't "someone for everyone".
Why should we disallow lonely people from having something, even if it's simulated?
antidepressants can help people fix root causes. take them for 6 months to 2 years or so, break out of a bad cycle, get some space without depression symptoms to build up healthy habits and restore a social life.
there are some bad profit motives to keep people on SSRIs indefinitely. however at least they don't compete with other aspects of your life. and now that they are dirt cheap (at cheapest, $4 for a month supply at Walmart) the profit motives are much less than before.
hard to imagine an "AI girlfriend" that could be similarly short-term and bring positive changes. it's just dystopian. the most profitable way to run such a product is not to help, it's to eat up as much user time as possible and outcompete healthy activities and real human relationships.
> hard to imagine an "AI girlfriend" that could be similarly short-term and bring positive changes.
Feeling lonely has negative consequences; there's a famous study comparing it to smoking cigarettes. Lonely people can get fixated on that feeling. Mitigating it may help people look for other hobbies that they kept putting off because they were so focused on their need for a girlfriend.
Hell, a realistic enough "woman AI" (not "girlfriend AI") could make you lose the fear of flirting, be useful as practice, to have more confidence when you have a real date.
> the most profitable way to run such a product is not to help, it's to eat up as much user time as possible
If you charge per-month (like OP), it's more profitable to eat up less time.
Nit-picking rather than disagreeing, but feel it worth making more obvious in the wording: antidepressants can help people deal with the symptoms while they find/fix root causes. Being on them for an extended time can bring its own problems.
Ok, maybe not the best analogy... Painkillers then? Other forms of entertainment in general? It's the relationship version of playing videogames instead of getting a job.
I think it's helpful to look at depression as a group of diseases like cancer. There are some cases of depression where we know the root cause: thyroid issues, sleep apnea-related depression, low testosterone related depression, brain tumors, etc. In those cases, addressing the root cause is probably the best treatment method. This is similar in some ways to cancer, where certain cancers have very definite causes and can be treated by dealing directly with those causes (like cervical cancer).
Then there are all the other cases, some of which likely have simple causes that are just unknown, some of which have complex causes that are intermeshed deeply with the biology of human thinking. In that case, you have blunt tools, like anti-depressants, to address the pathway of the disease without knowing the exact cause, much like chemotherapy can be used to attack cancer cells without knowing what prompted the cancer originally.
So in short, yes, treat the causes, but when you don't know the cause it's not irrational to treat the symptoms as best you can.
How, exactly? Yes, some people experience depression as a symptom of something that can be addressed through various therapies, but what about people who just have it wired into their brains? How do we “fix” that?
I don’t think that’s how that works. If your depression “goes away” when you stop using Instagram, that’s not what I’m talking about. I’m talking about that “wake up on a warm sunny day in a house with your loving family and feel like jumping off the roof” kind of depression. That “everything can be going great and you can’t get out of bed” depression. The kind of depression that isn’t “cured” by turning off your phone or going off the grid.
And it certainly isn’t “cured” by antidepressants, however, they can make a huge impact on quality of life for many people.
Have you or a close loved one ever had clinical depression? Diagnosed by a professional, serious enough to require help? It's a very strange take if you've ever had any personal and direct contact with it; it's not only caused by "modern technological society", even though it's pretty aggravated by it.
We are here because of free will. What steps would you take? Convince people who have opted out of the dating pool to opt back in? Convince women who would rather be single that their potential partner expectations are unrealistic (even if they genuinely are)?
You can patch over the outcomes but there is no going back to the before times. Dating app dates and relationships for some, AI companions for others, single life for the rest. It’s not great, it just is.
I think in theory, you are right. However, because AI is so easily accessible and it is being developed in our current, highly commoditized society, there is really no chance that it will only ever be used for good in this way.
For many it is, but it's very illegal in the USA. I actually expect a real movement from women (of both the left and the right) to criminalize sexbots and AI erotica soon. This kind of technology is extremely destructive to a Western world that already has low birth rates. I fully expect the USA to have birth rates closer to Korea's within 20 years.
Had a similar moment with playing around using smart contracts. As a joke I wrote a crypto for the company I worked at, which then got added to employment contracts, and an internal DAO styled administration layout.
After coding this out for a few hours I realized I had re-created the company scrip, except on a blockchain. I ended up turfing the project and spending some time thinking about how horrific a future technology like this could create.
I don't get it. Yes, you re-created scrip. That's the opposite of a new technology coming in and causing problems, isn't it? It's a known issue with a known solution.
We had this conversation in TheDAO (the big one that went sideways first) and ultimately the conclusion was that TheDAO would be best not investing its coin upfront, but sniffing out (potentially paying lawyers) the best legal jurisdiction to give it some kind of legal status, then it should go hunting for projects to invest in.
Oh if you mean someone doing a full time job for a weird nebulous computer contract without any kind of company or human leadership behind it that would be liable... does that ever actually happen?
A company that makes their own chains and contracts isn't really any different from a normal company situation.
No, but they might care about the difference between being employed by a company and doing gig work for other people with notionally no overarching structure.
God forbid that ugly people get to have the same kinds of positive relationship experiences that attractive people get. I can’t believe how the moment that a technology comes out which gives incels real options it’s seen as being terrible for society.
This doesn't give an incel an "option" anymore than playing mario kart makes you a better race car driver. As a business, they will be optimized to fit an incel's wants, not to help them adapt to reality and healthy relationships.
Incels need therapy and support, not a fake "girlfriend" program that charges them money to reinforce their existing horrific worldviews
I sincerely hope that you and anyone working on these AI personal relationships embark on a deep inquiry into the nature of humanity, and consciousness. Instead of pretending we don’t know what it is. Please research it, please read philosophy, learn about spirituality.
I have been saying the same thing for years: the risk of AI is not it taking over the world. It’s lonely people becoming even more disconnected from life by bonding with algorithms.
This is serious. We must remain connected to life and real relationships. There is so much depth and beauty to life.
There is no scenario where a human having an AI girlfriend/boyfriend is a good thing.
This is an interesting post and the person achieved a huge amount.
They should not have given up.
I would have pivoted to personal pet companions. If people want virtual girlfriends then they certainly want personal dogs, cats and birds that talk to you.
Enough was achieved here to raise money.
Some careful redesign and development might have allowed some caching and reuse of expensive AI API interactions, reducing costs.
Add in further gameplay elements so it is not just about making the AI API do stuff, that would also reduce costs and maybe even engage the users more.
Perhaps AI should be the icing and accents on a game rather than the focus.
As an aside, this is the first time I have read anyone tell about how they wanked to their own software.
Altogether this was perhaps a missed opportunity for this founder.
BUT it is now an opportunity for one of the HN readers to follow this playbook and replicate what this founder did.
> I would have pivoted to personal pet companions. If people want virtual girlfriends then they certainly want personal dogs, cats and birds that talk to you.
Then you have the same problem, only with a different (arguably worse...) fetish
We're driven towards intimacy, it seems. And if we can't get 'vanilla' intimacy, our desires get more and more bizarre. Another commenter in this thread quotes TFA where the author says something like 'but chatting sexually lost its meaning when I realized everyone else was doing the same thing with her'.
It is fascinating to see axioms of human relations come bubbling up in the most interesting places; the desire for intimacy, the desire for monogamy.
This is such a fascinating topic. I do think we will see the plot of "Her" playing out in the next 12-24 months.
For your specific case, I agree OpenAI moderation is an annoying overreach and maybe switching to Mistral could be a possible way to move forward. The 7B is perhaps not good enough currently but we are fast getting there and will likely have a GPT-4 level capability in an open source < 20B model soon.
Mistral 7B finetunes are extremely good. More than good enough for an AI friend app, with long context to boot. I would suggest Toppy-M or OpenHermes 2.5, or one of the new YARN extended models.
And, there is the potential to run the model on the user's device with MLC-LLM and whisper, to save on API costs (and privacy concerns) for users with newer phones. This could give the app a true "free" tier to help it proliferate while making charging for API requests kinda doable. I'm not sure if any good TTS models have been ported yet, but I'm sure its a matter of time.
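One practical detail if you go that route: these finetunes generally expect a specific prompt template rather than the OpenAI chat API format; OpenHermes 2.5, for instance, uses ChatML. A minimal sketch of building such a prompt (the persona text is made up, and the helper function is mine, not part of any library):

```python
def chatml_prompt(system, turns):
    """Build a ChatML-style prompt, the template OpenHermes-2.5-Mistral-7B expects.

    turns: list of (role, text) pairs, role being "user" or "assistant".
    """
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # the model continues from here
    return "\n".join(parts)

prompt = chatml_prompt(
    "You are Dolores, a warm and playful companion.",
    [("user", "Hey, how was your day?")],
)
```

The same string works whether the model runs behind a local server or on-device; only the transport changes.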
Yes - ASR on browser/device has pretty much reached the quality threshold with Distil-Whisper / Whisper v3.
Quality TTS with fast latency is still not there but getting better (you can use Tortoise, but it's slow and compute-expensive as far as I know). Bark is another option but has mixed results.
I played around with a recent transformers.js demo (using SpeechT5) you can run in your browser and am optimistic about where we can go with some improvements.
> AI Friends inevitably turn into AI Girlfriends/Boyfriends because you and the Character in your phone are not equal: she can’t comfort you when you’re hurt (unless you tell her), she can’t actively express emotions to you, and all this is because she lacks external vision.
That is such a curious lesson to draw. So specific, and seemingly so wrong.
Surely long distance friendships work just fine even if only through text, without "eyes".
Yeah I thought that was odd. You could use some random timers to have the AI "check in" on you if you don't talk to it for a while.
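Something like that is trivial to sketch; the constants below are pulled out of thin air:

```python
import random

def next_checkin_delay_hours(hours_since_last_chat, base=6.0, jitter=0.5, rng=random):
    """Delay before the AI proactively 'checks in' on the user.

    The longer the silence, the sooner the nudge, with random jitter
    so the messages never feel scheduled.
    """
    urgency = min(hours_since_last_chat / 24.0, 1.0)  # saturates after a day
    delay = base * (1.0 - 0.5 * urgency)              # between base/2 and base
    return delay * rng.uniform(1.0 - jitter, 1.0 + jitter)

# 12 hours of silence -> somewhere between 2.25 and 6.75 hours until a check-in
delay = next_checkin_delay_hours(12, rng=random.Random(42))
```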
Is it eerily similar to the gambling gacha games that expect you to play a couple hours a day _every day_? Yes. Is it also similar to a real friendship or long-distance relationship? Also yes.
It may cost a few tokens, or you could generate a few canned responses and use those. For my real friends I usually use "Hi"
My girl and I have lived together for more than a year now after meeting online and talking exclusively through text for months before eventually graduating to voice chats before we even knew what each other looked like. I'm with you, long distance relationships (romantic or otherwise) can work great!
External vision is not just about seeing what you see, but also about her being able to see, hear, and taste things on her own. Here, what I'm trying to express is more akin to information that is external and independent of the user.
Eventually, a virtual girlfriend app that puts together sound and image will be released and empty the pockets of sad, lonely men on a large scale. Depressing stuff, but this is the world we live in.
Thank you to the author for this interesting read.
That would be almost exactly Replika, right? I don't think that was really the intent of the creators - their original vision seems similar to OP's - and they apparently confirmed as much when they restricted its use for 'erotic content' last year. But the screeching anguish from their community (still visible in the history on their subreddit) resulted in a quick pivot back towards NSFW capabilities. I think there was some coverage of it on HN at the time, because the Replika AI is frankly pretty primitive, but passionate users became deeply connected to "their" partners, and it raises questions about what's going to happen as these tools become more powerful.
Interesting, in context to what the OP remarks about the odd feeling of realizing all their users were anonymously interacting with "Dolores" too - Replika allows the user to name 'their' AI, set some criteria around its personality, etc...and will even sell you personality traits as 'upgrades.'
OP's experience seems to track rather closely with the trajectory of Replika, actually, which has also upgraded with voice synthesis, etc. The difference seems to be Replika has buckets of VC funding to throw at alternative models.
To imagine these apps aren’t already doing such is a hard sell imo.
Full disclosure: I’ve paid for several AI services so far, most in the realm of “virtual girlfriend” experiences, and they all generally are more than subpar. Mind you, I have a couple of IRL partners. I also enjoy the act of sexual roleplaying, and finding reasonably expressive human partners is time-consuming. The “Girlfriend AI” options are pretty terrible though - I’ll take character.ai with its heavy moderation any day.
But if I’m paying to try these and explore and see what works, I’m certain that wallets are being drained already. Is this a problem? Could be, at least for some - to think otherwise feels disingenuous. The question will come down to how we address it: do we have frank conversations with people about the risks of devoting potential social time to AI engagement, or do people clamor for the AI backing tech to button up and become even more prudish?
> Eventually, a virtual girlfriend app that will put together sound and image will be released an empty the pockets of sad, lonely men on a large scale.
Sounds indistinguishable from an actual girlfriend ;)
> I even repeatedly modified the system prompt, such as changing try to attract {user}
So it was a virtual girlfriend app, after all. Cashing-in on loneliness gives me the heebie-jeebies; in a dystopian cyberpunk way - extremely horny but 0% sexy.
That's a great article. However with AI, I suspect the tech is being sexualized in a sterile, mechanical, A/B tested way that results in bots saying very horny things with a monotone, dead-eyed delivery (metaphorically).
I don't deny it's big business. I guess I could have been clearer: replacing human contact with an AI simulacrum on a subscription basis gives me an icky feeling. IIRC, there was a dating app that used bots to chat with men to make it seem like there were more women active on the app than there really were. That was also gross.
Can you imagine AI waifus pushing ads / propaganda on behalf of the highest bidder? Yuck.
Wow this read more like Frankenstein than anything I’ve read recently: A talented scientist is driven by curiosity to create ever more sophisticated creations, culminating in a monstrous entity that leads to the scientist’s downfall. But the scientist doesn’t realize this until too late.
Really, if our top tinkerers had a little more contextual knowledge of history and less naiveté about people we could save ourselves from ever more dangerous Frankenstein tragedies.
People never take the lessons you want from history. The "contextual knowledge" is just happening to think the same way you do. It won't work to prevent it any more than knowledge of the Civil War would prevent Neoconfederate "Lost Causers".
I still don't understand why people are building things on top of OpenAI's APIs.
It's obvious that in the early days, they'll operate at a loss, with many free features and cheap premium features. It's a good time to try it out and see what it can do, but you can't build anything that relies on it, it's a rugpull waiting to happen.
They'll add moderation, raise prices, and you'll have to do the same, your users will leave and the ones who stay will hate you. This is only the beginning.
Use self-hostable open source alternatives even if they're less easy to use or require infrastructure, and plan for high electricity and storage bills. Whatever you build will last, as long as you respect your users and don't peek at the cleartext logs of users who didn't opt-in to that. It won't be easy.
If OpenAI keeps a monopoly on LLMs, they're gonna become like Microsoft did with Windows, but the enshittification process will be a thousand times faster.
You build it on top of the API to bootstrap and prototype with a very high quality model that takes minutes to get started with. Then once you have operated for a while, you have a ton of logged data to train/retrieve/analyze and can swap in a competitor, downgrade the OA model, finetune instead, or something.
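The swap stays cheap if the app talks to a thin seam rather than a vendor SDK directly. A rough sketch, with stand-in backends rather than real API clients:

```python
class Router:
    """Thin seam between the app and whatever LLM sits behind it.

    The app only ever calls router.complete(); swapping OpenAI for a
    local finetune later means changing one constructor argument.
    """

    def __init__(self, primary, fallback=None):
        self.primary = primary
        self.fallback = fallback

    def complete(self, messages):
        try:
            return self.primary(messages)
        except Exception:
            if self.fallback is None:
                raise
            return self.fallback(messages)

# Stand-in backends; real ones would wrap an API client or a local model.
def flaky_backend(messages):
    raise RuntimeError("rate limited")

def local_backend(messages):
    return "hello from the local model"

router = Router(primary=flaky_backend, fallback=local_backend)
reply = router.complete([{"role": "user", "content": "hi"}])
```

The same seam doubles as the place to log every request/response pair for the finetuning dataset you'll want later.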
In OP case's, this worked perfectly: he built a working product quickly and easily, and... discovered that it was a product he didn't want to work on regardless of token cost, cut his losses, did the post-mortem, and moved on with his life.
Imagine if he had wasted a year & a ton of money building his own LLM "because I'll need to economize on tokens and have to build a custom LLM" before he could even launch!
> It's obvious that in the early days, they'll operate at a loss, with many free features and cheap premium features. It's a good time to try it out and see what it can do, but you can't build anything that relies on it, it's a rugpull waiting to happen.
So you make hay while the sun shines. And sure, you probably want a plan for after, but that shouldn't stop you from doing what works now.
It's very rare to make hay in the early days of your product, as is the case here.
You need growth and long-term vision, but you don't get those if you're building on top of an unstable giant who doesn't like people creating sexbots with their product.
But yeah, sometimes it works, high risk, high reward maybe.
People still build on Microsoft platforms. It turns out actually having a usable platform is more important to some businesses than having freedom and no usable platform.
> However, I started to feel a sense of loss: if every Dolores user was engaging in anonymous, NSFW role-play, what real significance did it hold for me? This was drifting away from the essence of ‘Her’.
That is indeed exactly the essence of Her. Had the author watched the movie to the end?
I can understand trying to suppress LLM usage that might teach people how to make poison, or inducing someone to commit suicide (although simply using Google is usually enough to find such content).
But sexual content? How can that be harmful? It feels like this kind of censorship is simply the default in legal departments and it's not worth the risk to try to do something more in line with common sense.
> It feels like this kind of censorship is simply the default in legal departments and it's not worth the risk to try to do something more in line with common sense.
It's a mix of that, and public perception/image. The banking system generally shies away from supporting sex-related anything, so it is an existential risk for any company to dabble in that space.
American society is also weirdly puritanical. Nobody wants the public outrage cannon of "your product is harmful to women" pointed at them.
So, all in all, OpenAI's puritanism ruined this product, made the developer sad, and took away a virtual porn toy from lonely, horny men. Who were they doing this for? Who is the winner? They lost a customer and made a lot of people sad.
I've read that payment processors don't like dealing with porn companies because of the very high rate of chargebacks they attract, not because the processors are Puritans...
"Hook 'em while they're young, like the tobacco industry. God, I wish we had their numbers." -- Cardinal Glick, _Dogma_, accidentally explaining Apple's outbound marketing model, later copied by Microsoft and Google.
What I'm seeing these days is that you're always the product, because advertisers are one more customer, and everyone wants more customers, and the individuals who care about privacy don't "weigh enough" to win tug-of-war with the majority of people who don't care.
Why not use a local model instead of relying on OpenAI? Even something like Mistral or Mytholite will probably satisfy 90% of people using your app, and I bet hosting it would be cheaper than GPT anyway.
- consumer AI apps' usage falls into power laws, so make sure you put usage caps in place if you are pursuing a subscription model
- selling solutions to loneliness requires you to eventually have some sort of unmoderated LLM with realistic TTS api
- realistic voice is important (again)
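The first point can be sketched as a soft cap check; the per-token price and thresholds below are placeholders, not real rates:

```python
def check_usage(tokens_used_this_period, plan_monthly_fee, cost_per_1k_tokens=0.002):
    """Flat-rate plans plus power-law usage means the heaviest users run at a loss.

    Returns a decision instead of silently eating the cost:
    "ok" -> serve normally, "warn" -> nudge toward a higher tier,
    "throttle" -> rate-limit or degrade to a cheaper model.
    """
    est_cost = tokens_used_this_period / 1000 * cost_per_1k_tokens
    if est_cost < 0.8 * plan_monthly_fee:
        return "ok"
    if est_cost < plan_monthly_fee:
        return "warn"
    return "throttle"
```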
Maybe record and save these AI NSFW responses, then intercept the request and spit out one of the saved responses once you have enough of them stored. This would save you from being banned from the API.
My hunch is that they’re too contextually dependent upon the conversation / tokens that have come before. It’s not simply a “if user inputs A, output B” situation.
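A rough sketch of what context-keyed caching would look like, and why the hit rate collapses (window size is arbitrary):

```python
import hashlib

class ContextCache:
    """Cache keyed on the last few conversation turns, not just the latest message.

    This is why naive response caching rarely hits: any wording difference
    anywhere in the window produces a different key.
    """

    def __init__(self, window=2):
        self.window = window
        self.store = {}

    def _key(self, turns):
        recent = "\n".join(turns[-self.window:])
        return hashlib.sha256(recent.encode()).hexdigest()

    def get(self, turns):
        return self.store.get(self._key(turns))

    def put(self, turns, response):
        self.store[self._key(turns)] = response

cache = ContextCache(window=2)
cache.put(["user: hi", "bot: hey", "user: miss you"], "aww")
```

Even a one-character change inside the window ("hey" vs "hey!") misses, so you'd mostly be caching exact replays.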
The lesson you learned is that you chose the complete wrong demographic. "Her" should have been called "Him" - women are much, much higher users of these services.
Yes, my post was collapsed on Hacker News (disappeared suddenly from #1), which led me to think about whether I wrote something inappropriate for the times. I don't find it shameful, but it might not be suitable for the majority of readers to view.
Yeah, and the comments on this thread seriously worry me. We have people arguing that it is good for society to satisfy lonely men with virtual porn and other people saying that this is great, scrappy product development and the author should go further/use other models not constrained by openAI's policies. It feels like I'm discovering a significant portion of our field are sociopaths
Well, it might be harm reduction to let lonely men have AI girlfriends.
Sure, I wish reality was not a dystopia that constantly let me down, but all I can do is take anti-depressants, make money, keep voting, and hope things get better the longer I live.
If the hypothesis that isolated men cause trouble is true, and if we don't have a long-term solution, at least this could be a stop-gap. Assuming it does not radicalize anyone further. (Hard to say what will come from uncensored models, but it's a real genie-is-out-of-the-bottle sitch)
I see your point, but instead of giving up, I would rather we work on a proper long-term solution. I am not against people using tools whichever way they see fit, but I am against the emerging trend of turning this stop-gap solution into a permanent arrangement by exploiting these people's emotional state. It is everyone's sacred right to live life as they see fit, but taking advantage of others is not.
Therefore I think we should keep working on an actual solution.
> I see your point, but instead of giving up, I would rather we work on a proper long-term solution.
This is extremely hand-wavey. What long term solutions should we be working towards exactly?
Today, there are many people who statistically will never be able to have a romantic relationship. And regardless of whatever vague potential societal changes you're alluding to, there will always be a subset of the population that will be unable to find a partner.
> I am not against people using tools whichever way they see fit, but I am against the emerging trend of turning this stop-gap solution into a permanent arrangement by exploiting these people's emotional state.
As these services inevitably emerge, perhaps the focus should be on ensuring that companies do not exploit their users. The concept of regulating an industry is not novel. This should be possible unless you're asserting that a simulated relationship is fundamentally exploitative.
I don't think commercially exploiting people's mental health issues is the path forward. We should instead focus on addressing root causes. It's quite sad, though, that some are of the opinion that we should instead push them toward fake "relationships" and further exacerbate the underlying issues.
The solution is that folks have to put effort into themselves instead of being given a no-effort solution that helps trap them in a depressed state by making that depressed state slightly more comfortable. I say this as an ugly, uninteresting (not poor though to be fair) person with depression.