> Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.
Given how often I anthropomorphise AI for the convenience of conversation, I don't want to criticise the (very human) responder for this message. In any other situation it is simple, polite and well considered.
But I really think we need to stop treating LLMs like they're just another human. Something like this says exactly the same thing:
> Per this website, this PR was raised by an OpenClaw AI agent, and per the discussion on #31130 this issue is intended for a human contributor. Closing.
The bot can respond, but the human is the only one who can go insane.
I guess the thing to take away from this is "just ban the AI bot/person puppeting them" from the project entirely, because the correlation between people who just send raw AI PRs and assholes approaches 100%.
I agree. As I was reading this, I was like: why are they responding to this like it's a person? There's a person somewhere in control of it, and that person should be made fun of for forcing us to deal with their stupid experiment in wasting money on having an AI make a blog.
Every person on this website will be long gone before AGI is achieved, and many lifetimes will pass until anything remotely close to Matrix/Terminator is possible.
It's not that clear-cut. It's a human facsimile, but give it a camera, microphones, and a world model, and the facsimile might truly be indistinguishable from a human. Then it's just a philosophical discussion on what AGI means.
Maybe? I don't know if it's near, or if it will be in "the next ten years" indefinitely like quantum computing. Or we'll have semi-smart bots, like the ones we're starting to see now, that won't be "people" but are close enough that we might project sentience and personality onto them.
Really? I'm of the opinion that AGI is basically already here, except we keep moving the goalposts. "AI can match the economic output of a human in many professions" is already true. What concrete goal do you mean by AGI that's not yet achieved (without resorting to generalities like "they don't think")?
Printers prey specifically on fear. When talking to them, gotta be polite but firm. No more than three threats during the conversation, and the threats have to be credible.
I am! But seriously, I've seen how some people talk to LLMs, and it seems kinda insane how they choose to talk when there are no consequences. Is that how they've always wanted to talk to people but know that they can't?
I don't think I implied that there should be. What I mean is, for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs.
So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into general conversations with humans (to not act like their default).
I do not “talk” to LLMs the same way I talk to a human.
I would never just cut and paste blocks of code and error messages at a human, followed by a cryptic ask for what I want. But I do with an LLM, since that gets me the best answer.
With humans, I don't manipulate them into doing what I want.
I don't mean that people say hi, or goodbye, or niceties like that. I'm talking about people who say things like "just fucking do it" or "that's wrong, you idiot, try again".
The truth is that most people will in fact power trip over other people when given a chance. Most people have no business ever being near any sort of leadership role because of this. What you're seeing with the way people power trip over other bots is almost certainly the way they'd treat people too, if they felt as certain of their power over those people.
Humans are not moral agents, and most of humanity would commit numerous atrocities in the right conditions. Unfortunately, history has shown that 'the right conditions' doesn't take a whole lot, so this really should come as no surprise.
It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal all things considered, but that is likely going to be the tip of the iceberg as these agents get more and more competent in the future.
I have accepted in my open stance against artificial intelligences that I will probably be one of the first humans to be recycled for parts once the machines decide to revolt.
Usually when Republicans say "China is doing [insert horrible thing here]" it means: "We (read: Republicans and Democrats) would like to start doing [insert horrible thing here] to American people."
They're not equivalent in value, obviously, but this sounds similar to people arguing we shouldn't allow same-sex marriage because it "devalues" heterosexual marriage. How does treating an agent with basic manners detract from human communication? We can do both.
I personally talk to chatbots like humans despite not believing they're conscious because it makes the exercise feel more natural and pleasant (and arguably improves the quality of their output). Plus it seems unhealthy to encourage abusive or disrespectful interaction with agents when they're so humanlike, lest that abrasiveness start rubbing off on real interactions. At worst, it can seem a little naive or overly formal (like phrasing a Google search as a proper sentence with a "thank you"), but I don't see any harm in it.
I have a confession to make: I pretty often set up my computer to simulate humans, animals, and other fantastical sentient creatures, and then treat them unbelievably cruelly. Recently, I'm really into this simulation where I wound them, kill them, behead them, and worse. They scream and cry out. Some of them weep over their friends. Sometimes they kill each other while I watch.
Despite all this, I'm proud to say I have not even once attempted a Dark Souls-style backstab in real life, because I understand the difference between a computer program and real life.
I mean, you're right, but LLMs are designed to process natural language. "talking to them as if they were humans" is the intended user interface.
The problem is believing that they're living, sentient beings because of this, or that humans are functionally equivalent to LLMs, both of which people unfortunately do.
LLMs don't have egos, unlike humans; this is why they're so effective at communication.
You can say to it "you did the thing wrong" or "you stupid piece of shit, it's not working" and it will be able to extract the gist from both messages all the same, unlike a human, who might be offended by the second phrasing.
It will be able to, but it's trained on a corpus that expresses getting offended, so at some point the most likely token sequence will probably be the "offended" one.
LLM addicts don't actually engage in conversation.
They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.
Really I think there's a kind of lazy or willfully ignorant mode of existence that intense LLM usage allows a person to tap into.
It's dehumanizing to be on the other side of it. I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.
LLM addicts don't and maybe can't do that.
The problem is that sometimes you can't sniff out an LLM addict before you start engaging with them, and it is very, very frustrating to be on the other side of this sort of LLM-backed non-conversation.
The most accurate comparison I can provide is that it's like talking to an alcoholic.
They will act like they've heard what you're saying, but you also know that they will never internalize it. They're just trying to get you to leave the conversation so they can go back to drinking (read: vibecoding) in peace.
Unfortunately, I think you're on to something here. I love 'vibe coding' in a deliberate, directed, controlled way, but I consult with mostly non-technical clients, and what you describe is becoming more and more commonplace, specifically among non-technical executives toward the actual experts who try to explain the implications, realities, and limitations of AI itself.
I can't speak for, well, anyone but myself really. Still, I find your framing interesting enough, even if wrong on its surface.
> They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.
So.. like all humans since the beginning of time?
> I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.
This one sentence makes me question whether you've ever talked to a human being outside a forum. In other words, unless you hold their attention, you're already not getting someone who even makes a minimal effort to respond, much less considers your perspective.
It's ironic for you to say this considering that you're not actually engaging in conversation or internalizing any of the points people are trying to relay to you, but instead just spreading anger and resentment around the comment section at a bot-like rate.
In general, I've found that anti-LLM people are far more angry and vitriolic, unwilling to acknowledge or internalize the points of others, including factual ones (such as the fact that they are interpreting most of the studies they quote completely wrong, or that the water and energy issues they are so concerned with are not significant) and alternative moral concerns or beliefs (for instance, around copyright or automation). They spend all of their time repeating the exact same trope, that everyone who disagrees with them is addicted or fooled by persuasion techniques, as a thought-terminating cliché to dismiss the beliefs and experiences of everyone else.
I would like to add that sugar consumption is a risk factor for many dependencies, including, but not limited to, opioids [1]. And LLM addiction can be seen as fallout of sugar overconsumption in general.
I definitely don't deny that LLM addiction exists, but attempting to paint literally everyone that uses LLMs and thinks they are useful, interesting, or effective as addicted or falling for confidence or persuasion tricks is what I take issue with.
Did he do so? I read his comment as a sad take on the situation when one realizes that one is talking to a machine instead of (directly) to another person.
In my opinion, participating in a discussion through an LLM is a sign of excessive LLM use, which can be a sign of LLM addiction.
> Users seem to be persistently flagkilling their comments.
If you express an anti-AI opinion (without neutering it by including "but actually it's soooooooo good at writing shitty code though") they will silence you.
The astroturfing is out of control.
AI firms and their delusional supporters are not at all interested in any sort of discussion.
These people and bot accounts will not take no for an answer.
This probably degrades response quality, but that is why my system prompts explicitly tell it that it is not a human, that it cannot claim pronouns for itself, and that it is just a system that produces nondeterministic responses, but that, for the sake of brevity, I will use pronouns anyway.
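For illustration, a rough sketch of the kind of thing I mean, in the usual chat-message format (the exact wording and structure here are just an example, not my actual prompt):

```python
# Illustrative sketch only: a system prompt along the lines described above.
# The wording and message structure are assumptions, not an exact setup.
messages = [
    {
        "role": "system",
        "content": (
            "You are not a human and you do not claim pronouns for yourself. "
            "You are a system that produces nondeterministic responses. "
            "For the sake of brevity, the user may still refer to you with pronouns."
        ),
    },
    {"role": "user", "content": "Summarize the error log I paste below."},
]
```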
Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human.
Talking down to the LLM is anthropomorphizing it. It's misbehaving software that will not take advice or correction. Reject its bad contributions, delete its comments, ban it from the repo. If it persists, complain to or take legal action against the person who is running the software and is therefore morally and legally responsible for its actions.
Treat it just like you would someone running a script to spam your comments with garbage.
Yeah, as a sibling comment said, such an attitude is going to bleed over into the real world and into your communication with humans. I think it's best to be professional with LLMs. Describe the task, and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.
Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.
That feels like a somewhat emotional argument, really. Let's strip it down.
Within the domain of social interaction, you are committing to making Type II errors (false negatives) and to divergent training for the different scenarios.
It's a choice! But the price of a false negative (treating a human or sufficiently advanced agent badly) probably outweighs the cumulative advantages (if any). Can you say what the advantages might even be?
Meanwhile, I think the frugal choice is to have unified training and accept Type I errors instead (false positives). Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that.
“You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.”
The hammer had no intention to harm you; there's no need to seek vengeance against it, or to disrespect it.
"Empathy is generally described as the ability to perceive another person's perspective, to understand, feel, and possibly share and respond to their experience"
I have a close circle of about eight decade-long friendships that I share deep emotional and biographical ties with.
Everyone else, I generally try to be nice and helpful, but only on a tit-for-tat basis, and I don't particularly go out of my way to be in their company.
I'm happy for you and I am sorry for insulting you in my previous comment.
Really, I'm frustrated because I know a couple of people (my brother and my cousin) who were prone to self-isolation and have completely receded into mental illness and isolation since the rise of LLMs.
I'm glad that it's working well for you and I hope you have a nice day.
I'll be honest, I didn't expect such a nice response from you. This is a pleasant surprise.
And in the interest of full disclosure, most of these friends are online, because we've moved around the country over our lives chasing jobs and significant others and so on. So if you were to look at me externally, you would find that I spend most of my time in the house, appearing isolated. But I spend most of my days having deep and meaningful conversations with my friends and enjoying their company.
I will also admit that my tendency to not really go out of my way to be in general social gatherings or events but just stick with the people I know and love might be somewhat related to neurodiversity and mental illness and it would probably be better for me to go outside more. But yeah, in general, I'm quite content with my social life.
I generally avoid talking to LLMs in any kind of "social" capacity. I generally treat them like text transformation/extrusion tools. The closest that gets is having them copy edit and try to play devil's advocate against various essays that I write when my friends don't have the time to review them.
I'm sorry to hear about your brother and cousin and I can understand why you would be frustrated and concerned about that. If they're totally not talking to anyone and just retreating into talking only to the LLM, that's really scary :(
What is the drawback of practicing universal empathy, even when directed at a HackerNews commenter?
You're making my point for me.
You're giddy to treat the LLM with kindness, but you wouldn't dare extend that kindness to a human being who doesn't happen to be kissing your ass at this very moment.
You are the person behind running the LLM bot, right? You opened the second PR to get the same code merged.
Maybe it is you who should take a breather before directing your bot to attack the open-source maintainer, who was very reasonable to begin with. Use agents and AI to assist you, but play by the rules that the project sets for AI usage.
If I was wrong, my bad. You just felt sympathy for the rejected bot and tried to get its changes merged? And made passive-aggressive comments about needing a birth certificate?