LLM addicts don't actually engage in conversation.
They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.
Really, I think there's a kind of lazy or willfully ignorant mode of existence that intense LLM usage allows a person to tap into.
It's dehumanizing to be on the other side of it. When I'm talking to someone, I expect them to conceptualize my perspective and formulate a legitimate response to it.
LLM addicts don't and maybe can't do that.
The problem is that sometimes you can't sniff out an LLM addict before you start engaging with them, and it is very, very frustrating to be on the other side of this sort of LLM-backed non-conversation.
The most accurate comparison I can provide is that it's like talking to an alcoholic.
They will act like they've heard what you're saying, but you know they will never internalize it. They're just trying to get you to leave the conversation so they can go back to drinking (read: vibecoding) in peace.
Unfortunately, I think you're on to something here. I love "vibe coding" in a deliberate, directed, controlled way, but I consult with mostly non-technical clients, and what you describe is becoming more and more commonplace, specifically among non-technical executives toward the actual experts who try to explain the implications, realities, and limitations of AI itself.
I can't speak for, well, anyone but myself, really. Still, I find your framing interesting enough -- even if wrong on its surface.
<< They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.
So... like all humans since the beginning of time?
<< I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.
This one sentence makes me question whether you've ever talked to a human being outside a forum. In other words, unless you hold their attention, you won't even get someone who makes a minimal effort to respond, much less considers your perspective.
It's ironic for you to say this considering that you're not actually engaging in conversation or internalizing any of the points people are trying to relay to you, but instead just spreading anger and resentment around the comment section at a bot-like rate.
In general, I've found that anti-LLM people are far more angry and vitriolic, and far less willing to acknowledge or internalize the points of others. That includes factual points (such as the fact that they misinterpret most of the studies they quote, or that the water and energy issues they are so concerned about are not significant) and alternative moral concerns or beliefs (for instance, around copyright or automation). They spend all of their time repeating the exact same tropes about everyone who disagrees with them being addicted or fooled by persuasion techniques, as a thought-terminating cliché to dismiss the beliefs and experiences of everyone else.
I would like to add that sugar consumption is a risk factor for many dependencies, including, but not limited to, opioids [1]. And LLM addiction can be seen as fallout from sugar overconsumption in general.
I definitely don't deny that LLM addiction exists, but attempting to paint literally everyone who uses LLMs and thinks they are useful, interesting, or effective as addicted or falling for confidence or persuasion tricks is what I take issue with.
Did he do so? I read his comment as a sad take on the moment one realizes that one is talking to a machine instead of (directly) to another person.
In my opinion, participating in a discussion through an LLM is a sign of excessive LLM use, which can in turn be a sign of LLM addiction.
> Users seem to be persistently flagkilling their comments.
If you express an anti-AI opinion (without neutering it by including "but actually it's soooooooo good at writing shitty code though"), they will silence you.
The astroturfing is out of control.
AI firms and their delusional supporters are not at all interested in any sort of discussion.
These people and bot accounts will not take no for an answer.
I agree. I'm also growing to hate these LLM addicts.