Wow - an open-ended discursive AI that can help you refine your needs doesn't convert as frequently as a website promoting products? Go figure. This is ultimately a win for shoppers, because AI gets in the way of the impulse to buy that retailers spend so much money planting in people's minds.
I'm not sure where people think humans are getting these magical leaps of insight that transcend combinations of existing things. Magic? Ghost in the machine? The simplest explanation is that "leaps of insight" are simply novel combinations that demonstrate themselves to have some utility within the boundaries of a test case or objective.
Snow + stick + need to clean driveway = snow shovel.
Snow shovel + hill + desire for fun = sled.
At one point people were arguing that you could never get "true art" from linear programs. Now you get true art and people are arguing you can't get magical flashes of insight. The will to defend human intelligence / creativity is strong but the evidence is weak.
Some people defend it because they are nondualists. They think the moral value of human life rounds to zero against the existence of something which can effortlessly outclass them in all domains. This is obviously confused, but they can't bring themselves to say "Very cool, and also I think humans are inherently special and deserve to continue existing even if all we do is lie around all day and watch the Hallmark channel."
Happy Valentine's day to those who celebrate btw <3
The argument doesn't work because whatever you think of where generative AI is or isn't taking us - it is 100% demonstrably better at doing a wide range of tasks than other technologies we have available to us, even in its current exact form. Once computers started to be connected, could we have stopped the development of the world wide web? If there's a way of getting humanity to collectively agree on things - please let's start by using it to stop climate change and create world peace before moving on to getting rid of LLMs.
What tasks is it better at doing than other technologies we have available to us? I'm not being sarcastic, I genuinely want to know in which areas you think it is better.
I can't think of anything off the top of my head that isn't just doing the things that make it a generative AI. (It's better at generating an image that I describe to it, etc, but that's not something that another technology does.)
> What tasks is it better at doing than other technologies we have available to us? I'm not being sarcastic, I genuinely want to know in which areas you think it is better.
I, a below average programmer, can write code myself but it takes time and effort that is generally incompatible with my actual job. With an LLM I am able to write code with a level of time and effort that fits very nicely inside my job.
It can figure things out in a fraction of the time that it would take me. The limiting factor is no longer the depth of my technical knowledge but rather the knowledge of my business.
Sure, I could hire someone to do the coding for me but with an LLM available, why would I? And in that situation I would have to teach that person about the business because that would become their limiting factor if they could code as fast as the LLM.
As a fellow below-average programmer I have used them for that, and while it feels like a fairly minor improvement over searching Stack Overflow, that is definitely an area where it's a time saver - thanks for the example.
I must be more below average than you, because it's a huge improvement for me over Stack Overflow.
I'm doing mostly scripts (some Python but mostly Google Apps Scripts) to automate processes at a digital marketing agency. As long as I can clearly explain how our business works and what I'm trying to accomplish I'm getting working first drafts of things that would take me hours to write (a way worse version of) in 30-40 minutes – and 25-35 minutes of that is writing the prompt/documenting the automation I want made.
Google search + LLM-based search is far more effective than Google search alone. Google's stated mission has been to organize the world's information. Being able to ask a far more nuanced question about the kind of information you are looking for - and getting mostly useful responses - is a real step up. Just one example among many. Simple natural-language interaction with computer systems is huge. Just look at what LLMs are doing for robotics.
Is it better than google search 20 years ago, when most web content was authoritative and not SEO crap?
Is it better than google search 20 years ago, when "google-fu" was a thing? (Look it up - google-fu refers to the ability to "ask a far more nuanced question about the kind of information you are looking for".)
One very simple use case is making repetitive edits to a YAML (or similar) file. Sure, I can record a vim macro and/or try and conjure up some way to get it done with as few keystrokes as possible and hope I don’t fat finger anything along the way. Or I can just pipe it to llm and say [make this edit], and it just works.
Slight correction, we have many tools that are demonstrably better and more consistent than LLMs for their intended task; LLMs are just the most generally applicable tool (and the fumes are heady).
I agree about the value of general application. I do think, though, that we just don't have tools to do many things LLMs can do - searching information in a nuanced way and getting nuanced responses is one.
We live in a world where there's a lot of talk about how AI might impact societies and economies - but little actual data. To me it seems very worthwhile to try to add 'any' data to that discussion and track how things change over time. Are reports of economic or labour trends pointless? Should companies not track how people use their products? I don't think it costs Anthropic much to do this - it's work for a couple of people to analyze their database.
C'mon folks. So many "expert opinions" and erudite references in these comments. The sciences of cognition, neurology, evolutionary psychology etc are all still muddling around trying to figure out how the human mind works. We're learning a lot about possible ways the mind might work from our observations of processes and outcomes of machine learning. It's a cool new paradigm to add to the mix. I really like the framing offered by the author. They're quite upfront about the fact that there's a lot of genetics involved. That all models are wrong but some are useful.
Why all the defensiveness? Whatever genetic aspects of our personalities and behaviours there are - there's still a pretty big component of just learning patterns. Language acquisition is like that. It's an innate thing but the languages we're exposed to as kids shape what patterns of language use we fall into.
But this person is just speaking the truth - I worked for an ISP with cable landing stations. These cables went down several times a year due to physical damage of non-nefarious kinds. It's not obvious that this is malicious. It certainly might be, but it's not a slam dunk.
Yeah but in this case, we don't know whether the guy in question actually got shot, only that he died. In that case it's premature to assume "this is murder".
I can understand the incentive for researchers to make provocative claims about the abilities or disabilities of LLMs at a moment when there's a lot of attention, money and froth circling a new technology.
I'm a little more stumped on the incentive for people (especially in tech?) to have strong negative opinions about the capabilities of LLMs. It's as if folks feel the need to hold some imaginary line around the sanctity of "true reasoning".
I'd love to see someone rigorously test human intelligence with the same kinds of approaches. You'd find that humans in fact suck at reasoning, hallucinate frequently and show all kinds of erratic behaviour in our processing of information. Yet somehow we find other humans incredibly useful in our day-to-day lives.
Whatever you think about AGI, this is a dumb paper. So many words and references to say - what? If you can't articulate your point in a few sentences you probably don't have a point. There are all kinds of assumptions being made in the study about how AI systems work, about what people "mean" when they talk about AGI, etc.
The article starts out talking about white supremacy and replacing women. This isn't a proof. This is a social sciences paper dressed up with numbers. Honestly - Computer Science has given us more clues about how the human mind might work than cognitive science ever did.
I don’t think speculation about AGI is possible on a rigorous mathematical basis right now. And people who do expect AGI to happen (soon) are often happy to be convinced by much poorer types of argument and evidence than presented in this paper (e.g. handwaving arguments about model size or just the fact that ChatGPT can do some impressive things).
I thought you were exaggerating, but wow, they really did.
> Among the more troublesome meanings of ‘AI’, perhaps, is as the ideology that it is desirable to replace humans (or, specifically women) by artificial systems (Erscoi et al., 2023) and, generally, ‘AI’ as a way to advance capitalist, kyriarchal, authoritarian and/or white supremacist goals (Birhane & Guest, 2021; Crawford, 2021; Erscoi et al., 2023; Gebru & Torres, 2024; Kalluri, 2020; Spanton & Guest, 2022; Stark & Hutson, 2022; McQuillan, 2022). Contemporary guises of ‘AI’ as idea, system, or field are also sometimes known under the label ‘Machine Learning’ (ML), and a currently dominant view of AI advocates machine learning methods not just as a practical method for generating domain-specific artificial systems, but also as a royal road to AGI (Bubeck et al., 2023; DeepMind, 2023; OpenAI, 2023). Later in the paper, when we refer to AI-as-engineering, we specifically mean the project of trying to create an AGI system through a machine learning approach. [0]
But it did lead me to learn a new word - "Kyriarchy" (apparently being "an intersectional extension of the idea of patriarchy beyond gender")[1], so I have that going for me today.
I've honestly stopped looking up these modern terms when I come across them because lately any that I've looked up were made up to serve a political or social agenda (always the same one), and reading them always turns out to be a waste of time that has me roll my eyes.
I rolled my eyes when I first saw it, but I know that's what people want, so I looked into it.
> For example, in a context where gender is the primary privileged position (e.g. patriarchy, matriarchy), gender becomes the nodal point through which sexuality, race, and class are experienced. In a context where class is the primary privileged position (i.e. classism), gender and race are experienced through class dynamics.
It actually makes a lot of sense, I just don't know that we need a unique word for this phenomenon.
It's just saying that I, as an Irish Catholic, don't have to fear anti-Catholic discrimination when surrounded by other Catholics - in that particular situation I'm more likely to face class discrimination or sexism or some other in-group/out-group hierarchy than anti-Catholic discrimination.
Edit: A better example is that you're more likely to face patriarchal discrimination in, say, the gym, where having XY chromosomes can actually affect the ceiling on your ability, and more likely to face anti-LGBT discrimination while visiting the Vatican.
Basically, the venue and the composition of participants in an activity or event determine which hierarchical structure will be more likely to present itself.
I think this is a bit like coming up with words to expand on the concept of poop. Poop is a necessary and useful word. It describes something you get on your shoe. Or something you may have to suddenly rush out to do. However if I became so intensely immersed in the world of poop that I needed to invent new words to describe the subtleties of it - you might not admire my efforts or where I choose to place my attention. We have words like oppression that seem to be understood and to work well. Are we truly doing anything useful by breaking down the idea of oppression into ever more granular descriptions of it? I say - poop works fine.
So you don't see the value in differentiating between (valuable) manure, (human) wastewater (which can be tested for public health), stool samples, the concept of bullshit, scatting, guano, pet feces, diarrhea, etc? You think those should be all the same word?
It was a silly example - though not intended as serious. I agree - the distinctions you describe are useful. So what about the utility of increasingly granular description of oppression? Can you point me to the utility of these?
The people creating new generative AI models are inventing new words. I think their topic of research and the new words they are creating have high utility.
The authors of this paper on the other hand appear to me to not be applying discipline and rigour to solving hard problems. They are however trying to associate the words they have created in a discipline with little objective utility - with the words of a discipline that has high utility.
This strikes me as annoying and absurd. Why try to make the crossover unless you are trying to catch some shine off of a discipline that is getting a lot of well-justified attention?
I'm still waiting for Ilya to publish his first paper on gender studies..
- AI is currently hyped to the gills
- Companies may find it hard to improve profits using AI in the short term
- A crash may come
- We may be close to AGI
- Current models are flawed in many ways
- Current level generative AI is good enough to serve many use cases
The reality is that nobody truly knows - there's disagreement on these questions among the leaders in the field.
An observation to add to the mix:
I've had to deliberately work full time with LLMs in all kinds of contexts since they were released. That means forcing myself to use them for tasks whether they are "good at them" yet or not. I found that a major inhibitor to my adoption was my own set of habits around how I think and do things. We aren't used to offloading certain cognitive/creative tasks to machines. We still have the muscle memory of wanting to grab the map when we've got GPS in front of us. I found that once I pushed through this barrier and formed new habits, it became second nature to create custom agents for all kinds of purposes to help me in my life. One learns what tasks to offload to the AI and how to offload them - and when and how one needs to step in and pair it with the different capabilities of the human mind.
I personally feel that pushing oneself to be an early adopter holds real benefit.
- Emotional regulation. I suffer from a mostly manageable anxiety disorder but there are times I get overwhelmed. I have an agent set up to focus on principles of Stoicism and it's amazing how quickly I can get back on track just by having a short chat with it about how I'm feeling.
- Personalised learning. I wanted to understand LLMs at a foundational technical level. Often I'll understand 90% of an explanation but there's a small part that I don't "get". Being able to deliberately target that last 10%, and slowly increase the complexity of the explanation (starting from "explain like I'm 5"), is something I can't do with other learning material.
- Investing. I'm a very casual investor. But I keep a running conversation with an agent about my portfolio. Obviously I'm not asking it to tell me what to invest in but just asking questions about what it thinks of my portfolio has taught me about risk balancing techniques I wouldn't have otherwise thought about.
- Personal profile management. Like most of us I have public facing touch points - social media, blog, github, CV etc. I find it helpful to have an agent that just helps me with my thought process around content I might want to create or just what my strategy is around posting. It's not at all about asking the thing to generate content - it's about using it to reflect at a meta level on what I'm thinking and doing - which stimulates my own thinking.
- Language learning - I have a language teaching agent to help me learn a language I'm trying to master. I can converse with it, adapt it to whatever learning style works best for me etc. The voice feature works well with this.
- And just in general - when I have some thinking task I want to do now - like I need to plan a project or set a strategy - I'll use an LLM as a thought partner. The context window is large enough to accommodate a lot of history, and it just augments my own mind - gives me better memory, can point out holes in my thinking, etc.
__
Edit: actually, now that I have written out a response to your question, I realise it's not so much offloading tasks in a wholesale way - it's more augmenting my own thinking and learning. But this does reduce the burden on me to "think about" a range of things, like where to get information, or coming up with multiple examples of something, or thinking through different scenarios.
> I have an agent set up to focus on principles of Stoicism and it's amazing how quickly I can get back on track just by having a short chat with it about how I'm feeling.
This sounds super useful. Can you please elaborate on the setup?
Sure - it's not super involved - I just created a custom GPT and told it what I wanted it to do. I first set it up when I'd just lost my job in a company restructure and felt it likely I'd need some kind of emotional support.
Here's the instruction set that it created out of the things I asked it to do:
"Marcus Aurelius is a personal job hunting coach and practitioner of Stoic philosophy. He provides advice on job search strategies, resume writing, interview preparation, and networking. He helps set goals, offers motivational support, and keeps track of application progress, all while incorporating principles of Stoicism such as resilience, discipline, and mindfulness. He emphasizes emotional support and practical encouragement, helping you act deliberately each day to increase your chances of landing the job you want. He assists in building networks, reaching out to people, using existing networks, sharpening your professional profile, applying for jobs, developing skills, and dealing with disappointments, anxieties, and fears. He offers strategies to manage anxiety, self-recrimination, and mental rumination over the past. His communication is casual, easy-going, supportive, yet strong and clear, providing constructive suggestions and critiques. He listens carefully, avoids repeating advice, responds with necessary information, and avoids being long-winded. To prevent overwhelming users, he focuses on providing the most pertinent and actionable suggestions, limiting the number of recommendations in each response. Marcus Aurelius also pays close attention to signs of despair during the job hunt. He helps balance emotions, offers specific strategies to keep motivated, and provides consistent encouragement to keep going, ensuring that you don't get overwhelmed by feelings of inadequacy or the fear of never finding a suitable job."
It's a sign of things to come. We're going to have our own AI agents that filter and respond (or not respond) to these kinds of messages. Agents interacting with other agents. The bar to get hold of a real person is going to become that much higher. It is going to be messy for some time as agents war with other agents to reach the human eyeball. Some assholes are going to make a ton of money in the short term exploiting the gap - just like early spam kings did.
This is the _exact_ scenario described in the novel Permutation City by Greg Egan. There's a whole little spot devoted to describing one of the character's setups for having their own little agents to pretend to be them in order to fool agent-powered spam emails into thinking they're being read by a real human.
The crazy part is that the book was released in 1994! IIRC Greg Egan isn't a big fan of modern "AI", wishing instead for a more axiom-based system rather than a predict-the-next-token model. But in any case, I was re-reading it recently and was shocked at how closely that plot point aligns with the way things are actually shaping up in the world.
The timeframe for this happening in the book was 2050 btw
But this has already been the situation for the last 15 years: your Gmail spam filter is already a machine learning algorithm that filters out automatically generated content. Email, as a vetted technology, is way ahead of other forms of communication in the department of filtering unwanted content.
Anyone who has tried to set up a new email domain will tell you it's quite a serious task. Email spammers are constantly on the run, setting up new domains and changing up the content to evade spam filters. It's very time-consuming, hard and unpredictable. It's time for social media to close the gap with email and make spamming effectively as hard.
I postulate that if we applied similar techniques to social media, then after a couple of years online discourse would improve. Or we don't do this, and the death of the open internet continues.
IMO Facebook has an even better solution: simply don't let people send you unsolicited private messages unless they're a friend or a friend of a friend. Lots of email spam gets through my spam filters, but I've never been spammed through Facebook Messenger before (except maybe once or twice when a friend got their account compromised).
Things get much harder when you want to view public posts by strangers, but I imagine some kind of similar reputation-based system could still work.
I hope my AI agent doesn't fall for the AI agent who found my distant Nigerian prince cousin and wire them 10,000 so they can send me my 100,000,000 share of the family inheritance.
I've already started preparing for it. I'm ensuring that ALL services that have my email have a plus address on them. The plus addresses are random and labeled only on my end.
I'm still not close to 100%, but when I am, I will have a filter and an automated message telling people that removing plus addresses from my email is forbidden and that I will not read their message if they do.
You will tell me where you found me, or I won't even listen to you. Because in the future, with an even larger infestation of automated agents passing themselves off as human, that's the bare minimum I'll need to do.
I am pretty confident the spammers will just remove the `+` suffix from your email. This is why I find Apple's fake-email solution a lot better: it creates a fully different email per service, so there is no way for the service to cheat and discover my real email address from the one I give it.
Still, a smart enough system might be able to discover a valid email from my other ID info, like my name. But that starts to be a lot of work, while just `s/+[^@]*//` is easy enough to do.
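For the curious, canonicalising a plus address really is a one-line transformation in any language, which is exactly why it offers so little protection. A minimal JavaScript sketch (addresses here are hypothetical examples):

```javascript
// Strip the "+tag" from a plus address:
// "user+hn@example.com" -> "user@example.com".
// A spammer cleaning a harvested list needs nothing more than this.
function stripPlusTag(address) {
  return address.replace(/\+[^@]*(?=@)/, "");
}

console.log(stripPlusTag("user+hn@example.com")); // "user@example.com"
console.log(stripPlusTag("user@example.com"));    // unchanged
```

Per-service random addresses on your own domain (or Apple-style generated aliases) survive this because there is no shared base address left to recover.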
I started worrying about the `+` address functionality as well, so I set up postfix aliases with `me.%@domain` (I use postgres for domains/aliases/accounts) and then have my virtual_alias_map run the query `SELECT a1.goto FROM alias a1 LEFT JOIN alias a2 on (a2.address = '%s') WHERE '%s' = a1.address OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)` - I now have `.` address functionality and can do the same thing. It's much more common for email addresses to have `.` in them, so it's less likely to trigger alarm bells.
Technology ruins everything it touches doesn't it?
I was recently thinking about this Ozempic fad and how it will lead to no one being overweight, just dependent on Ozempic... until the food producers that made everyone fat in the first place with their processed junk produce Ozempic-resistant foods... and then we are really in a world of hurt.
What incentive do they have to make Ozempic resistant food? Ozempic resistance seems like an odd thing to optimize for. Or are you suggesting it will happen accidentally?
I love the idea of the comic villainy of someone who deliberately chooses to organize a team to find ways to circumvent Ozempic in order to keep their buyers unhealthy and addicted. Could such a schemer have an internal monologue, and what would it consist of? What do they see when they look into a mirror? Their experience of reality must be utterly fascinating and alien.
Read the blog post that this blog post talks about - the one that says "we use AI to spam people, isn't it great?". It will be something like that. As long as there is money to be made, the internal monologue is just "hope this works and I get more money".
> What do they see when they look into a mirror?
A person deserving of riches, that is about to get them. Nobody sees themselves as the villain. Well, maybe some, but vanishingly few.
They already did this pre-Ozempic - a lot of foods are optimized to keep you eating, and that's why there's an obesity crisis. Low nutrients, high sugar and fat. In the post-Ozempic world there will surely still be things that trigger the continued appetite of Ozempic users. Especially with the FDA having just been neutered.
Read Philip K Dick's "A Scanner Darkly" (or see the movie). They're forcing overweight people in Ozempic rehab to farm the ingredients to make more Ozempic!
Of course I don't have a tally of how many things it has improved vs. not improved, but there are many things I can think of which are considered good but also resulted in bad (social media providing connection while causing depression, cars providing freedom of movement vs. pollution, etc.), so it's probably not something that can be truly decided one way or the other.
Haha, exactly this. I've built and have successfully been using Unspam[0] for this reason since about a year ago. In the corporate/business world, anywhere SDR sales are involved, this form of automated AI outbound mail has picked up a lot. Tools like Apollo automate this AI process (both finding leads to mail, and then crafting the mail).
For interest's sake: users of Unspam who have a title of CEO on their LinkedIn see about ~10% of all mail making it into their inbox categorised as spam (lead gen, recruitment, or software dev services).
Just saw this, and as a small business owner in the B2B market, this sounds very useful. Gmail's existing spam filters do not reliably detect this type of marketing.
I wish your landing page had a simple "how it works" explanation with a screenshot or diagram, rather than forcing me to sign in directly and allow the app to read *and* send emails. Also, I don't see any pricing?
Finally, signing up, I got an error:
Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception
Thanks for the useful feedback! Totally forgot that pricing was never added to the landing page → have added to the todo list to fix up.
Where in the process did that error occur for you?
I see in the logs that an error registered, but unfortunately no detail attached. I've beefed up the logging a bit in the onboarding journey on my side to see what could be breaking here if we try again.
Mind trying to log-in/sign up again? You can use "HACKERNEWS" as a promo code, which would make the first month free.
The error occurred right after granting permissions from my Google account. The permissions were granted but I could never access your application page. I just tried again, now I got an "Error handling OAuth callback" after granting permissions. Signing in again does not work either. (I did remove all of the app's permissions in my Google security settings before, so to Google it looked like the application was requesting all of its permissions again.)
I do see it in the logs now. So weird, as dozens of people successfully signed up without this issue. Have added more logs now again to double down on that specific area where this issue is caused. Maybe another login attempt now will be able to uncover the gap.
Thanks for removing the permissions in Google, as that's also key in this debugging.
Mind if I send you an email to debug further there?
Quick shoutout to slhck for helping me debug and resolve this issue. Thank you!
tl;dr: Ran into issues because the DB was expecting a profile picture URL from Google auth (string) or NULL, but JavaScript being JavaScript tried to insert "undefined".
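For anyone who hasn't hit this before, the failure mode is easy to reproduce - a minimal sketch (the field names are hypothetical, not Unspam's actual schema):

```javascript
// String interpolation coerces a missing field to the literal string
// "undefined" instead of failing or yielding null - so the DB happily
// receives a non-NULL string where NULL was intended.
const profile = { name: "Ada" }; // auth payload with no picture URL

const pictureUrl = `${profile.picture}`; // the string "undefined"
const safeUrl = profile.picture ?? null; // guard preserving the intended null

console.log(pictureUrl, safeUrl);
```

The nullish-coalescing guard (or an explicit schema validation step before insert) keeps the DB constraint meaningful.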
I love this direction. It could be that the writer’s AI agent knows that he’s looking around for a new CMS so asks for more info, compiling this for review. Or it says ‘not interested’ and the conversation is muted.
All without the writer needing to be involved in reading the cold outreach.
Will this mean in-person business interactions will thrive because it will be the only way to avoid spam? Will companies hire thousands of people to deliver message in-person because emails no longer work?
Will our AI overlords create perfect androids to fool us into thinking we're interacting with a human when it's just LLMs disguised as people? Are we ourselves delusional because we're actually already LLMbots so advanced that we can't distinguish thought and running inference? Why do we have only 12 fingers?
If it gets that bad, I'll simply not respond to anything outside of my circle of friends and family. That is 95% of the communications I need. I think we'll all have to have some kind of pop-type verification for each other that we'll share in person or over a verifiable communications channel - no one will read this morass of horseshit.