There's also plenty of argument to be made that it's already here. AI can hold forth on pretty much any topic, and it's occasionally even correct. Of course to many (not saying you), the only acceptable bar is perfect factual accuracy, a deep understanding of meanings, and probably even a soul. Which keeps breathing life into the old joke "AI is whatever computers still can't do".
It's still forgetting what it's talking about from minute to minute. I'm honestly getting tired of bullying these models into following directions I've already given them three times.
I think the main problem with AGI as a goal (other than that I don't think it's possible with current hardware; maybe it's possible with hypothetical optical transistors) is that I'm not sure AGI would be more useful. AGI would argue with you more. People are not tools for you, they are tools for themselves. LLMs are tools for you. They're just very imperfect because they are extremely stupid. They're a method of forcing a body of training material to conform to your description.
But to add to the general topic: I see a lot of user interfaces to creative tools being replaced, not too long from now, by realtime stream-of-consciousness babbling by creatives. Give those creatives a clicker with a green button for happy and a red button for sad, and you might be able to train LLMs to be an excellent assistant and crew on any mushy project.
How many people are creative, though, as compared to people who passively consume? It all goes back to the online ratio of forum posters to forum readers. People who post probably think 3/5 people post, when it's probably more like 1/25 or 1/100, and the vast majority of posts are bad, lazy and hated. Poasting is free.
Are there enough posters to soak up all that compute? How many people can really make a movie, even given a no-limit credit card? Have you noticed that there are a lot of Z-grade movies that are horrible, make no money, and have budgets higher than really magnificent films, budgets that in this day and age give them access to technology that stretches those dollars farther than they ever could e.g. 50 years ago? Is there a glut of unsung screenwriters?
I will give you an example from just two days ago: I asked ChatGPT Pro to take some rough address data and parse it into street number, street name, street type, city, state, and zip fields.
The first iteration produced decent code, but there was an issue: some street numbers had alpha characters in them, which it didn't treat as street numbers. So I asked it to adjust the logic so that the first word counts as a valid street number whether it's alphanumeric or purely numeric.
It updated the code, and gave me both the sample code and sample output.
Sample output was correct, but the code wasn't producing correct output.
It spent more than 5 minutes on each iteration (significantly less time than a normal developer would, but a normal developer would not come back with broken code).
I can't rely on this kind of behavior, and this was a completely greenfield task with straightforward input and straightforward output. This is not AGI in my book.
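For what it's worth, the parsing task described above is the kind of thing a short deterministic script handles reliably. Here's a minimal sketch of the requested logic, assuming simple US-style addresses ("number name type, city, state zip"); the field names, the street-type list, and the alphanumeric-number rule (accepting tokens like "12B") are my assumptions, not the code ChatGPT produced.

```python
import re

# Common street-type suffixes (illustrative subset, lowercase).
STREET_TYPES = {"st", "street", "ave", "avenue", "rd", "road",
                "blvd", "dr", "drive", "ln", "lane"}

def parse_address(raw):
    """Split 'number name type, city, state zip' into fields."""
    parts = [p.strip() for p in raw.split(",")]
    street = parts[0]
    city = parts[1] if len(parts) > 1 else ""
    state, zip_code = "", ""
    if len(parts) > 2:
        m = re.match(r"([A-Z]{2})\s+(\d{5})", parts[2])
        if m:
            state, zip_code = m.group(1), m.group(2)

    tokens = street.split()
    number, stype = "", ""
    # Accept alphanumeric street numbers like "12B" or "221B",
    # not just purely numeric ones.
    if tokens and re.match(r"^\d+[A-Za-z]?$", tokens[0]):
        number = tokens.pop(0)
    if tokens and tokens[-1].lower().rstrip(".") in STREET_TYPES:
        stype = tokens.pop()
    return {"number": number, "name": " ".join(tokens), "type": stype,
            "city": city, "state": state, "zip": zip_code}

print(parse_address("12B Main St, Springfield, IL 62704"))
```

Real-world address data is far messier than this (directionals, units, PO boxes), which is part of why the iterative back-and-forth in the anecdote is so frustrating.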
Did I really need to use the /s tag? But a four-year-old is occasionally correct. Are they not intelligent? My cat can't answer math problems, is it a mere automaton? If we can't define what "true" intelligence is, then perhaps a simulation that fools people into calling it "close enough" is actually that, close enough.
> There's also plenty of argument to be made that it's already here
Given you start with that I would say yes the /s is needed.
A 4-year-old isn't statistically predicting the next word to say; its intelligence is very different from an LLM's. Calling an LLM "intelligent" seems more marketing than fact-based.
I actually meant that first sentence too. One can employ sarcasm to downplay one's own arguments as well, which was my intent: AGI might not be a binary definition like "True" AI, and we may be seeing something that's senile and not terribly bright, but still "generally intelligent" in some limited sense.
And now, after having to dissect my attempt at lightheartedness like a frog or a postmodern book club reading, all the fun has gone out of it. There's a reason I usually stay out of these debates, but I guess I wouldn't have been pointed to that delightful PDF if I hadn't piped up.
When an agent can work independently over an 8 hour day, incorporating new information and balancing multiple conflicting goals—then apply everything it learned in context to start the next day with the benefit of that learning, repeat day after day—then I'll call it AGI.