We don't even know what 'creativity' is, and most humans I know are unable to be creative even when compelled to be.
AI is 'creative enough' - whether we call it 'synthetic creativity' or whatever, it definitely can explore enough combinations and permutations that it's suitably novel. Maybe it won't produce 'deeply original works' - but it'll be good enough 99.99% of the time.
The reliability issue is real.
It may not be solvable at the level of LLM.
Right now everything is LLM-driven; maybe in a few years it will be more agentically driven, where the LLM is used as 'compute' and we can pave over the 'unreliability'.
For example, the AI is really good when it has a lot of context and can identify a narrow issue.
It gets bad during long action sequences and context rot.
We can overcome a lot of this with a lot more token usage.
Imagine a situation where we use 1000x more tokens, and we have 2 layers of abstraction running the LLMs.
We're running the equivalent of 64K-RAM computers today; things change with 1G of RAM.
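A minimal sketch of what 'two layers of abstraction' running the LLMs could look like: an orchestrator layer breaks a task into narrow subtasks, and each worker call gets a fresh, small context so no single call suffers context rot, at the cost of far more total tokens. Everything here is illustrative - `call_llm` is a hypothetical stand-in for any model API, stubbed so the control flow runs:

```python
# Hedged sketch: two-layer "agentic" loop. The LLM is treated as raw
# 'compute'; reliability comes from the structure around it, not the model.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; echoes a canned answer."""
    return f"answer({prompt})"

def orchestrate(task: str, subtasks: list[str]) -> str:
    # Layer 1: a planner would decide the narrow subtasks (hard-coded here).
    # Layer 2: each worker call sees only its own small context, so a bad
    # answer is contained to one subtask instead of poisoning the whole run.
    results = [call_llm(f"{task} :: {sub}") for sub in subtasks]
    # The orchestrator then merges the narrow answers in a final call.
    return call_llm("merge :: " + " | ".join(results))

print(orchestrate("fix bug", ["locate", "patch", "verify"]))
```

The point of the sketch is only the shape: many cheap, narrow calls with fresh context, coordinated by a layer above, rather than one long conversation that rots.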
But what I see again and again in LLMs is a lot of combinations of possible solutions that are already somewhere on the internet (because that data went in). Nothing disruptive, nothing thought out the way an experienced human in a specific topic would do it. And that's besides all the mistakes/hallucinations.
> A lot of humans have difficulty with the very reality that they are in fact biological machines, and most of what we do is the same thing.
I think we are far ahead of this "mix and match". A human can be much, much more unpredictable than these LLMs in the thinking process, if only because they draw on a much bigger context - contexts that are even outside the theoretical area of expertise where you are searching for a solution.
Good solutions from humans are potentially much more disruptive.
Yet they never give you replies like: oh, notice how dolphins move through the water taking advantage of sea currents - when you are talking about boats and speed.
What they will do is find all the solutions someone else already produced and mix and match them - a mediocre way of approaching the problem, much closer to a search engine with mix-and-match than to thinking out of the box or thinking specifically for your situation. (The latter is difficult anyway, because there will always be some detail missing from the context, and if you really had to dump all that context from your brain each time, you would not use it as fast anymore.) Humans do that infinitely better. At least nowadays.
Now you will tell me that the info is there. So you can bias LLMs to think in more (or less) disruptive ways.
But then your job is to tweak the LLM until it behaves exactly how you want. That is nearly impossible for every situation, because what you want is for it to behave the right way depending on the context, not in a predefined way all the time.
At that point I wonder whether it is better to burn all your time tweaking and asking alternative LLMs questions that are not guaranteed to be reliable anyway, or to just keep learning the domain yourself - absorbing real knowledge instead of playing at tweaking (and not losing that knowledge by replacing it with machines). It is just stupid to burn several hours building an 'expert' whose claims you cannot check, instead of using that time to really learn about the problem itself.
This is a trade-off, and I think LLMs are good for stimulating human thinking fast. But they are not better at thinking or reasoning or any of that. And if you just rely on them, the only thing you will end up being professional at is prompting, which a 16-year-old untrained person can do almost as well as any of us.
LLMs can look better if you have no idea about the topic you are discussing. However, when you go and check, maybe the LLM hallucinated 10 or 15% of what it said.
So you cannot rely on it anyway. I still use them, but with a lot of care.
Great for scaffolding. Bad at anything that deviates from the average task.
I think the terminology is just dogshit in this area. LLMs are great semantic searchers and can reason decently well - I'm using them to self teach a lot of fields. But I inevitably reach a point where I come up with some new thoughts and it's not capable of keeping up and I start going to what real people are saying right now, today, and trust the LLM less and instead go to primary sources and real people. But I would have never had the time, money, or access to expertise without the LLM.
Constantly worrying, "is this a superset? Is this a superset?" is exhausting. Just use the damn tool; stop arguing about whether this LLM can handle all possible out-of-distribution things you might care about. If it sucks, don't make excuses for it - it sucks. We don't give Einstein a pass for saying dumb shit either, and the LLM ain't no Einstein.
If there's one thing to learn from philosophy, it's that asking the question often smuggles in the answer. Ask "is it possible to make an unconstrained deity?" And you get arguments about God.
Do they reason? There was a video by an AI researcher showing that they do not reason, but actually come up with the result first and then try to invent "reasoning" to match it.
nah, it just seems like that on Twitter. We have more prosperity by far than we've ever had in history, this is a time to celebrate.
We have our 'ducks in a row' more now than in the 1960s, when we went to the moon because of a cold war and the threat of nuclear annihilation / escalation.
My grandparents were born on farms with no electricity or plumbing; there was no real 'police', no social services, no healthcare, no antibiotics, and 10% of children did not make it past age 1. That's in living memory.
Despite the insanity on the news, it's mostly drama, and we still have more people coming out of abject poverty than ever.
We have 'modern world problems', they are real problems for sure, but they are of a different scale entirely.
Frankly, it may never even get that much better as we may be hitting diminishing marginal returns on 'progress' - we now have to figure out how to live 'long lives and stay healthy'.
It is a fine time to be going to the moon, but we could be doing multiple productive things at the same time. It just doesn't surprise me that there are so many people that are not caring so much about this.
What about the workers that will be eventually replaced by said robots? You think they're just going to get free money to exist? Most likely they'll end up in the private prison system or in institutions while the corporations pocket all of the savings. Things are a lot more complicated than they seem I think...
I hope you're right but I think it won't be pretty in all cases. It's easy to forget the industrial revolution wasn't entirely positive for common people or for that matter the environment.
That's upside down. The industrial revolution was more beneficial for 'common people' than it was for anyone else.
The 'industrial revolution' upended the ancien regime of basically feudal order.
For the first time, it created actual 'surplus' in the economy, and that surplus went into all sorts of things: education, leisure, the arts, medicine, travel.
The very concept of 'working people' taking a vacation is a very modern idea.
Then that broke through into basic real emancipation, universal suffrage.
Then medicine, healthcare, social services etc.
All of that only happens because of elevated productivity that's not captured by a passive elite.
The game is different now for sure, but there's almost no argument that can be made for 'less surplus'.
It's almost like saying 'what if energy were free, that would be bad'. No - it would mostly be good.
Half the 'cost' of electricity is borne by grid operators, which are usually regulated monopolies. They are generally overstaffed, inefficient bureaucracies. I'm not against public service, obviously, but I don't think that's the issue; rather, it's just related to 'monopoly' provider status.
Hydro One in Ontario was by far the largest occupant of the Sunshine List (>$100K salaries), and always has been. They pay dramatically above-market wages and have more staff than they need. It's the 'old boys' club of old boys' clubs'.
If energy prices drop, they will be able to charge more money to justify more 'infra', staff and expanding budgets.
The best thing we could ever do is get rid of our dependency on the energy grid.
If our homes could be powered like our cars ... that would be amazing and open up a ton of competition in a landscape which now has almost no competition.
That said - there are definitely theoretical efficiencies at scale and if we did get rid of the grid, we may never be able to get it back.
It's plausible that 'decentralized energy' may be very advantageous in that it puts a lot of competitive pressure on the centralized elements. Then we get the best of both worlds.
Edit: value chain and institutional power dynamics are the only real way to look at all of these systems. It's incredibly naive to think that some arbitrary technology is going to change any landscape. Case in point is this issue itself - that we 'grow' fuel instead of doing something arguably more efficient is a function of structural power.
"After 9/11, there's no world in which any attack on the US homeland, however small or local, is met with anything other than overwhelming retribution."
Ok, just follow through with the logic.
If the US 'flattened' Cuba (like Gaza) in response to a few drones, it would 100% make the US 'The Evil Empire' and turn the world 100% against America as a neo-fascist entity.
The costs would be unthinkable, and it would probably be the demise of the nation's 'historical special place'.
It would not ever fully recover, and the 'New World Order' would be something really hard to imagine.
In reality - something else would play out ..
I think the response would be disproportionate but probably focused; it depends on the 'populist effect', i.e. what exactly Cuba attacked and how it was provoked.
If the US attacked Cuba first, and Cuba responded with drones on a US military installation, I'll bet there would be populist resistance to escalation.
Even that tussle alone would look really bad for the US. It would probably guarantee the DJT administration 'last place' among all US presidents; people would be calling for the 25th Amendment and for new leadership, even as they might support strikes in response.
It would mean total political chaos until the Administration steps away, with Congress and institutions probably trying to put a 'bubble' around the WH Admin.
> If the US 'flattened' Cuba (like Gaza) in response to a few drones, it would 100% make the US 'The Evil Empire' and turn the world 100% against America as a neo-fascist entity.
It has already happened. Even in west Europe politicians are discussing how to protect their nations from US imperialism. Every remaining alliance the US has is strictly quid pro quo, there's no trust left anywhere (Israel being the singular exception). Meanwhile 50% of the planet is completely fed up and can't wait to have China take over as leader of the international order.
The whole thing is stupid. The US wouldn't flatten Cuba. Only leftists think the Cuban people support the communists. It's like that Hasan Piker line: "the good Cubans are still in Cuba, but the ones in the US that don't like communism are crazy." The reality is we would decapitate their regime, kill all their top brass, blow up their military installations, probably cause some collateral damage, and then in a year there would be resorts, modern vehicles, and commerce.
"The reality is we would decapitate their regime, kill all their top brass, blow up their military installations, probably gave some collateral damages, and then in a year there would be reports, modern vehicles, and commerce."
I couldn't imagine a more delusional statement, considering that at this very moment we are literally failing to 'change a regime' in an active war, once again!
The lack of self awareness here is ... scary.
Iran? Afghanistan? Iraq? Vietnam? Venezuela?
How many more lessons do you need, beyond the one literally on your TV set right now?
Here are some historical realities:
Nobody thinks of 'Castro Inc' as 'Communist' other than young folks on Reddit, or people listening to Joe Rogan.
Every adult - those living there, here, and elsewhere - knows that Castro Inc. are ruthless authoritarians; their 'nominal communism' is barely relevant. Ideology is barely cover for anything, as it is with all regimes.
If they have any residual popularity at all, it's for 'Standing up to America!' and against those who held up the ancien régime in Cuba that 'Kept the people down!' - which has at least some historic resonance.
Nobody liked Saddam, nobody likes the Taliban, and the Communists in Vietnam were not popular in the South, and likely not in the North either.
Chavismo had popular support, but that waned, and nobody likes the current regime.
And yet - where is all of this 'modern vehicles and commerce' in all these places?
The lack of self awareness is shocking.
The US ended up killing hundreds of thousands in Iraq and Afghanistan.
Almost 1 million people died in Saddam's US-supported invasion of Iran.
The Israeli government has now admitted that up to 70K Arabs were killed in Gaza.
Many in the US have no problem bombing the smithereens out of civilians, so long as there can be some kind of populist cover for it even if it's totally disproportional.
If Castro Inc. were so irresponsible that they sent drones into a US base, it's entirely plausible that Trump Inc. bombs Cuba with enormous civilian collateral damage.
Whatever happens, the regime will not fall; thinking it will is a dangerous insult to reality.
The only way Cuba could be liberated by force is a 'full invasion', which is technically very feasible but completely unlikely; otherwise, a long, protracted movement towards détente. That's it.
All of those are different conflicts. There hasn't been change in Venezuela? Have you seen the political prisoners being released? Are you aware of Germany, Japan, or Panama? Every conflict is different. I thought the Iraq war foolish, and the Afghanistan war did not have clear goals and spanned too many different presidents. I think it is interesting that you know that no Cuban living in the US who does not listen to Joe Rogan and is not on Reddit thinks of Cuba as communist. It's a pretty unbelievable statement. I don't know if this is some leftist No True Scotsman thing or what. You think because they are authoritarian, they are not communist? How do you think we ensure that I have a right to your labor, if not by force? I guess we will just see what happens.
My baseline was Jina, a Chinese model provider. I had major issues with their reliability. I have no offline metrics to offer as a comparison, because I had to do an emergency migration: their inference service had extended downtimes.
My experience with Cohere and their sales engineers has been boring - and I say that in the most flattering way possible. Embeddings are a core service at this point, like VMs and DBs. They just need to work, and work well, and that's what they're selling.
Fair point, but there's a logical relationship between 'testing someone' and 'following a set of instructions that don't achieve that effect'.
Your point is fair, but what is really nuanced is that the people who 'stopped' were the best ones at following the rules.
This seems interesting to me - they were conscientious about 'what was happening', not just blithely following orders.
The 'rule followers' may have been conscientiously applying the 'spirit of the test', and quit when they realized it was not reasonable.
The others were 'pressing buttons'.
Even then, it's subject to interpretation. There's a perfectly rational reason why people might submit to 'following the rules' if that's what they've been asked to do and they have a sense of 'dutiful civic conduct' and 'trust in institutions'.
But yes - limitations will remain.