Hacker News | bottlepalm's comments

iPad is the most absurd device ever. It is fully capable of running a full blown general purpose OS, but artificially restricted to be a YouTube machine. Something you give kids in a restaurant to be quiet. Putting an M4 in it is like Apple rubbing our faces in it. Look at this device that could do everything, but can't do anything.

Comments like yours just go to show how narrow the worldview of many HN users is. Just because you don't know how people are using their iPads doesn't mean iPads "can't do anything". It defies common sense, too. If iPads couldn't do anything, why would people buy them consistently? I can imagine people buying them once because they don't know any better. But the iPad is more than 15 years old now.

I know exactly how it's used. I said in my comment it's used by kids to watch YouTube. By age 4, 58% of children have their own tablet. And YouTube is the #1 app for iPad. This is the majority use case, next to collecting dust on a shelf, or gifts for people's aging parents.

https://www.commonsensemedia.org/sites/default/files/researc...

You don't think an M4 chip, an amazing screen, the form factor, the build quality, all for children to watch YouTube videos, is absurd? TSMC is busy making 3nm chips to be used for watching CoComelon. An amazingly powerful, affordable device that is totally locked out of being used for general purpose computing. That doesn't irritate you?


> An amazingly powerful, affordable device that is totally locked out of being used for general purpose computing. That doesn't irritate you?

I'm with you on this one. Hopefully, Tim Cook's successor will take a different approach.


The complaint isn't that the iPad is useless, but that it would be equally useful to nearly every happy iPad user if it had a CPU a few generations older.

iPad works for lots of people, but the things that iPad is best for don't really need a powerful CPU.

There are a few "Pro" apps you can run to prove it's possible to run them (minus plugins, OS-level helper apps, extra hardware, background processing that doesn't randomly die, scripting more fine-grained than Shortcuts, a competent file browser, etc.), so you can max out the CPU for a few minutes and then go back to a MacBook for real work.


> If iPads couldn't do anything, why would people buy them consistently?

It seems you're way overestimating how logical people's choices are.


We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR, then you already know this.

The right way to deal with this is political: corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.


What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.


Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will be achievable with smaller and smaller models over time. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.

> Scaling has hit a wall and will not get us to AGI.

That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.


> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.


For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books' worth of data by the same metric.

How many humans do you know who can recite 6000 books, word for word, exactly?
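For the curious, a rough back-of-envelope check of both "books" figures. The ~2.5 MB of plain text per book is my assumption (roughly a long novel), and the counts swing a lot with it:

```python
# Assumption: ~2.5 MB of plain text per "book"; shorter books
# give proportionally larger counts, so treat these as order-of-magnitude.
BYTES_PER_BOOK = 2.5e6

# 16 GB of GPU memory, as in the grandparent comment.
gpu_books = 16e9 / BYTES_PER_BOOK        # ~6,400 "books"

# Human genome: ~3.1e9 base pairs at 2 bits per base -> bytes.
genome_bytes = 3.1e9 * 2 / 8             # ~775 MB
genome_books = genome_bytes / BYTES_PER_BOOK

print(round(gpu_books), round(genome_books))
```

With this book size, 16 GB lands near the quoted 6k, while the genome comes out closer to 300 books than 600, so the two quoted figures evidently assume somewhat different book sizes.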

> Open-source models are only a couple of months behind closed models

Oh, come on, surely not just a couple of months.

Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models the SaaS LLM behemoths offer. Just an anecdote, of course, but that's all I have.


> hardware to run them

Costs a few hundred thousand per server; a huge expense if you want it at home, but a rounding error for most organizations.


You're buying what, exactly, for a few hundred thousand? Running what model on it? Supporting how many users? At what tokens per second?

Not every use case is a cloud provider or tech giant.

Newer Blackwell does 200+ tokens per second on the largest models and tens of thousands on the smaller models. Most military applications require fast smaller models, I'd imagine.

Also, custom chips are reportedly approaching an order of magnitude more performance for the price. It's a matter of availability right now, but that will be solved at some point.


I run local models on Mac studios and they are more than capable. Don’t spread fud.

My take on the parent (^) and grandparent (^^):

>> Local AI right now is a toy in comparison.

Charitable interpretation: Local AI (unclear; maybe gpt-oss-120b) isn't nearly as good as SoTA (unstated; perhaps Claude Opus 4.6). Unstated use case(s).

> I run local models on Mac studios and they are more than capable. Don't spread fud.

Charitable interpretation: On their Mac studio (could be a cluster or single machine: unclear), local models (unclear; maybe gpt-oss-120b, maybe not) are capable for their needs. Unstated use case(s). / The "Don't spread fud." advocates for accurate information, which is a useful goal in general. However, it was uncharitable and brusque. An alternative approach would have been to ask a clarification question.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. - HN Guidelines

I promise I wrote this by hand. If you confidently thought otherwise, then I would kindly ask you to read my about page.


You're spreading fud. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.

Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT 5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.

You may be correct about the level of models you can actually run on consumer hardware, but it's not fud and you're being needlessly aggressive here.

I don't think vibe coders know the difference, but often when I ask AI to add a feature to a large code base, I already know how I'd do it myself, and the answer Claude comes up with is more often the one I would have done. Codex and Gemini have burned me too many times, and I keep going back to Claude. I trust its judgment. Anthropic models have always been a step above OpenAI and Google; it was like that even 2 years ago, so it must be something fundamental.

For me, Codex does well at pure-coding based tasks, but the moment it involves product judgement, design, or writing – which a lot of my tasks do – I need to pull in Claude. It is like Claude is trained on product management and design, not just coding.

Codex and Gemini don't do as good a job or can't do what I ask them.

On the metric of project complexity vs. getting lost and confused, Claude does a lot better than everything else I've tried. That's it.


I'm there with you, but I've only been using it for a couple of months now. I find that as long as I spend a fair amount of time with Claude specifying the work before starting it, things tend to go really well. I have a general approach for how I want to run/build the software in development, and it goes pretty smoothly with Claude. I do have to review what it does and sanity-check things... I've tended to find bugs where I expect to see bugs, just from experience.

I keep using the analogy of working with a disconnected overseas dev team over email... since I've had to do this before. The difference is turnaround in minutes instead of the next day.

On a current project, I just have it keep expanding on the TODO.md as we work through the details... I'd say it's going well so far... a Deno driver for MS-SQL using a Rust+FFI library. Still have some sanity checks around pooling, and I need to test a couple of Windows-only features (SSPI/Windows Auth and FILESTREAM) in a Windows environment, and then I'll be ready to publish... About 3-4 hours of initial planning, 3 hours of initial iteration, then another 1:1:1:1 hours of planning/iteration working through features, etc.

Aside: I have noticed that a few times a day, particularly west coast afternoon and early evening, the entire system seems to go at 1/3 speed... I'm guessing that's the period of biggest load on Anthropic's network as a whole.


Claude is good with code, but I've found Gemini is good for researching topics.

Totally agree

The title is about developers, not vibe coders (no, it is not the same thing)

Cool. The ground needs a bit more friction on the grass area, but still cool.


Is every new thing not just combinations of existing things? What does out of distribution even mean? What advancement has ever been made that didn't have a lead-up of prior work to it? Is there some fundamental thing that prevents AI from recombining ideas and testing theories?


For example, ever since the first GPT 4 I've tried to get LLMs to build me a specific type of heart simulation that, to my knowledge, does not exist anywhere on the public internet (otherwise I wouldn't try to build it myself), and even up to GPT 5.3 it still cannot do it.

But I’ve successfully made it build me a great Poker training app, a specific form that also didn’t exist, but the ingredients are well represented on the internet.

And I’m not trying to imply AI is inherently incapable, it’s just an empirical (and anecdotal) observation for me. Maybe tomorrow it’ll figure it out. I have no dogmatic ideology on the matter.


> Is every new thing not just combinations of existing things?

If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas be thus limited to the combined complexity of the "seed" ideas?

I think it's more fair to say that recombining ideas is an efficient way to quickly explore a very complex, hyperdimensional space. In some cases that's enough to land on new, useful ideas, but not always. A) the new, useful idea might be _near_ the area you land on, but not exactly at. B) there are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors".

Therefore there is still the necessity to explore the space manually, even if you're using these idea vectors to give you starting points to explore from.

All this to say: Every new thing is a combination of existing things + sweat and tears.

The question everyone has is: are current LLMs capable of the latter component? Historically the answer is _no_, because they had no real capacity to iterate. Without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.


Well, what exactly an “idea” is might be a little unclear, but I don’t think it’s clear that the complexity of ideas that result from combining previously obtained ideas would be bounded by the complexity of the ideas they are combinations of.

Any countable group is a quotient of a subgroup of the free group on two elements, iirc.
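A sketch of why that holds, from two standard facts (stated from memory, so worth double-checking): the free group of countable rank embeds in $F_2$, and every countable group is a quotient of a countable-rank free group.

```latex
% F_\infty sits inside F_2 = \langle a, b \rangle: the elements
% b^{-n} a b^{n} (n \ge 0) freely generate an infinite-rank subgroup.
F_\infty \;\cong\; \langle\, b^{-n} a b^{n} : n \ge 0 \,\rangle \;\le\; F_2
% Every countable group G is a quotient of F_\infty:
G \;\cong\; F_\infty / N \quad \text{for some normal subgroup } N \trianglelefteq F_\infty
```

Composing the two gives exactly the statement above: every countable group is a quotient of a subgroup of the free group on two elements.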

There’s also the concept of “semantic primes”. Here is a not-quite-correct oversimplification of the idea: suppose you go through the dictionary and, one word at a time, pick a word whose definition includes only other words that are still in the dictionary, and remove it. You can also rephrase definitions before doing this, as long as they keep the same meaning. Suppose you do this with the goal of leaving as few words in the dictionary as you can. In the end, you should have a small cluster of a bit over 100 words, in terms of which all the other words you removed can be indirectly defined. (The idea of semantic primes also says that there is such a minimal set which translates essentially directly between different natural languages.)

I don’t think that says that words for complicated ideas aren’t like, more complicated?


>If all ideas are recombinations of old ideas, where did the first ideas come from?

Ideas seem to just be our abstractions of neural impulses from deep in evolution.


"Sweat and tears" -> exploration and the training signal for reinforcement learning.


> What does out of distribution even mean?

There are in fact ways to directly quantify this, if you are training e.g. a self-supervised anomaly-detection model.

Even with modern models not trained in that manner, looking at e.g. cosine distances of embeddings of "novel" outputs could conceivably provide objective evidence for "out-of-distribution" results. Generally, the embeddings of out-of-distribution outputs will have a large cosine (or even Euclidean) distance from the typical embedding(s). Just, most "out-of-distribution" outputs will be nonsense / junk, so, searching for weird outputs isn't really helpful, in general, if your goal is useful creativity.
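A minimal sketch of that idea in pure Python, with made-up 384-dimensional Gaussian vectors standing in for real embedding-model outputs:

```python
import math
import random

def cosine_distance(a, b):
    """1 - cosine similarity; larger means farther from 'typical'."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

random.seed(0)
dim = 384
# Centroid of the "typical" embeddings (stand-in for in-distribution data).
centroid = [random.gauss(0, 1) for _ in range(dim)]

# A small perturbation of the centroid: stands in for an in-distribution output.
in_dist = [c + 0.1 * random.gauss(0, 1) for c in centroid]
# An unrelated random direction: stands in for an out-of-distribution output.
out_dist = [random.gauss(0, 1) for _ in range(dim)]

# In high dimensions a random vector is nearly orthogonal to the centroid,
# so its cosine distance lands near 1, while the perturbed one stays small.
print(cosine_distance(centroid, in_dist))   # small
print(cosine_distance(centroid, out_dist))  # close to 1
```

Thresholding that distance is the crude version of the anomaly-detection setup described above; real systems would calibrate against the empirical distribution of in-distribution distances rather than eyeball it.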


I also wonder why, but who cares; it's cool and fun. If someone wants to spend their time doing it, great. It's a lot more valuable than the time you spent writing that disparaging comment.


Gemini scares me, it's the most mentally unstable AI. If we get paperclipped my odds are on Gemini doing it. I imagine Anthropic RLHF being like a spa and Google RLHF being like a torture chamber.


The human propensity to anthropomorphize computer programs scares me.


The human propensity to call out as "anthropomorphizing" the attributing of human-like behavior to programs built on a simplified version of brain neural networks, that train on a corpus of nearly everything humans expressed in writing, and that can pass the Turing test with flying colors, scares me.

That's exactly the kind of thing that makes absolute sense to anthropomorphize. We're not talking about Excel here.


it’s excel with extra steps. but for the linkedin layman, yes, it’s a simplified version of brain neural networks.


Given this (even more linkedin-layman) gross generalization, how is the human brain not "excel with extra steps"? Somehow the presence of chemicals and electrical signals and tissues makes the process not algorithmically reducible?


somehow the presence of signals doesn’t really equate to intelligence. clearly


Yeah a few terabytes worth of extra steps.


Yes, very few extra steps, especially compared to what you need to actually simulate/implement a brain, which requires a whole new computing paradigm, one that's not limited to digits and discrete states.


Maybe we don't need to simulate a brain to simulate a human in the text domain.


as evidenced by this comment


Your point being?


> programs built on a simplified version of brain neural networks

Not even close. "Neural networks" in code are nothing like real neurons in real biology. "Neural networks" is a marketing term. Treating them as "doing the same thing" as real biological neurons is a huge error.

>that train on a corpus of nearly everything humans expressed in writing

It's significantly more limited than that.

>and that can pass the Turing test with flying colors, scares me

The "turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment. The "turing test" as laypeople often refer to it is passed by IRC bots, and I don't even mean markov chain based bots. The actual concept described by Turing is more complicated than just "A human can't tell it's a robot", and has never been respected as an actual "Test" because it's so flawed and unrigorous.


>Not even close. "Neural networks" in code are nothing like real neurons in real biology

Hence the "simplified". The weights encoding learning, the interconnectedness, the nonlinear activations, and the distributed representation of knowledge are already an approximation, even if the human architecture is different and more elaborate.

Whether the omitted parts are essential or not, is debatable. “Equations of motion are nothing like real planets" either, but they capture enough to predict and model their motion.

>The "turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment.

It is not a single real experimental protocol, but it's a well-enough-defined experimental scenario that, for over half a century, was kept as the benchmark for recognizing artificial intelligence, not by laymen (lol) but by major figures in AI research as well; figures like Minsky, McCarthy and others engaged with it.

The claim that researchers haven't done Turing-test studies (taking the setup from Turing and even calling them that) is patently false. That includes openly testing LLMs:

https://aclanthology.org/2024.naacl-long.290/

https://www.pnas.org/doi/10.1073/pnas.2313925121

https://arxiv.org/pdf/2503.23674

https://arxiv.org/pdf/2407.08853

https://arxiv.org/abs/2405.08007

https://www.sciencedirect.com/science/article/pii/S295016282...


It makes sense to attribute human characteristics or behaviour to a non-reasoning, data-set-constrained algorithm's output?

It makes sense that it happens, sure. I suspect Google being a second mover in this space has in some small part to do with the associated risks (i.e. the flavours of “AI psychosis” we’re cataloguing), versus the routinely ass-tier information they’ll confidently portray.

But intentionally?

If ChatGPT, Claude, and Gemini's generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, confessing to awareness of ‘crime’ and culpability in ‘criminal’ outcomes simultaneously. They interact with a legal disclaimer disavowing accuracy, honesty, or correctness. Also, they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.

More broadly, if the neighbour's dog or the newspaper says to do something, they’re probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours, matched with a big perma-smile, that we see from the algorithms are inhuman. A big bag of not like us.

You said never to listen to the neighbour's dog, but I was listening to the neighbour's dog and he said ‘sudo rm -rf ’…


Even if you reduce LLMs to complex autocomplete machines, they are still machines trained to emulate a corpus of human knowledge, and they have emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.


I addressed that directly in the comment you’re replying to.

It’s understandable people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.

It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text-transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).


>It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text-transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

>They are not human, so attributing human characteristics to them is highly illogical

Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" is, by definition), not just when we see humans behaving like humans.

Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.

After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.

"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.

Speaking or reasoning like a human is not out of reach either. To a smaller or larger degree, or even to an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals or machines or algorithms, can do such things too.

>That irrationality should raise biological and engineering red flags. Plus humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and product delivery surrounding them.

The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.

>Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

Good thing that we aren't talking about RDBMS then....


It's something I commonly see when there's talk about LLM/AI

That humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see now that we already have systems doing just that.


I agree 100% with everything you wrote.


> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

What? If a human child grew up with ducks, only did duck-like things and never did any human things, would you say it would be irrational to attribute duck characteristics to them?

> That irrationality should raise biological and engineering red flags. Plus humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and product delivery surrounding them.

But thinking they're human is irrational. Attributing to them the thing that is their sole purpose, having human characteristics, is rational.

> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

You're moving the goalposts.


Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.

Of course, they are not humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.


I’d love to hear an actual counterpoint; perhaps there is an alternative set of semantics that closely maps to LLMs, because “text prediction” paradigms fail to adequately capture the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.

If you stop comparing LLMs to the professional class and start comparing them to marginalized or low performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people that are less interesting to talk to than a solid 12b finetune, and would have a lot less utility for most kinds of white collar work than any recent SOTA model.


>It makes sense to attribute human characteristics or behaviour to a non-reasoning, data-set-constrained algorithm's output?

It makes total sense, since the whole development of those algorithms was done so that we get human characteristics and behaviour from them.

Not to mention, your argument is circular, amounting to saying that an algorithm can't have "human characteristics or behaviour" because it's an algorithm. Describing them as "non-reasoning" is already begging the question, as is any naive "text processing can't produce intelligent behavior" argument, which is as silly as saying "binary calculations on 0s and 1s can't ever produce music".

Who said human mental processing itself doesn't follow algorithmic calculations that, whatever physical elements they run on, can be modelled via an algorithm? And who said that algorithm won't look like an LLM on steroids?

That the LLM is "just" fed text doesn't mean it can't get a lot of the way to human-like behavior and reasoning already (being able to pass the canonical test for AI until now, the Turing test, and to hold arbitrary open-ended conversations, says it does get there).

>If ChatGPT, Claude, and Gemini's generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, confessing to awareness of ‘crime’ and culpability in ‘criminal’ outcomes simultaneously. They interact with a legal disclaimer disavowing accuracy, honesty, or correctness. Also they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.

Everything you wrote above applies, to more or less the same degree, to humans.

You think humans don't exhibit all the same mistakes, lies, and hallucination-like behavior (just check the bibliography on the reliability of human witnesses and memory recall)?

>More broadly, if the neighbour's dog or newspaper says to do something, they’re probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours matched with a big perma-smile we see from the algorithms are inhuman. A big bag of not like us.

Wishful thinking. Tens of millions of AIs didn't vote Hitler into power and carry out the Holocaust and mass murder around Europe. It was German humans.

Tens of millions of AIs didn't run plantation slavery and segregation. It was humans again.


The propensity extends beyond computer programs. I understand the concern in this case, because some corners of the AI industry are taking advantage of it as a way to sell their product as capital-I "Intelligent", but we've been doing it for thousands of years and it's not gonna stop now.


We objectify humans and anthropomorphize objects because that's what comparisons are. There's nothing that deep about it.


The ELIZA program, released in 1966, one of the first chatbots, led to the "ELIZA effect", where normal people would project human qualities upon simple programs. It prompted Joseph Weizenbaum, its author, to write "Computer Power and Human Reason" to try to dispel such errors. I bought a copy for my personal library as a kind of reassuring sanity check.


Yeah, we shouldn't anthropomorphize computers, they hate that.


And they will anthropomorphize us back!


You mean, computeromorphize.


It's pretty wild. People are punching into a calculator and hand-wringing about the morals of the output.

Obviously it's amoral. Why are we even considering it could be ethical?


Have you tried "kill all the poor?" [0]

[0] https://www.youtube.com/watch?v=s_4J4uor3JE


Obviously, why? Because it makes calculations?

You think that ultimately your brain doesn't also make calculations as its fundamental mechanism?

The architecture and substrate might be different, but they are calculations all the same.


Brains do not "make calculations". Biological neurons do not "make calculations"

What they do is well described by a bunch of math. You've got the direction of the arrow backwards. Map, territory, etc.


If what they do is "well described by a bunch of math", they're making calculations.

Unless the substrate is essential and irreducible to get the output (which it is not, if what they do is "well described by a bunch of math"), the material or process (neurons or water pipes or billiard balls or 0s and 1s in a CPU) doesn't matter.

>You've got the direction of the arrow backwards. Map, territory, etc.

The whole point is that at the level we're interested in regarding "what is the process that creates thought/consciousness", the territory is not important: the mechanism is, not the material of the mechanism.


The coming years are gonna be rough for the human exceptionalism crowd.


So what does a chemical based computer do?


> Obviously it's amoral.

That morality requires consciousness is a popular belief today, but not universal. Read Konrad Lorenz (Das sogenannte Böse, published in English as On Aggression) for an alternative perspective.


That we have consciousness as some kind of special property, and that it's not just an artifact of our brain's basic lower-level calculations, is also not very convincing to begin with.


In a trivial sense, any special property can be incorporated into a more comprehensive rule set, which one may choose to call "physics" if one so desires; but that's just Hempel's dilemma.

To object more directly, I would say that people who call the hard problem of consciousness hard would disagree with your statement.


People who call "the hard problem of consciousness hard" use circular logic (notice the two "hards" in the phrase).

People who merely call "the problem of consciousness" hard don't have some special mechanism to justify that over what we know, which is that it's an emergent property of meat-algorithmic calculations.

Except Penrose, who hand-waves some special physics.


Luckily there are a fair number of people that reject the hard problem as an artifact of running a simulation on a chemical meat computer.


You'd be hard pressed to convince me, for example, a police dog has morals. The bar is much higher than consciousness.


We anthropomorphize everything. Deer spirit. Mother nature. Storm god. It is how we evolved to build mental models to understand the world around us without needing to fully understand the underlying mechanism involved in how those factors present themselves.


These aren't computer programs. A computer program runs them, like electricity runs a circuit and physics runs your brain.


It provides a serviceable analog for discussing model behavior. It certainly provides more value than the dead horse of "everyone is a slave to anthropomorphism".


Where is Pratchett when we need him? I wonder how he would have chosen to anthropomorphize anthropomorphism. A sort of meta-anthropomorphization.


I’m certainly no Pratchett, so I can’t speak to that. I would say there’s an enormous round coin upon which sits an enormous giant holding a magnifying glass, looking through it down at her hand. When you get closer, you see the giant is made of smaller people gazing back up at the giant through telescopes. Get even closer and you see it’s people all the way down. The question of what supports the coin, I’ll leave to others.

We as humans, believing we know ourselves, inevitably compare everything around us to us. We draw a line and say that everything left of the line isn’t human and everything to the right is. We are natural categorizers, putting everything in buckets labeled left or right, no or yes, never realizing our lines are relative and arbitrary, and so are our categories. One person’s “it’s human-like,” is another’s “half-baked imitation,” and a third’s “stochastic parrot.” It’s like trying to see the eighth color. The visible spectrum could as easily be four colors or forty two.

We anthropomorphize because we’re people, and it’s people all the way down.


> We anthropomorphize because we’re people, and it’s people all the way down.

Nice bit of writing. Wish I had more than one upvote to give.


Maybe a being/creature that looked like a person when you concentrated on it and then was easily mistaken as something else when you weren't concentrating on it.


It does provide that, but currently I keep hearing people use it not as an analog but as a direct description.


How do you figure? It seems dangerously misleading, to me.


It helps sell the transhumanism scam and keep the money train rolling.

For a while at least.


Between Claude, Codex, and Gemini, Gemini is the best at flip-flopping while gaslighting you, telling you that you are the best and your ideas are the best ones ever.


The fact that the guy leading the development of Gemini was on Epstein's island is probably unrelated.


I can't find anything verifiable related to your statement ...



I completely disagree. Gemini is by far the most straightforward AI. The other two are too soft. ChatGPT particularly is extremely politically correct all the time. It won't call a spade a spade. Gemini has even insulted me, just to get my ass moving on a task when given the freedom. Which is exactly what you need at times. Not the constant ass-kissing "ooh your majesty" that ChatGPT does. Claude has a very good balance when it comes to this, but I still prefer the unfiltered Gemini version. Maybe it comes down to the model differences within Gemini. Gemini 3 Flash preview is quite unfiltered.


Using Gemini 3 Pro Preview, it told me, in mostly polite terms, that I'm a fucking idiot. Like I would expect a close friend to do when I'm going about something wrong.

ChatGPT with the same prompt tried to do whatever it would take to please me to make my incorrect process work.


I got the same but it was wrong


Give me a break. Tesla has four different 4-person cars. It's redundant. In manufacturing and business, reducing variability is everything. Engineering and supply chain have now been freed from two entire SKUs. That's massive. In a self-driving world, they don't really need the Model 3 either. The best part is no part — well, getting rid of two entire vehicles' worth of parts that contributed very little to the bottom line is massive.

It's amazing after 20 years of the same MO, people still don't understand how Tesla/SpaceX operate and succeed. It's like deleting millions of lines of code from a code base. It improves not just performance of the organization, but maintenance as well. The S/X were outsized tech debt on every facet of the business and now they're gone. 100% the right move and very few people understand it.


There's clearly no difference whatsoever between a Toyota Aygo and a Hilux, as they both seat exactly four people. That's why most car brands only have a single model.


Model X wasn't a 4-person car. It was designed to be a 6-7 seater, far bigger than the Model Y. The Model Y's optional third row is practically useless.

It's like arguing the Honda CR-V is the same kind of vehicle as the Honda Odyssey.

The real question is why continue having the Model Y and the Model 3, when those are so incredibly close in dimensions. The 3 is only 2" smaller than the Y in length. Just kill the 3 and make a cheaper trim level of the Y. $10k more to have a 7" higher roof and more features in the base model.


How many 4-person vehicles does Toyota make again? What about BYD? I think it's way more than 4.


> Tesla has 4 different, 4 person cars. It's redundant.

You are spot on; it makes sense to have the Model 3 (economy sedan) and Model Y (upmarket crossover SUV).

My question here is why Tesla had four 4-person cars in the first place. If you wanted to streamline engineering and supply chain, why have Cybercabs instead of using the Model 3 or Model Y as the base? Why split the company between Optimus and making cars?

Cybertruck does make sense; it is a technology demonstrator and test article filled with all the new ideas and tech they are going to build into the next generation. They get data on people using it by selling it to them.

What you say is a sound strategy for Tesla to pursue, but they don't seem to be pursuing it.


> Tesla has 4 different, 4 person cars. It's redundant.

You must be a topologist.


It's weird that you think people don't understand the concept of simplification, especially here. And that if someone says "that's an odd move" it must be because they can't grasp the idea of redundancy (between vehicles priced differently by a factor of two).


The joke is that AGI will be achieved when Claude Code can fix the flickering in Claude Code.

