
Those Macs you are talking about are still very niche, mostly used by loyal customers who do basic/common things, or by very vocal fanboys who always find a way to shill for whatever Apple comes up with, no matter how flawed and lackluster the product is.

Even if you want to run local AI, Macs are not really a good deal once you account for the price of soldered RAM and the limitations of AI tooling on macOS. But as always, the minority is very vocal, so it looks like it's all the rage; for the most part, people doing real work are still using PCs, and they don't have that much time to argue about it on the internet.




I think you’re underestimating, or not understanding, why Macs have taken off so much for AI. It has nothing to do with fanboys shilling for Apple. You can get a MacBook Pro today with 128GB of unified memory, or a Mac Studio with 512GB. Then you get to run macOS, which is vastly superior to Windows for AI productivity and far more accessible/convenient than Linux, even today. There’s a reason so many AI apps are Mac-native first (or exclusively): no other company offers that much memory and convenience in a consumer product for these purposes. These have become genuinely unique products with almost no competition, and by all accounts Apple is just getting started in this direction.
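
For concreteness, here is roughly all it takes to run a mid-sized open-weight model out of unified memory with Apple's mlx-lm (a minimal sketch; the package is real, but the model repo name is just an illustrative example):

    # Sketch: loading a quantized open-weight model into unified memory
    # on an Apple Silicon Mac with mlx-lm (pip install mlx-lm).
    from mlx_lm import load, generate

    # Example repo name; a 4-bit 70B model needs roughly 40GB of weights,
    # which fits comfortably in 128GB of unified memory with room for context.
    model, tokenizer = load("mlx-community/Meta-Llama-3.1-70B-Instruct-4bit")
    print(generate(model, tokenizer,
                   prompt="Explain unified memory in one sentence.",
                   max_tokens=100))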

Local AI is very limited and mostly a waste of time and capital compared to subscribing for access to the good stuff from the key players.

But by all means, throw more money at Apple for a problem they can't even solve themselves.

If AI were that good on Apple hardware, they wouldn't need to buy access from their competitors just to make Siri not completely useless.


Your statements are a couple of months out of date; the space is evolving rapidly. It's definitely not a cost-efficient approach today, but models like Kimi K2.5 can be run on dual 512GB Mac Studios with performance rivaling (though still not fully matching) cloud frontier models. That's $20k of hardware, so it certainly won't be common today. My point is more about the trend: hardware that can run serious models is becoming more affordable, and open-weight models are slowly but surely closing the gap with cloud models. Project this forward a couple of years, and I think you'll be surprised how many folks will be interested in running AI locally, outside of cloud environments. This trajectory will also intersect with the monetization trajectory of the cloud vendors, who will necessarily become far more aggressive over time with advertising and other methods.
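
To put the dual-Studio requirement in context, here's a back-of-envelope estimate (the parameter count and quantization level are my assumptions, not official figures):

    # Why a ~1-trillion-parameter MoE needs two 512GB machines.
    # Assumed: ~1e12 total weights, ~4.5 bits/weight after quantization,
    # ~10% extra for KV cache and runtime buffers.
    params = 1.0e12
    weights_gb = params * 4.5 / 8 / 1e9      # ~562 GB of weights
    total_gb = weights_gb * 1.10             # ~619 GB with overhead
    print(f"{weights_gb:.0f} GB weights, {total_gb:.0f} GB total")
    # 619 GB > 512 GB, so one Studio can't hold it; two give 1024 GB.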

I'm not contesting that AI will become worthwhile on local hardware at some point. With software optimizations accumulating and hardware costs falling over time, that's pretty much a given.

But I'm arguing that it's not going to be worthwhile on Apple hardware. GPU sharding is already a thing on PCs, and you don't have to resort to buying multiple complete computers for it to work.
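
For example, sharding a large model across several GPUs in one box is nearly a one-liner with Hugging Face transformers and accelerate (a sketch; the model name is just an illustrative choice):

    # Sketch: multi-GPU sharding on a single PC. device_map="auto" splits
    # the layers across all visible GPUs, so no single card needs to hold
    # the whole model. Requires: pip install transformers accelerate torch
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-70B-Instruct"  # illustrative choice
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.float16)
    inputs = tok("Hello", return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))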

Apple has painted itself into a corner with its Apple Silicon strategy. It's great for efficiency, and thus quite nice for mobile usage, but it makes no sense for desktops, which don't need to be power- or space-constrained.

Their GPUs are still weak, and their strategy of gluing two dies together gives poor results in general workloads. They are limited by die size and by the RAM bandwidth they can allocate to the whole package, because of physics.
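
Bandwidth is the binding constraint because token generation is mostly memory-bound: each generated token has to stream the active weights through the memory bus. A rough upper bound (all figures below are illustrative assumptions):

    # Roofline estimate: tokens/s <= bandwidth / bytes read per token.
    def max_tok_per_s(bandwidth_gb_s, active_params_b, bits_per_weight=4.5):
        bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # ~800 GB/s (Apple Ultra-class) vs ~1000 GB/s (high-end consumer GPU),
    # decoding a dense 70B model at ~4.5 bits/weight:
    print(max_tok_per_s(800, 70))   # ~20 tok/s upper bound
    print(max_tok_per_s(1000, 70))  # ~25 tok/s upper bound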

If Apple manages to get good results by aggregating multiple computers, PCs will get even better results by using multiple GPUs in the same box, interconnected over the PCIe bus, which will always be faster than Thunderbolt for physical reasons. In fact, PC vendors could even come up with a new interconnect if need be.
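
The raw numbers, for scale (peak spec figures, not measured throughput):

    # PCIe 5.0 x16: ~64 GB/s per direction.
    # Thunderbolt 4: 40 Gb/s = 5 GB/s; Thunderbolt 5: 80 Gb/s = 10 GB/s
    # (120 Gb/s in its asymmetric boost mode).
    pcie5_x16, tb4, tb5 = 64.0, 40 / 8, 80 / 8
    print(f"PCIe 5.0 x16 vs TB4: {pcie5_x16 / tb4:.0f}x")   # ~13x
    print(f"PCIe 5.0 x16 vs TB5: {pcie5_x16 / tb5:.0f}x")   # ~6x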

There is just no realistic way for Apple to become a dominant player in AI. They cannot compete properly on hardware, because they won't get the cash flow that key players like Nvidia and AMD are getting, and they cannot compete properly on software, because it will always be ports of things built to run on better, faster hardware. They'll lose AI for basically the same reason they lost gaming: uncompetitive performance for the price. People who actually want to get things done care less about how things look and a lot more about how well and how fast they run.

And whenever datacenters start offloading older GPUs, their prices will fall, making them the cheapest way to do local AI. Apple hardware keeps a stupidly high price even when it's completely obsolete, because of the status it confers; it will never be cost-competitive.

It's basically a replay of their PowerPC mistake, where they thought they could compete by going it alone, but in the end fell pretty hard because they couldn't match the volume PCs were getting.

Now Apple has money but cannot attract enough talent, because they have no vision and the management style is basically mean girls running the show.

You are arguing for purchasing hardware that costs as much as eight years of a top-tier AI subscription. Seriously, who in their right mind would do that? Apple hardware for AI makes no sense: either you have prosumer-level needs that will be served just fine by cheaper hardware (Ryzen AI, for example), or you have large needs, and investing in a real AI setup will be better because it will be much faster. Being able to fit large models is useless if they run too slowly.
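
The arithmetic behind that figure (the subscription price is my assumption; the $20k is from the comment above):

    hardware_usd = 20_000          # two 512GB Mac Studios, per the thread
    subscription_usd_month = 200   # assumed top-tier plan price
    months = hardware_usd / subscription_usd_month
    print(f"{months:.0f} months = {months / 12:.1f} years")
    # 100 months = 8.3 years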



