The current top results for single core performance are ~3,100 so it is, on paper, a substantial gain. The M3 in the iMac achieves around 3,053.
However, Geekbench is not that great (imo), so it doesn't necessarily indicate overall performance, or how much performance will improve from one CPU to the next.
Heya! Sorry, I would've added some links but I was on mobile and still am.
As I hinted at, this is mostly anecdotal wisdom I've heard. The hardest evidence seems to be Apple's chips gaining more (10-20%) going from GB5 to GB6 than chips from other manufacturers such as Qualcomm and Intel did. As another comment said, GB5 is well known to track SPEC (the industry standard) quite closely, so this discrepancy between GB6 and GB5 is a bit concerning.
Of course synthetics are always skewed (sometimes because a chipmaker games them, sometimes not), most famously Cinebench, which has lost most of its credibility for weighting SIMD performance far too heavily.
It might also be worth checking out the top comment for an example of how a design choice can bias synthetics but not real-world performance.
I would recommend reading the Geekbench 6 internals document; they explain the rationale behind the change.
> Geekbench 6 uses a "shared task" model for multi-threading, rather than the "separate task" model used in earlier versions of Geekbench. The "shared task" approach better models how most applications use multiple cores.

> The "separate task" approach used in Geekbench 5 parallelizes workloads by treating each thread as separate. Each thread processes a separate independent task. This approach scales well as there is very little thread-to-thread communication, and the available work scales with the number of threads. For example, a four-core system will have four copies, while a 64-core system will have 64 copies.

> The "shared task" approach parallelizes workloads by having each thread process part of a larger shared task. Given the increased inter-thread communication required to coordinate the work between threads, this approach may not scale as well as the "separate task" approach.
Nothing about this is biased towards Apple. GB6 simply scales worse with more cores due to increased inter-core communication requirements.
This is correct. AMD and Intel CPUs had many slower cores. GB5 made them look better. Apple has fewer cores but more fast ones.
Most applications can't utilize many cores, so consumer applications usually perform better with fewer, faster cores than with many slow ones. Geekbench is a consumer CPU benchmark.
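The "fewer fast cores beat many slow ones" intuition is just Amdahl's law: if only a fraction p of a program parallelizes, speedup on n cores is 1 / ((1 - p) + p / n). A quick numeric sketch (the 50%-parallel figure is an illustrative assumption, not a measured number):

```python
def speedup(p, n):
    """Amdahl's law: speedup on n cores when a fraction p parallelizes."""
    return 1 / ((1 - p) + p / n)

# A workload that is only 50% parallel barely benefits beyond a
# handful of cores, no matter how many you throw at it:
for n in (2, 4, 16, 64):
    print(n, round(speedup(0.5, n), 2))
```

With p = 0.5, going from 16 cores to 64 barely moves the needle, while the total speedup can never exceed 2x, which is why a benchmark modeling typical consumer workloads rewards per-core speed.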
I think you may be a few steps behind; what you're explaining is the rationale for the individual ST and MT scores, which is something all modern benchmark software has.
Hallo! That was really interesting to read through!
I just wanted to clarify that by bias I mean a skew towards one side that causes scores to misalign with SPEC. In this case, the penalty against high-latency inter-core communication is one of the causes of that skew.
Thanks for the thoughtful and informative response though.
I do find my Macs increasingly difficult to use, but like many things, Apple would consider quite a few of these 'lost' features upgrades, or part of the natural evolution of the platform. For example, getting rid of kexts is probably a good thing, while the Settings app is straight garbage.
On the internet, and especially on Hacker News, any change to anything is automatically bad. There was a lot of complaining about the new settings app, and in the betas it was genuinely buggy.
But then, if you don’t choose to get hung up on it, it’s fine. Layout and location of things might be a bit different than before, sure, but it’s fine.
My time with PCVR streaming on the Quest 2 improved dramatically when I upgraded my wireless network, to the point where I now stream wirelessly for everything except very high-movement games, e.g. F1 23. I thought there was nothing wrong with my previous setup and that streaming (PlayStation and Xbox streaming included) was just terrible, but it turns out it was my setup all along.
Pi Zero: Nov 2015 (1x ARM1176JZF-S @ 1 GHz, 512 MB RAM)
Pi Zero 1.3: May 2016 (now you can use cameras)
Pi Zero W: Feb 2017 (Wi-Fi and Bluetooth 4.1)
Pi Zero WH: Jan 2018 (omg, soldering the gpio pins? Much wow)
Pi Zero 2 W: Oct 2021 (4x ARM Cortex-A53 @ 1 GHz, still 512 MB, now Bluetooth 4.2)
I'm not at all convinced they care about this market. Realistically there have only been three models, and there hasn't been much of a push into this area. The Zero 2 upgrade wasn't anywhere near the leap the regular Pis are making. I know there are more limitations, but they also have more competitors, and it isn't like the Zeros are sitting on shelves. There's still a good market for <$20 computers (and especially for a $5 one).
This is the same for e-scooter rideshare schemes. Much of the evidence points to e-scooter use replacing walking short distances, so environmentally it can be a net negative overall.
Honestly the fact that they all become trash is why scooters annoy me. Like sure fine on a long enough time scale pretty much everything becomes trash but man they are not built to last.
Are they not built to last, or are they just abused and neglected? I suspect that personally-owned scooters tend to be kept somewhere safe and taken care of. VC-owned rental scooters tend to get trashed, because no one suffers a personal loss if a particular scooter becomes trash.
And meanwhile, more and more USFF PCs will roll out into second-hand markets, offering a much more compelling option than the Pi unless idle draw is your main consideration. It's an unfortunate state of affairs, and I hope a Pi 5 comes around soonish to reinvigorate this market.
There have always been good small-form-factor options out there. But the big deal with the Pi is not just price and performance; it's all the software development and custom builds that target the Raspberry Pi. USFF PCs don't usually have GPIO pins, and they don't have tons of purpose-built projects for them.
I have a small Raspberry Pi Zero W running the CUPS print server on Linux, literally just stuck to the back of an old label printer plugged in on a high shelf in my office. There isn't any other $10 device that would be reasonable for this.
I also have another Zero W running PiKVM in my server rack, connected to a 4-port KVM switch, so I can get full KVM access to my home servers, none of which have IP KVM built into the consumer mobos I built them with. PiKVM only works on the Raspberry Pi and would require porting to other hardware options.
I have another Raspberry Pi 3B running OctoPrint to control my 3D printer and provide a camera feed. OctoPrint doesn't work on other hardware either, as far as I know.
Some things are much better on a small-form-factor Intel NUC type device. I moved my Home Assistant off of a Pi to a much more performant NUC. But that was easy only because Home Assistant made their software work on other platforms.
I think the Raspberry Pi Zero W units are the real hero. But they are so difficult to get at the $10 MSRP.
There are tons of alternative single-board computers available. The Orange Pi 5 is great for MUCH higher-end performance while still under USD $100, and there are other options (Orange Pi 3? Banana Pi? Radxa ROCK Pi?) that match or beat Raspberry Pi 4 performance at the same recommended retail price, with new stock actually available at the sticker price.
Raspberry Pi OS is pretty much Debian with Broadcom drivers they haven't upstreamed yet. It runs on other SBCs, or there's Armbian, Arch Linux ARM, RebornOS, et cetera, all packaged for ARM U-Boot SBCs.
If you're just using it as a simple SBC, OrangePi may be a good option... but the hardware support with drivers and whatnot is, to my knowledge, far superior on Raspberry Pi's thanks largely to the Broadcom chipset (vs the Rockchip stuff). For many applications where it's used to do hardwarey things, it may not be a substitute.
If you're just using it to run Home Assistant, or some server application, then sure the OrangePi is probably better.
My Kirb. What is this negging "yeah if you're into basic stuff" tone? I'm a fan of the genre, I own quite a few SBCs from various chipset manufacturers, and I use them for all kinds of things, from AI-vision voice-trained mobile robots to "simple" Kodi/emu boxes, and I can guarantee they work fine so long as you pay attention to the specific sizes and pinouts: CSI is CSI, DSI is DSI, Vulkan is Vulkan, et cetera.
Some of the more complex Adafruit / Pimoroni / Seeed hats are very specifically written for Raspberry Pi GPIO, sure, but they have the problem of keeping you on an old OS after a year or two, unless you're willing to put in the same amount of effort as porting their examples to a different GPIO layout.
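Most of that porting effort boils down to remapping pin assignments. A minimal sketch of the idea, with entirely made-up pin numbers and no real GPIO library, just to show why example code that addresses pins by role ports with a single table swap:

```python
# Hypothetical logical-pin maps; the numbers are for illustration only
# and do not correspond to any real board's pinout.
RPI_MAP = {"led": 17, "button": 27, "i2c_sda": 2, "i2c_scl": 3}
ORANGEPI_MAP = {"led": 7, "button": 8, "i2c_sda": 12, "i2c_scl": 11}

def resolve(board_map, role):
    """Translate a role name ("led", "button", ...) into the active
    board's pin number."""
    return board_map[role]

# Hat example code that only ever says resolve(active_map, "led") ports
# by choosing a different map; code with hardcoded pin numbers sprinkled
# throughout needs the line-by-line rework the comment above describes.
active_map = RPI_MAP
```

Vendors' examples usually hardcode the numbers instead, which is where the porting effort comes from.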
If you're into making custom stuff that fits whatever connector/pinout/etc exists, then yeah you can make almost any of these SBCs work just fine, no debate there. My point being there's an infrastructure built around Raspberry Pis that as of yet is not nearly as robust for other SBCs.
Is it worth like, $150 for a scalped Pi4? Almost certainly not. But I'd hesitate to say they're just strictly better. They're just different, and have different limitations.
I have several RPi 4s, all purchased at MSRP. I also have a few second-hand USFF computers. They serve very different purposes, and one isn't a replacement for the other.
I find it fascinating to see what the Foundation is turning its mind to now it has a seemingly high-volume, reliable supply of its house-design ARM chip.
I wonder if the future of Raspberry Pi products will be going even lower-end, rather than dealing with the complexities of making a competitive Pi n+1
The problem - as always - with all the clones is that their ecosystems are either immature or are complete crap. Most of the Rockchip knockoffs are technically superior at a better price point, and most importantly, available for purchase, but their websites are hacked-together Chinese clones with broken downloads, non-verifiable server OSes, and zero documentation.
RPi has a foothold primarily due to the community and software support. I'd love to see that get broken up, but building it took years, which is not what most fly-by-night places and clone shops are interested in doing. I say this as someone who largely depends on RPi 3 and 4 models for embedded work and would prefer to switch to something like ODROID (and we have, to some degree), not necessarily as someone who is an RPi lover.
I hope that the foundation's supply is for real and the pricing gets down to normal market rates.
For the Raspberry Pi 4B, the publicly available documentation for the BCM2711 SoC is laughable, barely 160-odd pages long and missing key details (such as which timers are accessible from the ARM core, and which are accessible from the GPU).
P.S. if anyone knows where to find the full TRM for the Broadcom BCM2711, I'd be really grateful if you got in touch or sent me the PDF.
I've switched to using the Pine64 boards for exactly that reason. All of the RPi model <X> boards feel like they're targeting hobbyists who want to build a RetroPie setup or school project and just need working software.
The Pine boards are far more open, and the whole company's attitude is "we give you the hardware, you self-serve the software". They have a few good "Pi alternatives" and some more "exotic" stuff. I've been playing around with their Ox64 board and am enjoying it so far.
good luck. not to mention there aren't even full schematics for the RPis.. being in Broadcom's bed and not even being able to buy standalone chips. and making hobbyists think it's an open platform when it's one of the furthest away from that, ugh.
the effort is better spent on nearly any other ARM SoC
If you’re interested in this odd Australian time zone, Lord Howe Island (10km long, population 350, hundreds of kilometres east of the mainland) also has its own time zone for half the year.