I understand that part of the "magic" behind the M1 is that it mixes "performance" cores with highly efficient "low power" cores.
My question is: how much of the sublime performance of M1 Macs comes from macOS being fine-tuned to take advantage of these two different types of cores?
If you simply get the bare minimum of NetBSD booting on an M1, will it not achieve nearly the same performance unless the OS is fine-tuned to schedule properly across the "performance" cores and the "efficient low power" cores?
I remember reading a recent article [0] about how future Intel chips plan to have similar "perf" and "low power" cores, and part of the presentation included someone from Microsoft saying they spent lots of time on the Windows team making sure Windows could schedule across these properly. So I wonder how much work it really takes.
ARM big.LITTLE[1] SoCs have been a thing for about a decade now, and most operating systems have schedulers that take advantage of each set of cores. macOS isn't doing anything special that Linux et al. aren't doing.
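To make that concrete: on asymmetric ARM systems, Linux models each core's relative performance as a "capacity" value that its energy-aware scheduler uses for task placement, and it exposes that value via sysfs. A minimal sketch in C that just prints those values (assuming an 8-core machine; the attribute only exists on asymmetric ARM platforms):

```c
/* Print per-core scheduler capacity on an asymmetric ARM Linux system.
 * Little cores report a smaller value than big ones; energy-aware
 * scheduling uses these values when placing tasks. */
#include <stdio.h>

int main(void)
{
    char path[64];
    for (int cpu = 0; cpu < 8; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpu_capacity", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            continue; /* attribute absent on symmetric systems */
        int cap;
        if (fscanf(f, "%d", &cap) == 1)
            printf("cpu%d capacity=%d\n", cpu, cap);
        fclose(f);
    }
    return 0;
}
```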
> macOS isn't doing anything special that Linux et al. aren't doing.
MacOS isn't doing anything Linux and others aren't doing, or MacOS isn't doing anything those others can't do?
That is, do we actually know how well tuned MacOS is for these cores and their capabilities, or is that an assumption? I thought I had read there were some specific instructions in the chip that were either new to it or were more aggressively used by MacOS to get additional energy savings or performance gains.
I don't know of anything really magical but for years Apple has been steadily pushing apps towards APIs that give the OS a lot of latitude to manage energy [1]. Grand Central Dispatch, AVFoundation, etc. Then on iOS BackgroundTasks etc (and iPhones have had little cores for quite a while now). I would imagine a lot of that experience transfers to macOS.
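For a concrete taste of what those APIs look like, here's a minimal sketch using GCD's plain-C interface (the work function is a made-up example; on Apple silicon, background-QoS work is eligible to run on the efficiency cores):

```c
/* Queue low-priority work at background QoS via libdispatch (GCD).
 * The QoS class gives the OS latitude to schedule, throttle, or defer. */
#include <dispatch/dispatch.h>
#include <stdio.h>

static void do_maintenance(void *ctx)
{
    puts("low-priority maintenance work");
}

int main(void)
{
    dispatch_queue_t bg =
        dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
    dispatch_async_f(bg, NULL, do_maintenance);
    dispatch_main(); /* park the main thread; never returns */
}
```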
The centralized + draconian approach they take has a lot of problems, but it does help with sweeping changes like this.
Care to share what special things macOS is doing? Because according to Apple's documentation, it doesn't seem like they're doing anything special when it comes to heterogeneous multiprocessing and scheduling that Linux hasn't been doing for quite some time.
At a high level, yes, but at a much lower level it's another story.
When you manufacture your own chips and write your own OS, there are no limits on microtuning. You can design them to work together, instead of making the compromises you usually have to.
This kind of tuning might never end up in the Linux kernel for being too chip-specific.
Apple has also moved a lot of driver code to another layer, so it doesn't need to live in the kernel, for example.
The core advantage is in the design, and in the power to manufacture and update all the parts (device, firmware, drivers, OS). You can design them to work flawlessly together in the bigger picture. You can leave properties out of the kernel to be handled by OS apps. You can build hardware submodules, such as the DCP interface on M1 Macs, a main subject of discussion at Asahi Linux (https://asahilinux.org/2021/08/progress-report-august-2021/). You can add your own instructions for your own purposes, something that is hard to get into the Linux kernel.
In theory, you might be able to do the same with the Linux kernel, but in practice driver development and other work relies on reverse engineering, black-box testing, or written specs without access to source code. How time-consuming is that compared to Apple's position? Isn't mainline kernel code more likely to be accepted when it merely works, rather than when it is perfectly optimized? And you can't rely on some OS app handling something, the way Apple, with full control, can.
Android and iOS are better examples of this. I'll post a few links that might give an idea.
> You can leave out properties from kernel to be done by OS apps.
Doable on Linux as well. If this were better for performance, it most likely would already have been implemented. Besides, this isn't concrete. By concrete I mean things known to be implemented on the M1 that, for example, Asahi won't be able to replicate.
This is just hardware. Even so, this example is a non-starter, since DCP will be supported by Linux.
> You can add own instruction sets for own purposes. Something which is hard to add for Linux kernel.
It's actually not hard. It's trivial if you add compiler support (which Apple most likely would, for, you guessed it, LLVM). There are in fact some custom instructions on the M1 AFAIK, mostly used to run x86 code more efficiently.
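As an illustration of why the toolchain side is easy: even without compiler support, a raw encoding can be emitted from C with an `.inst` directive. A hypothetical AArch64 sketch (the opcode below is invented and would trap on any real CPU; a real vendor extension would ship with proper compiler support):

```c
/* Emit a hypothetical custom AArch64 instruction from C.
 * The encoding is made up for illustration; executing it on hardware
 * that lacks the extension raises SIGILL. */
static inline unsigned long custom_op(unsigned long v)
{
    /* Pin the operand to a fixed register, since the raw encoding
     * below cannot be parameterized by the assembler. */
    register unsigned long x0 __asm__("x0") = v;
    __asm__ volatile(".inst 0x00201400" /* hypothetical opcode */
                     : "+r"(x0));
    return x0;
}
```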
The top 4 points there are pure hardware. Point 5 is about specific design decisions made in Android, which doesn't mean anything here. Point 7 even says the most likely performance increase would come from custom co-processors, which again is pure hardware. I'm not sure what this link is supposed to achieve, but its arguments actually cut against Apple being better because of software/hardware magic.
This link again covers design decisions that make Android less responsive. The main culprit given for why Android uses more RAM is that vendor builds of Android carry a load more bloat. That has nothing to do with a magic hardware/software combo; it's vendor Android being trash.
If that is the case though, I wouldn't be surprised if newer Linux and BSD releases gain additional support for per-core-type performance scheduling and optimizations therein.
It's not entirely new. Remember, pretty much all recent mobile ARM processors that aren't MCUs have big.LITTLE, but there is no doubt additional work to be done in the area.
This answer seems optimistic. Unless you have a single CPU-bound execution thread with no parallelism, and no other tasks needing runtime, having more cores, even little ones, seems like a win.
Even just pedestrian clock processing for interrupts could exploit the other cores. Or keyboard and mouse processing, whatever. Playing an MP3 while you compile? That other core sure would cut down on context switching in the compiler...
My Ubuntu VM on my Mac Mini gets outstanding performance which validates the point that macOS isn't essential for the performance. I'm sure however that macOS is very helpful in ensuring the power efficiency on laptops.
Same here. For many years, a MacBook Air 11 was my daily driver. After some time, I wiped Mac OS X and ran a minimal Linux configuration: XMonad, Emacs, Firefox and XTerm.
With a few tweaks, mostly those suggested by powertop, my battery life was indistinguishable from Mac OS. Which is impressive, given that Safari is known to be heavily optimized for low energy usage. I guess I compensated for that with a simpler graphics stack that generated fewer CPU wakeups.
I’m with you up through XMonad, Emacs, and XTerm but... Firefox? Right now I’m struggling with an attempt to use a circa-2015 Dell XPS 13 as a Linux-based daily driver, and Firefox is nigh unusable with even only a few tabs. 4 GB of RAM apparently doesn’t go far enough; swap degrades performance even with an SSD, but turning it off just means stuff dies. I’d love to find out I just set things up wrong, but I’m shocked to discover I was getting better performance out of Windows.
I wonder what else you're running on that machine. I have an i5 X201 from 2010 with 2 GB of RAM (and an SSD), and I regularly push it with 50-ish Firefox tabs.
However, I'm using i3 instead of GNOME, and Void instead of Debian et al.
It's Lubuntu, so I think the desktop is LXQt; I'd assume it's not that.
About the only unusual thing I can think of is that I'm trying to use dropbox. It dies periodically, so maybe it's hungry, but even without it running, less than a dozen FF tabs can bog down the machine (and I gave up on Chromium entirely).
Would totally welcome any tips from people confident I can do better.
Honestly, I wouldn't discount the desktop or the OS. On a fresh reboot, htop shows my cpu usage across the four cores as 0, 0.7, 0, and 1.3%. That's not a lot of background activity.
I won't claim to have late-model-Ryzen performance; there is an SSD performance hit when the machine uses some of the 16 GB of swap I gave it. The website data has to go somewhere. But I haven't found it to become unusable, except when I restore and load all my tabs simultaneously. After it's all downloaded, though, pulling web pages out of swap is pretty fast.
Personally, I found the best things for performance were an SSD, i3+void, and a ton of swap space. Pretty much in that order.
Edit: I looked up the processors of the two machines. Ironically, all else being equal, that X201 is a full 20% faster than yours (2.66 GHz vs 2.2 GHz).
macOS/iOS have APIs for marking jobs as background, which will then run on the slow cores. And these APIs are used, AFAIK. I'm not sure widely used Windows or Linux software routinely marks its threads as background jobs. I know that I never did that in my software.
On both macOS and Linux, process scheduling goes further than just niceness. On macOS in particular, it has a concept of process priorities[1] and I/O policies, and the OS itself defines special priorities and policies for background processes.
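As a concrete example of what explicitly marking work as background looks like on macOS, a minimal sketch (macOS-only APIs; error handling omitted):

```c
/* Mark the calling thread as background work on macOS: the QoS class
 * steers CPU scheduling (toward efficiency cores on Apple silicon),
 * and the I/O policy throttles its disk I/O behind foreground work. */
#include <pthread/qos.h>
#include <sys/resource.h>

static void become_background_thread(void)
{
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    setiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_THREAD, IOPOL_THROTTLE);
}
```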
Apple's system developers definitely deserve a lot of credit for optimising iOS / macOS Big Sur for their ARM hardware platform. If we could run another OS on it, it would become evident that part of the performance boost of Apple's M1 ARM processor is due to the optimised software it runs.
I used an Intel MacBook Pro with an older version of macOS that had lots of background processes and features disabled just to get the performance I wanted.
My M1 Air was noticeably faster out of the box, even with Spotlight indexing and a massive build running inside a virtual machine.
The system software (OS) is highly optimised for the M1 and thus adds greatly to its performance. Note that Apple has been developing iOS / iPadOS on the ARM platform for many years now.
I won't lie, I'll be looking forward to a SoC upgrade on that, but I'm just done with throwing $$$ at companies that sit on mountains of cash yet can't be arsed to dump some info on how to even use their hardware.
Money and work need to go into HW support where it is easier to get to the full potential of the device, not burn money and brain cycles on giving the richest company in the field a lift. Apple needs to lose mindshare. They have never cared about FLOSS and they never will; just accept that and move on to more open platforms, or things will never change.
When Linux and (fingers-crossed) BSD get full M1 support in the next year or two, these machines will be fantastic Linux and BSD computers. Not just from the unique hardware advantages (quiet operation, performance, etc.), but also from the upstream open-source kernel support and community documentation that is being written.
And MacOS handles security on a partition level rather than on a system chip level, so I could have a Full Security MacOS install dual-booting with Linux/BSD someday. Exciting stuff.
I'd be willing to help a small company form around this, à la System76. I think Mac hardware is great, with a better price-to-quality ratio than any other laptop. A hardened MacBook would be fantastic for millions of knowledge workers.
If you or someone else reading is seriously interested, let's have a chat. @HN_username at gmail
M1 macs are not T2 macs, nor does that have anything to do with the issue at hand. What the parent means is that it's not like Android where you "unlock your bootloader" and it's global and locks you out of certain features. The Apple Silicon secure boot mode is per OS install, so installing Linux does not taint your macOS security (i.e. you can still use iOS apps and watch Netflix in 4K on macOS).
This is the M1, which does not quite work like the T2 does.
The partition-level system means that you could have a Full Security MacOS install (with Secure Boot equivalent, System Integrity Protection, AMFI, etc. turned on), a Permissive Security MacOS install (no Secure Boot, SIP, or other measures, or some on and some off), and a Linux install all dual-booting on the same system.
Unlike other architectures, there's no "this chip is in full security mode or no security mode," like unlocking the bootloader on Android, where the entire system is secure or insecure. On the M1, you can just have OS installs with different security.
Very interesting info, thanks! Do you know if all the partitions would be encrypted via hardware, like the T2 does? (by that I mean even a Linux partition)
Patrick Wildt posted an early video of OpenBSD booting on the M1 Mac Mini back in January. A lot of the internal hardware is supported now as of 7.0-beta, such as the Broadcom Gigabit, Wi-Fi and USB.
I continue to be skeptical of all the "boots on Apple M1" efforts. Ultimately, I don't expect them to get much further than the "proof-of-concept" phase, as far as usability goes.
Apple's chips are SoCs, and as such are extremely complicated systems. There are tons of parameters to make it work efficiently that either need to be carefully reverse-engineered, or out-of-reach entirely. Apple not only won't be any help, but will be actively hindering these efforts.
For example, is anyone dealing with the performance modes of the chip, tuning them for performance-vs-dissipation? What about the DRAM interface?
Alyssa Rosenzweig's effort on reverse-engineering the GPU is impressive and laudable, but it's just one component of the dozens or so IP blocks in the chip, each of which requires that level of work.
Being able to boot your OS on the CPU is like landing on the beaches of Britannia and calling it conquered. (I couldn't come up with a better analogy.)
I understand the appeal of the technical challenge, and that's awesome! But people should lower their expectations if they ever expect to run an open OS with the full power (or even a significant fraction) of the hardware.
Seriously. Apple has hardly any actual reason to limit non-macOS boots on M1 devices. The fact that they've made the transition, taken macOS a whole version beyond X, and kept supporting non-Apple OS boots on the devices proves (to me at least) that they're not going to remove it. If there ever was a time to lock that down, it was at the architecture shift.
Suppose there's a macOS exploit found that makes use of the unlocked bootloader. Are you certain Apple would actually patch it, and not just lock the bootloader? Sony famously did that to the PS3.
They do officially support the Permissive security policy, and it is intended to allow you to boot custom kernels. Running a fully-untrusted operating system kernel is an intended feature, even if they aren't going to give you support if something breaks in the upper layers.
They do make a profit on the hardware when they sell it to you. But that's a one-time profit. The new buzzword of capitalism is "recurring income". And that is where Apple's software ecosystem comes into the picture. Apple today makes billions of dollars from its App Stores, paid iCloud services, search bundling, etc. This is profitable income they continue to earn for the whole life-cycle of the device.
And this is why Apple wants a stranglehold on both its hardware and software. And precisely why it will sabotage any other viable OS that emerges on its platform (which it can now do much more easily with its ARM SoCs).
For Apple fans who crib about how Linux / xBSD sucks because of all the configuration / "extra steps" required to do anything, I'll wager that running a Hackintosh on an Intel / AMD processor will be a less buggy experience than running Linux / xBSD on the Mac M1 as a desktop OS. (Simply because AMD and Intel are actually happy to see macOS running on their CPUs, unlike Apple, which sees other OSes on its ARM CPUs as a threat.)
Yeah, Apple has mastered modern planned obsolescence. They intentionally build their hardware to not be usable after a few product cycles to keep new hardware sales up. If it were easy to run alternative OSes to give renewed life to what they may deem obsolete hardware, that would interfere with their bottom line. https://en.wikipedia.org/wiki/Planned_obsolescence
You truly expect every IP block in the SoC to require GPU-level complexity? That's a far-out assumption IMO. The speed of progress on getting Linux to run is extremely high, and once the GPU runs they will focus on power management. I think you are way overestimating the difficulty of this task.
You missed the point. Apple may decide to make it more difficult to reverse-engineer their hardware. For now, we (the Linux/BSD community) are allowed to use their hardware, but this may not always be the case, e.g. when M1's successor hits the market.
I don't think Apple cares about the relatively tiny number of people who want to boot an alternate OS vs the markets they sell into. It is just not financially important. It has always been possible to boot alternate OSes on the Mac and Apple did specific work to make sure that was still true on the M1 Mac. No one can predict the future of course and it is possible for this to change, but I think the status quo of mostly indifference will continue.
However, even if Apple does make it harder later, I think the people who are working on boot systems are doing it mostly for the joy of the challenge. If we end up with a nice alternate OS all the better. I think it is incredibly cool work.
So we are now speculating about future hardware? Apple Macs were never locked down. Why would they suddenly start being locked down now?
If Apple wanted to lock them down, there would not have been a better moment than with the M1. They already have all the tech ready to go. But they obviously decided not to do that. Assuming a product line that has not been locked down for decades will suddenly start being locked down AFTER the best moment to do it is the wrong assumption.
You mean like they supported Boot Camp "officially" and then just took it away suddenly with the M1? I don't see how that is different than having unofficial support and Apple then locking down a new Mac. In either case you lost support with a new hardware platform.
> In either case you lost support with a new hardware platform.
If we're talking about "new hardware platforms" --- yes, any vendor or vendor ecosystem may drop support for anything at any time in future products. If ARM Ltd's next core is a mechanical abacus, it will not run Linux.
Boot Camp was always Intel-specific. There was no Boot Camp on Power Macs, and they couldn't run Windows either. MS has not made available a version of Windows on ARM for the M1 platform, either.
No it couldn't. The ARM ecosystem is a lot more fragmented than x86, and M1 does not follow the SystemReady specification (which mandates certain hardware beyond the CPUs), which means OSes need core kernel patches to support it (which is what we're doing with Linux). To run Windows on the M1 requires Microsoft's cooperation. You can't do it just by writing drivers.
This isn't new; Microsoft did it with the Raspberry Pi 3, which is also a nonstandard SoC. But it can't be done by Apple alone.
How is that a problem? Microsoft would jump at the chance to run Windows on the M1. Make no mistake here: it's only Apple not wanting to invest in features that have existed for years.
> If Apple wanted to lock them down, there would not have been a better moment than with the M1. ... But they obviously decided not to do that.
That's a laugh - what other OS can you run on the M1 today apart from macOS? It is as good as locked down already!
Apple understands very well that this is a real gap in the M1, and that is why it is very cleverly using those reverse-engineering efforts as part of its online marketing, to mislead some into believing that Linux or xBSD will be available on the M1 in the very near future. The sad reality is that even if the reverse engineering succeeds, all you are going to get are buggy versions of other OSes with features missing, which will make you regret purchasing the M1 if you hoped to run other OSes on it.
You can run Linux TODAY; you can run NetBSD TODAY. It's not fully featured yet, but you can run it. "Locked down" has a clear meaning, and Macs aren't locked down. iPads are locked down, iPhones are locked down: it's impossible to boot a different OS on those.
Linux has run fine on Macs for years. I don't see any reason why it won't run fine on M1 Macs.
> You can run Linux TODAY; you can run NetBSD TODAY. It's not fully featured yet, but you can run it.
Apple understands very well that this is a real gap in the M1, and that is why it is very cleverly using those reverse-engineering efforts as part of its online marketing, to mislead some into believing that Linux or xBSD will be available on the M1 in the very near future.
I can run a fully featured GUI-based Linux / xBSD / Windows XP on my 15+ year old single-core Pentium machine even today ... which highlights how crippled the M1 is at supporting alternative OSes. And that is simply because the M1 is as good as a locked-down machine.
Can you point me towards marketing from Apple that highlights the ability to run Linux or BSD?
The reverse engineering effort has only been in full swing for 6-7 months, that's nothing. And yet it's already quite usable with the main thing not working being hardware acceleration.
Apple does "Shill Marketing" on online / social media platforms.
"Someone who works for a business but pretends not to in order to seem like a reliable source is a shill. Shill marketing is the act of using a shill to try and convince the public that a product is worth buying ... The concept of shill marketing is simple. People tend to feel more comfortable with a product or service if they know someone else who has a good experience with it. If someone who isn't associated with the company tells you how good it is, the claim will probably be more convincing than if it came from the company spokesman ... Oftentimes, multiple shills work together, reinforcing each other's story and engaging in a conversation about how great the product is. ( https://www.infobloom.com/what-is-shill-marketing.htm ).
Only a locked-down system needs to be "reverse engineered". All Apple has to do to prove that the M1 isn't locked down is either release hardware information that would help other system developers easily port Linux / xBSD or other OSes to it, or release drivers for other OSes (like it used to do for Intel-based Macs with "Boot Camp" to run Windows).
What "online marketing?" I have only seen them mention the very real OS level support for virtual machines. That is available today and works very well. I have not seen any marketing referring to bootable alternate OSes. I would be interested in a link to such material.
Apple does "Shill Marketing" on online / social media platforms.
"Someone who works for a business but pretends not to in order to seem like a reliable source is a shill. Shill marketing is the act of using a shill to try and convince the public that a product is worth buying ... The concept of shill marketing is simple. People tend to feel more comfortable with a product or service if they know someone else who has a good experience with it. If someone who isn't associated with the company tells you how good it is, the claim will probably be more convincing than if it came from the company spokesman ... Oftentimes, multiple shills work together, reinforcing each other's story and engaging in a conversation about how great the product is. ( https://www.infobloom.com/what-is-shill-marketing.htm ).
You are absolutely right that the M1 supports running other OSes through virtualisation. And Apple has promoted that fact and even suggested it as the new "Apple way" of running alternate OSes on the M1. Obviously many are unhappy about this, as older Intel-based Macs even had official support for running Windows and could run many other OSes.
Thus, Apple's online campaigns for the M1 seek to counter the really negative fact that it is a locked-down system where you can't meaningfully replace macOS with another OS (run another full-featured OS on the bare machine, not under virtualisation). They also seek to convince ignorant users that the M1 fully supports Linux and other OSes (or that it will in the very near future), which is plain bullshit, because it is not in Apple's interest for that to happen. I am willing to bet that once M1 sales reach a particular threshold, Apple will confidently lock its bootloader - that's the only major differentiator left between the iPhone / iPad and the M1.
Yeah, but one day Apple might pull the plug on non-Apple OSes. By then, all of you who supported Apple will have to find another solution, which may not be as appealing, since the competition will be far behind because of all your support of Apple hardware. Therefore it's better to switch sooner rather than later, even if it means a small step back in performance.
You don't actually have to find another solution. The state it's in will continue working. The only thing Apple can do is release a new piece of hardware that is locked down. But that could always happen regardless of vendor. Besides, I don't think that's a serious risk with Macs.
Maybe so. But then again, maybe not, if Apple decides to blow a bunch of fuses remotely. Also, your laptop could break. Or you might want to expand to more laptops, but the one you are using now goes out of production.
This is nonsense. There is nothing out of reach. This is obviously provable by the fact that macOS itself runs just fine in permissive security mode, which to the machine is indistinguishable from running Linux. We even have it running on a bespoke hypervisor, booting to desktop, which is how we reverse engineer its interactions with the hardware.
> is anyone dealing with the performance modes of the chip, tuning them for performance-vs-dissipation?
Yes, I'll be working on that as soon as the basic GPU kernel driver prototype is done. I already wrote the clock gating driver last week, which was one of the things on my immediate TODO list.
> What about the DRAM interface?
On the M1, unlike typical SoCs from other vendors which are a giant mess in this area, most of these gritty power management details are handled by onboard coprocessors running Apple firmware. This makes porting an OS a lot easier, as you only need to deal with a much higher level interface than you would on other SoCs.
Even CPU sleep modes have an automatic mode. You set it to auto, issue wfi, and some heuristic inside the CPU itself decides if it should do a deep sleep or a shallow sleep. Not sure if we'll use that since Linux is perfectly capable of handling CPU c-state decisions itself (and all we really have to do there is measure the latencies so Linux knows what to expect), but just to give you an idea.
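To sketch how such idle states end up described to Linux's cpuidle framework (illustrative only: the latency numbers are placeholders to be replaced by measurements, and a real driver needs registration and devicetree glue):

```c
/* Illustrative cpuidle driver skeleton; latencies are placeholders. */
#include <linux/cpuidle.h>

static int m1_enter_idle(struct cpuidle_device *dev,
                         struct cpuidle_driver *drv, int index)
{
	/* With the hardware's "auto" sleep mode set up beforehand,
	 * a plain WFI lets the core pick shallow vs. deep sleep itself. */
	asm volatile("wfi");
	return index;
}

static struct cpuidle_driver m1_idle_driver = {
	.name = "m1_idle",
	.states = {
		{
			.name = "WFI",
			.desc = "ARM wait-for-interrupt",
			.exit_latency = 1,        /* usec, placeholder */
			.target_residency = 1,    /* usec, placeholder */
			.enter = m1_enter_idle,
		},
		{
			.name = "deep",
			.desc = "deep sleep (hardware-managed)",
			.exit_latency = 600,      /* usec, to be measured */
			.target_residency = 2000, /* usec, to be measured */
			.enter = m1_enter_idle,
		},
	},
	.state_count = 2,
};
```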
The CPU frequency scaling is abstracted out behind a few simple registers. Poke this register to set the CPU core cluster voltage, poke this other register to set the p-state for a core. No need for thousands of lines of code to deal with calling out to an external I²C PMU to set the vcore voltage or doing a complex PLL frequency switching dance. It's stupid easy. We haven't written a proper driver for that yet, but Corellium wrote a PoC version and it's 390 lines of code.
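To sketch the shape of that interface (the register offsets below are invented for illustration; the real layout comes from the reverse-engineered cluster power manager):

```c
/* Illustrative sketch of p-state control via plain MMIO writes.
 * Offsets are invented; the point is that each knob is one register. */
#include <stdint.h>

#define CLUSTER_VCORE_REG   0x0020             /* hypothetical: cluster voltage select */
#define CORE_PSTATE_REG(n)  (0x0100 + (n) * 8) /* hypothetical: per-core p-state */

static volatile uint64_t *pmgr; /* mapped power-manager MMIO block */

static void set_cluster_voltage(unsigned vsel)
{
    pmgr[CLUSTER_VCORE_REG / 8] = vsel;
}

static void set_core_pstate(unsigned core, unsigned pstate)
{
    /* One register write per knob: no external I2C PMU calls,
     * no PLL frequency-switching dance. */
    pmgr[CORE_PSTATE_REG(core) / 8] = pstate;
}
```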
> each of which requires that level of work.
Nope. We already have DART, NVMe, USB, PCIe, I²C, UART, SPI and parts of power management working. Things like I²C and SPI are trivial little blocks. The GPU is by far the biggest thing. I just got DCP display modesetting working last night on our prototype driver; Alyssa is implementing it in Linux soon.
> Apple not only won't be any help, but will be actively hindering these efforts.
I guess they spent a huge amount of development time writing the entire BootPolicy framework that enables other OSes just for fun then, and are planning on throwing it all away? This doesn't make any sense. I keep hearing this FUD and we have seen zero evidence that Apple plan to hinder anyone. The Mac is an open platform and always has been. They are neither helping nor hindering anyone trying to port an OS to this thing. They are tacitly saying "have fun".
I'm not sure why nobody trusts us when we say we're going to make this all work properly. We aren't nobodies; the major team contributors have been reverse engineering hardware for a decade or so. We've been staring at this thing for the past 9 months. We know what to expect. We haven't been putting together a PoC all this time; we've been engineering and reverse engineering things to put together proper, full, upstreamable support. That's why it takes time. But once it comes together you'll see it's far from a mere PoC.
Hey, bravo for replying and all, but your time is valuable. Please don't feel obligated to play whackamole here, unless you enjoy it :)
The skeptic posts will keep appearing (heck I myself wrote one months ago) until people can boot something low-performance with keyboard, unaccelerated framebuffer, network, and internal storage, and trust that it won't corrupt filesystems. We all know you're working on that. It will shut up the skeptics, and it's the only thing that will do that.
That said, at some point a fully 100% comprehensive inventory of all these "onboard coprocessors running Apple firmware" (that we can't reprogram or recompile the firmware for) is going to be needed. And a clear, coherent argument for each of them individually, explaining why it isn't a bugdoor risk. I'm sure most of those answers will be elaborations on "because IOMMU". But for the DRAM controller the story will be much more nuanced.
I'll buy an M1 someday, and it will beat the pants off my RK3399, but I doubt I'll ever trust it to the same degree.
Thanks for the detailed answer! I didn't realize Apple left firmware running on a bunch of the peripheral cores; that certainly makes things much more achievable, if only for reverse-engineering what they're doing.
I'm surprised and impressed you got so many peripherals running, including DART! For USB, do you mean USB4?
I hadn't heard of BootPolicy for other OSes, and I can't find anything about that online. Sounds like that gives you a much more solid foundation to work on than I thought.
For USB I mean USB2 on the Type C ports (device/host) and USB3 on the Mac Mini's A ports (which are just a standard discrete PCIe controller). USB3 and Thunderbolt/USB4 require more work to integrate the Type C port management, muxing, and bringing up the associated PCIe controllers; we're working on those things (and Corellium also have a PoC for that, it's not that hard either, but as with everything it takes time to do it right).

We're also working on changes to the Linux IOMMU subsystem to allow DART with 16K pages to work with kernels built with a 4K page size, which will solve a lot of use cases (Android apps, x86 apps under FEX emu, and not having to pester distros to ship 16K page size kernels to get TB3/USB4 to work safely). Sven sent out a draft patch series for that to Linux a few days ago, and the first take on DART itself is already upstream.

Also, just hours ago Alyssa lit up display modesetting with DCP under Linux with KMS, for the first time outside of macOS and our prototype m1n1 driver. So 4K displays now work with proper page flipping, and we're moving past the bootloader-provided framebuffer.
After taking a crack at the GPU kernel driver, I should probably direct my attention to audio (which nobody has looked at yet); if, whenever we demo a fully accelerated 3D desktop, I can have that working too, it'll avoid a lot of people coming out and saying "they don't even have working audio, nobody's ever going to use it" ;)
For reverse engineering, as I mentioned, one big tool I spent a month writing is a custom thin hypervisor that can run macOS (the native version, not the VM kernel version) with direct access to the hardware. It boots to desktop with all drivers intact, and this allows us to write hypervisor hooks to inspect and log all MMIO traffic.

We already have a working tracer for the DCP (display controller coprocessor) that logs all inter-processor calls, which is how I figured out that bit, as well as generic parts of the interface they use for all coprocessors. Next up is the GPU, so I'll just point the base copro tracer at the GFX core, also trace all MMIO registers in those ranges, and see what macOS does. It really does make life much easier.

It's also great for debugging Linux; it implements a virtual UART over USB gadget mode, so you don't need special cables to get a physical UART, and you get low-level debugging features for when your kernel crashes. Our kernel test cycle is on the order of 7-10 seconds, including rebooting the machine and loading a new kernel from a host machine on top of the hypervisor. I made a long video about all the HV stuff, if you're interested:
That's the "official" line why this all exists, being able to run self-built XNU kernels, but it is evident that internally Apple engineers are happy to see efforts to port Linux, and Apple themselves have no reason to oppose that use case. It's all very clearly a "we won't help you, but have fun" kind of understanding. And we intend to have fun :)
Realistically, once this runs well enough and people are actually using it, that does put pressure on Apple not to do anything gratuitous that would seriously make our lives harder, whether they ever officially admit to that or not.
On x86 platforms, that is usually initialised by the BIOS (which also does things like memory testing.) On Apple's ARM platforms, I'm not really familiar with how boot works but I believe they have an equivalent firmware to handle that.
"... if they expect to run an open OS with the full power (or even a significant fraction) of the hardware."
When using an open OS I never expect that. [1] To me it's a tradeoff worth making.
[1] In some cases I don't want/need all the hardware features.
Is there a name for the common practice where a vendor produces a low-priced and a high-priced version of a product that actually contain the exact same chip, only the lower-priced version has some features disabled? Vendors seem not to fear that buyers will reject the lower-priced version because it does not access all the chip's capabilities.
> Is there a name for the common practice where a vendor produces a low-priced and a high-priced version of a product that actually contain the exact same chip, only the lower-priced version has some features disabled?
There's binning, and market segmentation, I think.
Binning is the part where you produce all your units "identically", but then test them and put them in different output "bins" depending on their performance (or size or functional features or whatever).
Market segmentation is where you break up your product into different artificially-differentiated quality/feature-tiers, in order to extract maximum value from the market (cheap tier for those budget items, high tiers for those who have more money to spend).
The thing with binning for silicon is that eventually your manufacturing process is tuned well enough that most of your parts are full-featured, so then you have to explicitly disable features to preserve your market segmentation. I don't know the term for that exact action.
NetBSD was interesting around 2000, when there were a bunch of living computer architectures. Now that there is basically just ARM, AMD64 and maybe IBM POWER, it makes less sense. As an avid Unix aficionado I have yet to install NetBSD; OpenBSD does whatever I need it to do. Maybe NetBSD's ability to support commercial Linux software keeps it relevant, but I suspect that is rapidly going away with systemd dependence. Does anybody know if you can run a systemd layer on NetBSD?
Otherwise, the ability to support Alpha and VAX and M68k, while interesting once, is for all intents and purposes irrelevant today, at least for me.
>NetBSD/next68k System Requirements and Supported Devices
NetBSD/next68k 9.2 will run on the 25 MHz 68040-based NeXT workstations. The Turbo (33 MHz) models are not supported. The 68030 model is not supported. NetBSD/next68k 9.2 does not have any local disk support, so you must netboot and run diskless.
>The minimum configuration requires 4 MB of RAM and a network server capable of netbooting NetBSD/next68k. Serial consoles are poorly supported by the hardware, see the FAQ for help...
In 2021, people spent developer hours testing NetBSD 9.2 on the NeXT cube... the kind of machine the WWW started on, before the USSR fell. Anyway, I guess everybody needs their hobbies...
> In 2021, people spent developer hours testing NetBSD 9.2 on the NeXT cube... the kind of machine the WWW started on, before the USSR fell. Anyway, I guess everybody needs their hobbies...
Do you see no utility in this? I think we should keep old machines alive -- both their original operating system / environment and something like NetBSD, which is quasi-supported and quasi-modern, have their roles.
For society as a whole I see negative utility in time wasted. Not all history is important to keep alive. You don't see people firing up coal-fired steam engines in their back yard.
For the individual there may be some utility in such a hobby though.
In life, since we have a limited amount of time, we have to pick our priorities. Maybe maintaining a modern operating system on 30-year-old esoteric hardware is rewarding to you. It wouldn't be for me. Neither of us has to justify our positions; however, if I were looking at a resume and one kid put "Helped maintain netbsd/NeXT 2018-2021" and another put "Set up and maintained PostgreSQL based system for small business CRM," I would probably take the latter more seriously.
> For society as a whole I see negative utility in time wasted.
Personally I find that a bizarre take. I imagine plenty of people could feel the same way about a comment you recently made:
> Your phone could have one app...a web browser and maybe a few utilities like a calculator, calendar, clock. Even these could be implemented as web apps running in the browser.
Also:
> If I were looking at a resume and one kid put "Helped maintain netbsd/NeXT 2018-2021" and another put "Set up and maintained PostgreSQL based system for small business CRM," I would probably take the latter more seriously.
The former could almost certainly do the latter in a heartbeat, but the reverse is almost certainly not true in most cases.
> Not all history is important to keep alive. You don't see people firing up coal-fired steam engines in their back yard.
Actually, people do! I'm pretty grateful that people keep old steam engines going. The context of where we've been and how we've gotten here is important.
Also if there's some sort of global semi-apocalypse, we will likely need to fall back to some older technologies in order to keep some semblance of civilization going, where the alternative might be dropping back to a completely pre-industrial state.
Most people have "non-constructive" hobbies, and that's fine. Everybody is out there taking pleasure in life. What do you do during your leisure time? I hike, among other things. That probably does not help society much either.
If you need to fill an OS-related position you should definitely take the former, all other things being equal, because they will probably be more qualified. If you need a DBA, sure, take the latter.
My understanding is that systems programming is not in demand these days. Microsoft and Apple products are mature. There is not a proliferation of different hardware architectures to port to anymore. Almost nobody gets paid for systems programming in Unix or Linux unless you want to become an AIX programmer or maybe work in academia. Again, OS programming was the big thing in the late 1990s / early 2000s.
Now it is important that your OS just works and gets out of the way and lets you get real work done. Anyway I am done with this thread because I don't want to be accused of flaming.
While I agree that systems programming has nowhere near the demand of business application development these days, "almost nobody" seems like hyperbole. There are a number of jobs out there for Unix/Linux systems programmers: driver development at hardware companies, all sorts of security software, etc.
Just a comment to say that I had no intent to piss you off, and I don't think you are flaming, just laying out your perspective. Maybe I should not have included the question about your hobbies. It was more for the readers, including you, to answer in their heads.
There are solid engineering reasons to keep support for these systems alive. They're comparatively simple and very well understood, so well-maintained support for these "toy" systems can be used as a trusted reference even when working on more complex platforms.
Just wait until he finds out that most of the software that powers the Internet was originally written as somebody's hobby, from the OS kernel all the way up to web frameworks.
What do you mean by pure pain? NetBSD and OpenBSD are both great operating systems. Neither of them is bad; they just work differently and have different goals. OpenBSD is more security-minded, and NetBSD works on more computers/architectures.
[0] https://www.pcworld.com/article/3629502/intels-alder-lake-wh...