Makes sense; according to Geekbench, 9955XX has about a 25% lead in multi-core over the base M4, and about a 5% lead in multi-core over the base M5. And more cores, so better for parallel Rust compilation.
Steam thinks I was born Jan 1, 1970. Not that I needed to lie when I did age verification 15 years ago; I just randomly scrolled the year down and selected one.
As the years have marched on, though, that "birthdate" becomes significantly closer to my real birthday.
Only when chatting in a large channel at work did I realise nearly 1/3 of the people there had also set theirs to 1/1/1970, which I presume is the first date phishers will try when attempting to reset people's accounts.
I am fully aware that my standard fake birthday is now used in so many places that I have started to use a fake fake birthday. I should really just randomise it and store it in my password manager.
But obviously the context of this OP story ruins all that.
When you're 10, a year is a long time, when you're 60 it is not. There's an implicit "relatively" here, which is unusual but not unknown in English. Almost poetic, I like it.
Thanks, now I understand. I am "only" 26, but I remember being 20 like it was yesterday. I can't believe I'm on the second half of the way to 50. COVID lockdowns and responsibilities didn't help.
I feel time has gone faster since I got a job, if that makes sense. Every day yearning for it to be 5 o'clock so I can check out, every week yearning for the weekend, every month yearning for the last day to get paid. Doing this is just asking for time to be over sooner.
When a 10-year-old registers for an adult website, they pretend they're 100 years old, so the birth date they enter is 90 years off. Eighty years later, that birth date is still exactly as wrong, but the age they claimed at signup (100) is now only 10 years from their real age (90).
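A quick check of that arithmetic (a toy sketch, not tied to any real site's age logic): the stored birth date stays 90 years wrong forever, while the one-time claimed age converges on the person's real age.

```rust
fn main() {
    let claimed_at_signup: i64 = 100; // age entered at registration
    let signup_age: i64 = 10;         // real age at registration
    let birthdate_error = claimed_at_signup - signup_age; // 90 years, never changes

    for years_later in [0, 40, 80] {
        let real_age = signup_age + years_later;
        // How far the number "100" on the form is from reality today.
        let claimed_age_error = (claimed_at_signup - real_age).abs();
        println!(
            "{} years later: birth date off by {} years, claimed age off by {} years",
            years_later, birthdate_error, claimed_age_error
        );
    }
}
```

At 80 years out this prints a birth-date error of 90 but a claimed-age error of only 10.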
Mikado is really only powerful when dealing with badly coupled code. Outside of that context you’re kinda cosplaying (like people peppering Patterns in code without an actual plan).
Refactoring is generally useful for annealing code enough that you can reshape it into separate concerns. But when the work hardening has been going on far too long, it usually seems like there's no way to get from A->D without just picking a day when you feel invincible, getting high on caffeine, putting on your uptempo playlist and telling people not to even look at you until you file your +1012 -872 commit.
I used to be able to do those before lunch. I also found myself to be the new maintainer of that code afterward. That doesn’t work when you’re the lead and people need to use you to brainstorm getting unblocked or figuring out weird bugs (especially when calling your code). All the plates fall at that point.
It was less than six months after I figured out the workaround that I learned the term Mikado, possibly when trying to google if anyone else had figured out what I had figured out. I still like my elevator pitch better than theirs:
Work on your “top down” refactor until you realize you’ve found yet another whole call tree you need to fix, and feel overwhelmed/want to smash your keyboard. This is the Last Straw. Go away from your keyboard until you calm down. Then come back, stash all your existing changes, and just fix the Last Straw.
For me I find that I’m always that meme of the guy giving up just before he finds diamonds in the mine. The Last Straw is always 1-4 changes from the bottom of the pile of suck, and then when you start to try to propagate that change back up the call stack, you find 75% of that other code you wrote is not needed, and you just need to add an argument or a little conditional block here and there. So you can use your IDE’s local history to cherry pick a couple of the bits you already wrote on the way down that are relevant, and dump the rest.
But you have to put that code aside to fight the Sunk Cost Fallacy that's going to make you want to submit that +1012 instead of the +274 that is all you really needed. And which, by the way, is easier to add more features to in the next sprint.
Do they really do it just because it's cheaper? I thought they did it for each generation to offer the best of that generation; it makes sense for more powerful chips to have more cores and higher capacity, but it doesn't make sense for each core to arbitrarily be less efficient or less performant just because you didn't buy more of them. Especially because this approach makes the base models an extraordinarily high value compared to base models from competitors.
I recently started writing for macOS in Swift and, holy hell, the debuggability of the windowing toolkits is actually unparalleled. I've never seen something that is this introspectable at runtime, easy to decompile and analyze, intercept and modify, etc. Everything is so modular, with subclassing and delegation patterns everywhere. It seems all because of the Objective-C runtime, as without it you'd end up needing something similar anyway.
You can reach into built-in components and precisely modify just what you want while keeping everything else platform-native and without having to reimplement everything. I've never seen anything like this before, anywhere. Maybe OLE on Windows wanted to be this (I've seen similar capabilities in REALLY OLD software written around OLE!) but the entirety of Windows' interface and shell and user experience was never unified on OLE so its use was always limited to something akin to a plugin layer. (In WordPad, for example)
The only thing that even seems reminiscent is maybe Android Studio, and maybe some "cross-platform" toolkits that are comparatively incredibly immature in other areas. But Android Studio is so largely intolerable that I was never able to dig very far into its debugging capabilities.
I feel like I must be in some sort of honeymoon phase but I 100% completely understand now why many Mac-native apps are Mac-native. I tried to write a WinUI3 app a year or two ago and it was a terrible experience. I tried to get into Android app development some years ago and it was a terrible experience. Writing GUIs for the Linux desktop is also a terrible experience. But macOS? I feel like I want to sleep with it, and I weep for what they've done with liquid glass. I want the perfection that led to Cocoa and all its abstractions. Reading all the really, super old documentation that explains entire subsystems in amazingly technical depth makes me want to SCREAM at how undocumented, unpolished and buggy some of the newer features have gotten.
I've never seen documentation anything like that before, except for Linux, on Raymond Chen's blog, and some reverse-engineering writeups. I do love Linux but its userspace ecosystem just is not for me.
Maybe this is also why Smalltalk fiends are such fans. I should really get into that sometime. Maybe Lisp too.
Writing Objective-C code for macOS GUI apps was one of those things that finally made "interfaces"/"protocols" really click for me as a young developer. Just implement some (not even all) of the methods in "FooWidgetDelegate", and wire your delegate implementation into the existing widget. `willFrobulateTheBar` in your delegate is called just before a thing happens in the UI, and you can usually interfere with or modify the behavior before the UI does it. Then `didFrobulateTheBar` is called after, with the old and new values or whatever other context makes sense, and you can hook in here to do other updates in response to the UI change. If you don't implement a protocol method, the default behavior happens; preserving the default behavior is baked into the process, so you don't have to re-implement the whole widget's behavior just to modify part of it.
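That optional-hook shape can be sketched in Rust as a trait with default methods — a loose analogue only, since Rust has no Objective-C-style respondsToSelector: check, and `FrobulateDelegate`, `Bar`, and the method names are this comment's invented example, not a real AppKit API:

```rust
// The "protocol": every hook has a default, so a delegate
// overrides only the hooks it actually cares about.
trait FrobulateDelegate {
    // Called before the change; returning false vetoes it (default: allow).
    fn will_frobulate_the_bar(&mut self, _old: i32, _new: i32) -> bool { true }
    // Called after the change; default does nothing.
    fn did_frobulate_the_bar(&mut self, _old: i32, _new: i32) {}
}

struct Bar {
    value: i32,
}

impl Bar {
    // The "widget" owns the behavior and consults the delegate at
    // fixed points, mirroring willFrobulate.../didFrobulate... .
    fn frobulate(&mut self, new: i32, delegate: &mut dyn FrobulateDelegate) {
        let old = self.value;
        if !delegate.will_frobulate_the_bar(old, new) {
            return; // delegate vetoed; default behavior skipped
        }
        self.value = new; // the default behavior itself stays in the widget
        delegate.did_frobulate_the_bar(old, new);
    }
}

// A delegate implementing only one hook: record changes after the fact.
struct Logger {
    log: Vec<(i32, i32)>,
}

impl FrobulateDelegate for Logger {
    fn did_frobulate_the_bar(&mut self, old: i32, new: i32) {
        self.log.push((old, new));
    }
}

fn main() {
    let mut bar = Bar { value: 1 };
    let mut logger = Logger { log: Vec::new() };
    bar.frobulate(5, &mut logger);
    println!("value = {}, log = {:?}", bar.value, logger.log);
}
```

The point of the pattern survives translation: the widget keeps its default behavior, and the delegate pays attention only to the one hook it implements.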
It's probably one of the better UI frameworks I've used (though admittedly a lot of that is also in part due to "InterfaceBuilder" magic and auto-wiring). Still, I often wish for that sort of elegant "billions of hooks, but you only have to care about the ones you want to touch" experience when I've had to use other UI libraries.
> Reading all the really, super old documentation that explains entire subsystems in amazingly technical depth
Any links?
> Maybe this is also why Smalltalk fiends are such fans.
I started getting interested in Smalltalk after I tried writing a MacOS program by calling the Objective-C runtime from Rust and had a surprisingly good time. A Smalltalk-style OO language feels like a better base layer for apps than C.
Generally everything in that documentation archive is absolutely amazing. I don't know why it's an archive; presumably they laid off or reassigned the entire team working on it and there will be no more. The closest thing today would probably be Technotes: https://developer.apple.com/documentation/technotes
> I feel like I must be in some sort of honeymoon phase but I 100% completely understand now why many Mac-native apps are Mac-native.
it seems like everybody prefers ios, but i really still think after all these years i prefer appkit; it really is so well documented and the quality of the api is the best i've seen by a long mile
Welcome to Smalltalk, Lisp, Java and .NET, which, alongside NeXTSTEP/OS X, share a common lineage of tooling ideas and programming language features.
Hence, given the option, I'd rather stay in such environments.
Now, Android Studio is the product of Google's mess, and I am glad to have moved away from Android development; it has nothing to do with enjoying pure Java development on the desktop (Swing, SWT, JavaFX) and server.
> Writing GUIs for the Linux desktop is also a terrible experience.
I've found the DX for GTK to be at least tolerable. Not fantastic, but I can at least look at a particular API, guess how the C-based GObject code gets translated by my language bindings of choice, and be correct more often than not. The documentation ranges from serviceable to incomplete, but I can at least find enough discussion online about it to get it to do what I want.
Also, GTK ships with a built-in inspector tool now. Ctrl-Shift-I opens it in basically any GTK app (you may first need to enable the keybinding, e.g. `gsettings set org.gtk.Settings.Debug enable-inspector-keybinding true`, or launch the app with `GTK_DEBUG=interactive`). That alone is extremely useful, and you get it essentially for free.
I've never tried Qt. The applications that use it always seem off to me.
As for OLE, you're actually thinking of COM, not OLE. They were co-developed together: COM is a cross-language object system (like GObject), while OLE is a set of COM interfaces for embedding documents in other arbitrary documents. Like, if you want to put a spreadsheet into a Word document, OLE is the way you have to do that. Microsoft even built much of IE[0] on top of OLE to serve as its extension mechanism.
OLE is dead because its use case died. Compound documents as a concept don't really work in the modern era where everything is same-origin or container sandboxed. But COM is still alive and well. It's the glue that holds Windows together - even the Windows desktop shell. All the extension interfaces are just COM. The only difference is that now they started packaging COM objects and interfaces inside of .NET assemblies and calling it "WinRT". But it's the same underlying classes. If you use, say, the Rust windows crate, you're installing a bunch of language bindings built from WinRT metadata that, among other things, call into COM classes that have been there for decades.
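The binary contract that makes COM language-neutral is simple enough to sketch: an interface pointer is a pointer to a table of function pointers. Here's a toy model in Rust — not real COM (`ICounterVtbl`/`Counter` are invented for illustration, and real COM adds IUnknown's QueryInterface/AddRef/Release, GUIDs, and a stdcall ABI) — just enough to show why any language that can follow a vtable can call or implement a COM object:

```rust
// The "interface": a fixed-layout table of function pointers.
#[repr(C)]
struct ICounterVtbl {
    increment: fn(&mut Counter) -> u32,
    value: fn(&Counter) -> u32,
}

// The "object": its first field points at the vtable, followed by state.
#[repr(C)]
struct Counter {
    vtbl: &'static ICounterVtbl,
    count: u32,
}

fn counter_increment(c: &mut Counter) -> u32 {
    c.count += 1;
    c.count
}

fn counter_value(c: &Counter) -> u32 {
    c.count
}

static COUNTER_VTBL: ICounterVtbl = ICounterVtbl {
    increment: counter_increment,
    value: counter_value,
};

fn main() {
    let mut c = Counter { vtbl: &COUNTER_VTBL, count: 0 };
    // A caller never touches the implementation directly; it only
    // indirects through the vtable, which is the whole ABI.
    let inc = c.vtbl.increment;
    inc(&mut c);
    inc(&mut c);
    let get = c.vtbl.value;
    println!("count = {}", get(&c)); // prints "count = 2"
}
```

That same layout is what the windows crate's generated bindings ultimately dispatch through, which is why decades-old COM classes remain callable from new languages.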
Mac apps are Mac native because Apple gives enough of a shit about being visually consistent that anyone using a cross-platform widget toolkit is going to look out of place. Windows abandoned the concept of a unified visual identity when Windows 8 decided to introduce an entirely new visual design built around an entirely new[1] widget toolkit, with no consideration of how you'd apply any of that to apps using USER.dll/Common Controls. As it stands today, Windows does not have a good answer to "what widget toolkit do I use to write my app", and even Microsoft's own software teams either write their own toolkits or just use Electron.
[0] Petition to rename ActiveX to WebOLE
[1] OK, yes, XAML existed in the Vista era, but that was .NET only, and XAML apps didn't look meaningfully different from ones building their own USER.dll window classes like it's 1993.
9front can mount old DOC/XLS documents as OLE 'filesystems' and then extract the tables/text from them.
As for sandboxing, 9front/plan9 uses namespaces, but shared directories exist, of course. That's the point of computing: the user will want to bridge data one way or another, be it with pipes or with filesystems/clipboard (or a directory acting as a clipboard with objects, which would amount to the same thing in the end).
> As for OLE, you're actually thinking of COM, not OLE. They were co-developed together: COM is a cross-language object system (like GObject), while OLE is a set of COM interfaces for embedding documents in other arbitrary documents. Like, if you want to put a spreadsheet into a Word document, OLE is the way you have to do that. Microsoft even built much of IE[0] on top of OLE to serve as its extension mechanism.
Oops, you are right about COM. I got them mixed up because I was thinking of the integration in WordPad.
> Mac apps are Mac native because Apple gives enough of a shit about being visually consistent that anyone using a cross-platform widget toolkit is going to look out of place. Windows abandoned the concept of a unified visual identity when Windows 8 decided to introduce an entirely new visual design built around an entirely new[1] widget toolkit, with no consideration of how you'd apply any of that to apps using USER.dll/Common Controls. As it stands today, Windows does not have a good answer to "what widget toolkit do I use to write my app", and even Microsoft's own software teams either write their own toolkits or just use Electron.
Mac apps are Mac native because the APIs are amazing and the ROI can be really really good. It takes so much effort to do the same from scratch, especially cross-platform, that, you're right, I can smell anything written in Qt (because the hitboxes and layout are off) or GTK (because the widget rendering is off).
With that said though, wxWidgets seems to translate EXTREMELY well to macOS, though last I used it, it didn't have good support for Mojave's dark mode. Maybe support is better nowadays. For example, Audacity appears to me as just a crammed Mac-native app rather than blatant use of a cross-platform toolkit, and wxPython used well can be completely mistaken for fully native.
wxWidgets calls the underlying native controls directly; Qt consults the native theme to inform how it renders but still draws everything itself, at least according to a discussion I had with a Qt engineer some years back.
(I am open to being corrected)
wxWidgets has properly supported dark mode for a bit now.
Dunno if Apple's foldable will support Apple Pencil. (For that matter, not sure a touchscreen MacBook would either.) That's one use case for a properly rigid, solid, flat surface.
It really helps to learn in an environment where failure isn't emotionally catastrophic. If you only talk to people that are interesting or important to you, then you can end up learning the wrong things because failure hits so hard. The desperation this can create will further serve to drive people away!
People need to feel like it's safe to develop relations with you, rather than like you're trying to manipulate them into doing so, which is what happens when you learn only from very hard failures.
Getting downvoted for speaking the truth. HN loves to turn up its nose every time someone praises a closed-source solution, but falls head over heels for anyone claiming to work for a FAANG. Hypocrisy, thy name is HN.