Hacker News | regularfry's comments

Having spent some time kicking around the Delphi space, I got quite into WPF in 2007 or so. By 2010 I had not just sworn off it, I'd sworn off Windows entirely. The constant stream of rug-pulls, as one bit of MS pulled off a political heist over another and - oh no - yet another "latest and greatest" technology was effectively deprecated within 18 months of launch, invalidated all the effort you'd put into staying up to date. It just became a pointless treadmill.

Fortunately Rails was taking off at that point so it was fairly easy to change horses and just ignore it.


If I'm writing Windows desktop GUIs I still stick to WPF. Might be Stockholm syndrome but I quite like it.

I don't see the reason to use any of the new MS UI frameworks, especially when MS themselves don't even really use them.

As far as I know Visual Studio is still a WPF project, so I'm not super worried about it no longer working.


WPF looks much nicer. Personally I find it hard as hell to debug.

WinForms just works, and has a well-defined set of behaviors. For most people it doesn't matter that it doesn't look as nice.


That’s not so bad. I still stick to win32

Microsoft had a lot of great talent suffering from a lack of leadership and coherent vision. They foreshadowed everything wrong with Big Tech today.

What surprised me is how different the rendering architecture is for each framework.

Win32 -> message loops & GDI

WinForms -> managed C# wrappers over Win32/GDI+ via P/Invoke

WPF -> throws it all away and renders with DirectX

UWP -> Appcontainer sandboxing

WinUI -> decouples from OS entirely

This visual breakdown helped me see it clearly - https://vectree.io/c/evolution-of-windows-gui-frameworks-fro...


At the same time VB still works and runs, so they don't always rug pull.

They might have forgotten to pull that rug.

They did pull that rug, twice, in two different directions.

1) VB7 (VB.NET) entirely split the VB developer community.

2) The VB6 IDE has not worked well, and is entirely unsupported, on every Windows after XP. It's generally recommended to build VB6 apps in an XP VM, and with XP out of security support it's now a huge "use at your own risk" and "do your best to isolate the VM from ever having an internet connection". (Not to mention that installers like InstallShield that still understand VB6's super messy version of COM are generally also out of support.)

It was alleged that Microsoft almost dropped the runtime components for VB6 from Windows 11. It's starting to feel like only a matter of time before they do.


Definitely not, since it actually takes quite a lot of red tape to still ship something as ancient as MSVBVM60.DLL in Windows 30 years later, and guarantee that it still works.

It's just that it's a piece of tech from back when Microsoft's corporate dominance on the desktop was at its peak, and many large companies bought into the then-current tech stack, including VB6. So now Microsoft is stuck maintaining it, because those are the customers that bring consistent revenue.


But that's exactly the root of the complaint. Because there's (for the sake of argument) only one syntactic concept, there's no bandwidth for structural concepts to be visible in the syntax. If you're used to a wide variety of symbols carrying the structural meaning (and we're humans, we can cope with that) then `)))))))` has such low information density as to be a problematic road bump. It's not that the syntax is hard to learn, it's that everything else you need to build a program gets flattened and harder to understand as a result.

Even among Lisps this has been recognised as a problem: Common Lisp's LOOP macro is an attempt to squeeze more structural meaning into a non-S-expression format.


My money's on whatever models Qwen does release edging ahead. Probably not by much, but I reckon they'll be better coders, just because that's where Qwen's edge over Gemma has always been. Plus, after having seen this land, they'll probably tack on a couple of extra epochs just to be sure.

Thinking vs non-thinking. There'll be a token cost there. But still fairly remarkable!

Is there a reason we can't use thinking completions to train non-thinking models? I.e., gradient descent towards what the thinking model would have answered?

From what I've read, that's already part of their training. They are scored based on each step of their reasoning and not just their solution. I don't know if it's still the case, but for the early reasoning models, the "reasoning" output was more of a GUI feature to entertain the user than an actual explanation of the steps being followed.
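One common recipe for the idea in the question above is distillation: sample the thinking model's completions, strip out the reasoning span, and fine-tune a non-thinking student on the remaining answer with ordinary cross-entropy. A minimal sketch of the data-preparation step, assuming (as many open models do, though not all) that the reasoning is wrapped in `<think>` tags:

```python
import re

def strip_thinking(completion: str) -> str:
    """Remove <think>...</think> spans from a teacher completion,
    leaving only the final answer as a distillation target."""
    # Assumes <think> delimiters; other models use different markers.
    answer = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL)
    return answer.strip()

# Build (prompt, answer-only) pairs for supervised fine-tuning of a
# non-thinking student on the thinking teacher's conclusions.
teacher_outputs = [
    ("What is 7 * 8?", "<think>7*8 = 56; check: 56/8 = 7.</think>56"),
]
sft_pairs = [(p, strip_thinking(c)) for p, c in teacher_outputs]
print(sft_pairs)  # [('What is 7 * 8?', '56')]
```

The toy example and tag format are illustrative; real pipelines also filter for answer correctness before training on the pairs.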

At this point my bet is that the breakthrough isn't going to be qubits per chip, it's going to be entanglements-per-second in quantum networking. If you could string together simpler processors in a cluster at anything approaching interesting scales, then all of a sudden the orders of magnitude become a lot less constrained and it's just a money problem.

Quantum networking is a lesser problem than changing the state and keeping it intact long enough. You can already move quantum state over fiber optics pretty reliably, so transport exists, but what then? You need to put the qubits of the connected chip into the corresponding state (which takes time), and do it many times, and all that time is overhead.

Superconducting QCs are fast, but the state degrades incredibly quickly, so you only have a fraction of a second (maybe a millisecond at best, currently) until the entire state is garbage. Some other modalities like trapped ion are the opposite: state can live long, but each operation is orders of magnitude slower.
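The trade-off above can be put in back-of-envelope terms. All numbers below are illustrative orders of magnitude I've chosen for the sketch, not measured values for any real device:

```python
# Illustrative assumptions: a superconducting qubit stays coherent
# for ~1 ms, a local two-qubit gate takes ~100 ns, and distributing
# one entangled pair between chips takes ~10 us.
coherence_s = 1e-3
local_gate_s = 100e-9
entangle_s = 10e-6

# How many operations fit inside one coherence window?
local_ops = coherence_s / local_gate_s    # ~10,000 local gates
network_ops = coherence_s / entangle_s    # ~100 entanglement rounds
print(f"local gates per window:   {local_ops:.0f}")
print(f"entanglements per window: {network_ops:.0f}")

# Under these assumptions every inter-chip entanglement costs ~100x
# a local gate, which is why the entanglement *rate*, not transport,
# is the bottleneck for clustering.
```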


Everyone may want the best, but the amount of AI-addressable work outstrips the budget available for buying the best by quite a wide margin.

Looks like an incremental improvement, technically. Seems to benchmark around Kimi K2.5 but it's cheaper and faster.

For quite a long time there will be a greater advantage to local processing for STT than for TTT chat, or even OCR. Being able to do STT on the device that owns the microphone means that the bandwidth off that device can be dramatically reduced, if it's even necessary for the task at hand.
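The bandwidth claim is easy to quantify. A rough comparison, with all figures approximate (16 kHz 16-bit mono audio against typical speaking rates):

```python
# Raw microphone audio: 16 kHz sample rate, 16 bits per sample, mono.
audio_bits_per_s = 16_000 * 16            # 256,000 bit/s

# Transcribed text: ~150 spoken words/min, ~6 bytes per word average.
words_per_s = 150 / 60
text_bits_per_s = words_per_s * 6 * 8     # ~120 bit/s

ratio = audio_bits_per_s / text_bits_per_s
print(f"audio: {audio_bits_per_s} bit/s, text: {text_bits_per_s:.0f} bit/s")
print(f"reduction: ~{ratio:.0f}x")       # roughly three orders of magnitude
```

So on-device STT cuts the uplink by around three orders of magnitude even before you ask whether the audio needed to leave the device at all.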

The OS distro model is actually the right one here. Upstream authors hate it, but a layer that's responsible for picking versions out of the ecosystem, and compiling them into an internally consistent grouping of known mutually-compatible versions you can subscribe to, makes a lot of the random churn just fall away. Once you've got that layer, you only need to be aware of security problems in the specific versions you care about, you can patch only those, and you've got a distribution channel for the fixes where it's far more feasible to say "just auto-apply anything that comes via this route".

That model effectively becomes your ring 1. Ring 0 is the stdlib and the package manager itself, and - because you would always need to be able to step outside the distribution for either freshness or "that's not been picked up by the distro yet" reasons - the ecosystem package repositories are the wild west ring 2.

In the language ecosystems I'm only aware of Quicklisp/Ultralisp and Haskell's Stackage that work like this. Everything else is effectively a rolling distro that hasn't realised that's what it is yet.
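The ring model above can be sketched as a resolver that consults a curated snapshot first (ring 1, Stackage-style) and falls back to the open registry (ring 2) only for packages the distro layer hasn't picked up. Package names and versions here are invented for illustration:

```python
# Ring 1: a curated snapshot pins mutually compatible versions,
# roughly how a Stackage resolver works. Contents are hypothetical.
SNAPSHOT = {"requests": "2.31.0", "urllib3": "2.0.7"}

def resolve(package: str, registry_latest: dict) -> tuple[str, str]:
    """Prefer the curated snapshot; fall back to the wild-west
    registry only for packages the distro hasn't picked up yet."""
    if package in SNAPSHOT:
        return SNAPSHOT[package], "snapshot"
    return registry_latest[package], "registry"

# Ring 2: whatever the ecosystem registry currently serves.
latest = {"requests": "2.32.3", "left-pad-ng": "0.0.1"}
print(resolve("requests", latest))     # ('2.31.0', 'snapshot')
print(resolve("left-pad-ng", latest))  # ('0.0.1', 'registry')
```

Ring 0 (the stdlib and the package manager itself) doesn't appear here because it ships with the language and is never resolved at all.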


Its existence has been used by the devs as a reason not to prioritise fixing user-facing bugs. It really should be in core at this point.
