Microsoft draws over 3 billion dollars out of Norway yearly. Many of us want that number much, much closer to zero. And it's small steps like this that make it possible.
The fund owns about 1.26% of Microsoft (the data seems to be from 2025), which according to Gemini is worth about $37.5b today. Microsoft's stock rose about 5.23% over the last year, which comes to about $1.96b, so you're not far off...
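As a quick sanity check on that arithmetic (using the figures quoted above; the $37.5b stake value is Gemini's estimate, not an official number):

```python
# Back-of-the-envelope check of the numbers in the comment above.
stake_value = 37.5e9          # fund's ~1.26% Microsoft stake, in USD
yearly_change = 0.0523        # reported 1-year stock move

one_year_gain = stake_value * yearly_change
print(f"${one_year_gain / 1e9:.2f}b")  # -> $1.96b

# The same figures imply a total Microsoft market cap of roughly:
implied_market_cap = stake_value / 0.0126
print(f"${implied_market_cap / 1e12:.1f}T")  # -> $3.0T
```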
The US is more than free to dissuade the fund from investing in Microsoft, and I don't think the Norwegian public would mind it being invested elsewhere.
It's hard to get numbers on what countries pay to Microsoft. The Dutch parliament has repeatedly asked and has not gotten them, even though there has been a whole agency since 2014 (https://www.digitaleoverheid.nl/overzicht-van-alle-onderwerp...) specifically for giving Microsoft preferential treatment in procurement.
Yes, but Holland has been governed by the VVD neoliberals for more than a decade, and they have a super hard-on for everything American. I think that will probably change now.
I generally don't trust cancer communication when it's juiced up like this incredible headline. There has been a huge amount of progress. We don't need Silicon Valley idiots starting to make proclamations. It's doing fine without your mediocrity.
To justify investing a trillion dollars, like everything else LLM-related. The local models are pretty good. I ran a test of R1 (the smallest version) against Perplexity Pro and, shockingly, got better answers running on a base-spec Mac Mini M4. It's simply not true that there is a huge difference; mostly it's hardcoded over-optimization. In general these models aren't really getting better.
So long as the local model supports tool use, I haven't had issues with them using web search etc. in open-webui. Frontier models will just be smarter about knowing when to use tools.
> For me the main BIG deal is that cloud models have online search embedded etc, while this one doesn't.
Models do not have online search embedded; they have tool-use capabilities (possibly with specialized training for a web search tool). That's true of many open and weights-available models, and they are run with harnesses that support tools and provide a web search tool. (LM Studio is such a harness, and can easily be supplied with one.)
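The harness pattern is simple: the model emits a tool call, the harness runs the tool and feeds the result back, and the model then answers. Here's a minimal sketch of that loop; the model and search tool are stubs (the real ones would be a local LLM call and a search backend), and all names are illustrative, not any real harness's API:

```python
import json

def web_search(query):
    # Stand-in for a real search backend (SearxNG, a search API, etc.).
    return json.dumps([{"title": "Example result", "url": "https://example.com"}])

TOOLS = {"web_search": web_search}

def model(messages):
    # A real harness would invoke the local model here. This stub asks
    # for a search on the first turn, then answers once results arrive.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": messages[0]["content"]}}}
    return {"content": "Answer based on search results."}

def run(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        # Execute the requested tool and hand the result back to the model.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(run("latest Mac Mini specs"))  # -> Answer based on search results.
```

The point is that web search lives entirely in this outer loop, not in the model weights, which is why any tool-capable local model can be given search.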
Also, in several experiments I only cared about 5 to 10 websites with application-specific information, so it works nicely to quickly spider them, keep a local index, and then get very low search latency. Obviously this is not a general solution, but it's nice for some use cases.