Hacker News | nilshauk's comments

What a clever trick to throw more money (governmental subsidies) into a sinking ship (xAI and "AI" in general). Perplexingly this maneuver will probably boost stock prices thus creating more monopoly money to burn resources with.


I detest AI on the grounds that it causes an overuse of our planet's resources as well as stealing IP and exploiting underpaid data workers.

However, if Mozilla can launch something capable that steals the thunder from all the closed-source AI alternatives, that might finally make the bubble pop.

Stock markets shook when DeepSeek came on the scene and proved that clever coding might make up for using older hardware. The market leaders' moat suddenly didn't seem so impenetrable. In that same vein, Mozilla might make a real dent by truly commodifying AI. AI stocks are currently highly valued because of the idea that the leaders have something no one can copy.


OK, so Google ignores the W3C on WEI.

But could someone else create a W3C proposal to counteract WEI? It wouldn't have to be implementation-specific; it could instead state one or more principles drawing a line in the sand that shouldn't be crossed, such as what WEI is built to achieve.


Not with a "White Hat" on, I think.

If a user who is not you uses a WEI-enabled browser (implicitly approving of this attestation tech) and connects to a website that uses WEI, that's entirely between third parties, and there's nothing legal you can do.

The most you can do is protest this with:

1. Using a browser without WEI or with WEI disabled.

2. Modifying your own site to speak the WEI protocol, but banning any browser that can speak that protocol from using your site (or redirecting it to a page explaining how WEI is DRM for the entire internet, etc.)
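A minimal sketch of option 2 as a WSGI app. WEI never shipped a standard request header, so the `Sec-WEI-Attestation` name below is purely a hypothetical stand-in for whatever signal a WEI-speaking browser would send; the redirect path is likewise made up for the example:

```python
def wei_gatekeeper(environ, start_response):
    """Minimal WSGI app that turns away browsers advertising WEI support.

    The header name is hypothetical -- real WEI detection would depend
    on whatever the final protocol actually exposed to servers.
    """
    if "HTTP_SEC_WEI_ATTESTATION" in environ:  # hypothetical WEI signal
        # Redirect attesting browsers to an explanation page instead.
        start_response("302 Found", [("Location", "/why-wei-is-drm")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Welcome, un-attested visitor."]
```

Serving this with `wsgiref.simple_server` (or behind any WSGI host) would make a site that refuses attested traffic by default, which is the "ban or redirect" protest the item describes.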

Moving beyond White Hat to Grey Hat and Black Hat, you get things like:

1. Modifying your own hosting company's infrastructure to apply this WEI-blacklisting mechanism to your clients' websites.

2. Convincing (or "convincing") owners of core backend libraries in popular programming languages to introspect connections and blacklist WEI-compatible browsers.

3. Taking advantage of XSS vulnerabilities to interfere with WEI operations in other tabs of the same browser on the user's machine, if they happen to be using your website.

4. Taking advantage of vulnerabilities in the WEI protocol to corrupt the underlying attestation system so that it fails in all future WEI requests from that physical machine.

5. Hacking/cracking the attestation system's security and publicly releasing the keys, making any hardware using that version suspect and blacklisted by users of WEI.

6. Probably other things I haven't thought of. As you can see, these quickly go from dubiously legal to straight-up illegal; it would be best to nip WEI in the bud before such measures are deemed necessary.


So happy to learn of this, and I wish them the best of luck in their efforts. And I'm surprised to find so many people clinging to Copilot.

We shouldn't shed any tears for a megacorporation which shows such blatant disregard for the licensed works of people's labour.

Yes, AI is here to stay but we should be able to build AI that respects copyright. Yes, it's easier to just steal data and call it fair use. Whether or not that's stealing will be interesting to try in court.


Foolish take. If ML training is not fair use, then all ML progress is dead in the water.

ML training is akin to reading or learning, and licenses do not apply to that.

You’re not thinking past “megacorp = bad”.


AI progress won't be dead in the water if it respects copyright law. Yes, being free to grab any data is infinitely easier, but relying on properly licensed datasets or asking users for consent should be the norm for ML development, IMHO.

Also, if we had trained some AI on the Windows codebase and started freely using its suggestions, I bet Microsoft would scream copyright infringement in a heartbeat.


I love that they included an ortholinear keyboard and open schematics. Definitely keeping my eyes on this.


I’m not happy about people losing money.

However, I’m happy if this means money will be invested in innovation towards saving the planet, rather than finding new ways to burn resources with proof-of-work-based crypto.


Wow. Thanks for the heads-up! I'll have to look into this.


Very much agree. Though there's a line between implementing cryptographic algorithms yourself and implementing cookie-based authentication.

Cryptography is something I'd leave to standard libraries. When it comes to authentication, however, it might not be that hard to implement some cookie or token logic yourself, as long as the actual cryptography is handled by a well-tested library.


I don't think my article shuts the door on domain specific languages even though it doesn't offer it as an answer. It could be an answer. By relying a bit less on frameworks and writing more code ourselves we have the chance to make our code fit the domain we're working with better. For now I wanted to wrestle a bit with the idea of always pulling in frameworks to do things.

> If your thing has been built so often already that much of the logic can be factored out into a framework, great - may as well use that

I agree, though that often comes with some costs and tradeoffs like I tried to outline in the article.


Yes! It's like we've taught people to fish with the latest in fishing equipment, but failed to teach them some tried-and-true fishing principles. Haha, maybe this analogy breaks down. But yeah, I find that we focus almost too hard on keeping up with framework release notes instead of trying to distill common patterns relevant across frameworks.


If we've done that it's because overnight we realised we needed 10 billion fish and only had a handful of people currently able to fish.

Mass mobilisation of people able to catch fish has enabled businesses to join the digital landscape.

It may be true that we've not taught people in a way that's best for their long-term prospects. But had the computer science industry taken the approach of other fields like engineering, nursing, or even accounting, with strictly legislated terms and certificates, and exam boards controlling the quota of people able to go into computing, then perhaps it would be an ivory tower of well-built, well-engineered software.

But it would also have risked leaving most businesses behind and things wouldn't have accelerated nearly as fast.


That's true, we might not have been able to mobilize so much productivity so quickly for business without throwing frameworks at the problem. However, I think a lot of effort might also have been wasted reinventing wheels and rewriting applications many times over because some framework or language fell out of fashion and hiring for it became difficult.


A lot can be blamed on the "you're only competitive if you do X" mindset causing crashing waves of technological arms races that only indirectly concern the societal functions of computing.

What the end user ultimately wants to do is nothing that special: they either want to enter and edit data in some medium, or process existing data. What that "looks like" just depends on what layer they try to do it in.

For example, you could do your photo editing by writing code that loads the file and pushes bits. You could do similar on the command line by invoking ImageMagick. Or you could use a dedicated photo editing app. Or you could use a photo editing feature baked into a social media app.

Advancing age eventually makes one jaded about this, from two directions: "Back in my day, you edited photos in a darkroom, with toxic chemicals!" And then, more philosophically: "The point of you editing the photo is to gain clout with your buddies or sell some product, isn't it? So having the latest and greatest photo editing feature is just a fashion."

But of course, at some point in your life you're a user who wants to buy some version of this feature, but you're doing it without knowing what you want, exactly. So you ask around and end up with something.

And that doesn't really change just because you invoke a compiler or build system to access the feature. That's just the particular way in which we've professionalized software "development". At the bottom layer we have some Atlas effectively lifting the world by implementing the base algorithms, and the way in which they do so sets up the dependency structure of everything else. Atlas usually doesn't get hired to do his job: he implemented the thing 15 years ago as a research project and then moved on. Everyone depending on him is just some mercenary saying they have the best solution for your photo editing needs.

