So the chefs are preparing food that has the same macros as ultra-processed meals (I assume like TV dinners or something?). Why do they keep referring to the freshly prepared food as "ultra-processed"?
“Is this processed or unprocessed?” I asked.
Kozlosky smiled. “Ultra-processed,” she said. “Lots of participants can’t tell the difference.”
If the term has any meaning, you could tell very easily. Go look at a freshly fried tortilla chip and compare it to a Tostito. You'd know which is which instinctively.
I thought I understood the study, but now I'm not sure. I thought the idea was to take the exact same thing you'd get in a TV dinner and make it fresh: no freeze-drying, no preservatives, etc. Then, if that food on its own causes the same pattern of health issues, we know it's simply a diet problem. It sounds like they replicated that effect. So they got evidence that ultra-processing itself doesn't actually matter all that much?
A ton of vector-math applications these days are high-dimensional vector spaces. A good example of that on ARM would, I guess, be something like fingerprint or Face ID.
Also, it doesn't just speed up hand-written vector math. Compilers these days, with knowledge of these extensions, can auto-vectorize your code, so it has the potential to speed up every for loop you write.
> A good example of that on ARM would, I guess, be something like fingerprint or Face ID.
So operations that are not performance critical and are needed once or twice every hour? Are you sure you don't want to include a dedicated cluster of RTX 6090 Ti GPUs to speed them up?
I'd argue those are actually very performance-critical, because if it takes 5 seconds to unlock your phone, you're going to get a new phone.
The point is taken, though, that the performance is seemingly fine as-is for these applications. My point was only that you don't need to be running state-of-the-art LLMs to be using vector math with more than 4 dimensions.
The entire quoted section in the middle adds nothing. It just keeps repeating the same things over and over, and it doesn't answer the question of how we know the offset at all. Makes me think his "friend" is an LLM.
Just based on this blog post, it seems like he wanted the project to be more "professional" in some way that the rest of the developer group didn't. I wonder if that difference in vision, combined with a (probably justified, based on your comment) feeling that he was doing a disproportionate amount of the work, led to an unsustainable situation.
Calling it a post-mortem while others are continuing the project still seems kind of petty, though.
Doesn't look like he was, but then looking at the actual commit list: https://github.com/ublue-os/bazzite/commits/main/?before=e49...
He definitely had more than the 14 commits listed. Maybe he changed his email during the conflict and lost attribution for those commits?
Software history is rife with projects that outlive a person like that leaving, though. Ulrich Drepper comes to mind immediately. They don't own the project.
I can make corn too. I go to the supermarket and hand them these little green pieces of paper, and then I have corn.
Seriously, what does this prove? The AI isn't actually doing anything; it's just online shopping, basically. You're just going to end up paying grocery-store prices for agricultural quantities of corn.
You can actually act on the advertisements and coupons, though, and the companies who sent those offers to you are obligated to abide by them. This would be like if you got a BOGO coupon in the mail and, when you tried to redeem it, they just pretended it didn't exist.
Unused RAM is wasted, but used RAM is sometimes wasted too. If I can accomplish the same thing with less RAM, that's better, because it lets me do other things at the same time. That doesn't mean I'm not going to use that RAM; that would be pointless. My desktop running dwm typically idles at ~50GiB of RAM usage from random crap I've got running, but I can prove that the desktop itself is using no more than about 300MiB.