Hacker News | ottah's comments

"And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows."

- Plato

I think no reasonable person would be against literacy in the modern world, and similarly we will continue to adapt to new technology and be the better for it.


OMFG the cognitive dissonance necessary to vibe code a project with the stated goal of preventing AI scrapers. What has happened to your brain.

I believe you may have missed my initial note at the top (you are correct in that it is nonsense).

The concern for my brain is valid though, my thoughts and dreams now only materialize as Markdown task lists.


The justice system claims to be anti-axe murderer, yet axes were involved in the construction of nearly every courthouse in the nation! How can this be?

> author's note

> the following justification is nonsense, i just thought the idea of encoding data in recipe blogs was fun and silly.
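As a toy illustration of the "encoding data in recipe blogs" idea (this is a sketch, not the project's actual scheme; every phrasing pair below is made up), one bit can hide in each line by choosing between two interchangeable phrasings:

```python
# Hide one bit per recipe line by picking between two equivalent phrasings.
# Phrasing pairs are illustrative, not from the actual project.
PAIRS = [
    ("a pinch of salt", "a dash of salt"),
    ("stir gently", "fold gently"),
    ("let rest", "let sit"),
    ("bake until golden", "bake until browned"),
]

def encode(bits):
    # bit 0 picks the first phrasing, bit 1 the second
    return [PAIRS[i % len(PAIRS)][b] for i, b in enumerate(bits)]

def decode(lines):
    # recover each bit from which phrasing was used
    return [PAIRS[i % len(PAIRS)].index(line) for i, line in enumerate(lines)]

recipe = encode([1, 0, 1, 1])
print(recipe)
print(decode(recipe))  # [1, 0, 1, 1]
```

The recipe still reads naturally to a human, which is the silly charm of the idea.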


> What has happened to your brain

presumably TDS based on the secret message


What's TDS?

Everyone has a right to feel about this however they please. In my opinion, it's an extravagant waste of shared government resources, from a state that is underserving its citizens' basic needs. I for one am angry at the billions of dollars and engineering capacity put into a vanity project that doesn't improve the daily lives of anyone besides people selling rockets.

Perhaps, but I'm more angry at the $100+ billion lost to fraud in California. This is too far down the list of government "waste" to be worth an ounce of outrage.

How is that quote in any way demonstrative of this being written by an LLM? You do know that LLMs were trained on the internet and every digitized text they could get their hands on? You're jumping at shadows; calm down already.

Ah yes, let's destroy the accessible web. We'll all pluck out our eyes to spite them.

This is still unacceptable.

Feels very pseudo academic.

I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while before an adequate defense is adopted. We're still dealing with SQL injection in the OWASP Top Ten. What I think would indicate an acceleration is when the most security-oriented organizations continuously fail to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble.
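For anyone unfamiliar with why SQL injection has sat in the OWASP Top Ten for decades, here's a minimal sketch using an in-memory SQLite database (table and column names are made up for illustration):

```python
import sqlite3

# Toy database with illustrative table/column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Classic injection payload supplied as "user input".
user_input = "' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row matches.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Defended: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # leaks every row
print(safe)    # no match
```

The fix has been known since the 90s; the attack persists because the vulnerable pattern is one careless string-format away, which is partly why "old" attack classes never quite die.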

The acceleration is in the decrease of the cost to produce misinformation.

Misinformation in pure text form has always been cheapest, but it's even cheaper now that text generation is basically a solved problem. Photos have been more expensive: it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low; it's mostly about prompting skill. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.

We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.


> The acceleration is in the decrease of the cost to produce misinformation.

So it's a spam issue. And normally, while annoying, spam is possible to fight; on these topics, however, we have built structures that disable the very mechanisms that allow us to fight spam. That's worrying.

The fact that someone can instruct their computer to astroturf their flight-tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on TV, radio and press for decades, even centuries. For a very long time the "traditional media" were aware that their ability to sell astroturfing capacity hung on their general trustworthiness. Then the internets rose to prominence, and traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation where the internets might be spammed by astroturfers a bit too much, but the backup is already broken. That's truly frightening.

Welcome to the post-truth world, where objective references outside of your own village cannot exist.


It's an algorithm issue. When people hold a media consumption device in front of their face all day and the algorithms are played, then it's literally a brainwashing device.

It is not an algorithm issue. It would still be a huge problem with zero algorithmic social media.

Possibly this just isn't the generation of hardware to solve this problem in? We're, what, three or four years in at most, and barely two into AI-assisted development being practical. I wouldn't want to be the first mover here, and I don't know if this is a good point in history to try to solve the problem. Everything we're doing right now with AI, we will likely not be doing in five years. If I were running a company like Apple, I'd just sit on the problem until the technology stabilizes and matures.

If I were running a company like Apple, I'd have been working with Khronos to kill CUDA since yesterday. There are multiple trillions of dollars that could be Apple's if they signed CUDA drivers on macOS or created a CUDA-compatible layer. Instead, Apple is spinning its wheels and promoting nothingburger technology like the NPU and MPS.

It's not like Apple's GPU designs are world-class anyways, they're basically neck-and-neck with AMD for raster efficiency. Except unlike AMD, Apple has all the resources in the world to compete with Nvidia and simply chooses to sit on their ass.


CUDA is not the real issue: AMD's HIP offers source-level compatibility with CUDA code, and ZLUDA even provides raw binary compatibility. Nvidia GPUs really are quite good, and the projected advantages of going multi-vendor just aren't worth the hassle given the amount of architecture specificity GPUs are going to have.

Okay, then don't kill CUDA, just sign CUDA drivers on macOS instead and quit pretending like MPS is a world-class solution. There are trillions on the table; this is not an unsolvable issue.

Admittedly, my use of CUDA and Metal is fairly surface-level. But I have had great success using LLMs to convert whole gaussian splatting CUDA codebases to Metal. It's not ideal for maintainability and not 1:1, but if CUDA was a moat for NVIDIA, I believe LLMs have dealt a blow to it.

You can convert CUDA codebases to Vulkan and DirectX code, for all the good it does you. You're still constrained by the architecture of the GPU, and Apple Silicon GPUs pre-M5 are all raster-optimized. The hardware is the moat.

Apple technically hasn't supported the professional GPGPU workflow for over a decade. macOS doesn't support CUDA anymore, Apple abandoned OpenCL on all of their platforms and Metal is a bare-minimum effort equivalent to what Windows, Android and Linux get for free. Dedicated matmul hardware is what Apple should have added to the M1 instead of wasting silicon on sluggish, rinky-dink NPUs. The M5 is a day late and a dollar short.

According to reports, even Apple can't quite justify using Apple Silicon for bulk compute: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...


I mean, by any reasonable standard it still is. Almost any computer can run an LLM; it's just a matter of how fast, and 0.4k/s (peak before first token) is not really considered running. It's a demo, but practically speaking entirely useless.

Devil's advocate - this actually shows how promising TinyML and EdgeML capabilities are. SoCs comparable to the A19 Pro are highly likely to be commodified in the next 3-5 years, the same way SoCs comparable to the A13 already are.
