Hacker News | jbotz's comments

Open Source implementation: https://github.com/scionproto/scion

And that patent looks like it is for an optimization, not a necessary component of SCION.


> Has the climate collapsed? There are still glaciers in Glacier National Park. The Maldives remain islands, not seamounts.

Just to quickly call out these tired old straw-men: all of these "predicted disasters" are further along today than they were predicted to be by this date by, for example, the IPCC in 1990[0]. Deniers keep acting as if scientists have been "crying wolf" for decades, when the truth is that 99% of the scientists doing real work on anthropogenic global warming have always been extremely conservative, and reality has outpaced their predictions all along.

[0] https://www.ipcc.ch/report/ar1/wg2/


Yes, in mice, but human cancer cells:

"When we systemically administered our nanoagent in mice bearing human breast cancer cells, it efficiently accumulated in tumors, robustly generated reactive oxygen species and completely eradicated the cancer without adverse effects ..."

So it kills human cancer and doesn't harm the mouse in the process.


Xenografted human tumors in mice != human cancer. The support structure of the tumor (the tumor microenvironment) differs between model mice and humans, cells derived from human cancer that can be cultivated in a lab and xenografted differ from typical human cancer cells, and xenografting requires immunodeficient mice, to name just a few factors that affect treatment response.

Mouse models of cancer are useful, but you should never be too surprised when something that works in mice doesn't work in the clinic, xenografting or no. Cancer is complicated.


Doesn't harm the mouse. But would it harm the normal human cells?


ELIZA absolutely did not ever pass anything resembling a real Turing test. A real Turing test is adversarial: the interrogator knows the testees are trying to fool him.


Landauer and Bellman absolutely put ELIZA to an adversarial Turing test, and called it such, in 1999. [0]

But in 2025, ELIZA was once again put to the Turing test under adversarial conditions. [1] It still had people think it was a real person over 27% of the time. More than a quarter of the testees thought the thing was human.

The "ELIZA Effect" wasn't coined because everyone understands that an AI isn't conscious.

[0] https://books.google.com.au/books?id=jTgMIhy6YZMC&pg=PA174

[1] https://arxiv.org/html/2503.23674v1


Unfortunately, I'm not sure the Turing test posited a minimal level of intelligence for the human testers. As we have found with LLMs, humans are rather easy to fool.


Now if you have multiple teams each doing this and then have all those agents talk to each other and then report back to your team, you get "AI Hyperchat"[0], which may actually be a really good idea that has the potential to seriously improve intra-organizational communications (disruptively so). See also [1] for a VentureBeat article about the idea.

[0] https://ieeexplore.ieee.org/abstract/document/11105240

[1] https://venturebeat.com/orchestration/ai-agents-turned-super...



Improbable: the OP is a long-time maintainer of a significant piece of open source software, and this whole thing unfolded in public view, step by step, from the initial PR to this post. If it had been faked, there would be smells you could detect with the clarity of hindsight going back over the history, and there aren't.


TL;DR: data from 12,000 firms in the EU and US finds that AI adoption led to a 4% increase in labour productivity without causing significant job losses.


Hmm, I am not sure the missing front fork is worse than the unsteerable front wheel mountings (which look like rear wheel mountings) that most models so far have produced. It might be better: sort of an admission of an unsolved problem in the design of the bike, rather than producing something that looks approximately correct but can't possibly work. Like a "TODO" comment in code.

Also the position of the pelican on the bike would be somewhat awkward, but fits anatomically with a pelican's relatively short legs. In fact I can remember riding (or trying to ride) an adult bike as a young child using a similar position.


> You should read on past the first bit...

Not GP, but... the author said explicitly "if you believe X you should stop reading". So I did.

The X here is "that the human mind can be reduced to token regurgitation". I don't believe that exactly, and I don't believe that LLMs are conscious, but I do believe that what the human mind does when it "generates text" (i.e. writes essays, programs, etc.) may not be all that different from what an LLM does. And that means that most of humanity's creations are also "plagiarism" in the same sense the author uses here, which makes his argument meaningless. You can't escape the philosophical discussion he says he's not interested in if you want to talk about ethics.

Edit: I'd like to add that I believe this also ties in to the heart of the philosophy of Open Source and Open Science: if we acknowledge that our creative output is 1% creative spark and 99% standing on the shoulders of giants, then "openness" is a fundamental good, and "intellectual property" is at best a somewhat distasteful necessity that should be as limited as possible, and at worst outright theft, the real plagiarism.


So do you believe the seahorse emoji exists?

