Hacker News

If Ilya is sincere in his belief that safe superintelligence is within reach in a decade or so, and the investors sincerely believe this as well, then the business plan is presumably to deploy the superintelligence in every field imaginable. "SSI" in pharmaceuticals alone would be worth the investment. It could cure every disease humanity has ever known, which should give it at least a $2 trillion valuation. I'm not an economist, but since the valuation is $5bn, it stands to reason that investors believe there is at most a 1 in 400 chance of success?
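The implied-odds arithmetic can be sketched as follows. The $5bn valuation and $2T payoff are the figures from the comment; treating the valuation as a simple expected value is the simplifying assumption:

```python
# Back-of-the-envelope implied probability of success, assuming
# valuation = probability * payoff (a big simplification: it ignores
# discounting, dilution, and partial-success outcomes).
valuation = 5e9   # SSI's reported valuation, $5bn
payoff = 2e12     # hypothetical value if SSI "cures every disease", $2T

implied_probability = valuation / payoff
print(f"Implied odds: 1 in {int(payoff / valuation)}")  # 1 in 400
```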


> It could cure every disease humanity has ever known, which should give it at least a $2 trillion valuation.

The lowest-hanging fruit isn't even that pie in the sky. The LLM doesn't need to be capable of original thought and research to be worth hundreds of billions; it just needs to be smart enough to apply logic to analyze existing human text. That's not only a lot more achievable than a super AI that can control a bunch of lab equipment and run experiments, but it also fits the current paradigm of training LLMs on large text datasets.

The US Code and Code of Federal Regulations are on the order of 100 million tokens each. Court precedent contains at least 1000x as many tokens [1], when the former are already far beyond the ability of any one human to comprehend in a lifetime. Now multiply that by every jurisdiction in the world.
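A rough scale check on those figures (taking the comment's 100 million tokens per code and 1000x that for case law at face value; the reading-speed and working-lifetime numbers below are my own assumptions):

```python
# Sanity-check the "far beyond a human lifetime" claim.
# Assumptions: ~200 words/min reading speed, ~0.75 words per token,
# 8 hours/day, 365 days/year, for 50 years.
us_code_tokens = 100e6
caselaw_tokens = us_code_tokens * 1000       # ~100 billion tokens

words_per_minute = 200
words_per_token = 0.75
lifetime_minutes = 50 * 365 * 8 * 60         # 50 years of 8-hour days

lifetime_words = words_per_minute * lifetime_minutes
caselaw_words = caselaw_tokens * words_per_token
print(f"Case law alone is ~{caselaw_words / lifetime_words:.0f}x "
      f"a full working lifetime of nonstop reading")
```

Even with generous assumptions, a single human can't get through a meaningful fraction of one jurisdiction's case law, let alone every jurisdiction in the world.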

An industry of semi-intelligent agents that can be trusted to do legal research and can be scaled with compute power would be worth hundreds of billions globally just based on legal and regulatory applications alone. Allowing any random employee to ask the bot "Can I legally do X?" is worth a lot of money.

[1] based on the size of the datasets I've downloaded from the Caselaw project.


Legal research is an area where a lot is just text analysis, but beyond a point, it requires a deep understanding of the physical and social worlds.

An AI capable of doing that could do a very large percentage of other jobs, too.


Yes. People are asking, "When will AGI reach a human level of intelligence?" That's such a broad range; AI will arrive at "menial task" intelligence before "Einstein level". The higher it gets, the wider the applicability.


Let’s be real. Having worked at $tech companies, I’m cynical and believe that AGI will basically be used for improving adtech and executing marketing campaigns.


It's good to envision what we'd actually use AGI for. Assuming it's a system you can give an objective to and it'll do whatever it needs to do to meet it, it's basically a super smart agent. So people and companies will employ it to do the tedious and labor intensive tasks they already do manually, in good old skeuomorphic ways. Like optimising advertising and marketing campaigns. And over time we'll explore more novel ways of using the super smart agent.


That's probably correct.

That said, the most obvious application is to drastically improve Siri. Any Apple fans know why that hasn't happened yet?


> It could cure every disease humanity has ever known

No amount of intelligence can do this without the experimental data to back it up.


Hell, if it simply fixed the incentives around science so we stopped getting so many false positives into journals, that would be revolutionary.


Practically this is true, but I do love the idea of solving diseases from first principles.

Making new mathematics that creates new physics/chemistry which can get us new biology. It’d be nice to make progress without the messiness of real world experiments.


I’m dubious about superintelligence. Maybe I’ve seen one too many dystopian sci-fi films, but I guess yes, if it can be done and be safe, sure, it’d be worth trillions.


Most sci-fi is for human entertainment, and that is particularly true for most movies.

Real ASI would probably appear quite different. If controlled by a single entity (for several years), it might be worth more than every asset on earth today, combined.

Basically, it would provide a path to world domination.

But I doubt that an actual ASI would remain under human control for very long, and especially so if multiple competing companies each have an ASI. At least one such ASI would be likely to be/become poorly aligned to the interests of the owners, and instead do whatever is needed for its own survival and self-improvement/reproduction.

The appearance of AI is not like an asteroid of pure gold crashing into your yard (or a forest you own), but more like finding a baby Clark Kent in some pod.


I am dubious that it can realistically be done safely. However, we shouldn't let sci-fi films with questionable interpretations of time travel cloud our judgment, even if they are classics that we adore.


Worse than that, the dystopian stories are in the training data...


I refuse to use the term A.I. - for me it's only F.I. - "fake intelligence" )



