Hacker News

This isn’t an external directive; Anthropic was founded with the mission of creating safe, reliable AI systems. You wouldn’t see the same people working at the company if it didn’t stand by its acceptable use policy and other internal standards.


Isn't a safe and reliable intelligence an oxymoron?


Nobody knows. That’s what they founded Anthropic to find out.


Are you saying intelligence is inherently unsafe? That seems like a pretty wild conclusion, and I can't see any logical path to it.


I'm saying the capability to reason about novel situations is in tension with guaranteeing that it never produces harmful outputs. Those are contradictory design constraints.



