
> and that may exhibit adaptiveness after deployment

So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.

As for the other criteria, they're so vague I think a thermostat might apply.



Keyword: 'may'.

A learning thermostat would qualify, say one that uses historical records to predict temperature changes and preemptively adjusts. In most cases it would be low risk and unregulated. But attach it to a self-heating crib or a premature-baby incubator and it would jump to high risk, and you might have to prove it is safe.
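
For concreteness, a minimal sketch of what such a learning thermostat could do, in Python. The prediction rule, names, and numbers are all illustrative, not from any real product:

    from statistics import mean

    # Sketch of a "learning" thermostat: average past readings for each
    # hour of the day, then preheat or precool before the change arrives.
    # The prediction rule and all names here are hypothetical.

    def predict_temp(history, hour):
        """Predict this hour's temperature from past readings."""
        return mean(history[hour])

    def preemptive_adjust(history, hour, setpoint=21.0, margin=1.0):
        predicted = predict_temp(history, hour)
        if predicted < setpoint - margin:
            return "heat"   # start heating before the predicted drop
        if predicted > setpoint + margin:
            return "cool"
        return "idle"

    # 3am has historically been cold, so heating starts early:
    history = {3: [16.5, 17.0, 16.8]}
    print(preemptive_adjust(history, hour=3))  # "heat"

Trivial as it is, it "infers outputs from inputs" and "may exhibit adaptiveness after deployment" as it accumulates history, which is the point about the definition's breadth.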


So if the thermostat jumps to 105°F during the night, that's not considered 'high risk'?


Maybe you are right and it is still risky for sleeping adults. In any case, even at high risk, the standard that needs to be followed might be as simple as 'must have a physical cutoff at 30C'.
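
A cutoff rule like that is trivial to state. In a real device it would be a physical component (a thermal fuse or bimetallic switch), not software, but as a hypothetical sketch of the rule itself:

    CUTOFF_C = 30.0  # hypothetical hard limit, per the rule above

    def heater_allowed(measured_temp_c, controller_wants_heat):
        """Force the heater off above the cutoff, regardless of what
        the 'smart' controller requests. In a real product this would
        be hardware, not code."""
        if measured_temp_c >= CUTOFF_C:
            return False
        return controller_wants_heat

    assert heater_allowed(32.0, True) is False  # cutoff overrides controller
    assert heater_allowed(22.0, True) is True

The key design point is that the cutoff sits outside the learning component entirely, so no amount of bad prediction can override it.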


> As for the other criteria, they're so vague I think a thermostat might apply.

As long as the thermostat doesn't control people's lives, that's fine.


> they're so vague I think a thermostat might apply

Quite.

One wonders if the people who came up with this have any actual understanding of the technology they're attempting to regulate.


It _may_ exhibit adaptiveness after deployment; the adaptiveness is optional, so a system that never changes its weights after deployment still counts as AI. I think that is the right reading of the definition.



