I think the more realistic direction is exposing API- or MCP-style interfaces that agents can call to use a product's functionality, rather than shipping UI components for an AI client to render.

The "AI renders your components inside chat" idea feels very similar to Facebook’s old canvas apps. That model disappeared for good reasons: abuse, security, and loss of platform control.

It seems far more likely that AI platforms will provide their own interaction primitives (forms, pickers, confirmations, etc.) and simply call third-party tools behind the scenes. That lets the platform retain control over UX and safety, and avoids the risks of embedding arbitrary third-party UI.
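To make the distinction concrete, here is a minimal sketch of that model: the third party ships only a tool spec (the names `issue_refund`, `platform_invoke`, and the field layout are all hypothetical, not any real MCP API), and the platform validates arguments and renders its own confirmation primitive before calling the vendor behind the scenes.

```python
# Hypothetical MCP-style tool spec shipped by a third-party product.
# The vendor describes *what* can be done; it ships no UI.
REFUND_TOOL = {
    "name": "issue_refund",  # hypothetical tool name
    "description": "Refund an order, up to the original charge amount.",
    "input_schema": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 1},
        },
        "required": ["order_id", "amount_cents"],
    },
}

def platform_invoke(tool, args, confirm):
    """The AI platform, not the vendor, owns the UX: it checks required
    fields and shows its own confirmation dialog before calling out."""
    missing = [k for k in tool["input_schema"]["required"] if k not in args]
    if missing:
        return {"ok": False, "error": f"missing fields: {missing}"}
    # `confirm` stands in for a platform-rendered primitive (dialog, picker).
    if not confirm(f"Run {tool['name']} with {args}?"):
        return {"ok": False, "error": "user declined"}
    # ...here the platform would call the vendor's API behind the scenes...
    return {"ok": True, "called": tool["name"], "args": args}

# Usage: the confirmation step is stubbed as a lambda; in a real platform
# it would be a native UI element the vendor never touches.
result = platform_invoke(
    REFUND_TOOL,
    {"order_id": "A1", "amount_cents": 500},
    confirm=lambda prompt: True,
)
```

Because the platform owns both validation and the confirmation surface, a malicious or buggy tool can describe an action but never control how it is presented to the user.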
