
I tried to build something years ago on a similar foundation, and I found maintaining the API list to be impossible as a solo dev. In a pool of, say, 30 APIs, at least one would break its "contract" as a public resource daily: shifting endpoints, revoked public tokens, changed outputs.

I was quite disappointed because I loved the product, a dashboard for arbitrary live data sourced from APIs, but the cost of maintaining it was too high.



That would be my use case for LLMs. Breakages like these should be fixable by automation that can reason about the documentation and spit out code fixes.

Of course, I'm assuming the documentation is updated before changes go live, which might be too much to ask :)


It's an interesting engineering problem. I wouldn't expect LLMs, as they currently are, to work directly on the whole codebase without breaking it just as often as the APIs do. But perhaps you could have them maintain a connector/interface for each individual API, so the model can get one badly wrong without ruining the whole program.

You could even gate its success on a test suite, so that it iterates until the tests pass.
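The isolation idea above can be sketched as a small harness. This is a minimal, hypothetical sketch (the `Connector` interface, `WeatherConnector`, and `partition_by_health` are illustrative names, not from any real project): each API gets its own connector, and the harness partitions connectors by whether their tests still pass, so an LLM repair loop would only ever touch the failing ones.

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """One connector per upstream API, so a single breakage stays isolated."""
    name: str = "base"

    @abstractmethod
    def fetch(self) -> dict:
        """Return normalized data; raise if the upstream contract changed."""

class WeatherConnector(Connector):
    """Illustrative connector; real code would call the remote API."""
    name = "weather"

    def fetch(self) -> dict:
        return {"temp_c": 21}  # stubbed response

def partition_by_health(connectors, run_tests):
    """Split connectors into passing and failing by their contract tests.

    run_tests(connector) -> bool stands in for a real per-connector test
    suite; an LLM repair loop would regenerate only the failing list,
    leaving healthy connectors untouched.
    """
    good, broken = [], []
    for c in connectors:
        (good if run_tests(c) else broken).append(c)
    return good, broken
```

The point of the partition is blast-radius control: a model rewriting one failing connector can't break the other 29.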


For an “API list” whose only job is to keep its tests passing, something like a shifted endpoint should be fixable with tests + LLM.

So the idea I proposed is that a single dev with automation and an LLM should be able to maintain the “API list”, but maintaining arbitrary code that depends on those APIs is, I expect, beyond current LLMs.
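A contract test of the kind being described could be as simple as the sketch below (all names are hypothetical): the test only asserts the output shape downstream code depends on, so a shifted endpoint, revoked token, or renamed field all surface as a clean per-connector failure that a tests-plus-LLM loop can target.

```python
def check_contract(fetch, required_keys):
    """Minimal contract check for one connector.

    fetch() is the connector's fetch function; required_keys are the
    fields downstream code actually reads. Any upstream change that
    breaks either shows up here as False, scoped to this connector.
    """
    try:
        data = fetch()
    except Exception:
        return False  # endpoint moved, token revoked, network refused, etc.
    return all(k in data for k in required_keys)
```

Running this per connector on a schedule gives the LLM loop a precise failure signal: "fix `weather` so `check_contract` passes again" is a far smaller task than "fix the dashboard".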



