Somebody should tell the Claude Code team then. They've had some perf issues for a while now.
More seriously, the concept of trust is extremely lossy. The LLM is gonna lean in one direction that may or may not be correct. At the extreme, it would likely refute a new discovery that went against what we currently know. In a more realistic version, certain AIs are more pro-Zionist than others.
I meant that LLMs can be trusted to do searches without hallucinating while doing so. You've taken that to mean they can be trusted with anything.
The thing is, LLMs are quite good at search, and probably way, way stronger than whatever RAG setup this company has. What failure mode are you expecting from a search perspective? Will ChatGPT just end up providing random links?
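Rough sketch of the gap I mean (every name here is a placeholder stub, not any real vendor API): a fixed RAG pipeline gets exactly one retrieval shot, while an LLM driving a search tool can notice a bad result, reformulate, and try again.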
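```python
# Hypothetical sketch only: contrasting one-shot RAG with an agentic
# search loop. The stubs below stand in for a real embedding model,
# vector index, search API, and LLM call.

def embed(text):                      # placeholder embedding
    return [float(len(text))]

class _Store:                         # placeholder vector index
    def search(self, vec, k):
        return [f"chunk keyed on {vec}"] * k

vector_store = _Store()

def search_tool(query):               # placeholder web/index search
    return f"search results for: {query}"

def llm(prompt):                      # placeholder model call
    return "DONE"

def rag_answer(question, top_k=5):
    """Classic RAG: one embedding lookup, then one generation pass.
    If the top-k chunks miss the answer, there is no second chance."""
    chunks = vector_store.search(embed(question), k=top_k)  # single shot
    return llm(f"Answer from context:\n{chunks}\n\nQ: {question}")

def agentic_answer(question, max_steps=4):
    """Agentic search: the model refines its query and keeps digging
    until it decides it has enough evidence (or hits the step cap)."""
    notes = []
    query = question
    for _ in range(max_steps):
        notes.append(search_tool(query))
        decision = llm(f"Notes so far: {notes}\n"
                       f"Reply DONE or a refined query for: {question}")
        if decision.strip() == "DONE":
            break
        query = decision              # iterate with a better query
    return llm(f"Answer using these notes: {notes}\n\nQ: {question}")
```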
I have provided an actual, concrete example of how the security completely backfired with LLMs: OpenClaw. The reason I tried to provide something recent is that the usual excuse, when the examples are further in the past, is "LLMs have improved a lot, they don't do that any more".
Yet now that I provide an example of a very recent, big, very obvious, very prominent security blowup, I am "grasping at the latest straw".