The demo on the page actually shows an interesting and worrying tendency of LLMs: even given a tool that can fuzzily search for a place by name, the LLM thinks it knows that "cal academy" is the California Academy of Sciences, and it passes that into the search function instead of faithfully transmitting the user's input.
It worked fine in the example, but what if there's a new school in my town called "Cal Academy" for short? Is the LLM just going to assume it knows what I'm talking about?
Seems like you'd need a pretty strong system prompt here to force the LLM to suppress its "world knowledge" — which it's been heavily trained to lean on as much as possible — and defer to its tools.
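To make the ambiguity concrete, here's a minimal sketch (the place names and the `fuzzy_search` helper are hypothetical, just for illustration) of why the tool should receive the user's raw query: if the LLM rewrites "cal academy" into "California Academy of Sciences" before calling the tool, a local place that's literally named "Cal Academy" can never win the ranking.

```python
from difflib import SequenceMatcher

# Hypothetical place index for illustration -- "cal academy" is
# genuinely ambiguous between the first two entries.
PLACES = [
    "California Academy of Sciences",
    "Cal Academy (charter school)",
    "Coit Tower",
]

def fuzzy_search(query: str, places=PLACES):
    """Rank places by string similarity to the query, best match first."""
    def score(place):
        return SequenceMatcher(None, query.lower(), place.lower()).ratio()
    return sorted(places, key=score, reverse=True)

# The verbatim user input ranks the local school first, because it is
# the closer string match:
print(fuzzy_search("cal academy")[0])
# An LLM-expanded query instead locks in the famous museum:
print(fuzzy_search("California Academy of Sciences")[0])
```

The point isn't that the school is the *right* answer — it's that resolving the name before the tool call silently discards the information the tool needed to disambiguate.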
You don't think the LLM made the leap to California Academy of Sciences specifically because Coit Tower was in the context?
If you asked for driving directions from your local walmart to cal academy, the LLM seems just as likely to decide you mean something else by "cal academy" and use the tools available to determine what.
ETA: Out of curiosity, I tested this on ChatGPT. It did ultimately give directions to California Academy of Sciences. However, during research, it also checked the website of the local school district as well as Waze to determine if I meant other nearby schools, even looking up directions for a local acting school and daycare (each with "Academy" in the name).