Very cool. One of the missing pieces to AI being useful in business tasks is dynamic internal validation steps, and I'd suggest adding a couple of those out of the box. For example, if the user expects JSON output from the LLM, add a validation step that sends the output back to the LLM and asks it whether the output is actually JSON. You can then expand that to softer validations like "is the output polite". The ultimate solution is having the LLM build the validations itself.
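For a rough idea of what such a self-check step could look like, here's a minimal sketch. Everything here is hypothetical: `call_llm` is a stand-in for whatever client the pipeline uses, stubbed out below so the example runs.

```python
# Sketch of a dynamic validation step that re-prompts the LLM to check its
# own output. `call_llm` is a hypothetical stand-in, stubbed for illustration;
# a real implementation would hit an actual LLM endpoint.
def call_llm(prompt: str) -> str:
    if prompt.startswith("Answer yes or no"):
        return "yes"  # the stub's "validator" always approves
    return '{"apples": 2}'  # the stub's "generator" output

def generate_with_validation(task: str, check: str, max_retries: int = 2) -> str:
    output = call_llm(task)
    for _ in range(max_retries):
        # Feed the output back to the LLM with a yes/no validation question.
        verdict = call_llm(f"Answer yes or no: {check}\n\n{output}")
        if verdict.strip().lower().startswith("yes"):
            return output
        output = call_llm(task)  # regenerate and re-check
    raise ValueError("output failed validation after retries")

print(generate_with_validation("Return Alice's apples as JSON",
                               "Is this valid JSON?"))
```

The "is the output polite" case is the same loop with a different `check` string, which is why building the validator prompts dynamically is attractive.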
We've slowly been baking more and more logic into the AI nodes to make them easier to use. Adding categorizers and scorers instead of forcing people to define their own functions was a game changer. That's definitely the direction we want to head in; thanks for the suggestion.
# Assuming `generator` is a text-generation callable, e.g. a Hugging Face pipeline.
sequence = generator("Alice had 4 apples and Bob ate 2. Write an expression for Alice's apples:")
print(sequence)
# (8-2)  <- the model's actual output; the correct expression would be (4-2)
Then there's a whole process around feeding the output of one LLM into another LLM for checking, and then checking the checker... I'm glad it works for some people, some of the time, to get some gains over 'doing it the old way'.
What would be the benefit of “send the output back to the LLM to ask it if it is actually JSON” instead of using JSON validation in whatever language they are programming in?
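For the JSON case specifically, the deterministic check is a few lines in most languages and is exact, where an LLM check is probabilistic and costs a round trip. For example, in Python:

```python
import json

def is_valid_json(text: str) -> bool:
    """Deterministic JSON validation: parse and report success or failure."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json('{"apples": 2}'))  # True
print(is_valid_json('apples: 2'))      # False
```

The LLM round trip arguably only earns its keep for checks a parser can't express, like "is the output polite".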