Hacker News

Very cool. One of the missing pieces to AI being useful in business tasks is dynamic internal validation steps, and I would suggest adding a couple of those out of the box. For example, if the user expects JSON output from the LLM, add a validation step that sends the output back to the LLM and asks whether it is actually JSON. Then you can expand to more validations like "is the output polite". The ultimate solution is having the LLM build the validations itself.
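To make the suggestion concrete, here's a minimal sketch of what such a validation loop could look like: run the output through a list of validators and, on failure, feed the failure messages back as a repair prompt. `call_llm` is a hypothetical stand-in for whatever model API you use; none of these names come from the product being discussed.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; swap in your provider's API."""
    return '{"status": "ok"}'  # placeholder response for the sketch

def validate_json(output: str):
    """Deterministic validator: (ok, failure message)."""
    try:
        json.loads(output)
        return True, ""
    except json.JSONDecodeError as e:
        return False, f"invalid JSON: {e}"

def generate_with_validation(prompt: str, validators, max_retries: int = 2) -> str:
    """Generate, run every validator, and retry with the failures fed back."""
    output = call_llm(prompt)
    for _ in range(max_retries):
        failures = [msg for ok, msg in (v(output) for v in validators) if not ok]
        if not failures:
            return output
        problems = "\n".join(failures)
        output = call_llm(
            "Fix these problems and answer again:\n"
            f"{problems}\n\nPrevious answer:\n{output}"
        )
    return output  # caller decides what to do if retries are exhausted
```

An LLM-based validator ("is the output polite?") slots into the same list: it's just another function returning `(ok, message)`, with the check delegated to a judge prompt instead of code.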


This is a wonderful idea.

We've slowly been baking more and more logic into the AI nodes to make them easier to use. Adding categorizers and scorers instead of forcing people to define their own functions was a game changer. This is definitely the direction we want to head in; thanks for the suggestion.


Outlines might be useful for this: https://github.com/outlines-dev/outlines


https://github.com/outlines-dev/outlines/blob/7fae436345e621... squares with my experience using LLMs for anything real:

  sequence = generator("Alice had 4 apples and Bob ate 2. Write an expression for Alice's apples:")
  print(sequence)
  # (8-2)
Then there's a whole process around feeding the output of one LLM into another LLM, checking the checker... I'm glad it works for some people, some of the time, to get some gains over doing it the old way.


Whoa, never heard of this repo, but it looks spot on. Checking it out now; thanks for the share.


What would be the benefit of “send the output back to the LLM to ask it if it is actually JSON” instead of using JSON validation in whatever language they are programming in?
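For the specific "is it JSON" case, the deterministic in-language check is indeed trivial; a `json.loads` round-trip is cheap and exact, with no extra model call. A Python sketch:

```python
import json

def is_valid_json(text: str) -> bool:
    # json.loads either parses the text or raises JSONDecodeError,
    # so malformed output is caught deterministically and for free.
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False
```

The LLM round-trip only earns its keep for fuzzy criteria that code can't easily express ("is the output polite?"), not for checks a parser already does.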



