The short version: Tally is not doing LLM-based classification at runtime. It’s a local, deterministic rule engine. Rules live in files, run offline, and are fully inspectable and hopefully explainable.
LLMs are optional and only used to help author and refine rules, because the hard part isn’t applying regex — it’s maintaining and evolving rule sets as new messy merchant strings show up. Once rules exist, there are zero model calls.
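To make "local, deterministic rule engine" concrete, here's a minimal sketch of what rule application could look like. The rule format and category names here are hypothetical (Tally's actual schema is in the linked docs); the point is just that classification is plain pattern matching over merchant strings, with no model in the loop.

```python
import re

# Hypothetical rule set: ordered (pattern, category) pairs.
# Tally's real rule files may use a different schema.
RULES = [
    (r"^AMZN\s*MKTP", "Shopping"),
    (r"^UBER\s+EATS", "Dining"),
    (r"^SHELL\s+OIL", "Fuel"),
]

def classify(description: str) -> str:
    """Return the first matching category, deterministically."""
    for pattern, category in RULES:
        if re.search(pattern, description, re.IGNORECASE):
            return category
    return "Uncategorized"

print(classify("AMZN Mktp US*123"))   # -> Shopping
print(classify("LOCAL COFFEE #42"))   # -> Uncategorized
```

First match wins, so rule order matters; running the same rules over the same CSV always yields the same result, which is what makes the rule files inspectable and shareable.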
This grew out of me initially using coding agents to generate one-off scripts for my own CSVs. That worked, but all the logic lived in prompts. The pivot was realizing the rules are the real artifact worth keeping and sharing.
If you want to hand-write rules, run offline, or use local models, that all works. Docs with the concrete workflow are here: https://tallyai.money/guide.html
Aspire has a code-based application model used to represent your application (or a subset of it) and its dependencies. This model can include containers, executables, and cloud resources, and you can even build your own custom resources.
During local development, we submit this object model to the local orchestrator and launch the dashboard. The orchestrator is optimized for development scenarios and integrates with debuggers from various IDEs (VS, VS Code, Rider, etc.; it's an open protocol).
For deployment, we can take this application model and produce a manifest, which is basically a serialized version of the app model with references. Other tools can use this manifest to translate these Aspire-native assets into deployment-environment-specific assets. See https://learn.microsoft.com/en-us/dotnet/aspire/deployment/m...
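To give a feel for what that serialized app model looks like, here's an illustrative fragment. The field names are from my recollection of the manifest format and may not match the current schema exactly, so treat this as a sketch and check the linked docs for the real shape.

```json
{
  "resources": {
    "cache": {
      "type": "container.v0",
      "image": "redis:7.2",
      "bindings": {
        "tcp": { "scheme": "tcp", "protocol": "tcp", "transport": "tcp" }
      }
    },
    "api": {
      "type": "project.v0",
      "path": "Api/Api.csproj",
      "env": {
        "ConnectionStrings__cache": "{cache.connectionString}"
      }
    }
  }
}
```

The key idea is that references between resources (like the connection string above) are kept symbolic, so a downstream tool can resolve them into whatever its target environment needs.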
This is how we support Kubernetes, Azure, and eventually AWS, etc. Tools translate this model into their native lingua franca.
Longer term, we will also expose an in-process model for transforming and emitting whatever manifest format you like.
You can update dependencies on your own. The model is quite extensible and NuGet is great. You can override container image tags, or any settings we default to. That's the great part about using code!
If I made mistakes, feel free to file an issue or even send me a PR. It's open source! That said, you're right that I don't say how best to call sync code from async methods (the latter is more difficult, as there's no good way to do it well).
Happy to answer concrete questions.