Hacker News | alrocar's comments

We went through the exercise of building a "wrapped"-style feature for our users.

Being an analytics data platform on top of ClickHouse made it "simpler" because we are used to developing and supporting those kinds of use cases, but still, that's the full story for anyone interested in the internals of this kind of feature.


Just ran the LLM-to-SQL benchmark with opus-4.1 and it didn't top the previous version :thinking: => https://llm-benchmark.tinybird.live/


How does it perform when run multiple times?

LLMs are non-deterministic; I think benchmarks should report averages over N runs rather than single-shot experiments.
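A minimal sketch of what I mean, where `run_benchmark` is a stand-in for whatever produces one score per run (the seeded randomness just simulates LLM non-determinism):

```python
import random
import statistics

def run_benchmark(seed: int) -> float:
    # Stand-in for one benchmark run; seeded RNG simulates non-determinism.
    rng = random.Random(seed)
    return 0.70 + rng.random() * 0.10  # a score somewhere in [0.70, 0.80)

# Report mean and spread over N runs instead of a single-shot number.
N = 5
scores = [run_benchmark(seed) for seed in range(N)]
print(f"mean={statistics.mean(scores):.3f} stdev={statistics.stdev(scores):.3f}")
```

With the standard deviation in hand you can at least tell whether "didn't top the previous version" is signal or noise.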


Learn how to effectively use partitioning, sorting and compaction, and when Apache Iceberg is not enough for real-time analytics



Awesome stuff! Just published something similar today

Just curious: in your opinion, what is the most challenging thing about building such a log viewer?


That sounds great! Do you have a link? I'd love to check it out.

For me, the most challenging parts are still ahead - live tailing and a plugin system to support different storage backends beyond just ClickHouse. Those will be interesting problems to solve! What was the biggest challenge for you?
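The live-tailing part can be sketched, very loosely, as polling the backend for entries past a cursor; a real viewer would query ClickHouse, but here a plain list stands in for the storage backend and `tail` is an illustrative name:

```python
# Hypothetical sketch: a plain list stands in for the storage backend.
log_store = ["line 1", "line 2"]

def tail(store, cursor):
    """Return entries newer than `cursor` plus the advanced cursor."""
    new = store[cursor:]
    return new, cursor + len(new)

cursor = 0
first, cursor = tail(log_store, cursor)   # initial batch
log_store.append("line 3")                # a new log line arrives
second, cursor = tail(log_store, cursor)  # only the new entry
print(first, second, cursor)
```

The plugin system is then mostly about swapping what sits behind that `tail` call.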


I cloned the Auth0 activity page - and made it better - by connecting the Auth0 logs to Tinybird and sprinkling on some LLM magic.


Hmm, I didn't know about that, but my server may have high latency, so it could happen. I see there's a reproducer in the issue; I'll take a look.


We recently built an MCP server (https://github.com/tinybirdco/mcp-tinybird/tree/main) so our users could ask questions of their workspace. We released fast, and since MCP servers run locally, we lacked observability on product metrics, error monitoring, and all the other stuff you usually want in production.

We built something generic to monitor MCP servers (https://github.com/tinybirdco/mcp-tinybird/tree/main/mcp-ser...) using events + Tinybird + Prometheus + Grafana, but I'm wondering what others are using for that purpose?
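The "events" part is roughly this pattern: wrap each tool call and record a structured event that a pipeline (Tinybird, Prometheus, whatever) can ingest. All names below are illustrative, not from any MCP SDK:

```python
import json
import time

events = []  # stand-in for an event sink / ingestion endpoint

def instrument(tool_name, fn):
    """Wrap a tool so every call emits a structured event."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            events.append({
                "tool": tool_name,
                "status": status,
                "duration_ms": round((time.monotonic() - start) * 1000, 3),
            })
    return wrapper

# Hypothetical tool wrapped with the instrumentation.
list_tables = instrument("list_tables", lambda: ["logs", "metrics"])
print(list_tables())
print(json.dumps(events[-1]))
```

From there it's just shipping `events` somewhere queryable and putting dashboards and alerts on top.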


Are you finding building MCP integrations to be worth it? We've been using agents (e.g. langchain), which are pretty good at bringing in context and taking actions. Tool results become part of the context; it just works.


The good thing to me (besides it being an open spec) is the simplicity: with libraries such as FastMCP you can just bring stuff you've already implemented into Claude (or any MCP client).


Some individuals in the AI space mentioned they didn't quite understand what AnthropicAI MCP is about.

Here's a complete walkthrough of how I built an MCP server to connect Tinybird and Claude, so you can build your own.


In the specification, you have to read carefully to understand the purpose. In the Getting Started guide, there's a diagram that clears up the confusion a bit better.

The Model Context Protocol helps you build a translation layer between an LLM system and some other system or data resource, such that the LLM may operate that system on your behalf or scan that resource for additional context during generation. You wrap a resource or a system, local or external, with an "MCP server." The LLM can operate that MCP server, which in turn operates the system or reads the resource.

The text kind of says that... but the diagram in the guide drives it home. The diagram or some form of it should probably be added into the top page of the spec.
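That wrapping idea can be sketched very loosely like this (this is not the real MCP wire format, which is JSON-RPC based; `USERS` and `lookup_user` are made up):

```python
import json

USERS = {"42": {"name": "Ada", "plan": "pro"}}  # the wrapped resource

# Methods the "MCP server" exposes over the wrapped resource.
TOOLS = {
    "lookup_user": lambda params: USERS.get(params["id"], {}),
}

def handle(message: str) -> str:
    """Route one client request to the wrapped resource and reply."""
    req = json.loads(message)
    result = TOOLS[req["method"]](req.get("params", {}))
    return json.dumps({"id": req["id"], "result": result})

# The LLM client sends a request; the server answers from the resource.
reply = handle(json.dumps({"id": 1, "method": "lookup_user", "params": {"id": "42"}}))
print(reply)
```

The LLM never touches the resource directly; it only sees the methods the server chooses to expose.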


I'm really interested in different points of view.

I guess you mean something like trunk-based development? But still, some sort of CI happens, maybe locally.

I've never worked any way other than with a local/remote CI pipeline; that's why I'm curious.


