Hacker News

This is really sad to hear. Probabilistic programming languages are IMO one of the coolest things ever: if you have an idea about how your data could plausibly be generated given some massive amount of hidden state and inputs, and an arbitrarily complex rendering function, you just write the rendering function, and the language infers probability distributions over the state variables that most likely map your inputs to your output.
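To make that concrete, here's a toy sketch of the inference idea in plain Python (a likelihood-weighting sampler I wrote for illustration, not any particular PPL; all names are made up). You write the forward generative story — a hidden coin bias produces flips — and recover a posterior over the latent by weighting prior samples by how well they explain the observed data:

```python
import random

def likelihood(p, heads, flips):
    # Probability of seeing `heads` successes in `flips` Bernoulli(p) trials.
    # (The binomial coefficient is omitted: it cancels in the normalized weights.)
    return (p ** heads) * ((1 - p) ** (flips - heads))

def posterior_mean(heads, flips, n_samples=20000, seed=0):
    """Likelihood weighting: draw the latent from its prior, weight each
    draw by the likelihood of the data, and return the weighted mean."""
    rng = random.Random(seed)
    samples = [rng.random() for _ in range(n_samples)]   # prior: Uniform(0, 1)
    weights = [likelihood(p, heads, flips) for p in samples]
    total = sum(weights)
    return sum(p * w for p, w in zip(samples, weights)) / total

# Observe 7 heads in 10 flips; the exact posterior is Beta(8, 4), mean ~0.667.
est = posterior_mean(heads=7, flips=10)
```

A real PPL does essentially this, but the "renderer" can be an arbitrary program and the inference backend (MCMC, variational, gradient-based) is generated by the compiler instead of hand-rolled.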

For instance, say you want to be able to vectorize logos, e.g. find the SVG representation of a raster image. If you wanted to link a text model of the characters that make up SVG files to their raster representation via a modern deep learning system, you'd need a heck of a lot of data and training time. But if you could instead just write a (subset of a) SVG parser and renderer as simply as you'd write it in any other programming language, but where the compiler instead creates a chain of conditional probability distributions that can be traversed with gradient descent, you can reach a highly reliable predictive model with significantly less training time and data.
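As a toy stand-in for the SVG case (a hypothetical soft renderer plus finite-difference gradient descent, not a real PPL compiler — every name here is mine), the "write the renderer, get the inverse" pattern looks like this: render a soft-edged circle from latent parameters, then recover those parameters from an observed raster by descending the pixel-space loss:

```python
import math

def render(params, size=16):
    """Forward model: draw a soft-edged filled circle on a size x size grid.
    Each pixel is a sigmoid of the signed distance to the circle boundary,
    so the image varies smoothly with the parameters (cx, cy, r)."""
    cx, cy, r = params
    return [[1.0 / (1.0 + math.exp(2.0 * (math.hypot(x - cx, y - cy) - r)))
             for x in range(size)]
            for y in range(size)]

def loss(params, target):
    # Sum of squared per-pixel differences against the observed raster.
    img = render(params)
    return sum((p - t) ** 2
               for row_i, row_t in zip(img, target)
               for p, t in zip(row_i, row_t))

def fit(target, steps=800, lr=0.05, eps=1e-3):
    """Invert the renderer by central-difference gradient descent."""
    params = [8.0, 8.0, 3.0]          # initial guess
    for _ in range(steps):
        grads = []
        for i in range(len(params)):
            hi = params[:]; hi[i] += eps
            lo = params[:]; lo[i] -= eps
            grads.append((loss(hi, target) - loss(lo, target)) / (2 * eps))
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

target = render([5.0, 10.0, 4.0])     # the "observed" raster image
est = fit(target)                     # should land near (5, 10, 4)
```

A PPL generalizes this: the renderer can branch and recurse like a real SVG parser, and the compiler produces the conditional distributions and gradients for you rather than relying on crude finite differences.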

This is where the massive cost savings come in. You get a forward-deployed engineer who knows this stuff and can dig into the compiler for features not yet implemented, and they can work magic on any domain problem. I would have loved to see the spinoff they mentioned. Sigh.

https://news.ycombinator.com/item?id=12774459 is an old comment that goes more into detail on the tech and has a number of links!

EDIT: see also https://beanmachine.org/, which is the OP's team's work



> But if you could instead just write a (subset of a) SVG parser and renderer as simply as you'd write it in any other programming language, but where the compiler instead creates a chain of conditional probability distributions that can be traversed with gradient descent, you can reach a highly reliable predictive model with significantly less training time and data.

It's a balance between engineer time (headcount costs) and training time/costs (infra costs). Usually engineer time is more valuable than training costs. Embedding engineers into teams and building cost models is one of those cases where probabilistic programming makes a lot more sense than a DL approach, but most situations favor the economics of a DL approach.



