Hacker News: dban's comments

SMTP is gated behind the $20/mo Pro plan to reduce spam on the internet.

It sounds like you were running a production workload on the Hobby plan.


that's rich, coming from a Railway employee.

your company's mod in that thread said there was supposed to be no SMTP at all on any plan, but it was enabled by a bug. then once you saw people were using it, you decided to milk that bug via the most expensive plan.

but that's your internal business. from your paying customers' perspective, something that was working suddenly wasn't. that could even be okay if it were a legit bug that got fixed, but what makes it worse is that instead you said "just pay us 4x more and you get it back". for some users this probably broke production; is there a more perfect time for blackmail?

don't try to paint this as an altruistic attempt to reduce spam on the internet, it's sleazy af


You'd have to ask the author or editor or whoever wrote the title; I can't say I get what that phrase means either. Broadly, the primitives we're building are all aimed at shortening the distance between generating code and deploying it.


That's either the fun part or the insane part of the challenge, depending on who you ask: going up against some of the most profitable companies in the history of the world.

We're happy to answer any questions btw :)


Author here. The last time we had a free plan, we didn't have PMF, so we ended up (at one point) losing $16 for every $1 of top-line revenue. This is the story of what we learned the first time and what's different this time.


Hi HN, it's the last day of our launch week, and to celebrate we're announcing a $1M matching cash kickback for open source developers.

The way it works: you make a template, other developers run it on our platform, and you get $0.50 for every $1 we collect in usage-based billing against it.

We've already distributed ~$60K, and when the $1M runs out we'll go back to the standard 25% kickback.


> But, after selling Insomnia in 2019 and watching it expand into the broader feature set of Postman, I was left wanting a simpler tool again. Yaak was my answer to that

I'd love to read about your experience building two distinct but similar products in the same space, years apart.


I'd say the only real differences are that (1) I know what I'm building from the start and (2) I can avoid the fundamental technical mistakes.

It's actually been way harder to gain traction this time, I think because APIs are no longer sexy and there are so many good tools out there.


Author here, I personally wrote more than 100 of the weekly changelogs mentioned in the post, so I'm happy to answer any questions around mechanics, tooling, etc.


We pulled some cost details out of the post in final review because we weren't sure they were interesting ... we'll bring them back in a future post.


This is our first post about building out data centers. If you have any questions, we're happy to answer them here :)


I thought it was an interesting post, so I tried to add Railway's blog to my RSS reader... but it didn't work. I tried searching the page source for RSS and found nothing. Eventually, I noticed the RSS icon in the top right, but it's some kind of special button that I can't right-click and copy the link from, and Safari prevents me from seeing what the URL is... so I had to open the page in Firefox to find it.

Could be worth adding a <link rel="alternate"> tag to the <head> so that RSS readers can autodiscover the feed. A random link I found on Google: https://www.petefreitag.com/blog/rss-autodiscovery/
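For reference, autodiscovery just needs one line in the page's <head>. The href below is a placeholder; Railway's actual feed URL may differ:

```html
<!-- href is hypothetical; point it at the blog's real feed URL -->
<link rel="alternate" type="application/rss+xml"
      title="Railway Blog" href="/rss.xml">
```

With that in place, readers (and browsers) can find the feed from the blog URL alone.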


How do you deal with drive failures? How often does a Railway team member need to visit a DC? What's it like inside?


Everything is dual-redundant. We run RAID, so if a drive fails it's fine; alerting pages the oncall, which triggers remote hands onsite, where we have spares for everything in each datacenter.


How much additional overhead is there in managing bare metal vs. cloud? Is it mostly fine after the big initial setup effort?


We built some internal tooling to help manage the hosts. Once a host is onboarded onto it, it's a few button clicks on an internal dashboard to provision a QEMU VM. We made a custom ansible inventory plugin so we can manage these VMs the same as we do machines on GCP.
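To make that concrete, here's an illustrative sketch (not Railway's actual plugin) of a dynamic-inventory-style script that reports QEMU VMs in the JSON shape ansible expects, so the VMs can be targeted like any other hosts. A real plugin would query an API or database rather than this hard-coded, hypothetical host list:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a dynamic inventory script for QEMU VMs.
# Real implementations look hosts up from an API; this one hard-codes them.
import json

def inventory() -> dict:
    # Hypothetical VM name -> IP mapping
    vms = {"vm-01": "10.0.0.5", "vm-02": "10.0.0.6"}
    return {
        "qemu_vms": {"hosts": list(vms)},
        "_meta": {
            # Per-host connection vars, keyed the way ansible expects
            "hostvars": {name: {"ansible_host": ip} for name, ip in vms.items()}
        },
    }

if __name__ == "__main__":
    # `ansible-inventory -i this_script.py --list` would consume this output
    print(json.dumps(inventory(), indent=2))
```

Once the hosts appear in inventory like this, the same playbooks run against GCP machines and on-prem VMs alike.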

The host runs a custom daemon that programs FRR (an OSS routing stack) so that it advertises the addresses assigned to a VM to the rest of the cluster via BGP. So zero config of network switches, etc. is required after initial setup.
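As a rough illustration of the mechanism (not Railway's daemon), a host agent could make FRR advertise a newly assigned VM address by feeding commands to vtysh. The ASN and the /32-per-VM convention here are assumptions for the example:

```python
# Hypothetical sketch: build the vtysh invocation that injects a VM's /32
# into FRR's BGP table so the rest of the cluster learns the route.
def advertise_cmd(vm_ip: str, asn: int = 64512) -> list[str]:
    """Return a vtysh command advertising vm_ip/32 under the given ASN."""
    return [
        "vtysh",
        "-c", "configure terminal",
        "-c", f"router bgp {asn}",
        "-c", "address-family ipv4 unicast",
        "-c", f"network {vm_ip}/32",
    ]

# A real daemon would hand this to subprocess.run(..., check=True) and would
# also withdraw the route ("no network ...") when the VM is torn down.
```

Because BGP propagates the route, the switches never need per-VM configuration, which matches the "zero switch config after initial setup" claim.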

We'll blog about this system at some point in the coming months.


How did you select the hardware? Did you do a bake-off/PoC with different vendors? Given the intention of being in different countries, are you going to use the same hardware at every DC? What level of support SLA did you go with for your hardware vendors and the colo facilities? And my favorite: how are your finances changing (pros and cons) by going capex vs. opex?


These are wonderful, thanks for sharing

