grugdev42's comments | Hacker News

It's a handy skill to have if you interact with Linux machines.

You'll need to edit files sometimes, and Vim (or Vi) is usually present. I don't think I've seen an install without it.

The basics (opening files, writing, and closing) can be learnt in an hour. It's enough to make simple changes to .conf files.
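Those basics fit on an index card. A minimal sketch (stock vi keybindings; the file path is just an example):

```
vim /etc/myapp.conf    # open a file
i                      # enter insert mode, make your edit
Esc                    # return to normal mode
:wq                    # write the file and quit
:q!                    # quit without saving changes
/pattern               # search forward for "pattern"
```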


Using vim to do this seems silly. Nano is also nearly always present, and doing those “basic” things is 10x more straightforward in an editor that isn’t modal and just gets out of your way.

I’ve often in my career witnessed engineers who’ve cargo culted the need for vim, but they only know how to hit Esc :wq or whatever, and one errant keystroke puts them in modal hell of some sort, often requiring them to just close the terminal and try again.

I don’t begrudge those who want to become power-VIM-users, though it seems wildly awkward to me, to each their own. But if you just want to use it to do the “basics” on ssh sessions, using nano makes more sense. PGUP and PGDN and Home and End and arrows work just fine to navigate, and the bindings for most things are printed right on the screen (except Ctrl-S to save… for some reason, but it works).


The car version of this stopped being produced 15 years ago.

Old petrol Toyotas and Hondas met your criteria.

And the back catalogue of parts is huge and supported for a long time.

Modern cars aren't built as well.

Maybe the modern non-turbo petrol Mazdas are the best fallback.


Modern cars aren't built as well.

Can you cite a source for this? There's no question that they're vastly more complex, but I would think that modern car manufacturing is far more exacting (and efficient) than in the past.

If you're saying that older cars are more repairable, I'm happy to agree with you, even without a source to back up that claim.


An easily visible one is air intakes. Many manufacturers have shifted to plastic. Petrochemical engineering has advanced a lot, but plastics will still get brittle and break.

Interior wise, you can look at things like fabric durability-- lower deniers can be cheaper, but will wear sooner. Springs/foam in seats are another example, but this will vary across manufacturers, models and trims.

This isn't exclusive to financial-engineering manufacturers like Stellantis or Nissan, either. Toyota has had issues with simple things like rust proofing (whether intentional or not) on 1st generation Tacomas leading to massive recalls, and things like plastic timing guides prone to wearing out. Ford with its wet-belt engines, timing belts submerged in oil. German cars needing body-off access for rear timing chain maintenance at 80k miles. Water-cooled alternators (really, VW?). All types of "why?" if you follow cars once they are 3+ years old.

It seems like there are a lot of regressions that probably result from cost cutting, while others may exist to simply drive service revenue.


OK, I went looking for sources and found this[1]:

In the United States, the Environmental Protection Agency assumes the typical car is driven 15,000 miles (24,000 km) per year. According to the New York Times, in the 1960s and 1970s, the typical car reached its end of life around 100,000 miles (160,000 km). Due in part to manufacturing improvements, such as tighter tolerances and better anti-corrosion coatings, in 2012 the typical car was estimated to last for 200,000 miles (320,000 km) with the average car in 2024 lasting 160,545 miles according to the website Junk Car Reaper.

[1] https://en.wikipedia.org/wiki/Car_longevity#Statistics


I think you're talking about apples and oranges, as parent appeared to be cataloguing recent design defects. Which are pretty common too.

That'll influence the average reliability minimally, unless you were unlucky enough to buy one of those models.

Personally, that's why I'd rather get something at 120k miles with 250k+ examples on the road by that calendar date. By then you'll know whether they designed a lemon.

Add: undersized Tacoma rear leaf springs, multiple manufacturers' head gaskets, a few early aluminum engines (? from memory)


There are many other considerations, too. Years ago I scraped Craigslist and Autotrader, grouping cars by generation/make/model/drivetrain to be able to predict longevity based on quantity for sale versus original sales figures. If a model sold 100k per year for 10 years and only 3 were for sale in year 13, that isn't a great sign. Cheap cars will tend to have cheap owners who are more likely to skimp on maintenance, typically leading to more accrued issues and a shorter lifespan for the vehicle. Some cars are just poorly engineered, and the markets are relatively efficient in pricing resale value. The definition of "high mileage" is going to vary by who you ask. Domestics 150k, German 80k, Japanese 200k, Korean 100k. These are subjective averages (some cars like Theta engines, Darts, even late model GM 6.2s have engine failures <40k), based on when they start disappearing due to repairs being more than the vehicle is worth, but based on what I saw then and kind of observe still.

Leaning on those prior mentioned product mixes, keep in mind that Japanese manufacturers weren't in the American market 60 years ago, so market mix would be wildly different. (Multiple 400k+ mi Toyotas in my family, along with 60 year old GMs, but with aftermarket or rebuilt engines.) The cost of vehicles (and repairs) relative to prevailing wages will impact the repair vs replace balance. Trade publications like Cox/NADA/Adesa/etc. are always cited by financial blogs when mentioning consumer spending/state of economy by average age of cars on the road. Why cars get junked or totaled has shifted drastically, too. Steel bumpers were easy to replace, modern bumper covers with styrofoam backing and aluminum crumple zones, not so much. Tolerances is a vague term in that veiled PR piece on that wiki article. Machining has improved. Tech like direct injection and improved lubrication (synthetics) have done much more in terms of efficiency and longevity. In a lot of cases, manufacturers try to get more and more horsepower from the same displacement by pushing tighter engine tolerances (crank/main bearings, pistons/rings, valvetrain) and things like higher compression ratios and revs, leading to more heat and earlier failure. So while you have better initial engineering, you are closer to the point of failure. For another example, interference engines will grenade themselves if you ignore timing belt maintenance, but in the meantime, you get more horsepower by getting more air into the cylinders.

A V6 Camry or Accord is going to have more hp, be faster, be more reliable at the same age, be quieter, and get 3x the mpg of nearly any muscle car of the past. Unfortunately it seems that many Americans prefer giant vehicles that place more emphasis on their size (and status) than on materially important factors like reliability engineering or fuel economy.

Obviously these are anecdotal examples, but they can be confirmed by wasting hours reading about cars and watching mechanic review videos from people who work on them daily (I am partial to the CarCareNut on YT).


Efficient manufacturing means exactly building stuff as cheaply as you can get away with.

There's a reason why Roman architecture is still standing: it is massively overbuilt, the very opposite of efficient. (They also used to make the architect stand under his own arches as the temporary supports were removed, which could have contributed to the overbuilding.)


>> roman architecture is still standing

Is it? Every city in the Roman Empire had temples and a forum. Where are they still standing? Maybe half a dozen survived, like the Pantheon in Rome or the temple in Nîmes, but it's extremely rare. Maybe they weren't overbuilt at all?


It seems like you two are using different definitions of built well. One pertains to how well the car will perform over its lifetime, the other describes the build process. Not mutually exclusive, but different.


A 10-year-old article about Toyota realizing it was over-engineering parts and making the cars too expensive, when parts didn't need to last as long:

https://web.archive.org/web/20150122235642/http://www.busine...


There is only so much damage a human assistant can do.

But an AI assistant can do so much more damage in a short space of time.

It probably won't go wrong, but when it does go wrong you will feel immense pain.

I will keep low productivity in exchange for never having to deal with the fallout.


Human beings are also liable for the results of their actions.


Regarding anything code/data:

  git commit                                # checkpoint the working tree
  aws ec2 create-snapshot --volume-id ...   # snapshot the whole volume
  git reset --hard                          # roll the code back to the last commit
  git clean -fdx                            # delete untracked files and build output
  aws ec2 create-volume --snapshot-id ...   # restore a fresh volume from the snapshot
  robocopy "C:\backup" "D:\project" /MIR    # mirror a backup directory (Windows)
  ...
I agree there are a lot of things outside the computer that are a lot more difficult to reverse, but I think that we are maybe conflating things a bit. Most of us just need the code and data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet.


Web dev based answer:

I know it's cliche to say it, but most of the tech debt I've seen is on the frontend.

Most backends are relatively simple. Just a DB with lots of code wrapping it. But even the worst backends are relatively simple beasts. Just lots of cronjobs and lots of procedural code. While the code is garbage, it can be understood eventually. The backend is mature... even the tech debt on the backend is a known quantity!

But the frontend... damn the complexity and the over engineering are something unique. I think there is a fetish among frontend developers to make things as complicated as possible. Packages galore and SO MANY COMPONENTS.

As soon as people start inventing their own design system, UI framework, and sub packages I think the frontend is doomed for that project.


Async really turns FE into a nightmare. Simple concept: user logs in, get userID, get feed associated with ID, get posts on feed, get reacts on post.

Sometimes the tech debt is that BE can't pass this data all at once yet. Fine. Let's fetch it.

But then FE gets creative. We can reduce nesting. We can chain it. We can preload stuff before the data loads. Instead of polling, let's do observers. Actually these aren't thread safe. And you know what, nothing should be in the UI thread because they'll cause delays. And this code isn't clean, one function should do only one thing.

Actually why are these screens even connected? We should use global variables and go to any screen from anywhere. Actually everything can be global. But global is an anti-pattern. So let's call it DI and single page application and have everything shared but everything must also be a singleton because shared immutability is bad too.


Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?

Any Linux distro can have MySQL or Postgres installed in less than five minutes, and it works out of the box.

Even a single core VPS can handle lots of queries per second (assuming the tables are indexed properly and the queries aren't trash)

There are mature open source backup solutions which don't require DB downtime (also available in most package managers)

It's trivial to tune a DB using .conf files (there are even scripts that autotune for you!!!)
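For instance, a first-pass postgresql.conf tune is only a handful of lines (illustrative values assuming roughly an 8 GB box; the autotune scripts mentioned produce something similar):

```
shared_buffers = 2GB            # ~25% of RAM is a common starting point
effective_cache_size = 6GB      # planner hint, ~75% of RAM
work_mem = 16MB                 # per sort/hash operation, so keep it modest
maintenance_work_mem = 512MB    # speeds up VACUUM and index builds
wal_compression = on
```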

Your VPS provider will allow you to configure encryption at rest, firewall rules, and whole disk snapshots as well

And neither MySQL nor Postgres ever seems to go down; they're super reliable and stable.

Plus you have very stable costs each month


> Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?

It depends:

- do you want multi region presence

- do you want snapshot backups

- do you want automated replication

- do you want transparent failover

- do you want load balancing of queries

- do you want online schema migrations with millisecond lock time

- do you want easy reverts in time

- do you want minor versions automatically managed

- do you want the auth integrated with a different existing system

- do you want...

There's a lot that hosted services with extra features can give you. You can do everything on the list yourself of course, but it will take time and unless you already have experience, every point can introduce some failure you're not aware of.


> There's a lot that hosted services with extra features can give you.

I totally agree with that, but in my experience 99% of "application developers" don't need all these features. Of those you listed, I only see "backups" as a requirement. Everything else is just - what I said - features for when your application is successful and you want something streamlined.


I would have no concerns around reliability uptime running my own database.

I would have concerns around backups (ensuring that your backups are actually working, secure, and reliable seems like potentially time intensive ongoing work).

I also don't think I fully understand what is required in terms of security. Do I now have to keep track of CVEs, and work out what actions I need to take in response to each one? You talk about firewall rules. I don't know what is required here either.

I'm sure it's not too hard to hire someone who does know how to do these things, but probably not for anything close to the $50/month or whatever it costs to run a hosted database.


As for the CVEs: you just need to install from your OS’s package manager and run periodic updates. The communities take care of this very well.
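On Debian/Ubuntu, for example, the whole "periodic updates" story can be two lines of config (a sketch; this is the conventional file for the unattended-upgrades package):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```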


> Maybe I'm not the target market for this, but how hard is it REALLY to manage a RDBMS?

It is not. You can provision a free Postgres instance with a single click: https://neon.new/


Yes, but there is nothing about pricing on this page. That doesn't make sense to me.


Neon is from Databricks. Here's their pricing page: https://neon.com/pricing


Look into the capabilities of what I consider the leading edge of open source RDBMS managed solutions, Yugabyte: https://www.yugabyte.com

And tell me how easily you can achieve this "out of the box"

If you don't care about business continuity or high availability then everything gets easier

> And neither MySQL or Postgres ever seem to go down, they're super reliable and stable

The box they're on goes down


> The box they're on goes down

So? Not everyone needs 99.999999% availability.


The vast majority of products with paying customers need better availability than “database went down on Friday and I was AFK until Monday, sorry for the 3 day downtime everyone”


If you're offering a hosted service, I've got bad news for you.

Serverless, managed databases and even multicloud won't save you. You'll still have to be on call.

Don't want to be on call? Design your stuff so it works local first.


Local first stuff can also break, so that's not a foolproof plan.


> If you don't care about business continuity or high availability then everything gets easier


And some do, so what's your point?


Read the entire thread to find out.


It's not about it being hard, it's about delegating. Many companies are a bit less sensitive to pricing and would rather pay monthly for someone else to keep their database up, rather than spending engineering hours on setting up a database, tuning it, updating it, checking its backups, monitoring it and making it scale if needed.

Sure, any regular SME can just install Postgres or MySQL without setting much up beyond `mysql_secure_installation`, a user with a password, and an 'app' database. But you may end up with 10-20 database installs you need to back up, patch, and so on every once in a while. And companies value not having to deal with that.


On the pricing bit, I have to say edge-driven SQLite/libSQL solutions (which is a lot of them) can be a mixed bag.

Cloudflare, Fly.io's Litestream offerings, and Turso are pretty reasonably priced, given the global coverage.

AWS with Aurora is more expensive for sure and isn’t edge located if I recall correctly, so you don’t get near instant propagation of changes on the edge

The bigger thing for me is how much control you have. So far with these edge database providers you don’t have a ton of say in how things are structured. To use them optimally, I have found it works best if you are doing database-per-tenant (or customer) scenarios or using it as a read / write cache that gets exfiltrated asynchronously.

And that, I believe, is where the real cost factor comes into play: the flexibility.


Or at least they should. I’ve worked many places where thousands of dollars in engineering hours were wasted on something after they refused to use a service for a fraction of the cost. Some companies understand this but others don’t.


Backups are a PITA. I wanted to go exactly this route, but even though I had VMs and compute, I can't let any production data hit them without bulletproof backups.

I set up a cron job to store my backups in object storage, but everything felt very fragile, because if any detail in the chain was misconfigured I'd basically have a broken production database. I'd have to watch the database constantly or set up alerts and notifications.
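One way to make that chain less fragile is to verify the dump before anything else depends on it, so a misconfiguration fails loudly instead of silently producing garbage. A sketch (pg_dump and the upload command are placeholders for whatever your stack uses; the verify helper is the point):

```shell
set -eu

backup_verify() {
  # Fail loudly if the backup file is missing, empty, or not valid gzip.
  f="$1"
  [ -s "$f" ] || { echo "FAIL: $f missing or empty" >&2; return 1; }
  gzip -t "$f" || { echo "FAIL: $f corrupt" >&2; return 1; }
  echo "OK: $f"
}

# The cron job then becomes roughly:
#   pg_dump mydb | gzip > /backups/mydb-$(date +%F).sql.gz
#   backup_verify /backups/mydb-$(date +%F).sql.gz \
#     && upload-to-object-storage /backups/mydb-$(date +%F).sql.gz
```

Pair that with an alert on non-zero exit and a periodic test restore, and most of the "is it actually working?" anxiety goes away.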

If there were a ready-to-go OSS Postgres with backups configured that you could deploy, I'd happily pay for that.


One instance maybe but multi regional?


> Any Linux distro

What is the upgrade path?

How often do they release?

Do I have to worry about CVEs?

Who is doing network security?

Who is testing that security?

Where are my credentials stored?

Do I have a dashboard that tracks the hundreds of resources I'm responsible for including this new one?

> Plus you have very stable costs each month

I'm sick and tired of managing linux boxes. It simply doesn't scale in any reasonable way.


Came here to see if anyone would make a reference to the Yggdrasill. I was not disappointed!

Hyperion is a great read for anyone looking for their next scifi book BTW. :)


Why do frontend developers eat lunch alone?

Because they don't know how to JOIN tables.


Brilliant!

Did anyone else ramp up all the settings to try and fill the screen with snow?

I saw a cool "bubbling" effect. Some of the air gaps by the trees would bubble up as the snow piled on.


You might laugh, but selling cheap marketing websites is an easy $10,000.

Selling ten $1,000 websites to small businesses is easy. It isn't fun or exciting, but it works.

It's 50% sales, 30% chasing people, and 20% building.

Find small local businesses with bad websites, or better yet no website. They honestly do exist.

Resist the urge to make your own anything. Just use Squarespace or Wix!

You don't need to hide SS or Wix from the client. Tell them you just charge for your time to set it all up. If they complain then move onto the next customer, they would likely be a pain anyway.

People will say "small marketing websites are dead with SS or Wix about", but it's not true. Most small businesses just don't want to learn how!

If you cold call all week I bet you can have a couple of deals done by Friday! Good luck.


This is spot on. I run a small agency, and the number of clients coming from Wix or Squarespace is surprising—especially considering those platforms are marketed as “easy.” I’d recommend using your LLM to research businesses that need websites and start reaching out by email or phone. After that, it becomes a numbers game. Most business owners eventually realize they need professional help in this space. Skip the penny-pinchers and focus on quick wins, and delegate if the margins are there.

Lastly, most of the advice you'll get around here will be technical, but every now and again a gem will pop up that sort of 'fills in the blanks' when it comes to the other part of this, which is sales and marketing. It's not easy, but it's not all hard. I recommend this thread if you want to read more about it; the OP gives some good advice on how to get leads, and eventually customers. https://news.ycombinator.com/item?id=46661167


Friend of mine develops and runs eshops. Through word of mouth he runs like 60-70 right now. Big and small. He sells setup, etc and takes care of hosting and patching and developing new features.

The most money and the easiest clients come from hosting ;-) it’s like X amount per year so that they don’t have to worry about it. He runs two huge servers, and makes pretty good money that is going to increase over time.

But he had patience and will to do the dirty work early on. Now he is riding the wave.


How do you convince a small business or individual that $1k is a good price for their website? As someone that has learned web development as a hobby many years ago, I’ve helped build sites for several people through word of mouth but I can never seem to ask much for the work I’ve done for some reason. I work really fast, it’s easy, and even fun for me. This idea sounds great and I even have access to create unlimited sub-accounts on a CRM platform paid for by my real job. I can make full websites with storefronts, blogs, forms, galleries, email/sms flows, you name it. My issue is knowing how to convince others how valuable the work is. Any suggestions?


You can charge a lot more than you think. IMO, $1,000 for a website is too low. I provide commercial electrical services and $1,000 would get you one of my electrician’s labor for six hours, which does not include material. For specialized electricians who do things like work on generators and do switchgear testing, $1,000 only covers around 5 hours.


This has also always irked me when thinking about switching to web development full time: knowing how much to charge. I've never been able to get a clear answer on how much is enough or too little for projects. OP suggested $1k for a small business site, and honestly this seems pretty fair to me, although I think I just interpreted their other comment to have meant $1k annually... I think I'd rather have an initial project fee and then a monthly fee for any maintenance or changes, but then how much should that be?


Well the first point is don't ask for payment after the work is done. No one will pay because you've already solved their pain. You're in a weaker position at that point.

Tell them how much you charge before you start work and ask if they want you to start work. It can only go one of two ways.

The easiest way to convince them is to compare it to sales. If they are an electrician with an average job of $500, that website only needs to earn them two extra jobs per year to break even.

But the easiest way is to be a sociopath and not care. Ask the question and they will either say yes or no. No one is going to assassinate you for pitching a marketing website to them.

If they say yes, do you care where the money has come from? Would it matter if that was their last $1k? If they're loaded would you feel more confident? What if you do a great job and then it turns out that money came from illegal sources?

What about if they say no? Will you stay awake at night worrying that their business is losing work because people think they're weird for not having a website? What if your marketing website lands them a big client because of the "authenticity factor" of having a professional marketing website?

None of these things actually matter. But getting paid $1k feels good, especially if you've done a good job and earned it. :)


Thank you for the new perspective! I was looking at it the wrong way. They're not paying for the website, they're paying for new customers. I think I can manage taking a bit of time to understand their business a bit and then explain how a site will be valuable to them.

EDIT: Did you mean $1k annually or just for the initial project?


It's been a while since I did these, but I used to charge $1k to get it set up, then $500 per year to keep it going.

I will say I was very cheap though. But I made my money on quantity. I would do two a month on the side of a full time job.

They were very simple websites though. But most of the time that's all people need.


> People will say "small marketing websites are dead with SS or Wix about", but it's not true. Most small businesses just don't want to learn how!

Even if they want to, they have approximately 500 other problems to deal with that are more urgent.

Just figure out how you'll handle support after the initial project phase: it's a lot easier to get a small business to spend $1000 on a website than to get them to spend $100/year for the constant trickle of small changes they'll inevitably need later.


I would (respectfully) challenge this idea. :)

I'm not certain adding more complexity (which comes with the more powerful solutions you've suggested) will help things right now.

Cron is such a basic tool, it really shouldn't be causing any problems. I think fixing the underlying problems in the scripts themselves is important to do first.

Just my two cents though!

