mimir's comments

100%. This is the uncomfortable fact of taxes in the US right now. If you are a high-earning W-2 worker, you are likely paying a marginal income tax rate close to, if not above, 50%. That's pretty much on par with all the "high tax" European socialist countries you always hear people complaining about. And there is basically no good way to lower this W-2 tax burden.

I'm generally a "liberal" and support fair taxation and government spending, but based on the worldwide tax rates I've found, the current level doesn't have much room to grow, especially when government services actually do seem subpar for many, and these taxes don't fund some sort of universal health care or cheap university like they do in many European countries. I truly believe that government can and should be a force for good in people's lives, but I don't think that means we should give it a blank check. It does feel delulu that so many Democrats seem to blindly support raising taxes on the "rich". I do think a wealth tax is very much the wrong approach.

It's also pretty frustrating when folks talk about taxing the rich, since the policies that get passed often just add even more burden onto the "working" rich rather than the capital-based rich. Even the long-term capital gains rate is close to 30+% in high-tax states at the top bracket (e.g., roughly 20% federal + 3.8% NIIT + 13.3% California), but capital income has way more room for deductions.


I think the line from Abundance is apt: something like, "We're paying Equinox prices but getting Planet Fitness."


My startup switched from Python to Java and saw our productivity explode. Using modern Java versions in a non-enterprise way (no frameworks, minimal OOP, minimal DI, and functional features like immutable objects, Optional, etc.) is quite nice. Our ability to deliver performant, working features was orders of magnitude faster than with Python. The ecosystem of libraries is also crazy deep, which helps you build quickly.

I won’t deny there’s a lot of bad Java written, but IMO it’s actually one of the best languages for a startup if any of your code needs good performance.


100%. Java has an amazing standard library, amazing IDE support, AOT compilation, JIT optimizations, static typing, runs much faster generally, and supports multi-threading... seems like a no-brainer to me.


It sort of baffles me how much engineer time is seemingly spent here designing and running these "gamedays" vs just improving and automating the underlying systems. Don't glorify getting paged, glorify systems that can automatically heal themselves.

I spend a good amount of time doing incident management and reliability work.

Red team/blue team gamedays seem like a waste of time. Either you are so early in your reliability journey that trivial things like "does my database failover" are interesting to test (in which case, just fix them), or you're a more experienced team and there's little low-hanging reliability fruit left. In the latter case, gamedays are unlikely to closely mimic a real-world incident. Since the low-hanging fruit is gone, all your serious incidents tend to be complex failure interactions between various system components. To resolve them quickly, you simply want all the people with deep context on those systems quickly forming and testing competing hypotheses about what might be wrong. Incident management only really matters in the sense that you want the people with the most system context to focus on fixing the actual system. Serious incident management really only comes into play when the issue is large enough to threaten the company and require coordinated work from many orgs/teams.

My team and I spend most of our time thinking about how to automate any repetitive task or failover. When something can't be automated, we think about how to increase the observability of the system so that future issues can be resolved faster.


If you think of incidents as component failures, and the solution as increasing automation related to getting the faulty component back online again, you're under the old view of system failure. This view works for simpler systems.

More complex systems experience failures due to interactions between fully functioning components. The teams that made them didn't, for one reason or another, foresee that mode of interaction.

These are errors designed deeply into the system, and you can't automate recovery. You need to fix the problem at the cause.

Proper analysis is required, and if a game is what it takes to do that then why not? Additionally, it helps people learn to do that analysis on the fly. That is a crucial skill because those incidents are normal in complex systems. They will happen.


Database optimization posts are always interesting, but it's really hard to do an apples-to-apples comparison. Your performance is going to depend mostly on your hardware, internal database settings and tunings, and OS-level tunings. I'm glad this one included some insight into the SQLite settings that were disabled, but there are always going to be too many factors to easily compare this to your own setup.

For most SQL systems, the fastest way to do inserts is always just going to be batched inserts. There are maybe some extra tricks to reduce network costs/optimize batches [0], but at its core you are still essentially inserting into the table through the normal insert path. You can then only try to reduce the amount of work done on the DB side per insert, or optimize your OS for your workload.
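To make the point concrete, here's a minimal sketch using Python's built-in sqlite3 (the table and data are made up for illustration): `executemany` pushes the whole batch through one transaction on the normal insert path, instead of paying a commit per row.

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(i, f"payload-{i}") for i in range(10_000)]

with conn:  # one transaction (and one fsync) for the whole batch
    conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10000
```

The same shape applies elsewhere: JDBC's addBatch/executeBatch, PostgreSQL's COPY, multi-row VALUES in MySQL. You're still going through the write path; you're just amortizing its fixed costs.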

Some other DB systems (more common in NoSQL) let you actually do real bulk loads [1] where you are writing direct(ish) database files and actually bypassing much of the normal write path.

[0] https://dev.mysql.com/doc/refman/5.7/en/insert-optimization....

[1] https://blog.cloudera.com/how-to-use-hbase-bulk-loading-and-...


Oracle, DB2, MySQL, SQL Server, and PostgreSQL all support bulk insert. Two obvious use cases are QA databases and the L part of ETL, which pretty much require it.


For relational databases in the enterprise with large amounts of data, batched inserts work great.

For large deletes it is often better to move the rows that won't be deleted to a new table and rename the table when done.
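A rough sketch of that delete-by-rename trick, again with sqlite3 and made-up names (a real migration would also recreate indexes, constraints, and triggers, and coordinate with concurrent writers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, "archived" if i % 2 else "active") for i in range(1000)],
)

# Instead of DELETE FROM orders WHERE status = 'archived' (row-by-row
# delete and index maintenance), copy the survivors and swap the names.
with conn:
    conn.execute(
        "CREATE TABLE orders_new AS SELECT * FROM orders WHERE status = 'active'"
    )
    conn.execute("DROP TABLE orders")
    conn.execute("ALTER TABLE orders_new RENAME TO orders")

remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # 500
```

The win comes when the kept fraction is small: you do sequential writes of the survivors rather than touching every doomed row and its index entries.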

With large updates it is important to look at the query plan and optimize it with good indexes. Batching also works well in this scenario.


This seems like another odd definition of "exactly once". The underlying assumption appears to be that all your work happens inside the SQL transaction, so everything the consumer does is safe and can be rolled back if a failure happens. Many systems are going to be doing at least some work that can't easily be rolled back by a transaction (e.g., calling an external API). In this more interesting world, you really can't get exactly once, since some work has already happened outside the transaction.

In my experience, it's easier to reason about and build systems when idempotency is an application-level concern. For example, take a bank that has some messaging system to update account balances. While an exactly-once system, if designed perfectly, might achieve correctness, you could also achieve it by building an idempotent "update balance" operation. With application-level idempotency, you have more flexibility to later add different paths or technologies without as many rewrite headaches.
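One common shape for application-level idempotency is deduplicating on a message ID. A minimal sketch (table and function names are hypothetical; a unique key on the message ID makes redelivery a no-op):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE balances (account TEXT PRIMARY KEY, cents INTEGER NOT NULL);
    CREATE TABLE processed (message_id TEXT PRIMARY KEY);
    INSERT INTO balances VALUES ('alice', 1000);
""")

def apply_update(message_id: str, account: str, delta_cents: int) -> bool:
    """Apply a balance change at most once per message_id.

    Returns False for a duplicate delivery, so the consumer can
    safely ack a redelivered message without double-applying it.
    """
    with conn:  # dedup record and balance change commit atomically
        try:
            conn.execute("INSERT INTO processed VALUES (?)", (message_id,))
        except sqlite3.IntegrityError:
            return False  # the unique key rejects the replay
        conn.execute(
            "UPDATE balances SET cents = cents + ? WHERE account = ?",
            (delta_cents, account),
        )
        return True

r1 = apply_update("msg-1", "alice", 500)
r2 = apply_update("msg-1", "alice", 500)  # redelivered; ignored
balance = conn.execute(
    "SELECT cents FROM balances WHERE account = 'alice'"
).fetchone()[0]
print(balance)  # 1500
```

With this in place, the delivery guarantee you need from the messaging layer drops to at-least-once, which every broker can give you.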

Also -- the messages-per-day stat seems irrelevant. I've yet to encounter many real-world systems that don't have irregular, bursty patterns. With this slow processing rate, you could have a single large burst followed by normal traffic, yet be unable to return to real-time latency for hours or days.


> In my experience, it's easier to reason about and build systems when idempotency is an application level concern.

This is exactly what we are doing at my company :) But it's always good to have the option and to know that we can do it in specific cases without external calls. In those scenarios it can be a nice simplification.

