
I work for the government, and I'd like to buy a lot of open-source support contracts. As long as they stay under the purchase limits that would trigger a full tendering process, things should be easy. But I simply can't donate.


There are two reasons several of my clients have chosen Oracle over Postgres.

#1: Training. Oracle does training very well. If you need new DBAs in-house (for security reasons, etc.), there's a huge array of options available. The most effective options are also insanely expensive. One client once paid €400k to train two new DBAs. They became very good at it.

#2: Clustering. Postgres is nowhere near what Oracle offers, even with the newest releases. What Oracle offers is real multi-master synchronized replication with proper distributed locks and low latency, and it's maintainable and easy to set up. Postgres still has 5-10 years of work ahead before it reaches feature parity with today's Oracle.


To be fair, most of the developers I've met who rave about ACID have had trouble naming valid use cases requiring full ACID besides "transferring money from one account to another", the example nearly every course and school uses. The thing is, most applications just don't need those kinds of guarantees, as long as they can at least detect (when using BASE) that something went wrong.


Doing joins?

More seriously, I'd turn the question around. BASE requires you to think hard about a lot of things that ACID gives you for free. The benefits are better performance and scalability. But if you're not certain you need that performance, why go through the extra trouble?

(That said, it also depends on the data model. If you're certain your data is key/value without any relationships and will always be key/value, a key/value store is likely appropriate. If you have something more complex, I don't see it.)


ACID means you have transactions. And transactions are a fantastic feature. It's like a save point in a video game.

Use case from yesterday:

- create a user and, if that works, create a default access right for this user for one of our most important services;

- if this doesn't work, cancel everything.

Now I had the user creation working, but the creation of the access right had some problems unrelated to the code.

Without a transaction, this bug led to hundreds of user accounts created without a default access right, and we had to clean them up afterward.

With a transaction, this bug would not have affected us in any way.
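The pattern above can be sketched with Python's stdlib sqlite3 (the commenter's actual stack isn't specified; the table and function names here are made up for illustration). The `with conn:` block opens a transaction that commits on success and rolls back on any exception, so a failed access-right insert also undoes the user insert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE access_rights ("
    "user_id INTEGER NOT NULL REFERENCES users(id), service TEXT NOT NULL)"
)

def create_user_with_default_right(name, service="important-service"):
    """Create a user and their default access right atomically."""
    try:
        with conn:  # transaction: commit on success, rollback on exception
            cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
            user_id = cur.lastrowid
            if service is None:  # simulate the unrelated failure
                raise RuntimeError("access-right creation failed")
            conn.execute(
                "INSERT INTO access_rights (user_id, service) VALUES (?, ?)",
                (user_id, service),
            )
            return user_id
    except RuntimeError:
        return None  # nothing was written: no orphan user account

ok = create_user_with_default_right("alice")
bad = create_user_with_default_right("bob", service=None)  # rolled back entirely
n_users = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

With the rollback, the failed call leaves zero rows behind instead of an orphan account to clean up later.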

Transactions are a safety net against data corruption. When you are using Mongo, you have to implement that logic manually, again and again and again.

In my experience, most Mongo coders don't do it, and their DB is riddled with duplicates, incomplete inserts, and such. Transactions give you that for free.

There are many other wonderful things about transactions:

- free manual rollback. Even when you don't have an error, reverting an operation is one line of code, even if it affects many tables. Doing that with Mongo or Redis is terrible.

- free race condition handling. This one is huge, because race conditions are very, very hard to deal with. E.g.: you want to limit the number of comments per day for untrusted users. You just add the comment, THEN check if you have more comments than acceptable, and if so, roll it back. This way, if your system is flooded with comments, you never have one slipping through.

- free ability to split your process into small units. You can create subtransactions inside transactions, and roll back one part while accepting the other. This gives you huge granularity in your error handling.
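The insert-then-check rate-limit idea above can be sketched with Python's stdlib sqlite3 (schema and limit are invented for the example). SQLite serializes writers, so the count seen inside the transaction already includes the row just inserted; a stricter database like PostgreSQL would additionally need a suitable isolation level or row lock to make this fully race-free under concurrency:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (user_id INTEGER NOT NULL, body TEXT)")

DAILY_LIMIT = 3  # made-up policy for the sketch

def add_comment(user_id, body):
    """Insert first, then check the limit inside the same transaction;
    raise to roll back if the insert pushed the user over the limit."""
    try:
        with conn:  # commit on success, rollback on exception
            conn.execute(
                "INSERT INTO comments (user_id, body) VALUES (?, ?)",
                (user_id, body),
            )
            (count,) = conn.execute(
                "SELECT COUNT(*) FROM comments WHERE user_id = ?", (user_id,)
            ).fetchone()
            if count > DAILY_LIMIT:
                raise ValueError("daily comment limit exceeded")  # rollback
        return True
    except ValueError:
        return False

results = [add_comment(1, f"comment {i}") for i in range(5)]
stored = conn.execute(
    "SELECT COUNT(*) FROM comments WHERE user_id = 1"
).fetchone()[0]
```

The fourth and fifth inserts are rejected and rolled back, so the table never holds more than the limit for that user.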

Basically, you get precise, clean, strong, and easy reliability for close to nothing.

As soon as you have many untrusted users on your system, it becomes a fantastic asset.


You might want to read this article: http://hackingdistributed.com/2013/03/23/consistency-alphabe...

For instance, "C as in ACID" enables a CEO to say things like "the number of records in the personnel database should be N, the actual number of people I have hired." Or "number of cars produced should be less than or equal to the number of engines purchased." Or "at the end of the data analytics run, there should be a single summary object for each region, containing accurate statistics of sales figures." Or "every Bitcoin transfer that has been executed should be reflected in the corresponding users' wallets." In general, C as in ACID encompasses any kind of property that can be expressed over collections of records, where these properties are application-specific, user-defined, and checkable functions over total system state.
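A minimal sketch of such a user-defined invariant, using Python's stdlib sqlite3 (the wallet schema is invented to echo the article's transfer example): a CHECK constraint encodes the property, and the transaction guarantees that a violating update aborts as a whole rather than leaving the system in a state where the invariant is broken.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE wallets ("
    "owner TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO wallets VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(src, dst, amount):
    """Move funds atomically; the CHECK constraint aborts the whole
    transaction if either update would violate the invariant."""
    try:
        with conn:  # commit on success, rollback on IntegrityError
            conn.execute(
                "UPDATE wallets SET balance = balance - ? WHERE owner = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE wallets SET balance = balance + ? WHERE owner = ?",
                (amount, dst),
            )
        return True
    except sqlite3.IntegrityError:
        return False

ok = transfer("alice", "bob", 60)
overdraft = transfer("alice", "bob", 60)  # would leave alice at -20: rejected
balances = dict(conn.execute("SELECT owner, balance FROM wallets"))
```

The second transfer fails as a unit: neither balance changes, so the "no negative balances" property holds over the whole table at every commit point.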


That is a very good article! I bookmarked it in 2013 for that reason. :)

The thing just is... A CEO doesn't really care whether there are 1500 or 1502 workers. Some idiot drops one engine on the assembly floor, breaking it. "Accurate statistics" is an oxymoron. Bitcoin transfers are reliable (mathematically proven, even), but does it matter if your wallet shows a wrong number in the UI for a while? A good friend of mine went to an ATM on a Friday night and saw that his current balance was 2.4 billion euros. The next morning it was corrected, and nothing really bad happened.

For the business, it is more important to find the (very rare) screw-ups and correct them. You don't really have to do it in real time, either. A 1960s-style batch job is just fine.

The reason for the above is that a full-ACID system will fail given the same starting values. The difference is that it tells the end user something like "uh-oh, something went wrong and we can't process your transfer" (common with banking applications), whereas an eventually consistent model allows the process to continue, and the screw-up must be handled at a later stage. From the end user's point of view, the ACID version is usually much worse.


The flip side of that is your friend could have gone on a buying spree that night.

Yes, they would have tracked it down and eventually recovered all or most of the money, but at significant cost.

Had the issue never occurred due to constraints of the system not allowing it to, time and money would have been saved.

Allowing a gap between the error condition and its remediation creates an opportunity for exploitation.


Tritium and tritiated water can be separated from ordinary water somewhat easily: just freeze the mixture, and it forms layers. However, there probably is quite a bit of tritiated water.


However, there probably is quite a bit of tritiated water.

Yes. Here's the tank farm.[1]

It could probably be let out into the ocean without much harm, if not done all at once. But there's opposition to that. Meanwhile, it decays with a half-life of 12 years, so eventually it will be harmless. Frustratingly, the concentration of tritium is too low for commercial extraction.

[1] http://atomicinsights.com/wp-content/uploads/fukushimatanks_...


Also, SELinux is not just about files. It controls some capabilities, and it can be used to control the data inside applications as well.


Ditto! We are approaching temperatures that could actually support the hardiest of palms. We can start replacing the spruces with palms soon.

The downsides are ticks becoming common, warmer temperatures enabling more diseases to spread in both human and animal populations, a few native animals dying out (they can't breed without certain things such as ice, and their natural defense mechanisms stop working), and the Gulf Stream probably slowing and causing a new ice age. Ah yes, there's always that. But then again, it's not going to happen in my lifetime, so who cares... :)


Does this mean that some day I might be able to buy a cell phone that has a radio chip that is not full of buffer overflows and other types of flaws?


Combining the traditional Unix permission system with the ACL system was botched.

Both are good on their own. The hybrid model is complex and hard to manage, even though it can technically fulfill most requirements you can think of.

It's a real shame, because it means ACL usage is not going to take off any time soon.

