Hacker News | nkmskdmfodf's comments

Huh?

If you're going to write a book for other people to read, you ultimately want people to understand and recognize your ideas/the point of your work. It has nothing to do with morality.


I'm talking about the poster, who judged the Lolita author for being famous for Lolita. Thinking that judgement comes through fame is a morally depraved, evil outlook


Oh I see. I don't disagree with your point then, but the context here is 'immortal works' and that's definitely strongly correlated with the popularity of the work. 'Immortal work' ~= 'still popular long in the future'


How should one judge a writer if not by the body of their work?


You didn't judge the writer by their body of work. You judged the author by which works are famous.


If I write a cautionary tale about the seductive evils of fascism,

with an unreliable narrator who's a cog in the evil machine, and obviously deluded about it,

and deeply unpleasant detailed descriptions of the awful cruelty perpetuated by the nazi regime,

and some fascists really like my book, because detailed descriptions of awful nazi cruelty are their jam, and they really identify with my evil, unreliable, deluded narrator

and a lot of people haven't read my book, but they know the kind of person who likes my book - fascists

should I be judged by the popular reception of my work?


With this chain of events, at minimum you should be judged as someone who failed and accidentally created a fascist book.

But it is also a bit of a suspect chain of events, because it is quite unlikely that your book describes Jews in a sympathetic, human way. Fascists would not like that. You wrote a book about a suffering fascist, and per your book's ideology, fascism is bad when the fascists themselves suffer. That is just a critique of a concrete fascist regime from the point of view of a fascist.


Yes if your work encourages there to be more fascists


100%


Because the body can only extract so much energy per minute from all of the fat in your body. If that's not enough, muscle is used, etc.


> Because the body can only extract so much energy per minute from all of the fat in your body.

I was curious about this and went hunting for some rough data; this [0] suggests every kilogram of fat held can be drawn down at ~70 food-calories per day.

So someone with 25% body fat weighing 100kg (~220lb) could draw 1750 food calories per day, which strikes me as pretty ample unless they're also adding a bunch of physical activity.

[0] https://pubmed.ncbi.nlm.nih.gov/15615615/
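To make the arithmetic explicit, here's a quick sketch (the ~70 kcal/kg/day figure comes from [0]; the assumption that the limit scales linearly with total fat mass is mine):

```python
# Back-of-envelope check of the fat draw-down limit from [0].
# Assumption: the ~70 kcal/kg/day limit scales linearly with fat mass.

def max_fat_calories_per_day(weight_kg: float, body_fat_fraction: float,
                             kcal_per_kg_per_day: float = 70.0) -> float:
    """Maximum food calories per day the body can draw from fat stores."""
    fat_mass_kg = weight_kg * body_fat_fraction
    return fat_mass_kg * kcal_per_kg_per_day

daily = max_fat_calories_per_day(100, 0.25)  # 100 kg person, 25% body fat
print(daily)       # 1750.0 kcal/day
print(daily / 24)  # ~72.9 kcal/hour
```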


> which strikes me as pretty ample unless they're also adding a bunch of physical activity.

It seems likely we've evolved to reduce energy expenditure in other ways when we regularly engage in physical activity, too. Walk 20,000 steps or spend a couple of hours on the treadmill? Your body finds ways to reduce your energy expenditure elsewhere.

https://pmc.ncbi.nlm.nih.gov/articles/PMC4803033/


It's not going to be linear though. 1750 cal per day ~= 73 cal per hour. If, for example, you're already in a calorie deficit for the day, and then do a nice hour long workout (or demanding mental work), you're going to burn some muscle.


> to a large extent, it's directly because of hardware based privacy features.

First, this is 100% false. Second, security through obscurity is almost universally discouraged and considered bad practice.


Security through obscurity is highly effective.

Think of some common sense physical analogies: a hidden underground bunker is much less likely to be robbed than a safe full of valuables in your front yard. A bicycle buried deeply in bushes is less likely to be stolen than one locked to a bike rack.

Without obscurity it is straightforward to know exactly what resources will be required to break something- you can look for a flaw that makes it easy and/or calculate exactly what is required for enough brute force.

When you add the element of well executed obscurity on top of an also strong system, it becomes nearly impossible to even identify that there is something to attack, or to even start to form a plan to do so.

Combining both approaches is best, but in most cases I think simple obscurity is more powerful and requires fewer resources than non-obscure strength-based security.

I’ve managed public servers that stayed uncompromised without security updates for a decade or longer using obscurity: an archaic old Unix OS of some type that does not respond to pings or other queries, runs services on non-standard ports, and blocks routes to hosts that even attempt to scan the standard ports. Obviously also using a secure OS with updates on top of these techniques is better overall.
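The "block anyone who probes standard ports" policy described above could be sketched like this (a simulation of the rule only, not a real firewall; in practice you'd express it with pf/iptables rules, and the ports and IPs here are illustrative):

```python
# Sketch of a "ban on scan" policy: touching any decoy (standard) port
# blackholes the source IP; the real service hides on a non-standard port.

STANDARD_PORTS = {22, 23, 80, 443}   # ports we deliberately do NOT use
REAL_SSH_PORT = 64234                # actual (non-standard) service port

blocked_ips: set = set()

def handle_connection(src_ip: str, dst_port: int) -> str:
    """Return the action the host takes for an incoming connection attempt."""
    if src_ip in blocked_ips:
        return "drop"              # route already blackholed
    if dst_port in STANDARD_PORTS:
        blocked_ips.add(src_ip)    # probing a decoy port bans the IP for good
        return "drop"
    if dst_port == REAL_SSH_PORT:
        return "accept"
    return "ignore"                # no response, no RST: appear absent

# A scanner that sweeps port 22 first is banned before it ever
# reaches the real port; a user who knows 64234 connects directly.
print(handle_connection("198.51.100.7", 22))     # drop (and now banned)
print(handle_connection("198.51.100.7", 64234))  # drop (still banned)
print(handle_connection("203.0.113.5", 64234))   # accept
```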


I think the scenario that security through obscurity fails is when the end user is reliant on guarantees that don't exist.

For example, Intel's Management Engine: it was obscured very well. It wasn't found for years. Eventually people did find it, and you can't help but wonder how long it took for bad actors with deep pockets to find it. It's this obscured cubby hole in your CPU, but if someone could exploit it, it would be really difficult to find out because of Intel's secrecy on top of the feature.


It seems like people are really talking about different things with obscurity. Some are referring to badly designed weak systems, where secrecy and marketing hype is used to attempt to conceal the flaws. Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces- things like OpenBSD's memory allocation randomization, or the ancient method of simply hiding valuable physical things well and never mentioning them to anyone. I’ve found when it is impossible for an external bad actor to even tell what OS and services my server is running- or in some cases to even positively confirm that it really exists- they can’t really even begin to form a plan to compromise it.


> where secrecy and marketing hype is used to attempt to conceal the flaws.

That's literally the practical basis of security through obscurity.

> Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces- things like OpenBSDs memory allocation randomization,

That's exactly the opposite of 'security through obscurity' - you're literally talking about a completely open security mitigation.

> I’ve found when it is impossible for an external bad actor to even tell what OS and services my server is running- or in some cases to even positively confirm that it really exists- they can’t really even begin to form a plan to compromise it.

If one of your mitigations is 'make the server inaccessible via public internet', for example - that is not security through obscurity - it's a mitigation which can be publicly disclosed and remain effective for the attack vectors it protects against. I don't think you quite understand what 'security through obscurity[0]' means. 'Security through obscurity' in this case would be you running a closed third-party firewall on this server (or some other closed software, like macOS for example) which has 100 different backdoors in it - the exact opposite of actual security.

[0] https://en.wikipedia.org/wiki/Security_through_obscurity


You're misrepresenting my examples by shifting the context, and quoting a Wikipedia page that, at the very top of the article, gives two of the main examples I mentioned as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"

If you're not understanding how memory allocation randomization is security through obscurity- you are not understanding what the concept entails at the core. It does share a common method with, e.g. using a closed 3rd party firewall: in both cases direct flaws exist that could be overcome with methods other than brute force, yet identifying and specifying them enough to actually exploit is non-trivial.

The flaw in your firewall example is not using obscurity itself, but: (1) not also using traditional methods of hardening on top of it - obscurity should be an extra layer not an only layer, and (2) it's probably not really very obscure, e.g. if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.


> You're mis-representing my examples by shifting the context,

Specific example of where I did this?

> literally gives the same examples to two of the main ones I mentioned at the very top of the article as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"

I mean, I don't disagree that what you said about changing port numbers, for example, is security through obscurity. My point is that this is not any kind of defense from a capable and motivated attacker. Other examples like the OpenBSD mitigation you mentioned are very obviously not security through obscurity though.

> If you're not understanding how memory allocation randomization is security through obscurity- you are not understanding what the concept entails at the core.

No, you still don't understand what 'security through obscurity' means. If I use an open asymmetric key algorithm, the fact that I can't guess a private key does not make it 'security through obscurity'; it's the obscuring of the actual crypto algorithm that would make it 'security through obscurity'. Completely open security mitigations like the one you mentioned have nothing to do with security through obscurity.

> The flaw in your firewall example is not using obscurity itself, but: (1) not also using traditional methods of hardening on top of it

Sooo... you think adding more obscurity on top of a closed, insecure piece of software is going to make it secure?

> if an external person could infer what software you are using by interacting remotely,

There are soooo many ways for a capable and motivated attacker to figure out what software you're running. Trying to obscure that fact is not any kind of security mitigation whatsoever. Especially when you're dealing with completely closed software/hardware - all of your attempts at concealment are mostly moot - you have no idea what kind of signatures/signals that closed system exposes, you have no idea what backdoors exist, you have no idea what kind of vulnerable dependencies it has that expose their own signatures and have their own backdoors. Your suggestion is really laughable.

> not also using traditional methods of hardening on top of it

What 'traditional methods' do you use to 'harden' closed software/hardware? You literally have no idea what security holes and backdoors exist.

> if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.

Uhh yeah, now you're literally bringing up one of the most common arguments for why security through obscurity is bullshit. During WW1/WW2 security through obscurity was common in crypto - they relied on hiding their crypto algos instead of designing ones that would be secure even when publicly known. What happened is that enough messages, crypto machines, etc. were recovered by the other side to reverse these obscured algos and break them - since then crypto has pretty much entirely moved away from security through obscurity.


You are operating on a false dichotomy that the current best practices of cryptographic security, code auditing, etc. are somehow mutually exclusive with obscurity, and then arguing against obscurity by arguing for other good practices. They are absolutely complementary, and implementing a real world secure system will layer both- one starts with a mathematically secure heavily publicly audited system, and adds obscurity in their real world deployment of it.

If there are advantages to a closed source system, it is not in situations where the source is closed to you and contains bugs, but when it is closed to the attacker. If you have the resources and ability to, for example, develop your own internally used but externally unknown, yet still heavily audited and cryptographically secure system, it is going to be better than an open source tool.


> They are absolutely complementary, and implementing a real world secure system will layer both- one starts with a mathematically secure heavily publicly audited system, and adds obscurity in their real world deployment of it.

Ok, let's start with a 'mathematically secure heavily public audited system' - let's take ECDSA, for example - how will you use obscurity to improve security?

> If you have the resources and ability to, for example, develop your own internally used but externally unknown, but still heavily audited and cryptographically secure system, is going to be better than an open source tool.

Literally all of the evidence we have throughout the history of the planet says you're 100% wrong.


> Literally all of the evidence we have throughout the history of the planet says you're 100% wrong

You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real world situations- especially things like real life high stakes life or death situations.

I am satisfied that your perspective makes the most sense for low stakes broad deployments like software releases, but not for one off high stakes systems.

For things like ECDSA, like anything else you implement obscurity on a one off basis tailored to the specific use case- know your opponent and make them think you are using an entirely different method and protocol that they’ve already figured out and compromised. Hide the actual channel of communication so they are unable to notice it exists, and over that you simply use ECDSA properly.

Oh, and store your real private key in the geometric design of a giant mural in your living room, while your house and computers are littered with thousands of wrong private keys on ancient media that is expensive to extract. Subscribe to and own every key wallet product or device, but actually use none of them.


> You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real world situations- especially things like real life high stakes life or death situations.

Nah, you're just saying a lot of stuff that's factually incorrect and just terrible advice overall. You lack understanding of what you're talking about. And the stakes are pretty irrelevant to whether a system is secure or not.

> For things like ECDSA, like anything else you implement obscurity on a one off basis tailored to the specific use case- know your opponent and make them think you are using an entirely different method and protocol that they’ve already figured out and compromised.

You're going to make ECDSA more secure by making people think you're not using ECDSA? That makes so little sense in so many ways. Ahahahahaha.


I very well may be wrong, but if so you are not aware of how, and I will need to find someone else to explain it to me. I’ve been interested for a while in having a serious debate with someone that understands and advocates for the position you claim to have- but if you understood it you would be able to meaningfully defend it rather than using dismissive statements.


You do you champ.


> Security through obscurity is highly effective.

If you say so.

> Think of some common sense physical analogies: a hidden underground bunker is much less likely to be robbed than a safe full of valuables in your front yard. A bicycle buried deeply in bushes is less likely to be stolen than one locked to a bike rack.

That's not what security through obscurity is. If you want to make an honest comparison - what is more likely to be secure - an open system built based on the latest/most secure public standards, or a closed system built based on (unknown)? The open system is going to be more secure 99.999% of the time.

> Without obscurity it is straightforward to know exactly what resources will be required to break something- you can look for a flaw that makes it easy and/or calculate exactly what is required for enough brute force.

The whole point of not relying on obscurity is that you design an actually secure system even assuming the attacker has a full understanding of your system. That is how virtually all modern crypto that's actually secure works. Knowing your system is insecure and trying to hide that via obscurity is not security.

> it becomes nearly impossible to even identify that there is something to attack

That's called wishful thinking. You're conflating 'system that nobody knows about or wants to attack' with 'system that someone actually wants to attack and is defending via obscurity of its design'. If you want to make an honest comparison you have to assume the attacker knows about the system and has some motive for attacking it.

> but in most cases I think simple obscurity is more powerful and requires less resources than non obscure strength based security.

Except obscurity doesn't actually give you any security.

> I’ve managed public servers that stayed uncompromised without security updates for a decade or longer using obscurity: an archaic old Unix OS of some type that does not respond to pings or other queries, runs services on non-standard ports, and blocks routes to hosts that even attempt scanning the standard ports will not be compromised.

That's a laughably weak level of security and does approximately ~zero against a capable and motivated attacker. Also, your claim of 'stayed uncompromised' is seemingly based on nothing.


You are begging the question- insisting that obscurity isn't security by definition, instead of actually discussing its strengths and weaknesses. I didn't "say so"- I gave specific real world examples, and explained the underlying theory- that being unable to plan or quantify what is required to compromise a system makes it much harder.

Instead of, for example in your last example simply labeling something you seem to not like as "laughably weak"- do you have any specific reasoning? Again, I'd like to emphasize that I don't advocate obscurity in place of other methods, but on top of additional methods.

Let's try some silly extreme examples of obscurity. Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates- and it has only one open port: SSH, reconfigured to run on port 64234, where attempting to scan any other port immediately and permanently drops the route to your IP. The machine does not respond to pings, and does other weird things like only being physically connected for 10 minutes a day at seemingly random times only known by the users, with a new IP address each time that is never reused. On top of that, the code and all commands of the entire OS have been secretly translated into a dead ancient language so that even with root it would take a long time to figure out how to work anything. It is a custom secret hacked fork of SSH only used in this one spot that cannot be externally identified as SSH at all, and exhibits no timing or other similar behaviors that would identify the OS or implementation. How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?

If you take the alternate model, and just install a mainstream open source OS and stay on top of all security updates the best you can, all a potential hacker needs to do is quickly exploit a new update before you actually get it installed, or review the code to find a new one.

Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?


> Instead of, for example in your last example simply labeling something you seem to not like as "laughably weak"- do you have any specific reasoning?

'without security updates for a decade or longer' - do I really need to go into detail on why this is hilariously terrible security?

'runs services on non-standard ports,' - ok, _maybe_ you mitigated some low-effort automated scans, does not address service signatures at all, the most basic nmap service detection scan bypasses this already.

'blocks routes to hosts that even attempt scanning the standard ports ' - what is 'attempt scanning the standard ports' and how are you detecting that- is it impossible for me to scan your server from multiple boxes? (No, it's not, it's trivially easy.)

> Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates-, and it has only one open port- SSH,

Ok, so already far more secure than what you said in your previous comment.

> only being physically connected for 10 minutes a day at seemingly random times only known by the users

Ok, so we're dealing with a server/service which is vastly different in its operation from almost any real-world server.

> only known by the users, with a new IP address each time that is never reused

Now you have to explain how you force a unique IP every time, and how users know about it.

> On top of that, the code and all commands of the entire OS has been secretly translated into a dead ancient language so that even with root it would take a long time to figure out how to work anything

Ok, so completely unrealistic BS.

> It is a custom secret hacked fork of SSH only used in this one spot that cannot be externally identified as SSH at all

It can't be identified, because you waved a magic wand and made it so?

> and exhibits no timing or other similar behaviors to identify the OS or implementation

Let's wave that wand again.

> How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?

Many ways. But let me use your magic wand and give you a much better/secure scenario - 'A server which runs fully secure software with no vulnerabilities or security holes whatsoever.' - Makes about as much sense as your example.

> Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?

The answer comes down to what 'high security' actually means in each situation. You don't seem to get it.


This is a common saying but in reality, security through obscurity is widely deployed and often effective.

More pragmatic advice would be to not rely solely on security through obscurity, but rather to practice defence in depth.


Security by insecurity is also 'widely deployed and often effective'.


Obfuscation is not security, so there can't be "security through obscurity".

Widely deployed doesn't mean it's a positive practice, and effective? It just can't be, as it's not security. People really need to pay more attention to these things, or else we DO get nonsense rolled out as "effective".


Where did you come up with "security through obscurity" in that previous comment? It said nothing about using an obscurity measure. He was talking about hardware based privacy features.


> Second, security through obscurity is almost universally discouraged and considered bad practice.

This is stupid advice that is mindlessly repeated. Security by obscurity only is bad, sure. Adding obscurity to other layers of security is good.

Edit: formatting


No, that's just plain wrong in this case. It makes proper security research much harder and what's going on with your hardware less obvious.


Nah, you have no idea what you're talking about.


What do you mean by considered bad practice? By whom? I would think this is one of the reasons that my Macs since 2008 have just worked without any HW problems.


> even with open-source, you're never going to sit and read the code (of the program AND its dependency tree)

You don't have to. The fact that it's possible for you to do so, and the fact that there are many other people in the open source community able to do so and share their findings, already makes it much more trustworthy than any closed Apple product.


THIS!

Back when I was new to all of this, the idea of people evaluating their computing environment seemed crazy!

Who does that?

Almost nobody by percentage, but making sure any of us CAN is where the real value is.


Jia Tan has entered the chat.


I hope you bring that up as an example in favor of open-source, as an example that open-source works. In a closed-source situation it would either not be detected or never reach the light of day.


In a closed source situation people using a pseudonym don't just randomly approach a company and say "hey can I help out with that?"

It was caught by sheer luck and chance, at the last minute - the project explicitly didn't have a bunch of eyeballs looking at it and providing a crowd-sourced verification of what it does.

I am all for open source - everything I produce through my company to make client work easier is open, and I've contributed to dozens of third party packages.

But let's not pretend that it's a magical wand which fixes all issues related to software development - open source means anyone could audit the code. Not that anyone necessarily does.


> Apple has full-disk encryption backed by the secure enclave so its not by-passable.

Any claims about security of apple hardware or software are meaningless. If you actually need a secure device, apple is not an option.


> Any claims about security of apple hardware or software are meaningless. If you actually need a secure device, apple is not an option.

I don't think this is precise, but the constraints seem a bit vague to me. What do you consider to be in the list of secure devices?


I'm not even here to troll, if you can give details on the list and why that'd be awesome


Seconded


It's also possible to say "nothing" and just leave it at that. A lot of people are desperate to defend Apple by looking at security from a relative perspective, but today's threats are so widespread that arguably Apple is both accomplice and adversary to many of them. Additionally, their security stance relies on publishing Whitepapers that have never been independently verified to my knowledge, and perpetuating a lack of software transparency on every platform they manage. Apple has also attempted to sue security researchers for enabling novel investigation of iOS and iPadOS, something Google is radically comfortable with on Android.

The fact that Apple refuses to let users bring their own keys, choose their disk encryption, and verify that they are secure makes their platforms no more "safe" than BitLocker, in a relative sense.


I do not believe I understand your comment.

Earlier, you mentioned people defending Apple security in a relative sense.

Later, you mentioned that Apple refusing to let users verify security makes them no more safe in a relative sense.

Are you just talking about Apple employing security by obscurity?

I just want to understand your point better, or confirm my take is reasonable.

And for anyone reading, for the record I suppose, I do not consider much of anything secure right now. And yes, there are degrees. Fair enough.

I take steps in my own life to manage risk and keep that which needs to be really secure and or private off electronics or at the least off networks.


Using fully open hardware and software I guess ?


> And for me, the idea that they might replace my aging phone with a newer unit, is a big plus.

It's called a warranty, and it's not at all exclusive to Apple whatsoever?

> Those people should stick to Linux, so that they can have a terrible usability experience ALL the time, but feel more "in control," or something.

Maybe you should stick to reading and not commenting, if this is the best you can do.


> I am thrilled to shell out thousands and thousands of dollars to purchase a machine that feels like it really belongs to me, from a company that respects my data and has aligned incentives.

You either have very low standards or a very low understanding if you think a completely closed OS on top of completely closed hardware somehow means it 'really belongs' to you, or that your data/privacy is actually being respected.


"completely closed OS" is not accurate. Apple releases a surprising amount of source code.

https://opensource.apple.com/releases/


The closed part has full control over your system, so the released code is useless for privacy/ownership.


What's the alternative? Linux? Maybe OP likes that their OS doesn't crash when they close their laptop lid.


Crash? I understand people's gripes with ui, hardware compatibility, etc, but stability? All my Linux machines have always been very stable.


Me too. So stable that I'm becoming less and less tolerant of annoying issues in Windows, and ever more motivated to make consumer Linux more available and reliable.


It's not that bad anymore (e.g. with System76), but I understand the point.

I disagree with OP celebrating Apple to be the least evil of the evils. Yes, there are not many (if any) alternatives, but that doesn't make Apple great. It's just less shitty.


It feels like a lot of people in these threads form their opinions of what desktop Linux is like these days based on one poor experience from back in 2005.


This is an Apple zealot thread. They aren’t reasonable or actually interested in computing outside of Apple. They will talk shit about Linux all day in this thread or any other one, but if you criticize Apple it’s to the guillotine for your comment. I’m surprised yours is still standing.


Yesterday I booted into my Kubuntu installation. It's on a rusty spinning disk on purpose. Even after half a century since the Mother of All Demos, we still cannot preload the fucking "start menu" or whatever the more correct technical term is for that thing that _eventually_ shows the applications when you click on it.


Is there a bug for this? Or are you just complaining, thinking someone in the software contributor world can hear you?

I can’t even find how to open the Applications view in Finder 9/10 times on a Mac.


You're ignoring interest. If there is 100T of outstanding debt at 3% annual interest, that's 3T in interest per year. At a certain point it may be impossible to keep the overall interest payments going and the debt graph will collapse like dominoes.

The great recession was also mainly caused by debts within the US economy.
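To illustrate the compounding worry with toy numbers (hypothetical and hugely simplified; real sovereign debt dynamics also depend on GDP growth, inflation, and refinancing rates):

```python
# Toy model: if the 3T/year of interest is itself financed with new
# borrowing, the debt compounds instead of staying flat.
debt = 100.0  # trillions, hypothetical
rate = 0.03   # 3% annual interest

for year in range(24):
    debt *= 1 + rate  # unpaid interest rolls into the principal

print(round(debt, 1))  # ~203.3: the debt roughly doubles in 24 years (72/3)
```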


So country A pays 3% to country B, who pays 3% to country C, who pays 3% to country A


What happens when country A can't pay?



Very bad things, or maybe very good things, depends on your perspective, a reset in any case and a restructuring of debt, usually just after a crash.


Which isn’t free either. Consider the interest levels of countries considered unstable vs those considered stable (UK/US compared to Türkiye/Argentina for example). It massively changes what actions the respective governments can take.


You’re ignoring inflation. At 2.5% inflation, this is a 0.5% real interest rate.


Inflation can only come with an increase in the money supply and the money supply only increases by issuing debt, which carries more interest.


Not true. Inflation is defined as the increase in the cost of goods. This can come about by an increase in money supply, a reduction in demand for the money that exists, a restriction in supply of certain goods or an increase in the demand for enough key goods. I’m not an economist so I’m sure one could come up with more examples.


No, inflation is only because of an increase in the money supply, which is increased by lending in modern economies. Rulers will make false definitions and false measurements in order to try to hide this. Just as Roman emperors would debase the metal in their currency and demand that people treated the money as pure. But the truth is still the truth, no matter the lies.


> No, inflation is only because of an increase in the money supply

Japan increasing its money supply but having long stretches of low inflation (and even deflation) for the last ~twenty years:

* https://fred.stlouisfed.org/graph/?g=1680i



The pope says he's appointed by God. So it must be true...

Your sources are as truthful about inflation as asking any general if he's fighting for the good side of a war.


> The pope says he's appointed by God. So it must be true...

Carlos Jobim says he’s the expert on finance and monetary policy so it must be true. He doesn’t need things like evidence, experts, research or data. His word is the truth.


Your sources are organizations that derive all their power from inflating the monetary supply. They have reason to lie always.


I think you need to get off r/Superstonk and return to the real world.


During the era of the Soviet Union, all institutions and all science within their sphere of influence agreed unanimously that socialism was a superior system in all ways measurable. The people believed it. Probably most academics believed it, as well as leaders of institutions. In the highest echelon they were painfully aware of the shortcomings of their system, but it was still inconceivable to admit what was really going on.

In the future you will probably read books and memoirs from international banking leaders, speaking more unfiltered about how they had to keep the truth from the population, just as we today can read the memoirs of previous Soviet leaders admitting their lies and failure.

When you talk about economists as an authority, do you think anybody could get tenure or a degree unless they believe in inflation as a mysterious force and not man-made? Probably as likely as somebody getting a degree in political science in the USSR without being a socialist.


> In the future you will probably read books and memoirs from international banking leaders, speaking more unfiltered about how they had to keep the truth from the population, just as we today can read the memoirs of previous Soviet leaders admitting their lies and failure.

No, we really won't.


Ok, you are right, let's end this "discussion".


We agree on one thing: this wasn’t a discussion. It was you embarrassing yourself in public.


When people don't want to talk to you anymore, it doesn't always mean that you're right. You should take that advice into real life.


> Why not build own version?

Probably because it costs like 10k per line of code for apple to implement anything decent. (not claiming that magnet is decent)


Do you have a source? Preferably one that also says that Apple doesn’t spend that much when using code from companies they have bought?


I imagine this situation is kind of unique. Once a company finds out the features they sell will be integrated into the 1st party platform, and they have a chance to get out of the business, and they know they’ve got competitors that Apple could turn to next, I imagine Apple would be getting a steep discount. Especially if it’s an acqui-hire — peace of mind the employees will be taken care of.

