Hacker News | _m8fo's comments

As far as I know they don’t have guarantees. They simply make good on refunding if there’s a downstream issue (e.g. hotel is overbooked).


How has using foundationdb been?

What have been the biggest pros? The biggest cons? Would you use it again? (One alternative could have been TiKV.)


This is a level of support all software companies should aspire to. It's also what the enterprise likes to see and will give you money for.


I continue to choose AWS for the companies I work at not because their offerings are better, but because their support is so far superior.


I had an interview somewhere that was using Google Cloud. My responsibility would have been to own all of that (the CTO was the one interviewing me), but I took the interview anyway because I wanted to understand why they chose it and whether or not I would actually have the power to change it (given that I would own it).

I didn't have that power, so I didn't take the job, and Google Cloud was a major reason for it. I did not want my job at the mercy of Google's decisions. I just don't trust them.

AWS may not be perfect but I don't worry as much that a decision on their part is going to really screw me over.


I’m not sure I understand Apple’s logic here. Are iCloud Photos in their data centers not scanned? Isn’t everything by default for iCloud users sent there automatically to begin with? Doesn’t the same logic around slippery slope also apply to cloud scans?

This is not to say they should scan locally, but my understanding of CSAM was that it would only be scanned on its way to the cloud anyways, so users who didn’t use iCloud would’ve never been scanned to begin with.

Their new proposed set of tools seems like a good enough compromise from the original proposal in any case.


You are correct: the original method would only have scanned items destined for iCloud, and only transmitted a hash for matching items. And yes, similar slippery slope arguments exist with any provider that stores images unencrypted. They are all scanned today, and we have no idea what they are matched against.

I speculated (and now we know) when this new scanning was announced that it was in preparation for full E2EE. Apple came up with a privacy-preserving method of trying to keep CSAM off their servers while also giving users E2EE.

The larger community arguments swayed Apple from going forward with their new detection method, but did not stop them from moving forward with E2EE. At the end of the day they put the responsibility back on governments to pass laws around encryption - where they should be, though we may not like the outcome.


There are also ways to detect matches even with e2ee iirc and I suspect they found doing that instead easier than dealing with the previous approach.

At the time I also thought it was obvious it was in preparation for e2ee (despite loud people on HN who disagreed).

I do wonder if they had intended to have it be on by default, though. Maybe not, since it's probably better for most users to have a recovery option.


> There are also ways to detect matches even with e2ee

By definition, encryption (with unique user keys) means you can't infer nor check what the content of the message is. Not without client cooperation, which is what this feature would have been.


This is what I was recalling, this method gives you a clever way to do it using the file itself as the key:

> “Convergent encryption solves this problem in a very clever way:

“The way to make sure that every unique user with the same file ends up with an encrypted version of that file that is also identical is to ensure they use the same key. However, you can’t share keys between users, because that defeats the entire point; you need a common reference point between users that is unknown to anyone but those users.

“The answer is to use the file itself: the system creates a hash of the file’s content, and that hash (a long string of characters derived from a known algorithm) is the key that is used to encrypt said file.

“If every iCloud user uses this technique — and given that Apple implements the system, they do — then every iCloud user with the same file will produce the same encrypted file, given that they are using the same key (which is derived from the file itself); that means that Apple only needs to store one version of that file even as it makes said file available to everyone who “uploaded” it (in truth, because iCloud integration goes down to the device, the file is probably never actually uploaded at all — Apple just includes a reference to the file that already exists on its servers, thus saving a huge amount of money on both storage costs and bandwidth).

“There is one huge flaw in convergent encryption, however, called “confirmation of file”: if you know the original file you by definition can identify the encrypted version of that file (because the key is derived from the file itself). When it comes to CSAM, though, this flaw is a feature: because Apple uses convergent encryption for its end-to-end encryption it can by definition do server-side scanning of files and exploit the “confirmation of file” flaw to confirm if CSAM exists, and, by extension, who “uploaded” it. Apple’s extremely low rates of CSAM reporting suggest that the company is not currently pursuing this approach, but it is the most obvious way to scan for CSAM given it has abandoned its on-device plan.”

https://stratechery.com/2022/apple-icloud-encryption-csam-sc...
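
The scheme in the excerpt can be sketched in a few lines. This is a toy illustration, not Apple's implementation: the keystream here is just SHA-256 run in counter mode standing in for a real cipher, and the function name is made up.

```python
import hashlib

def convergent_encrypt(data: bytes) -> tuple[bytes, bytes]:
    """Toy convergent encryption: the key is the hash of the file itself.

    The keystream is SHA-256 run in counter mode, standing in for a
    real cipher; a production system would use something like AES."""
    key = hashlib.sha256(data).digest()   # key derived from the content
    keystream = b""
    counter = 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    ciphertext = bytes(a ^ b for a, b in zip(data, keystream))
    return key, ciphertext

# Two users with the same file produce identical ciphertexts, so the
# server can deduplicate without ever seeing the plaintext...
_, ct_a = convergent_encrypt(b"the same holiday photo")
_, ct_b = convergent_encrypt(b"the same holiday photo")
assert ct_a == ct_b

# ...and anyone who already holds a copy of a file can recognize its
# encrypted form: the "confirmation of file" property from the excerpt.
```

The deduplication and the "confirmation of file" flaw are the same property viewed from two angles: identical plaintext always yields identical ciphertext.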


That makes me happy, because 12 years ago here on HN I posted a comment [1] outlining how a Dropbox-like service could be implemented that stored user files encrypted, with the service not having the keys, yet allow for full deduplication when different users were storing the same file, while still supporting the normal Dropbox sharing features.

The file encryption part was based on using a hash of the file as the key.

It's always nice to later find out that one's quick amateur idea turns out to be an independent rediscovery of something legit. Now that I've learned it is called "convergent encryption", Googling tells me it goes back to 1995 and a Stac patent.

[1] https://news.ycombinator.com/item?id=2461713


This still suffers the same problems as the original proposal. Specifically, Apple could still be pressured or forced by governments to check for non-CSAM images. And using cryptographic hashing means they can’t detect altered files, while using perceptual hashing leaves them open to false positives.


That’s not what’s commonly understood to be a modern cipher. It would be trivial for a government to make a list of undesired messages/images and find everyone that has forwarded it.

https://en.wikipedia.org/wiki/Chosen-plaintext_attack


Yeah that’s literally the point in the CSAM case.

For regular people taking photos the government won’t have their plaintext.

For popular media people are uploading the same copy of it saves a lot of bandwidth.


> that hash is the key that is used to encrypt said file.

So every file has a unique key? So thousands or tens of thousands of keys would need to be in the keychain, mapped to the file name.

And if one person's keys are leaked, they can be used to prove that other people had the same file.

No, this doesn't sound well thought out


Could one defeat this by changing a single byte in their file?


Bit, yes. Though on a moderately large file it would be easy to brute force all one-bit modifications, and then the effort grows exponentially (basically) in the number of bits flipped, so you’ll want to do more than a few.
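
A toy sketch (names invented) shows both sides of this: the hash, and thus the convergent-encryption key, changes completely after a one-bit flip, but a server that already has the plaintext can cheaply brute-force every single-bit variant.

```python
import hashlib

original = b"a known file the scanner is looking for"
modified = bytearray(original)
modified[0] ^= 0x01  # flip a single bit

# Avalanche effect: the hashes (and thus the derived keys) are
# completely unrelated after a one-bit change.
h_orig = hashlib.sha256(original).hexdigest()
h_mod = hashlib.sha256(bytes(modified)).hexdigest()
assert h_orig != h_mod

def find_flip(target_hash: str, known_file: bytes):
    """Brute-force every single-bit variant of a known file.

    For an n-byte file there are only 8*n candidates, so this is
    cheap for a server that already holds the plaintext."""
    data = bytearray(known_file)
    for i in range(len(data) * 8):
        data[i // 8] ^= 1 << (i % 8)
        if hashlib.sha256(bytes(data)).hexdigest() == target_hash:
            return i                      # found which bit was flipped
        data[i // 8] ^= 1 << (i % 8)      # undo the flip
    return None

assert find_flip(h_mod, original) == 0  # recovers the flipped bit
```

Flipping k bits grows the search space to roughly C(8n, k) candidates, which is why more than a few flips are needed to make the brute force impractical.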


> At the time I also thought it was obvious it was in preparation for e2ee

I thought the same.

> despite loud people on HN who disagreed

Yeah, loud people be like that, but this is really Apple’s communication fault. They could have started with: “hey, we want to provide e2e encrypted storage; the price of it will be that we need to scan what you upload for CSAM”.


In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.

To counter the "think of the children" argument governments use to justify surveillance, Apple tried scanning stuff on-device, but the internet threw a collective hissy fit of intentionally misunderstanding the feature and it was quickly scrapped.


> In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.

They basically did. If you turn on Advanced Data Protection, you get all of the encryption benefits, sans scanning. The interesting thing is that if you turn on ADP though, binary file hashes are unencrypted on iCloud, which would theoretically allow someone to ask for those hashes in a legal request. But it's obviously not as useful for CSAM detection, as, say, PhotoDNA hashes. See: https://support.apple.com/en-us/HT202303


Nice! TIL this exists.

For anyone else wondering, to enable it go to iOS Settings -> iCloud, where you'll see "Advanced Data Protection." Toggle it on to create a recovery key; after saving the key somewhere safe you'll be prompted to input it correctly, then return to the iCloud settings page, toggle it one more time, and enter your recovery key again to confirm.


> but the internet got a collective hissy-fit of intentionally misunderstanding the feature

how was it misunderstood? your device would scan your photos and notify apple or whoever if something evil was found. wasn't that what they were trying to do?


Your device would scan your photo at the point of you uploading it to the cloud and then it could encrypt it before sending it to the cloud. That meant that Apple's cloud servers didn't need to be able to scan it to comply with US Govt "recommendations" for cloud providers.

Whereas right now all the other cloud providers just send the photo as-is and scan it on the cloud servers.

With Apple's approach, the cloud servers don't get to look at every single one of your photos like cloud vendors do today, scanning happens within the privacy of your own phone, and only known-kiddy-porn signatures are flagged.

Apple came up with a way to make things way more private, but the concept of your own device working "against" you if you happen to be a pedophile was too much of a leap.


Your device would've scanned your photos ONLY if you would've uploaded them to Apple's cloud service anyway.

And it wouldn't have notified Apple of "something evil", just specifically known and human-verified actual real child abuse photos. And not even that, it would have needed multiple matches of those very real and verified abuse photos before it flagged them so that a real human could see a "visual derivative" of the photos.

Only if those multiple matches of derivatives were deemed as actual, very real, child pornography the authorities would've been called.

But nope. Now they just scan ALL your data in the cloud when the authorities demand it. And that's somehow better according to the internet in a way I still can't understand.


> so that even they can't access it.

> scanning stuff on-device

What do you think they were going to do once the scanning turned up a hit? Access the photos? Well that negates the first statement.


That was explained in the original design. Each possible match would count, let’s call it a “point”.

Once you reached a certain threshold (the number was not given) it would trigger an alert in a system at Apple.

Each report contained a bit of data that wasn’t enough to identify someone. Once enough “points” from one account accumulated they’d have enough to identify who you were, which files matched, and presumably the full decryption key.

I believe the plan was the suspect files would be decrypted and compared against the real CSAM signatures. If a close match was found it would be sent to NCMEC for confirmation and law enforcement actions.

The threshold was to prevent false positives from the perceptual hashes, like the Google AI scanning incident. Reportedly, offenders rarely have just one or two pictures; people with CSAM tend to have a lot, so they’d show up “bright red”. They probably didn’t want to reveal the number so people wouldn’t try to keep only that many pictures on their phone to avoid detection.
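
The "points" mechanism maps onto threshold secret sharing, which is the primitive Apple's whitepaper described. A generic Shamir t-of-n sketch follows; the threshold of 10 and the other numbers are invented for illustration, and this is not Apple's actual protocol.

```python
import random

random.seed(0)      # deterministic for the demo
P = 2**61 - 1       # prime modulus; all arithmetic is in GF(P)

def make_shares(secret: int, threshold: int, n: int):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 yields the secret."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

# Hypothetical numbers: the account's decryption secret is split with a
# threshold of 10; each flagged photo's safety voucher carries one share.
secret = 123456789
shares = make_shares(secret, threshold=10, n=30)
assert recover(shares[:10]) == secret   # 10 matches: server can decrypt
too_few = recover(shares[:9])           # 9 matches: almost surely garbage
```

Below the threshold the shares are information-theoretically useless, which is why the server learns nothing about an account with only a handful of (possibly false-positive) matches.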


> What do you think they were going to do once the scanning turned up a hit? Access the photos? Well that negates the first statement.

In the whitepaper, the cryptography required that Apple have multiple different photodna (or whatever the name was for the on-device one) matches before they could unwrap the user's message containing these suspected CSAM photos and to then send them to NCMEC.


Also, IIRC, it wasn’t the raw photos. It was small thumbnails of them.


"reduced-quality copy" was the wording in the whitepaper IIRC.

So the resolution most likely would've been the same, but the detail blurred so that the poor human agent wouldn't have to see actual CSAM, just enough to make a call whether it is or isn't a likely match.



No. A small thumbnail “visual derivative” is included with the neural hash, which is unlocked (only for matches) only once the number of matches exceeds a threshold.

This was all outlined in the first two pages of the white paper, and explained in more detail further down.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


Who is this "they" who will access the photos on-device?


> I’m not sure I understand apples logic here. Are iCloud Photos in their data centers not scanned? Isn’t everything by default for iCloud users sent there automatically to begin with? Doesn’t the same logic around slippery slope also apply to cloud scans?

I don’t see the problem with this status quo. There is a clear demarcation between my device and their server. Each serving the interests of their owner. If I have a problem with their policy, I can choose not to entrust my data to them. And luckily, the data storage space has heaps of competitive options.


This status quo is that a lot of countries want to use the CSAM argument to push privacy-invasive technology (cough UK) like e.g. forcing companies to allow the government to break E2EE to catch CSAM distributors. Apple made this feature while planning to move iCloud Photos to E2EE so that they could argue "look, we still catch x CSAM distributors with n < 0.x% false positive rate, even with E2EE photos. therefore you don't need to pass these laws that break E2EE."


I know "give them an inch, they take a mile" is a reductive comparison but I really can't see this way of thinking going any other way in the long term.


It isn't reductive. At the end of the day, that's exactly what it comes down to.


> the data storage space has heaps of competitive options

The generic space does, yes. But if you want native integration with iOS, your only choice is iCloud. It would certainly be nice if this was an open protocol where you could choose your own storage backend. But I think the chances of that ever happening are pretty much zero.


Precisely! The software running on the phone should be representing the owner of the phone, period. We begrudgingly accept cloud scanning because that ship has already sailed, despite it being a violation of the analog of fiduciary duty. But setting the precedent that software on a user's device should be running actions that betray the user is from the same authoritarian vein as remote attestation. The option ignored by the "isn't this a good tradeoff" question is one where the device encrypts files before uploading them to iCloud, iCloud may scan the encrypted bits anyway to do their legal duty, and that's the end of the story. This is what we'd expect to be happening if device owners' interests were being represented by the software on the device, and so we should demand no less despite the software being proprietary.


1. What you’re asking for (“The option … where the device encrypts files before uploading them to iCloud, iCloud may scan the encrypted bits anyway to do their legal duty, and that's the end of the story.”) is impossible.

2. The division you envisage (“The software running on the phone should be representing the owner of the phone, period.”) is wishful thinking. Do you think the JavaScript in your browser does only things in your interest?


A state of affairs where users' devices encrypt files, and then iCloud scans the stored blobs to perform a perfunctory compliance check is clearly not impossible. So please describe what you mean.

Web javascript is one of the places the battle is being fought. Users are being pushed into running javascript (and HTML) that acts directly against our own interests (eg ads, surveillance, etc). Many of the capabilities exploited by the hostile code should be considered browser security vulnerabilities, but the dynamic is not helped by one of the main surveillance companies also making one of the main browsers.

But regardless of the regime the authoritarians are trying to push, the computer-represents-user model is what we should aspire to - the alternative is computational disenfranchisement.


> The division you envisage (“The software running on the phone should be representing the owner of the phone, period.”) is wishful thinking.

In this specific case it is not wishful thinking.

The feature got scrapped. Users and people who support privacy won.


You sure about that? Like really sure? Like you have definitive evidence that this assertion is true. Or are you placing faith in the news you read?


> Are iCloud Photos in their data centers not scanned?

No outright statement confirming or denying this has ever been made, to my knowledge, but the implication, based both on Apple's statements and the statements of stakeholders, is that this isn't currently the case.

This might come as a surprise to some, because many companies scan for CSAM, but that's done voluntarily because the government can't force companies to scan for CSAM.

This is because, based on case law, companies forced to scan for CSAM would be considered deputized, and thus it would be a breach of the 4th Amendment's safeguards against "unreasonable search and seizure".

The best the government can do is force companies to report "apparent violations" of CSAM laws. This seems like a distinction without a difference, but the difference is between being required to actively search for it (and thus becoming deputized) and reporting it when you come across it.

Even then, the reporting requirement is constructed in such a way as to avoid any possible 4th amendment issues. Companies aren't required to report it to the DOJ, but rather to the NCMEC.

The NCMEC is a semi-government organization, autonomous from the DOJ, albeit almost wholly funded by the DOJ, and they are the ones that subsequently report CSAM violations to the DOJ.

The NCMEC is also the organization that maintains the CSAM database and provides the hashes that companies, who voluntarily scan for CSAM, use.

This construction has proven to be pretty solid against 4th amendment challenges: courts have historically found that the separation between companies and the DOJ, and the fact that only confirmed CSAM makes its way to the DOJ after review by the NCMEC, create enough distance between the DOJ and the act of searching through a person's data that there aren't any 4th amendment concerns.

The Congressional Research Service did a write-up on this last year for those who are interested[0].

Circling back to Apple, as it stands there's nothing indicating that they already scan for CSAM server-side and most comments both by Apple and child safety organizations seem to imply that this in fact is currently not happening.

Apple's main concerns however, as stated in the letter by Apple, echo the same concerns by security experts back when this was being discussed. Namely that it creates a target for malicious actors, that it is technically not feasible to create a system that can never be reconfigured to scan for non-CSAM material and that governments could pressure/regulate it to reconfigure it for other materials as well (and place a gag order on them, prohibiting them to inform users of this).

At the time, some of these arguments were brushed off as slippery slope FUD. Then the UK started considering something that would defy the limits of even the most cynical security researcher's nightmare: a de facto ban on security updates if it just so happens that the UK's intelligence and law enforcement services are currently exploiting the security flaw that the update aims to patch.

Which is what Apple references in their response.

0: https://crsreports.congress.gov/product/pdf/LSB/LSB10713


To add a bit more color, 18 U.S. Code § 2258A specifically states:

> Nothing in this section shall be construed to require a provider to—

> (1) monitor any user, subscriber, or customer of that provider;

> (2) monitor the content of any communication of any person described in paragraph (1); or

> (3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).

The core of 18 U.S. Code § 2258A - Reporting requirements of providers is available at https://www.law.cornell.edu/uscode/text/18/2258A.


I was looking for that!

Great addition to provide more context.


Don't forget this part:

>(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined— (1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and (2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

I find these clauses at odds with one another in that the Failure to Report clause created a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.

Does nobody actually read the legislation that is passed and realize, oops, I just passed an unconstitutional law?

That they include the "Nothing in this section shall be construed..." clause just solidifies for me that the legislators in question were trying to pull a fast one.


It’s because they wanted to have their cake and eat it too: get as close as possible to the 4th amendment without crossing the line.

Put simply, if they have knowledge of it they have a duty to report, but they can’t be compelled to try and find out.

In theory this means that if they happen to stumble upon it or are being alerted to it by a third party (e.g. user report) then they have to report it, in practice many voluntarily monitor it, maybe because they want to avoid having to litigate that they didn’t have knowledge of it or maybe because it’s good PR or maybe because they care for the case.

I think in most cases it’s all of the above in one degree or another.


I have no qualms with voluntary monitoring and reporting. However, the inclusion of the penalty imposes a tangible duty, and that tangible duty is enough to convince me this act is effectively a de facto deputization. The act of searching is, in essence, "look out for, raise signal when found". This Act does everything it can to cast the process that happens after the search phase as "the search forbidden by the 4th Amendment" instead of the explicitly penalized activity, which is couched as "voluntary, and not State mandated, despite a $150,000 price tag assessed by... the State". It even goes so far as to create a quasi-government entity, primarily funded by the State, whose entire purpose is explicitly to act as a legal facade creating sufficient "abstract distance" through which the State can claim "it twas not I who did it, but a private organization; Constitutional protections do not apply".

Words mean things, and in my opinion we've gotten damned loose with them these days when the want strikes. "Voluntary" anything with a $150,000 fine for not doing it is no longer voluntary. It's now your job. If it's your job, and the State punishes you for not doing it, you are a deputy of the State. I do not care how many layers of legal fiction and indirection are between you and the State.

If you can't not comply without jeopardy, it ain't voluntary.


> I find these clauses at odds with one another in that the Failure to Report clause created a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.

Absolutely not. That section requires a report under the circumstances where a provider has obtained “actual knowledge of facts and circumstances” of an “apparent violation” of various code sections (child porn among others). It doesn’t place on the provider the burden of seeking out that knowledge. In other words, it covers the cases where, for example, a provider receives a report that they are hosting a child porn video and are pointed to the link to it. Providers can’t jam their fingers in their ears and shout LALALA when they’re told they’re hosting (or whatever) CSAM and given the evidence to support it. They don’t have to do anything at all to proactively find it and report it, however.

Think of it like this. I, as a high school, teacher, am a mandated reporter of child abuse. It’s literally a crime (a misdemeanor) for me not to report suspected child abuse. But I don’t have to go out and suss out whether any of my students are being abused. That doesn’t make me a state actor for 4th Amendment purposes (although I am otherwise, because I am a public school teacher, but that’s a different issue).


Except it does make you a state actor, and even children know it: even the 9-11 year old demographic has literally disclosed to me, the "crazy uncle" in their life, that they are not comfortable being open with any type of guidance counselor or state-licensed therapist due to knowledge of just such a dynamic.

A spade is a spade by any other name. If the state will come down on you for not doing something (report generation), you are a deputy of the State. Period.


It WASN’T the case. Photos are listed on their page of stuff that’s not end to end encrypted.

Since it all went down they added the advanced security option that encrypts photos, messages, and even more.

But that option is opt-in since if you mess it up they can’t help you recover.


Non-encryption ≠ CSAM scanning

That said, I could be wrong about them not scanning currently, I simply don’t have anything authoritative saying either way.

Only statements that imply that they currently don’t, nothing more.


I don’t know if they do or not, but like everyone else I assume they are. Seems like it would be a massive legal (and PR!) liability if it was discovered they weren’t.


Why do you think so? AFAIK there is no legal requirement to scan uploaded files.


> so users who didn’t use iCloud would’ve never been scanned to begin with.

So why not implement CSAM scanning for iCloud only, without local scanning?


Because the idea is that the iCloud data would be encrypted so their servers couldn’t scan it. With the plan being they would do on device scanning of photos that were marked as being stored on iCloud.

It’s objectively better than what google does but I’m glad we somehow ended up with no scanning at all.


That sounds strange; I'm not sure what the big difference is. If data is scanned on iCloud, that means it's not encrypted, got it. If it's scanned on devices, the data is fully encrypted on iCloud, but Apple has access by scanning it on devices and can send unencrypted matches, so it behaves like an unencrypted system that can be altered at Apple's will, just like iCloud. But still, why scan locally only if iCloud is enabled? Why not scan regardless? Since the policy is meant to 'catch bad ppl', why limit it to the iCloud option and not scan all the time?


Apple doesn’t want to scan, period. However, if Apple does E2EE iCloud, the biggest political issue will be that of CSAM. So in order to defuse the CSAM issue, they came up with this scheme.

Apple doesn’t want to expand their power which is why they don’t scan locally. They weren’t doing it before and they don’t want to offer it now.


> Since policy is meant to 'catch bad ppl', why limit to icloud option and not scan all the time

The policy is meant to ensure Apple's servers are not storing and distributing CSAM, not that Apple wants to become a police investigative force.


The era you're describing is possible today if you build your own e-bike.

You can pick up a kit from ebikeling which includes standard throttles, hub motors (or mid-drive if you're into that), pedal assist sensors and displays.

You can buy one of thousands of batteries with XT60 connectors or solder one to any battery you'd like.


Depending on where you live this may not be a legal option. It shouldn't be like that, but there are plenty of places where e-bikes need to go through a certification track that costs a large multiple of a single e-bike. This is to ensure that you match the maximum assist and top speed factors and that the bike is safe from an electrical point of view (which is an important enough factor).


Probably because I live in a detached home, but I kind of don't care if somebody else's bike catches on fire; I strongly care if they are going to be going over 50kph on a MUP. Even so, I think certification is a poor way of addressing the issue, since people modify the firmware of mass-manufactured ebikes all the time.


> since people modify the firmware of mass-manufactured ebikes all the time

They certainly try. Some bikes are better protected than others and part of the certification process is to verify it isn't trivial to hack the bikes. And if this is found to be the case after the fact certification can be withdrawn, retroactively so manufacturers have a lot riding on getting this right.

Note that in plenty of places nobody cares, but in some countries authorities are strict (and getting ever stricter).


MUP = multi-use path (shared between pedestrians and cyclists)


Does the certification requirement apply to bikes people build for personal use?

If so, I’d guess the requirement is highly unusual. Normally, the requirements you mention would be enforced by certifying the components.


It's a problem and a grey area. I've yet to have problems with my homebrew stuff but I've already had some conversations with LE that stopped me (fortunately not while exceeding the limit) because my bike looks more than a little weird.

And no, it's not just the components. The reason for that is that the bike motor, controller and rear wheel + sensor all have to be 'just so' for the speed limiter to work properly. There are some defeat tricks, usually stuff that fools the sensor by doing a neat little bit of Bresenham on the input signal but the smarter bike motors realize this is happening and will happily brick themselves. The latest generation Bosch is afaik not yet hacked and this is not for lack of people trying.


All the power to anyone that wants to do that! My biking needs are extremely utilitarian: I have a cargo e-bike I take my kids to and from school in. I don’t trust myself with (or, rather, I know I don’t have the time to be expert at) e-bike motor installation/repair/etc and in my experience very few bike shops will want to do something like that for you given the higher risks involved. Given I’m taking my kids on the thing every day I don’t feel like taking risks either.


The Samsung Q990c supports both. Not sure if it's the best, though, but I'm happy with mine.


I recently started collecting 4K UHD Blu-Rays since I got a Samsung Q990c.

The most obvious difference to me is the audio. Though the discs do look better than say Netflix 4K, the audio is way better on my particular setup.

I do wonder if streaming is really the same "atmos" as a 4K Blu-Ray. From my experience there must be some serious compression or misrepresentation.

Will be interested in trying FlexConnect with and without Blu-Ray once it's available for me (though not sure it would be better than my current setup).


You are correct that streaming Atmos is different. Blu-rays deliver audio using Dolby TrueHD, a lossless codec whose streams average around 6 Mbps and can peak at 18 Mbps. Streaming services deliver Atmos content using Dolby Digital Plus, which is lossy and typically encoded at 768 kbps.
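As a back-of-envelope illustration of that gap, here is the audio data volume for a hypothetical 2-hour film at the bitrates above (constant-bitrate assumption; real TrueHD is variable):

```python
# Audio size at a constant bitrate, in GB (1 GB = 1e9 bytes).
def audio_gb(bitrate_mbps, hours):
    bytes_per_second = bitrate_mbps * 1e6 / 8
    return bytes_per_second * hours * 3600 / 1e9

truehd_avg = audio_gb(6.0, 2)     # lossless TrueHD at its ~6 Mbps average
ddp_stream = audio_gb(0.768, 2)   # streaming Atmos at 768 kbps
# The disc carries roughly 8x as much audio data as the stream.
```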


For:

https://static1.squarespace.com/static/62185f3b81809a6fd03dd...

Is there supposed to be a line through some of the center?


It looks like they've copied the template/page from a previous diagram file and forgot to remove one of the lines, probably just an accident.


They should do the same with gas (hybrid) cars, since to be fair there's a hit there too from temperature differences, and it would be neat to see the comparison.

I also noticed this data is generated from aggregated user data. I wonder how a prolonged ride affects range. Could a long ride heat the battery enough to offset the losses in cool temps?


ICE cars do not suffer from this class of user experience problem. Yes they burn more gas if you need to use the defogger, but this does not really impact your daily planning because it only means you fill up sooner, which you can do anywhere in one minute.


We worry far more about range in our gasoline vehicle than in our EV. Our EV starts with a full battery every morning. But our gas vehicle might have 50 or 350 miles of range depending on when it was filled up last, and the extra 10 minutes it takes to stop and fill might make us late for wherever we're going.


Filling up in 10 minutes instead of hours is worse because... it's so convenient and quick that you may forget to fill up the previous day? This is not a very good argument.

You can solve this by leaving the house ten minutes earlier, which you should do anyway in case something happens.


The idea with an electric car is that you just plug it in every night in your garage (if you're lucky enough to live in a house) and every morning it's ready to go. You never have to think about it. Trip driving is different. Tesla made it easy: you can drive virtually anywhere in the US and there'll be a Supercharger or two along the way. If you live in an apartment or a condo with no charging, it's harder. You're also stuck with idiots saying you can't add an electric line because it'll burn the place down or something.


Great if you own a house. For most apartment dwellers home charging isn't an option. As you know, home ownership in the US is for the rich.


We thought it wasn’t an option until we tried. Just need a long cord and the right parking place. I would agree most don’t have it though, for now.


I thought the selling point of Tesla was the super-fast charging? How long would it take to charge to make up the difference in reported range? A quick Google says a Supercharger can add up to 200 miles in 15 minutes.

They should fix the reporting though. Ideally let the user put in observed range to calibrate.
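For rough numbers on that question, here is a sketch using the ~200 miles per 15 minutes figure from the comment and a hypothetical 300-mile-rated car losing 20% of its range in winter. It assumes the peak charging rate holds, which it does not in practice (rates taper as the battery fills, and cold batteries charge slower):

```python
# Minutes of Supercharging needed to recover a given range loss,
# assuming a constant miles-per-15-minutes charging rate.
def minutes_to_recover(miles_lost, miles_per_15min=200):
    return miles_lost / miles_per_15min * 15

winter_loss = 300 * 0.20                  # 60 miles lost to cold
extra = minutes_to_recover(winter_loss)   # extra charging time at peak rate
```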


Also, heating is basically "free" with ICE cars (well, fan excluded).


It has a noticeable impact when your car is fairly efficient. I notice a roughly 10% increase in fuel consumption in the winter and I live in a mild Pacific coast climate where winter barely even happens.


Do you have electric heaters too? I have an older car; there's basically zero difference on long drives, but the car doesn't heat at all until the engine gets warm.

There is slightly higher fuel consumption in city driving, because it takes longer for the engine to warm up, and the idle RPM is usually higher while the engine is still below optimal temperature.


It is nowhere near this large of a difference.


> I wonder how a prolonged ride can affect the range.

It can make a huge difference. I noticed articles this year talking about how bad EV range is in very hot weather (>95F).

At first, I thought this was implausible. Then I looked at the data on some days where I had a lot of short trips interspersed with lots of cabin overheat protection and preconditioning. If I extrapolated from that, I'd have horrible range! I never noticed, because range never matters on days like that.

On a road trip in the same conditions, range has been pretty good.


An interesting idea, but if I’m understanding the problem trying to be solved - might be better suited by durable execution (two examples being Azure’s durable functions, and Temporal.io).

In practice transactions between arbitrary data stores would result in potentially boundless and unpredictable latency, no?

Also, is Postgres strongly consistent and linearizable? One alternative would be using a database with stronger consistency guarantees (Spanner is, but isn't open source; FoundationDB is, but has limitations on transactions unless you implement MVCC yourself, which to be fair you are).


Durable execution is good for atomicity, but this approach also gives you isolation. If you're doing updates on both Postgres and Mongo and you want to guarantee both updates happen, Temporal can do that. But if you want isolation, for example the guarantee that any operation that can see the Postgres update can also see the MongoDB update, then you need distributed transactions like in Epoxy.
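To make the atomicity half of that concrete, here is a toy two-phase-commit sketch (this is not Epoxy's actual protocol, and the store names are purely illustrative): every store stages its write and votes in phase one, and only if all vote yes does everyone commit, so either both stores apply the update or neither does.

```python
# Toy two-phase commit across two fake stores. Holding writes in a
# staged area until the global commit is what keeps a reader from
# seeing one store's update without the other's.

class FakeStore:
    def __init__(self):
        self.data, self.staged = {}, {}

    def prepare(self, key, value):
        self.staged[key] = value   # durably stage the write
        return True                # vote "yes"

    def commit(self):
        self.data.update(self.staged)
        self.staged.clear()

    def abort(self):
        self.staged.clear()

def two_phase_commit(stores, writes):
    # Phase 1: every store stages its write and votes.
    if all(s.prepare(k, v) for s, (k, v) in zip(stores, writes)):
        for s in stores:           # Phase 2: everyone commits.
            s.commit()
        return True
    for s in stores:               # Any "no" vote aborts everyone.
        s.abort()
    return False

pg, mongo = FakeStore(), FakeStore()
ok = two_phase_commit([pg, mongo], [("order", 1), ("order_doc", 1)])
# Afterwards the update is visible in both stores, or in neither.
```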


Consistency is a property of distributed databases. Stock Postgres is not distributed, and thus gets strong consistency for free.

There is still a concept of (transaction) isolation levels, and the ANSI SQL standard defines a transaction mode READ UNCOMMITTED that could give you inconsistent results, but Postgres ignores that and treats it as READ COMMITTED.
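A toy illustration of why that mapping is harmless (this simulates the isolation levels in plain Python; it is not how Postgres is implemented): READ UNCOMMITTED would let a reader see another transaction's in-flight write, which may later be rolled back and so was never a true state of the database. READ COMMITTED only ever exposes committed data.

```python
# Minimal dirty-read demonstration with a fake single-row table.
class Table:
    def __init__(self):
        self.committed = {"balance": 100}
        self.uncommitted = {}          # another txn's in-flight writes

    def write(self, key, value):       # txn A writes but hasn't committed
        self.uncommitted[key] = value

    def read(self, key, isolation):
        if isolation == "READ UNCOMMITTED" and key in self.uncommitted:
            return self.uncommitted[key]   # dirty read
        return self.committed[key]         # only committed data

    def rollback(self):
        self.uncommitted.clear()

t = Table()
t.write("balance", 0)                               # in-flight write
dirty = t.read("balance", "READ UNCOMMITTED")       # sees 0
clean = t.read("balance", "READ COMMITTED")         # sees 100
t.rollback()                                        # the 0 never existed
```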


In terms of ACID: "Consistency ensures that a transaction can only bring the database from one consistent state to another, preserving database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This prevents database corruption by an illegal transaction. Referential integrity guarantees the primary key–foreign key relationship." So no, it's not free.


That is an entirely different meaning of consistency. GP was talking about CAP theory, as they acknowledged in a follow-up comment.


Yeah you are right - I thought the primary in this case was distributed since most of the shims were (CouchDB, Mongo, etc).


Durable execution is an equivalent alternative.

Latency could be bad in theory, but I don't see a reason to expect that in practice.

PostgreSQL can have serializable transactions, though that is not the default isolation level.


What about Mongodb?

