If they're intentionally causing the customer to have an unspendable balance, knowing that it's making them $200m/yr, how is that not fraud (or some kind of crime)? I'd expect at least CA would do something about it.
Customers agree to this when they accept the terms of the app. This is also how a debit or savings account at any bank works. Both businesses have sophisticated models to determine how and when customers are likely to make withdrawals, and based on these models they lend out the money based on acceptable risk criteria.
Even if it is in the T&Cs, this one feels like it wouldn’t actually hold up?
Expecting people to read those for most simple sign-ups is already a high bar, and Starbucks is not technically a bank and offers no consumer protections (FSCS or otherwise), so that feels knowingly misleading, even if the total balance held per customer is small.
I shouldn't have to explain this, but a letter is a medium of communication, and it could just as easily be written by an LLM (and transcribed by a human onto paper).
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person in the letter; they might well value that more.
The poster you're replying to is plain wrong: using "class" is ubiquitous in the JavaScript/TypeScript world, it's the idiomatic way to create classes, and it has better semantics than trying to use prototypes directly. You might compile away the class keyword for compatibility, though.
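To make the comparison concrete, here's a minimal sketch (the `Point` type is just an illustrative example, not from the thread) of the same type written with the class keyword and with the older prototype idiom it desugars to:

```javascript
// Modern, idiomatic form: `class` is syntactic sugar over the prototype
// mechanism, but reads far more clearly.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  norm() {
    return Math.hypot(this.x, this.y);
  }
}

// Roughly equivalent pre-ES2015 prototype version.
function PointProto(x, y) {
  this.x = x;
  this.y = y;
}
PointProto.prototype.norm = function () {
  return Math.hypot(this.x, this.y);
};

console.log(new Point(3, 4).norm()); // 5
console.log(new PointProto(3, 4).norm()); // 5
```

Both produce objects with the method on the prototype; the class form just spares you the manual wiring (and adds stricter semantics, e.g. class bodies run in strict mode and constructors can't be called without `new`).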
But you don't have to do either of those things. There's a third way, with functions and bare objects. I'm not sure that's what GP meant, but a lot of the JS I've written (which tends to be for the browser, mostly vanilla, and quick-and-dirty, to be fair) never touches classes or prototypes. The JSON data being produced/consumed is just a bag of fields, the operations on the document are just top-level functions, events get handled in callback closures, responses to HTTP requests get handled with promises, etc. Sprinkle in some JSDoc comments and you even get fairly workable autocomplete suggestions. Of course, the web APIs are built on prototypes/classes, so it's not like they're totally absent. But with things like data attributes, querySelector, and HTML templates, the usual needs for my own code to be OOP (or even structs-with-methods a la Go/Rust) just don't emerge that much.
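A tiny sketch of that third style, with a hypothetical to-do record as the data (all names here are illustrative): the data is just a bag of fields, the operations are top-level functions, and JSDoc comments supply the type hints for autocomplete.

```javascript
/**
 * @typedef {Object} Todo
 * @property {string} title
 * @property {boolean} done
 */

/** @param {string} title @returns {Todo} */
function makeTodo(title) {
  return { title, done: false };
}

/** @param {Todo} todo @returns {Todo} a completed copy, original untouched */
function complete(todo) {
  return { ...todo, done: true };
}

/** @param {Todo[]} todos @returns {number} count of finished items */
function countDone(todos) {
  return todos.filter((t) => t.done).length;
}

const todos = [makeTodo("write docs"), complete(makeTodo("fix bug"))];
console.log(countDone(todos)); // 1
```

No classes, no prototypes of your own: the objects round-trip through JSON.stringify/parse unchanged, which is exactly the "bag of fields" property described above.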
Yeah, I would do a lot with plain objects, using closures and IIFEs for encapsulation and whatnot. It was ugly and a bit of a pain, but once you learned how it all worked it made sense and was doable. I felt that classes were a bit of a bolt-on that violated my own internal understanding of how JavaScript worked, but by that point I was moving on to other stuff, so I never really got used to them.
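For readers who didn't live through that era, the IIFE-plus-closure pattern being described looks roughly like this (the counter is just an illustrative example): the function runs immediately, and the only way to touch its local state is through the methods it returns.

```javascript
// Pre-`class` encapsulation: an IIFE returns an object of methods,
// and the internal state lives in the closure, invisible from outside.
const counter = (function () {
  let count = 0; // private; reachable only through the returned methods
  return {
    increment() {
      count += 1;
      return count;
    },
    value() {
      return count;
    },
  };
})();

counter.increment();
counter.increment();
console.log(counter.value()); // 2
console.log(counter.count); // undefined — the variable isn't exposed
```

This gives genuinely private state, which classes only got natively with `#`-prefixed fields much later.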
I'm not denying the existence of class in JavaScript, but at least from what I've seen, when React went to functions, so did most of the JavaScript community that had moved to class-based syntax, except for those who also worked with Java/C#.
I think the real sign of this is a class where all the members are static, or pure data classes: i.e., classes as a default rather than classes only for things where classes make sense.
I'm a big fan of React, but all the server stuff was a cold, hard mistake; it's only a matter of time before the (entire) React team realises it, assuming their Next.js overlords permit it.
Yea, I don't want something trying to emulate emotions. I don't want it to even speak a single word, I just want code, unless I explicitly ask it to speak on something, and even in that scenario I want raw bullet points, with concise useful information and no fluff. I don't want to have a conversation with it.
However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.
To be fair, of all the LLM coding agents, I find Codex+GPT5 to be closest to this.
It doesn't really offer any commentary or personality. It's concise and doesn't engage in praise or "You're absolutely right". It's a little pedantic though.
I keep meaning to re-point Codex at DeepSeek V3.2 to see if it's a product of the prompting only, or a product of the model as well.
I prefer its personality (or lack of it) over Sonnet. And tends to produce less... sloppy code. But it's far slower, and Codex + it suffers from context degradation very badly. If you run a session too long, even with compaction, it starts to really lose the plot.
Maybe they should be based on a range of factors that influence how successful the university thinks the candidate will be as an undergraduate? Not just exam results?
It means I think admissions officers sometimes know there’s more to a human than their raw test scores. They likely also know that a decent result at some schools requires more work than a great result at others.
I’ve met smart people who do poorly on exams. I’ve met dumb people who do well on them.
I'm not sure if they meet the requirements for being a terrorist group, or if I agree with them being considered terrorists, but I just want to point out that the name of the organisation isn't a valid argument in its favour; the actions of the organisation matter a lot more than the name. For example, on many occasions they've used violence to prevent people from engaging in political speech (is that antifascism or fascism?)
"'Such' a phishing attack" makes it sound like a sophisticated, in-depth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for. And although anyone can make mistakes, there is such a thing as a negligent, amateur mistake. It's astonishing to me.
Every time I bite my tongue (literally, not figuratively) it's also astonishing to me. The last time I did was probably 3 years ago, and it was probably 10 years earlier for the time before that. Would it be fair to call me a negligent eater? Have you ever been walking and tripped over nothing? Humans are fallible, and unless you are in an environment where the productivity loss of a rigorous checklist-and-routine system makes sense, these mistakes happen.
It would be just as easy to argue that anyone who uses software without confirming that its vendor's security certifications cover whatever processes you imagine would prevent a "human makes one mistake and continues with their normal workflow" error, or who doesn't hold updates until they've been evaluated, is negligent.
Humans are imperfect and anyone can make mistakes, yes. I would argue there's different categories of mistakes though, in terms of potential outcomes and how preventable they are. A maintainer with potentially millions of users falling for a simple phishing email is both preventable and has a very bad potential outcome. I think all parties involved could have done better (the maintainer/npm/the email client/etc) to prevent this.
That's true but it's like saying most everyone has a small chance of crashing their car. Yet when someone crashes their car because they were texting while driving, speeding, or drunk, we justifiably blame them for it instead of calling them unlucky. We can blame them because there are clear rules they are supposed to know for safety when driving, just as there are for electronic security. The rule for avoid phishing is called "hang up, look up, call back".
Yeah but society doesn't act as if it's an unthinkable event we never planned for when a car crash happens. Blame someone or don't, but there are going to be emergency responders used to dealing with car crashes coming, because we know that car crashes happen (a lot) and we need to be ready for it.
Yes, of course we need to defend against scammers at multiple levels, because none of those levels are bulletproof, so putting too much trust in individual developers is also a problem here. Even if they hadn't been hacked, they could have just become the attacker themselves.