> Also, what kind of banking are people doing that requires an app? I genuinely don't know what it could be.
Close to every bank in the EU requires its users to have an app for MFA (both for logging in and for validating transactions - transfers, payments). They use the smartphone's TPM. I have yet to see one that allows you to use your own MFA app.
The few I've seen that don't require it will validate the same through text messages (not everyone has a smartphone); though if you associate their app even once, you're screwed - the app it is from now on.
Here in The Netherlands banks used to offer authenticator devices, which they are phasing out (you can still use them, but they won't replace them once they run out of battery). Pretty much all banks switched to app-only.
No SMS at all (which is not surprising, because SMS is not secure).
Also, IMO fingerprint/face-based authentication is much nicer/quicker, especially for online payment flows like iDEAL (Dutch predecessor to Wero). And banks here work on GrapheneOS, so not much is lost.
> Anecdotally, of my two EU (massive legacy French) banks, neither requires a mobile app. SMS all the way.
My wording was bad, sorry; but try to install their app just once. After that, I'd bet you won't ever be able to go back to SMS validation (which is what I was talking about at the end of my comment).
If not, I'd be curious to know the banks you're talking about (to consider switching to them, for one thing). What I said above is true of Caisse d'Epargne, HSBC, CCF, among others.
>I'd be curious to know the banks you're talking about
Fortuneo (internet-only subsidiary of Crédit Mutuel) and LCL. I have had both their apps installed at points in the past. In both cases they defaulted back to SMS 2FA upon uninstalling, though I remember worrying I would have the problem you describe.
Ultimately I can't see how a bank could get away with forcing (rather than just pushing) existing customers to install an app. This would surely be a breach of contract.
> I would only use it in a sauce if I needed to accommodate a vegan guest.
As an alternative, I've found methylcellulose to be pretty good for thickening my vegan homemade sauces (I mainly tried it because I use it for other stuff, like homemade fake-meat protein sources). That's for homemade mayo or the like; for sauces in stews and similar, flour does the job - though US cooks seem obsessed with cornstarch instead for that use case.
They are ‘obsessed’ with cornstarch instead of flour because cornstarch is almost pure starch and doesn’t add a flavor the way that flour does. It shares that property with methylcellulose.
> That's pretty bad! I wonder what kind of bounty went to the researcher.
I'd be surprised if it's above $20K.
Bug bounty rewards are usually criminally low; doubly so when you consider the effort usually involved in not only finding serious vulns, but demonstrating a reliable way to exploit them.
Everyone should read this comment, it does a really eloquent job explaining the situation.
The fundamental thing to understand is this: The things you hear about that people make $500k for on the gray market and the things that you see people make $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.
Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.
Developing a vulnerability into a full chain involves a huge amount of risk - not weird crimey bitcoin-in-a-back-alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If they hit the same bug and win their $20k, the party gambling on the $200k full chain is back to square one.
Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.
Why doesn't the USA have its own bug bounty program for non-DOD systems? Like, sure, they have a bounty for vulns in govt systems. But why not accept vulns for any system, and offer to pay more than anyone else? It would give them a competitive advantage (offensive & defensive) over every other nation. End one experimental weapons program (or whatever garbage DOD spends its obscene budget on) and suddenly we're not cyber-sucky anymore.
I think you are confusing bug bounty programs with espionage and cyber warfare. The USA definitely accepts vulnerabilities for any system (or at least target systems), paying good money for them if it is an attack chain, giving them that competitive edge you mention. They have at least one military organization dedicated to this exact thing (USCYBERCOM) and realistically other orgs as well, including the intelligence community.
There are no bug bounties on "any" system because bug bounties are part of programs to fix bugs, not exploit them. They therefore have bug bounties for their own systems, as those are the ones they would be interested in improving. What you described, which they definitely do, is cyber espionage, and those bugs are submitted through different channels than a bug bounty.
But that's the thing, I think they specifically need a non-IC program. If I'm a white-hat, grey-hat, or a somewhat cagey black-hat, I'm not gonna reach out to a shadowy organization with a penchant for extrajudicial surveillance, torture & killing to make $50k on a bug. Sure, you can try your hand at selling them an exploit that won't get revealed. But if only you and The Company know about the bug, and it could mean the upside in a potential war (or just a feather in an agency head's cap), why would The Company keep you alive and able to talk about it? OTOH, if the program you're reporting to doesn't have a track record of illegal activity, personally I'd feel a lot safer reporting there. And ideally their mission would be to patch the bug and not hold onto it. But we get to patch first, so it's still our advantage.
Because collecting and gatekeeping vulns so you can attack other countries is bad manners.
If you look up some of the Snowden testimonies, it's implied the USA at least had access to some 0-days in the past, but nobody admitted to it, because it's just bad national politics.
Even if the USA is doing dog-shit in politics now, openly admitting to collecting cyber-weapons (instead of doing it silently) is just an open invitation to condemnation.
From being in the trenches a couple of decades ago, they do. They just don't disclose after they pay the bounty. They keep them to themselves. I knew one guy (~2010?) making good money just selling exploits (to a 3-letter agency) that disabled the tally lamps on webcams so the cams could be enabled without alerting the subject.
This underestimates the adaptability of threat actors. Massive cryptocurrency thefts from individuals have created a market for a rather wide range of server-side bugs.
Got a Gmail ATO? Just run it against some of the leaked cryptocurrency exchange databases, automatically scan for wallet backups and earn hundreds of millions within minutes.
People are paying tens of thousands for “bugs” that allow them to confirm if an email address is registered on a platform.
Even trust isn’t much of a problem anymore, well-known escrow services are everywhere.
I don't believe those numbers will ever come close to converging, let alone bounty prices surpassing black market prices.
It seems like these vulnerabilities will always be more valuable to people who can guarantee that their use will generate a return than to people who will use them to prevent a theoretical loss.
Beyond that, selling zero-days is a seller's market where sellers can set prices and court many buyers, but bug bounties are a buyer's market where there is only one buyer and pricing is opaque and dictated by the buyer.
So why would anyone ever take a bounty instead of selling on the black market? Risk! You might get arrested or scammed selling an exploit on the black market; black market buyers know that, so they price it into their offers.
Even though I agree with the conclusion with respect to pricing, I don't think this comment is generally accurate.
Most* valuable exploits can be sold on the gray market - not via some bootleg forum with cryptocurrency scammers or in a shadowy back alley for a briefcase full of cash, but for a simple, taxed, legal consulting fee to a forensics or spyware vendor or a government agency in a vendor shaped trenchcoat, just like any other software consulting income.
The risk isn't arrest or scam, it's investment and time-value risk. Getting a bug bounty only requires (generally) that a bug can pass for real: get a crash dump with your magic value in a good-looking place, submit, and you're done.
Selling an exploit chain on the gray market generally requires that the exploit chain be reliable, useful, and difficult to detect. This is orders of magnitude more difficult and is extremely high-risk work not because of some "shady" reason, but because there's a nonzero chance that the bug doesn't actually become useful or the vendor patches it before payout.
The things you see people make $500k for on the gray market and the things you see people make $20k for in a bounty program are completely different deliverables even if the root cause / CVE turns out to be the same.
*: For some definition of most; obviously there is an extant "true" crappy cryptocurrency-forum black market for exploits, but it's not very lucrative or high-skill compared to the "gray market". These places are a dumping ground for exploits which are useful only for crime and/or for people who have difficulty doing even mildly legitimate business (widely sanctioned, off the grid due to personal history, etc.)
I see that someone linked an old tptacek comment about this topic which per the usual explains things more eloquently, so I'll link it again here too: https://news.ycombinator.com/item?id=43025038
That is why I said "also"; it should not be the only factor.
The conversation was moving between two possibilities only: either collect bug bounties or sell on the black market. I believe most (again: most, not all) security researchers collecting bug bounties right now would not start selling on the black market if bounties disappeared. They would change their focus to something else to sustain themselves.
The market is priced at the point that is most economical for the business. Apple buying an exploit for $100m is not worth it (to Apple) vs the potential loss of life of people who might be killed if it were sold on the black market. Buying an exploit for $1m prevents it being used to jailbreak, is good PR, and is ass-covering insurance in case an Apple exploit causes loss of life ('the seller could have sold it to us, but instead they sold it to an evil corporation').
You can work your day job and make $20-500k/yr or pursue drug dealing and make $5-5000k/yr. I don’t think that’s actually a compelling argument for the latter even if the opportunity cost is better.
Drugs are illegal, exploits are not illegal. Selling them to someone associated with illegal activity is probably illegal, but there is a legitimate fully legal exploit market with buyers like intelligence agencies, and an illegal market with buyers that run oppressive regimes and commit genocide.
I read this often, and I guess it could be true, but those kinds of transactions would presumably go through DNMs / forums like BF and the like. Which means crypto, and full anonymity. So either the buyer trusts the seller to deliver, or the seller trusts the buyer to pay. And once you reveal the particulars of a flaw, nothing prevents the buyer from running away (this also regularly occurs on legal, genuine bug bounty programs - they'll patch the problem discreetly after reading the report but never follow up, never mind pay; with little recourse for the researcher).
Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of detail required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying. And I imagine many of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.
The way this trust issue is (mostly) solved on drug DNMs is through the platform itself acting as an escrow agent; but I suspect such a thing would not work as well for selling vulnerabilities, because the volume is much lower, for one thing (too low for reputation building), and the financial amounts are generally higher, for another.
The real money to be made as a criminal alternative, I think, would be in exploiting the flaw yourself on real-life targets. For example, to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, a 15% cut of the ransom and provide assistance with laundering/exploiting the target/etc., and you claim your infection in the name of their group.
I don't think you know anything about how these industries work and should probably read some of the published books about them, like "This Is How They Tell Me The World Ends", instead of speculating in a way that will mislead people. Most purchasers of browser exploits are nation-state groups ("gray market") who are heavily incentivized not to screw the seller and would just wire some money directly, not black market sales.
I mean, you're still restricted to selling it to your own government, otherwise getting wired a cool $250k directly would raise a few red flags I think. And how many security researchers have a contact in some government-sponsored hacking company anyway? Do you really think that convincing them to buy a supposed zero-day exploit as a one-off would be easy?
Say you're in the US. I'm sure there are some CIA teams or whatever making use of Chromium exploits "off the record", but for any official business the government would just put pressure on Google directly to get what they want. So any project making use of your zero-day would be so secret that it'd be virtually impossible for you to even get in contact with anybody interested to buy it. Sure they might not try to "screw you", but it's sort of like going to the CIA and saying, "Hey would you be interested in buying this cache of illegal guns? Perhaps you could use it to arm Cuban rebels". What do you think they would respond to that?
Defence firms like Raytheon are often happy to pay for stuff like this. What happens afterwards with the exploit is anybody's guess. Source - a vague memory of a Darknet diaries episode.
Eh, not really? If it's a legit company who provides services to various governments, they're going to pay you, they're going to report the income to the government, you'll get a 1099 for contract/consulting, and you'll pay your taxes on the legit income. No red flags. Assuming they're legit and not currently sanctioned by the US government that is.
> Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying.
Is conning a seller really worth it for a potential buyer? Details will help an expert find the flaw, but it still takes lots of work, and there is the risk of not finding it (and the seller will be careful next time).
> And I imagine much of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.
They also have the money to just buy an exploit.
> The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real life targets. For example to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, 15% of the ransom cut and provide assistance with laundering/exploiting the target/etc; and claim your infection in the name of their group.
I'd imagine the skills needed to get paid from ransomware victims without getting caught to be very different from the skills needed to find a vulnerability.
Because it's nicer to get $10k legally + public credit than to get $100k while risking arrest + prison time, getting scammed, or selling your exploit to someone that uses it to ransom a children's hospital?
Depends. Within the US, there are data export laws that could make the "whoever" part illegal. There are also conspiracy to commit a crime laws that could imply liability. There are also laws that could make performing/demonstrating certain exploits illegal, even if divulging it isn't. That could result in some legal gray area. IANAL but have worked in this domain. Obviously different jurisdictions may handle such issues differently from one another.
Issue 1: Governments which your own gov't likes, or ones which it doesn't? The latter has downsides similar to a black market sale.
Issue 2: Selling to governments generally means selling to a Creepy-Spooky Agency. Sadly, creeps & spooks can "get ideas" about their $500k also buying them rights to your future work.
So you did use LLMs to write at least part of the software. I imagine you feel no shame, but it would be nice to at least mention it on the github page. It's a security risk.
As for your question, I don't know about the person you're replying to, but for me any software where part of the source was provided by a LLM is a no-go.
They're credible text generators, without any understanding of, well, anything really. Using them to generate source code, and then using it, is sheer insanity.
One might suggest it means I soon won't be able to use any software; fortunately the entire fever dream that is the ongoing "AI" bubble will soon stop, so I'm hoping that won't be the case.
They literally state that they used LLMs to build it in the second sentence of their initial comment so not sure why you frame it as something they weren't upfront about.
As for it being a bubble that will stop completely, that ship has long since sailed and I assume you're inadvertently using LLM generated code somewhere in your software stack already, due to news reports saying certain companies are already using LLMs in their codebase.
I wish I could speed up time just to see how this comment would age. While I personally prefer living in a world without LLMs, I do suspect you're going to end up without any software.
I'm imagining some apocalyptic world, Mad Max style, where there are underground groups hand-writing code to avoid the detection of the AI. Unfortunately, so few people are able to do it any more, and the code is so bug-ridden, that their attempts at regaining control over the AI often end in embarrassing results. Those left in the fight often find themselves wondering why everyone just rolled over for the machines, what, because it made their lives easier??
Maybe it's a scene from a show I've seen already??
There will always be a niche of people writing software, just as today while most work in web dev or backend, there are some who work in embedded or have retro computing as a hobby.
“atlanticist” - the culture of the enlightenment and the good that’s come from it.
Wikipedia does hold ideals, that access to knowledge is a net good, that people can cooperate both in contribution and review without a dominating magisterial authority. That rational dialogue and qualification and refinement is possible, and that it’s possible to correct for bias, and see the difference between bias and agenda.
Like those whose anti-enlightenment agenda is revealed when they use “atlanticist” as a slur.
No. One can believe in the enlightenment ideals without placing North America, Europe, and the relations between them as the most important thing.
For example - one could argue (quite successfully) that the US and Europe propping up dictators in South America and the Middle East to secure easy access to oil, against the wishes and election results of those nations, is opposed to many enlightenment ideals, but it is still atlanticism by prioritizing North American and European relations and the preservation of values within their little bubble.
Also, just because there was much good resulting from enlightenment thinking, we also got things like the slave trade, the Belgian Congo, various genocides and so on from it... all of which are pretty bad.
The very notion that the enlightenment had all the answers and that there is nothing more to improve or learn is itself anti-enlightenment.
(I know there were abolitionists in the enlightenment, and examples of people opposed to all the other bad ideas I mentioned, but there are plenty of people who "rationally" argued for them too)
"The slave trade" refers to the transatlantic slave trade, not slavery in general. (Though I would question whether that really qualifies as a "product of the enlightenment": post hoc ergo propter hoc, and all that.)
Is there another public source for encyclopedia-type articles that is better for geopolitical content? For example, if I have a philosophy question I'll often consult the Stanford Encyclopedia of Philosophy instead of Wikipedia.
If there isn't a more neutral public source -- if there are only sources with different biases, or if the better sources are behind paywalls -- then I think that Wikipedia is still doing pretty well even for contentious geopolitical topics.
Usually disputes are visible on the Talk page, regardless of whatever viewpoint may prevail in the main article. It can also be useful to jump back to years-old revisions of articles, if there are recent world events that put the subject of the article in the news.
Apart from Wikipedia, speaking more generally, I think that articles with a strong editorial bias still provide useful information to an alert reader. I can read articles from Mother Jones, Newsmax, Russia Today, the BBC, Times of India, etc. and find different political and/or geopolitical slants to what is written about and how it is reported. I can also learn a lot even when I strongly disagree with the narrative thrust of what is reported. The key thing is to take any particular article or publication as only circumstantial evidence for an underlying reality, and to avoid falling into complacency even when (or especially when) the information you're reading aligns with what you already believe to be true.
Wikipedia has been the proto-Reddit for a long time; that is, it was relatively easy for ideological bubbles to manufacture consent, in the Chomskyan sense, just by being early adopters.
As such it rapidly developed into a heavily biased page, as Wikipedia's co-founder Larry Sanger keeps pointing out.
It helps if you are proficient in multiple languages so you can at least "hop" between (some of) the bubbles. But the gatekeeping is always there.
I know sometime around Trump's first presidency, in Bill Clinton's Wikipedia entry, under the Impeachment section they added in a picture of Trump and Clinton shaking hands, apropos of nothing in the surrounding text.
There is no "intelligence" in LLMs; they're text predictors. As far as I can tell the whole LLM technology has limited applications in entertainment and that's about it. "Hallucinations" (even that term is problematic, as it suggests there's an actual consciousness/person, or the seed of one, there) as well as other "failures" - in fact features inherent to how the tech works - make it irrelevant for basically all other use cases.
Tech as an industry already had an atrocious reputation but the moment the insanely stupid "AI" bubble pops I suspect it'll get much worse. Ultimately it's pretty deserved, though. At this point the bullshit is so strong one almost wishes for a new AI winter.
I agree, ESR certainly does spout a lot of slanderous, racist, misogynistic, and homophobic drivel. That and attacking RMS is basically his whole schtick.
Truth is a complete defense to a claim of slander, and ESR's own words provide that proof and defense for everyone who rightly calls him racist, misogynistic, and homophobic. Words so vile and lurid that people have actually begged Thomas Ptacek to stop posting them to twitter, and donated over $30,000 to charity just to not hear ESR's own words quoted.
>Raymond has claimed that "Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence", and that "Police who react to a random black male behaving suspiciously who might be in the critical age range as though he is an near-imminent lethal threat, are being rational, not racist."[30][31] A progressive campaign, "The Great Slate", was successful in raising funds for candidates in part by asking for contributions from tech workers in return for not posting similar quotes by Raymond. Matasano Security employee and Great Slate fundraiser Thomas Ptacek said, "I've been torturing Twitter with lurid Eric S. Raymond quotes for years. Every time I do, 20 people beg me to stop." It is estimated that, as of March 2018, over $30,000 has been raised in this way.[32]
And jailbreak that Kindle and install KOReader on it, too; now it supports epub in a much more awesome reader app (not to mention having the capability to install any gtk app - I have a term emulator with ssh on mine, among others). And you're now root and can remove the amazon crap for real, no need to use airplane mode after that.