Hacker News | H8crilA's comments

A lot of these peptides are designed to optimize bodies. And a lot of these people suffer from OCD - often a similar type to anorexia. No, it is not normal to intrusively think about your body every few minutes, most people that you see around you do not think about their bodies more than maybe once a day, when they look in the mirror after a shower. And maybe not even that.

> crummy code, but not the very tool that's supposed to be the state-of-the-art coder

Why not? It is subject to the same pressures, in fact it is subject to more time pressure than most corp code out there. Also, it's the model that's doing the coding, not the frontend tool.


I thought the sales pitch of all of this was that the AI was supposed to relieve people from having to do a bunch of annoying bootstrap coding, and to do it in a way that we could extend easily.

I have a subscription to Claude Code and, despite my skepticism, it has been pretty good at just getting a goofy PoC thing going. When I look at the code, it’s usually insane unless the prompt was extremely narrow and specific, like asking for a function that does one thing and only one thing.

Outside of small, personal projects, I am still really uncomfortable at having agents run wild. I see the result, and then I spend a bunch of time having to gain the context of what is going on, especially if I ask it to implement features in spaces I have general knowledge, but not expertise. So, the problem remains the same. These things still need handholding by people who understand the domain, but having people become glorified PR reviewers is not an acceptable path forward.

Arguing that there is lots of bad production code kinda avoids the actual issue here. Yes, a lot of sloppy code can be and has been written by people. I’ve seen it myself. But the actual issue is that we are now enabling that at scale and calling it “abundance”, when really we are generating an abundance of completely avoidable security holes and logic errors.


Exactly. I thought AI was going to be smarter. I thought AI would give us expert coders. Instead we have idiot savants.

Does the pressure affect the LLM's judgement in the same way it does a developer whose job is on the line?

i once scolded an ai for being too late when i figured out an issue before it could come back with an answer: it made an excuse that it took too long to start up, lol

i would guess telling it to "hurry up" would produce even worse code than it already does without hand-holding, or maybe it would make an excuse again...


You treat your brokerage account this way? I'm sure that the retirement funds don't.

If you're a retail business that sells RAM then yes, this is the way.

If you're a fund that holds RAM in some indirect manner (like you hold hypothetical RAM futures) then it depends on whether your country's laws require mark-to-market valuation for that specific kind of security.
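As a toy illustration of the difference (all numbers, and the very notion of a "RAM futures contract", are invented here):

```python
# Book value vs. mark-to-market value for a hypothetical holding of
# RAM futures (all numbers invented for illustration).
purchase_price = 100.0  # what the fund paid per contract
market_price = 140.0    # current market quote per contract
contracts = 50

book_value = purchase_price * contracts    # historical-cost view
mark_to_market = market_price * contracts  # current-market view

print(book_value, mark_to_market)  # 5000.0 7000.0
```

Under historical-cost accounting the position still "is worth" what you paid; under mark-to-market it revalues every reporting period.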


France didn't pay taxes on the gold, so it didn't keep it "on the books" at decades-old prices. It tracked the real-time value.

However, that doesn't mean profit isn't possible, even on a supposedly super-liquid asset like gold.


America has lost every war in the recent past.

Has anyone “won” a war in the recent past? In the old fashioned sense that they conquered something and used the newly acquired resources to make their own citizens lives better?

The problem with the post-WW2 world is that the old definition of winning a war no longer holds. You just don’t see wars of conquest very often, and they don’t seem to work when they happen.

The closest thing I can think of to winning, offhand, is a few of the colonial civil wars. Vietnam, for instance, won in the sense that it outlasted the US and has a nominally communist government, but it is not an outpost of the Soviet Union and it’s a major trading and tourist partner of the US.

Iraq is not led by a dictator belligerent toward the US, and Afghanistan isn’t home to training camps for terrorists dedicated to attacking the US (yet).

These were all extremely stupid, expensive and inhumane military actions. But the US never went into them to hold territory. So “there until we got tired of it” is as close to winning as it was ever going to be.


Yes, winning a war means achieving your political objectives. For example Iran wins this war even if they maintain the status quo. And they are on track to get even more, like obtaining ownership over the strait.

Then, by the stated aims going in, the US “won” both wars in Iraq.

Some of them. These were the stated objectives as per general Tommy Franks:

* Depose Saddam's government

Accomplished.

* Identify, isolate, and eliminate Iraqi WMDs

Failed. They were never there.

* Find, capture, and drive out terrorists from Iraq

Failed. Iraqi-based terrorism increased in the aftermath.

* Collect intelligence related to terrorist networks, and to "the global network" of WMDs

Failed. North Korea tested its first nuclear weapon in 2006, years after the invasion. The US accuses Iran of trying for them to this day. Chemical weapons were used by ISIS.

* End sanctions

Accomplished.

* Deliver humanitarian support to the Iraqi people, including the displaced

Failed. There were more displaced people due to the war than before, and a higher need for humanitarian support, which took years to meet.

* Secure Iraq's oil fields and resources, "which belong to the Iraqi people"

Somewhat accomplished: US- and UK-based companies, plus China, now run a lot of the oil fields. Iraqi GDP per capita is one of the lowest in the region.

* Help the Iraqi people "create conditions for a transition to a representative self-government"

Arguable. Parts of the country want to secede and have armed groups. Representation and turnout are not amazing, but I guess they aren't in Western countries either.


> Secure Iraq's oil fields and resources, "which belong to the Iraqi people"

The cynical read of this statement is that it (extracting resources from the invaded countries in order to enrich the American capital class) was the primary aim of all these conflicts.


That's not cynical. Trump has done the world a great benefit by transparently saying out loud what was hidden US policy for decades.

The notion of owning or monetizing an international waterway is fundamentally incompatible with customary international law. Iran can try it anyway if they're not worried about international law, but that was always an option for them, war or not. The timing of performing this extortion now seems to be mainly about scoring war propaganda points.

Panama Canal and Suez Canal require tolls, granted not exactly the same thing.

The Panama and Suez Canals charge fees because they are artificial passageways, created by the blood and sweat of thousands. Both were huge investments.

The Panama Canal cost 400-500 million USD and 25-30k lives to construct by the time it opened in 1914.

The Suez Canal cost around 100 million USD and 100-120k lives to build in 1869.

Charging for transit through man-made infrastructure is fundamentally different from charging for passage through a natural international waterway.


> fundamentally incompatible with customary international law

So is bombing countries on a whim.

If you want to take the high ground you have to make sure you don't first poison it with your own stupid mistakes. Iran can make a pretty credible play for reparations, and if the belligerents are unable or unwilling to pay up then Iran can selectively blockade the strait for their vessels and cargo. It is one of those little details that was 100% predictable going into this.


Not exactly "on a whim" after Israel has been attacked by at least a hundred thousand Iranian rockets and drones.

Yes, and before you know it we're at the Balfour declaration. But none of that matters in the context of the situation on the ground (and, crucially, in the water) today which was entirely predictable (except by Trump, Hegseth & co). You either plan for that eventuality or you don't start the war.

Note that we're talking about the US and Iran, not about Israel. Though Israel is obviously a massive factor here, it is the US that is in the hot seat; both Israel and Iran were doing what they've been doing for many years.


Why would we look back to the Balfour declaration? Israel has been attacked by tens of thousands of Iranian rockets and drones just since Oct 7.

After all their aggression, it seems absurd to paint the Iranian regime as a victim that was attacked "on a whim" and is owed reparations.


I can't find sources for "tens of thousands of rockets just since oct 7", can you help me? I see a few thousand as parts of exchanges after the Israel-initiated "12 Days War", and then a few thousand more after the (also Israel-initiated) current conflagration. Notably, the rocket attacks stopped during peace talks that US and Israel entered after starting the wars, only to resume after those peace talks were betrayed with bombing.

Not sure what the best data source is, but one data point is that just in the month or so since Oct 7, the number of rocket/drone attacks against Israel was already around 9,500: https://www.reuters.com/world/middle-east/hamas-fires-rocket...

The above claim was that Iran had attacked with thousands of rockets. These are from Hamas.

The 9,500 figure was for all fronts, not just Gaza. But true, it does include some Hamas rockets, most of which are not exactly "Iranian" (although Iran helped with training and smuggling some parts).

Another data point - https://www.reuters.com/world/middle-east/one-year-war-israe...

> Since the start of the war, 13,200 rockets were fired into Israel from Gaza. Another 12,400 were fired from Lebanon, while 60 came from Syria, 180 from Yemen and 400 from Iran, the military said.

So 12,400 rockets fired at Israel by Hezbollah, the vast majority of which are supplied by Iran at no cost. That's just in one year and doesn't include drones.


> except by Trump, Hegseth & co

Do not underestimate the current administration. They have other reasons for this conflict, and so does Netanyahu.


Azerbaijan invaded Nagorno-Karabakh in 2023 and now all their enemies are gone (disarmed and Armenians expelled) which presumably makes their citizens better off once they move into the empty territory.

Yeah, and I suppose Sri Lanka won against the Tamil rebellion.

So I shouldn’t say it never happens.


And the left didn’t make a peep about 100K+ people being ethnically cleansed from their historical homeland. Contrast with Palestine.

Two things to note there. One, many did make a peep; I have friends, coworkers who both ardently discussed and even pointlessly protested in small groups with signs.

The other - I don't pay taxes to the Azeris, every moment of my productive life doesn't support the genocide there, and my soul is in some way not as blackened by the atrocities there. I think people care about Palestine because they rightly feel complicity. Maybe Russian citizens - whose labor indirectly goes to supporting Azeri atrocities - are up in arms?


Well, given that the Azeris are armed by Israel, there might be some indirect US complicity…

The Gulf War was a decisive victory, if you consider that recent.

It hasn’t. There hasn’t been a war in centuries where America didn’t obliterate its opponent. It loses politically because its people don’t want war, but it has militarily defeated everyone it has engaged with.

If you cannot win a war because your population is unwilling to bear the cost, then you are still unable to win (that is, in fact, a very typical way for a war to end).

Nobody is disputing the fact that the US spends more money on arms than anyone else and has the shiniest of toys as a result, but "winning" in war is about effecting the outcomes that you want, not about whether your weapon systems are superior.

The US military has clearly failed to deliver the outcome that Americans wanted in many recent conflicts (Vietnam, Taliban); counting those wars as "lost" makes a lot of sense.


One of the reasons to do a war is to simply show the enemy that you are able and crazy enough to go to war with them over whatever grievances you had. This is called strategic deterrence.

You are committing the folly of thinking of wars like lawsuits, where one side wins, the other side loses, and the losing side goes home with nothing. It is not so.

If you're walking home from work and some person tries to mug you, even if they are unsuccessful, that will permanently change your behavior as if they had successfully robbed you anyway. Maybe you'll change your route. Maybe you won't walk and will drive instead.


You can both "win" or both "lose" if your goals are not in direct conflict (rare).

I'd argue that the most important thing when trying to win wars is to aim for realistic outcomes.

The first gulf war was arguably a win because of realistic goals (get Iraq out of Kuwait and stop them from invading it again), while most other interventions in the region were basically "designed to fail", and unsurprisingly never achieved anything of note (and the problem was not lack of military capability).


Yes but if you spend some billions of dollars to replace the Taliban with the Taliban, you have only demonstrated that you are willing to make your own citizens suffer with diminished resources for no outcome.

>If you're walking home from work and some person tries to mug you, even if they are unsuccessful, that will permanently change your behavior as if they had successfully robbed you anyway. Maybe you'll change your route. Maybe you won't walk and drive instead.

In global politics, this tends to make you want to increase your defenses so it doesn't happen again, and find local partners for that defense. This usually comes at the cost of US influence, not its increase.

Like Iran is looking at its current situation and going "The literal only deterrence we could have to prevent this is to develop a nuclear capability. The US cannot be trusted to make deals with, and it is pointless to try."

A nuclear Iran can now only be avoided by scorched earth. Scorched earth will just make an already partly US-hating population hate them more and create martyrs. There's no possible upside to this conflict.


With Afghanistan, I think people fixate on the fact that the Taliban is still there, and while that's true, Al Qaeda has been completely wiped out (except fringe groups that have adopted the name), and OBL, the person most responsible for 9/11, was successfully killed by an attack launched out of Afghanistan. The current Taliban and whatever terrorist groups remain in that region no longer have an interest in hurting the US directly. The current Taliban is also very different from the one in 2001, almost geopolitically flipped in some ways (allied with India instead of Pakistan, and almost certainly responsible for majorly disrupting China's OBOR project in that region, another win for the US).

Not to mention, 20 years of no Taliban. An entire generation of Afghans grew up without being under a Taliban government.


“A Kourier has to establish space on the pavement. Predictable law-abiding behavior lulls drivers. They mentally assign you to a little box in the lane, assume you will stay there, can't handle it when you leave that little box.” - Snow Crash

Is it strategic deterrence, or just being so unreliable and inconsistent that insider information becomes more valuable?

Is it strategic to demonstrate a lack of planning or that you are a poor ally incapable of garnering support (either domestically or abroad)?


The term of art for losing politically is “losing”.


War is fought to achieve political objectives. If those objectives are not achieved then it is only fair to say you lost the war.

Where to go next? I don't think anyone has gotten close to automating everyday PC usage, likely via screen capture and raw keyboard+mouse inputs. Imagine how much bigger that market would be than vibecoding.

tbh I don't think this use case is going to be as big as people seem to think

there are a lot of reasons, but in brief - I think AI desktop use is a product that the average person isn't going to get much value out of. to make an analogy - the creators of Segway thought people would buy them in large numbers, but it turned out most people don't mind walking manually (or at least, don't mind it enough to spend money on a scooter). I think makers of AI Desktop Use products are going to find out the same thing as it relates to everyday tasks like checking email and shopping.


I was thinking more of remotely managing a computer in a warehouse, or replacing the mouse for an architect or some physical-object engineer. That your grandma can finally find Discord by speaking to such a bot is just a nice side effect.

well yeah, I wasn't even talking about professional use, since I think in professional use cases it will turn out to make a lot more sense to set up APIs for AIs to use than to set up screen scraping and mouse+keyboard use.

in fact, even in the rare cases where it's not possible to get an API or CLI to interface with some piece of software, I think people will find that their best bet is to first create a deterministic screen-scraping program for that specific software, then have that program serve an API for the AI to use. it would be so much cheaper to run (inference-wise) and so much more reliable than having the AI itself perform the image interpretation and clicking.
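a minimal sketch of that pattern, with a hypothetical `read_screen_text()` standing in for whatever scraping library you'd actually use; the agent only ever sees the structured output, never the pixels:

```python
import json

def read_screen_text() -> str:
    # Hypothetical stand-in for the scraping call; a real version might
    # use an accessibility API, OCR, or pixel-level template matching.
    return "Order: 1042\nStatus: shipped\nTotal: 19.99"

def scrape_to_payload() -> str:
    """Deterministically parse the scraped screen into JSON.

    The AI agent calls this (e.g. behind a tiny HTTP endpoint) instead
    of interpreting screenshots itself: far cheaper and more reliable.
    """
    fields = {}
    for line in read_screen_text().splitlines():
        key, _, value = line.partition(": ")
        fields[key.lower()] = value
    return json.dumps(fields)

print(scrape_to_payload())  # {"order": "1042", "status": "shipped", "total": "19.99"}
```

the deterministic parser is where all the per-application fragility lives, but it fails loudly and cheaply, instead of an LLM silently misreading a screenshot.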

I see AI desktop use as mainly a consumer product for that reason, since that's the situation where you have to react "on the fly" to whatever the user asks you to do and whatever program happens to be on their computer (versus professional cases which are more large-scale and repetitive, and where you can have a software developer on hand).


Automating GUI use is a silly idea when the AI can do much of the same things by getting access to a *nix command line - which is how all coding models work. It matters when driving proprietary apps or browsing websites that aren't providing a clean machine-readable API, not really otherwise.

Do you regularly find text content that you know is AI written (but is not marked as such)? Because honestly I don't, and it must exist in decent quantity by now. Or perhaps it's still sparse?

Have a look here [1] and here [2] - I think they are good resources, but fallible in the long run. I think yes, I do, often confirmed by communication with people I know (i.e. I suspect they have used AI to make something, so I ask). This falls victim to confirmation bias, though. I suspect a nontrivial amount of the writing I read is AI generated without me realising, and I'm wary also of falsely flagging AI-generated content that is actually from humans.

[1] https://en.wikipedia.org/wiki/Wikipedia%3AAI_or_not_quiz [2] https://en.wikipedia.org/wiki/Wikipedia%3ASigns_of_AI_writin...


Okay, but the answers in [1] look something like:

AI generated. Some of the clues include:

- Most obviously, a failed ISBN checksum

- Other source-to-text integrity issues; for example, the WWF source says very little about Malaysia specifically, only mentions Sunda tigers (Panthera tigris sondaica), and does not mention tapirs at all

- Very short yet consistent paragraph length

- Generic "see also" links, one of which is redlinked

This is not the sort of thing that I pay attention to unless I'm doing detailed research. And even then I'd probably have a bot check these for me, ironically, since it's such a mechanical job. At the very least detecting AI like this requires conscious effort.
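The ISBN part at least really is mechanical. As a sketch of what such a bot check would do, here is the ISBN-13 check-digit rule (alternating weights 1 and 3, sum modulo 10):

```python
def isbn13_valid(isbn: str) -> bool:
    """Check the ISBN-13 check digit (weights alternate 1, 3)."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # Weighted sum over the first 12 digits; the 13th is the check digit.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

print(isbn13_valid("978-0-306-40615-7"))  # True: checksum holds
print(isbn13_valid("978-0-306-40615-3"))  # False: fails the checksum
```

An LLM that hallucinates a citation tends to invent the digits freely, so roughly nine times out of ten the checksum fails, which makes this a cheap first-pass filter.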


Ok, but like, what about [2]?

I can easily tell AI writing. I'm sure plenty goes under the radar, but I can still catch a lot.


I think the second resource that you linked to is valuable. The first is useless unless you're a Wikipedia editor, the significance of verifying citations notwithstanding.

The gap between LLM-generated writing and the composite style of the average Wikipedia page is narrower than most people may believe.


Yes, here, reddit, X, at work in people's emails and status reports.

You will start to recognize it over time. The major AI models each have their own voice and patterns that they overuse.

The more you see those patterns the more you start recognizing them. By now I can recognize quickly if a blog post or README.md was generated by Claude or ChatGPT because the signs are so obvious.

Even Hacker News comments that are AI written are easy to spot if they weren't edited. I know I'm not alone because when I recognize an AI comment I check their comment history and find other people calling out their AI-generated submissions, too.

Learning how to recognize the output of the popular AI models is becoming a critical business skill, too. You need to be able to separate out the content from someone who was doing real work that you should take seriously as opposed to the output of someone who is having ChatGPT produce volumes of text that they don't review. The people who do that will waste your time.


I don't see how to interpret your claims. How do you yourself know that you're right when you "recognize" Claude or ChatGPT? How do you know how much of the text you don't recognize as any LLM is actually LLM-generated? My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.

I'm not claiming to be able to spot 100% of LLM written output

However the default tone and output style of Claude and ChatGPT are very obvious.

> My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.

If you can share that data we can discuss it, but there's nothing really to discuss here without a source

Among people who review a lot of user-submitted content, it becomes easy to spot the consistent voice of LLM writing. Wikipedia has a full page on it: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


It’s very obvious if you leave the default tone. If you specifically ask it to hide its AI voice and appear human, it does a really good job. Even better if you give it an example of the writing style.

Ask it to write in the style of patio11 or someone else with a distinctive tone, and it will do a remarkable job.

It will pass pretty consistently. Not sure I love it.


This is a temporary problem. Look at how fast things are progressing. Things will improve until none of this matters because the output is indistinguishable.

I wish I could be this confident about the future.

It’s not confidence. It’s the most realistic trendline from the last 3 years. Chatbot to agentic coding.

Yes, often, and often here on HN or Substack if I point it out, it doesn't lead to anything good. Many don't recognize it, many do, the author gets defensive etc.

This article doesn't have the tells, it looks human written.


Literally every day from green accounts on Hacker News, and in many, many TFAs.

My comment history is months of me pointing it out about articles here. You're just not noticing it, it's everywhere and is extremely obvious to me.

It's possible I should envy you, I'm not sure.


For example the first frontpage post I read just now (I haven't checked others) is I'm fairly sure written with the use of AI (I would guess based on a human draft): https://news.ycombinator.com/item?id=47566442

I can't prove it but I'm comfortable enough in my judgment to say it.


Yes, all the time.

HN and YouTube are the worst offenders for me.


I see it all the time in basically every form of text communication. What makes you think you are not seeing it?

I found that many people don't have a radar for this. They may know about "delve", em-dashes, "tapestry", "multifaceted", or "not just X but Y", and if those are not there they don't see it.

They probably don't care enough to notice the tells. I think that it's generally those ambivalent, skeptical or opposed to AI who notice, while those who wholeheartedly support AI see no reason to differentiate between it and humans and so do not even try to.
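As a toy illustration of why leaning on the known tells is so weak: a naive scanner only flags the clichés named above (patterns picked from this thread, not a real detector), and AI text that avoids them sails through.

```python
import re

# Surface tells people commonly know about; a fully AI-written text can
# trip none of them, and a human text can trip several.
TELLS = [
    r"\bdelve\b",
    r"\btapestry\b",
    r"\bmultifaceted\b",
    r"\u2014",  # em-dash
    r"\bnot (?:just|only) \w+(?:,| but)",
]

def naive_tell_count(text: str) -> int:
    """Count how many of the known tells appear in the text."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in TELLS)

print(naive_tell_count("Let's delve into this rich tapestry."))  # 2
print(naive_tell_count("Sounds fine to me, nothing to flag."))   # 0
```

Anyone who prompts the model to avoid these words defeats the list entirely, which is why the people who actually catch AI text are going on rhythm and structure, not keywords.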

I don't think it's that simple. I'm not blanket opposed to it. I'm more along the lines of the author of the article. Use it for what it's good at, sift through unstructured info, convert information from one format to another, implement things that are planned out well with iterations and feedback, etc, and generally mapping out the capabilities.

I think those who are very opposed to AI often don't know much about the real limitations since they don't use it, and their complaints are often a year or more out of date.

I think the ideal demographic for spotting these is people who use the frontier LLMs a lot and have also worked with text in detail: copywriters, people who have studied foreign languages and grammar, edited articles for language, and generally take a more "wordsmith" view of writing, sensitive to its flow and rhythm on a technical level.


I'm pretty sure this was written or heavily edited by an llm.

https://www.seriouseats.com/eggplant-grilling-tips-11759622


All the time, especially on LinkedIn.

There are at least two comments in this submission from green accounts if you enable showdead.

Someone still has to.

This seems to be mainly about the so-called negative symptoms, not positive symptoms (like hallucinations or delusions). While it is often hard to argue with people about their positive symptoms in schizophrenia or in mania, pretty much nobody who has negative symptoms wants to have them. The fact that antipsychotics do little about the negative symptoms is probably the biggest pain for schizophrenia sufferers - and they are aware of that.

Also, and this depends on the jurisdiction, but people can be forced to take psychiatric medication against their will, or even forced to go through a treatment like ECT, for example when presenting with strong and dangerous mania. BTW, ECT has an extremely unfair popular reputation; it's one of the best treatments in all of psychiatry. It can even be impossible to get a response from the patient, for example if they are catatonic; if they don't budge within a reasonable time you just inject them with benzodiazepines, as this is a serious condition if left to last a long time.


ECT just comes across as a bit barbaric. I'd welcome more research into psilocybin to achieve a system reset.


Psychedelic therapy is unlikely to ever become mainstream treatment for serious mental illness like schizophrenia or bipolar disorder. In those cases, it’s significantly more likely to cause more problems than it solves. In general these new faddish psychedelic treatments are mostly effective for bougie mental illness – mild depression, anxiety, stress – but they do not belong in a treatment regimen for serious illness. There is a reason every psychedelic treatment study excludes participants with schizophrenia and bipolar.

ECT might seem barbaric and unsexy compared to dosing some psilocybin and listening to some ambient music in a cozy room, but that doesn’t reduce its clinical value for people with serious, treatment resistant disorders.


People do try psilocybin, or ketamine, or frankly just about anything. Esketamine even has regulatory approval as a treatment. Research is sometimes posted here on HN. But nothing seems to be as effective as ECT; it truly is the king of treatments for affective disorders.

BTW, and not many people know this, it is a procedure performed under full anesthesia, including muscle blockers. From the outside it looks very calm, and from the inside the patient's experience is pretty much identical to taking a nap.

It is not risk free, precisely because of the anesthesia, so in most areas one can only get it if they try enough other treatments - like 2 or 3 or something like that, ideally from different classes of drugs. But definitely do consider this if you're suffering and nothing seems to help (enough).


My great friend, when we were 20, shot himself in the head while we were doing shrooms. This is not an uncommon occurrence. Thousands of incidents of self harm happen every year in the US alone because of these drugs.

I would advise anyone against this. Don't believe the weird hype (which mostly comes from a few small cliques of people looking to profit off this drug) about mushrooms being some spiritual, mental catch-all. If you have any sort of mental illness you probably should avoid them. Don't play Russian roulette with your sanity.


People do that after getting drunk too.


Depressed people killing themselves as soon as they start to get treatment is a known phenomenon. The energy that comes from treating depression gives them just enough oomph to get to the 'finish line.' It's quite possible they were thinking about it and never told anyone and everyone thinks it's a complete surprise caused by the drug, plenty of people have suicidal ideation or depression while giving zero indication or clues to anyone close to them that would be the case.


Even outside of drugs: a breakup happens and this can be the result.

Maybe if guns weren’t so accessible, people wouldn’t be so quick to use them on themselves in those moments. There’s a statistic out there that a gun in the home is most likely to harm its owner.


The Mossad and various abrahamic apocalyptic cults will finally push us into green tech. Maybe I judged them too harshly.


Someone else already wrote it, but it's just too funny to not abuse:

Evals are bad because people learn and fit to them. So we do extremely small evals instead.

