Hacker News | kelnos's comments

I get the general frustration there, but it's weird to focus on NASA's budget when it's such a teeny tiny fraction of the total.

Yes, there's a lot of government waste, but NASA ain't it.

And I would suggest that the billionaire class and unfettered capitalism are far more responsible for the modern day version of Scott-Heron's woes than the good ol' government scapegoat.


If DOGE served any purpose at all, it was to show that there isn't even that much "waste" per se. If there's any waste, it's in the Pentagon, which can't even audit itself, but of course DOGE never even got close to that. It was all performative for them.

I think they proved that the waste is not easily defined. I would call fraud waste, but a computer program isn't likely to discover it without boots on the ground checking whether the money is actually going where the records indicate.

The richest person in the world, who has received billions in government handouts, decided they were going to audit government spending.

Fraud doesn't even begin to describe it.


SpaceX did not receive government subsidies. Government contracts, yes, but those are payments for services delivered, not subsidies or handouts.

I'm pretty enraged that the government was illegally taxing me, and now that those taxes have actually been found to be illegal, I'm not getting a refund.

Corporations claiming the refund on my behalf (and then not propagating that refund to me) is just icing on that shit-cake.


I think what's missing in your (and the plaintiff's) analysis is that the government did not illegally tax you, it illegally taxed importers. The fact that those importers "chose" to raise their prices as a reaction is their business decision.

I think what you and the plaintiff need to show (direct connection between supplier costs and consumer prices) fundamentally goes against free business in the US. I mean, companies change prices all the time for whatever reasons they want, no?

But IANAL, "unjust enrichment" is apparently a real claim (though not sure if it applies to a store-consumer relationship) and consumer protection laws exist, so maybe I'm wrong.


It's this and I don't know how people here can't see it. You're getting fleeced by corporations as they walk away with all of the money thanks to an illegal tax by the US government on most consumer goods.

Companies get to benefit from higher prices being standardized (once a price baseline goes up, prices rarely come back down), and they get another check from Uncle Sam.


Or maybe the author is just a competent writer.

Yes. Let's assume so. My point is the suspicion itself.

I hate that this is the first thing that crosses my mind now anytime I read a well-written article.

But do you believe that they'll continue to improve until they're good at everything, all the time, in ways a human can never match?

If yes, then that's dangerously optimistic. If not, then we'll always need humans who have learned the "hard way" (the Alices, not the Bobs). But if LLMs make it impossible for Alices to come up in the field, we're screwed.


I think that a lot of software engineering work is a lot simpler than people like to think, and that the demand for Alices is far outweighed by the demand for Bobs. I think there will always be a place for Alices, but there will be a drastic reduction in the workforce. I think all of this unconditionally about future improvement in AI - in my view the models today are more than capable of bringing about this shift, it will just take time.

But can Bob actually do that with agents, without limit? Right now, he's going to hit a ceiling at some point, and the Alices of the world will run circles around him.

The question is: will agents improve to the point that even the most capable Alices will never be needed to solve problems? Maybe? Maybe not? I'm worried that they won't improve to that degree.

And even if they do, what is the purpose of humans in this world?


I think the real issue is that no, he can't, but corporate and government entities that decide won't care. Things will simply get worse. The problems will be left to fester as things that simply "can't be done".

> we're trending towards superintelligence with these AIs

The article addresses this, because, well... no, we aren't. Or maybe we are. But it's far from clear that we're not moving toward a plateau in what these agents can do.

> Whether a human does actual work or not isn't particularly exciting to a market.

You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough, and be able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.


There's no good definition of superintelligence. A calculator is already way more capable than any human at doing simple mathematical operations, and even small AIs for local use can instantly recall all sorts of impressive knowledge about virtually any field of study, which would be unfeasible for any human; but neither of those is what people mean when they wonder whether future AIs will have superintelligence.

General superintelligence is better defined; I assume that is what he meant. When I hear "superintelligence" I assume they just mean general superintelligence, as in: it's better than humans at every single mental task that exists.

> But it's far from clear that we're not moving toward a plateau in what these agents can do.

It is a debatable topic, and I agree with you that it's unclear whether we will hit a wall at some point. But one point I want to mention: back when AI agents were only conceived and the most popular type of """AI""" was the LLM-based chatbot, it also seemed that we were approaching some kind of plateau in performance. Then "agents" appeared, and that plateau, the wall we were likely to hit, was pushed further out. I don't know (who knows at all?) how far we can push the boundaries, or what comes next. Who knows, for example, when a completely new architecture, different from Transformers, will come out and be adopted everywhere, allowing for something new? The future is uncertain. We may hit the wall this year, or we may not hit it in the next 10-20 years. It is, indeed, unclear.


Are agents something special? We already had LLMs that could call tools. Agents are just that, in a loop, right?

Roughly speaking - yes. Still, it's an advancement - even if it's a small one - on the usual chatbots, right?

P.S. I am well aware of all of the risks that agents brought. I'm speaking in terms of pure "maximum performance", so to speak.


I think that's a bit different than the argument being made. We should still always use htonl() and ntohl() etc. when dealing with protocols that use network byte order (a shame we're stuck dealing with that legacy). I think even if all big-endian machines magically disappeared tomorrow, we should still do that (instead of just unconditionally doing a byte-swap).

But for everything else, it's fine to assume little-endian.

You sound like some sort of purist, so sure, if you really want to be explicit and support both endiannesses in your software when needed, go for it. But as general advice to random programmers: don't bother.


So? The same problem exists for having the OS broadcast the user's age range to all apps/services/websites: the service outside your jurisdiction doesn't have to actually restrict content based on age.

At least with the reverse system (services broadcast an age rating), you have some nice properties:

1. You can set it up so that if the service doesn't broadcast an age rating, access is denied.

2. You aren't leaking age information (even if it's just a range) to random websites outside your jurisdiction.


> instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child.

Isn't that how parental controls already work?

There are problems, though:

1. The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

2. Web browsers. Unless the parent is going to uninstall and disallow web browser use, the kid can still sign into whatever service they want using the web browser. I don't think parental controls block specific sites, and even if they do, there are ways around that, certainly.

I am very often the person who says that parents should actually parent their kids and not rely on the government to nanny them. But in this case I think there actually is value to the government making laws that make Facebook (etc.) less evil. And as a bonus, maybe they'll be forced to be less evil to adults too!


> The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

If so many people give their kids phones and so few don't, why ban them in the first place? Clearly the vast majority of parents are fine with their kids having one.

You're just inventing a problem, then. Or worse, pushing a conservative talking point.


Had this problem with my kid: social media caused serious mental health issues. Toxic content in kids' areas.

But taking it away was worse.

Once “not using it” isn’t an option, government intervention becomes reasonable.


1. The current norm of social siloing apps was created by these tech companies in the first place. What regulators can do is discourage anti-competitive practices that lock users into specific software and hardware platforms. If there's plenty of competition for every kind of social app, and competition for OSes, and users could freely choose and move between them, then not having a particular app would not result in social isolation. This affects adults as well.

2. The OS has a firewall. But it's currently not user-controllable on your phone. Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites. We can even make it possible to import/export site lists as a txt file so that you can download/share a curated block list that you or other parents made, to block many things at once. You could also do this for your entire home WiFi network in your WiFi router's settings, if your router's firmware has that feature.

And yeah, I agree that we should make the platforms less evil in general. But I think the way to do that is to give people the ability to easily ditch bad platforms and build new ones. Let the platforms actually compete, then the best will prevail. Right now, they don't prevail because of layers and layers of anti-competitive barriers. It would take great technical effort to regulate all the tricks these tech companies use, that's why I propose dealing with it at the root: make it so that all computer/phone hardware manufacturers must open-source their device drivers and firmware, and let the user lock/unlock the bootloader and install alternative OSes. If we do this, then the entire software ecosystem will fix itself over time along with all the downstream problems.


> Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites.

iOS: Settings > Screen Time > Content & Privacy Restrictions > Toggle on

Then same area:

- App Installations & Purchases: disallow all

- App Store, Media, Web & Games > Web Content > Limit Adult Websites > Fill in allowlist and/or denylist, or Only Approved Websites and fill in allowlist


Apple is indeed better than most other companies on #2. But that's because it's the worst offender on #1. Its strategy is to appear to be the model company that cares about user rights and privacy, in hopes of capturing everyone in their closed-source walled garden that's already surveilling you at the OS level.

They're a part of the corp-gov surveillance complex [0]. This is the real threat behind the age verification push. The feds already have mass surveillance capabilities in iOS and macOS, and even Windows and most Android distros, but not on most open-source Linux distros, so they're starting to force it legally in the open. They're desperate because Linux is about to outcompete the enshittified Windows on desktops.

[0] https://en.wikipedia.org/wiki/Edward_Snowden#Revelations


It's possible to mandate effective parental controls and then say "it's illegal to give your child access to facebook" and then just see what happens. You don't have to jump straight to making it technologically guaranteed by construction, maybe it's enough to just give parents the tools and an excuse to say no.

We don't need DNA testing locks on cans of beer that won't let you drink from them unless you're an adult, do we? It's perfectly possible for a parent to buy their child all the beer they want, and there's nothing stopping the children from trying to peer pressure them into it, and in many countries it's not even generally illegal to let your child drink beer! And yet almost all parents are able to almost completely enforce a reasonable level of restricted access, simply because society frowns upon it.


> I don’t see how that’s better in any real way.

It's so much better. In the one case, the OS is leaking age information (even if just an age range) to every service it talks to. In the other case, the OS isn't telling anyone anything, and is just responding to the age rating that the app/service advertises.


That response reveals exactly the same information.

How would you implement a feed of mixed content? Say you're YouTube and some videos are about puppies and some videos are about guns? How would you hide only the gun videos from the homepage when the user is under 16?

Why does YouTube allow videos about guns but not boobs?
