> I’ve been an engineer for almost 40 years and love seeing what Claude Code can do.
You would say that because otherwise you'd be afraid of being seen as "too old for this job", and hence of risking getting kicked out of it all, meaning no future employment opportunities. I know that feeling, because I myself have been doing this programming job for 20+ years already (so I'm not a young one by any means), but let's just cut the crap and tell it how it is.
> You would say that because otherwise you'd be afraid of being seen as "too old for this job"
Um... I am still an active reverse engineer of both ring-0 and ring-3 applications on both macOS and Windows (I worked on both the VS and Xcode teams). I'm developing a new tool for macOS that allows users to "see behind" active windows without the constant need for cmd/alt+tabbing. My age has zero bearing on my skill set or ability to understand technology. https://imgur.com/a/seymour-r9whXO5
> let's just cut the crap about it all and let's tell it how it is
The reality is, as I said, that this technology exists and it isn't going anywhere. Young people are going to use it as a tool just like we did when GUI operating systems first became prevalent.
I don't even remotely buy into the AI hype, but I'm not going to put the blinders on either. There is utility in this technology.
Really? That's a lot of presumption and reductionism about LLM enthusiasts.
People of varied ages already leverage LLMs on a daily basis. And LLMs will only get better.
Yesterday, Opus did work for me that would have taken me weeks. And the result was verified with a comprehensive suite of unit tests plus smoke tests by myself. The code looks exactly like the rest of the code in the 10+ year-old, hand-written, enterprise project; no slop.
And you actually should be afraid of being left behind in dev-related fields if you don't use LLMs. In most areas, in fact.
Once the market corrects for LLM-assisted production, expectations will rise. So right now there is a small window to leverage LLMs as a time-saving advantage before it becomes the norm and everyone is forced to use it because expectations will reflect that.
> Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands. (...)
> The supply chain risk designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China. It could severely impact Anthropic’s business because enterprise customers with government contracts would have to make sure their government work doesn’t touch Anthropic’s tools.
Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
More generally, it's quite interesting to look at the similarities between how pre-2022 Russia was seen and how the pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business (oligarchs in Russia, big corps/multinationals in the US).
But when push came to shove it became evident (again) that the one that holds the monopoly of violence (i.e. not the oligarchs in Russia, nor the big corps in the US) is the one who's, in the end, also calling the shots. Hence why a company like Anthropic is now in this position, they will have to cave in to those holding the monopoly of violence.
> Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
It's also an existential risk to them if they cave in. What is the point of the company's existence if it's just another immoral OpenAI clone? May as well merge the companies for efficiency.
It's outrageous that the government is using the "supply chain risk" threat as a negotiating tactic. I know, I know, for the current administration it's unsurprising, but this is straightforward abuse of authority. There is no defensible claim that using Anthropic is a risk to anyone not trying to use it for murder or surveillance. At worst, it could be seen as less effective for some purpose, but that is not what "supply chain risk" means.
Could this be challenged in court? As in, could a challenge win?
Horrible stuff is happening every day, so outrage fatigue is real. Still, try not to normalize it. Explain to yourself exactly why something is or is not a problem, before moving on to attempt to live your life.
> pre-2022 Russia was seen and how pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business
Who on earth believed that Russia was anything but a de facto dictatorship for roughly the past two decades? Putin murdering with impunity has been a running gag since 2003[1].
> Who on earth believed that Russia was anything but a de facto dictatorship for roughly the past two decades?
There were lots of people in the Western media who genuinely believed that Putin would be toppled by Russian oligarchs just after the war in Ukraine got more intense in February 2022, on account of "this war is bad for the business of Russian oligarchs, hence they'll get rid of Putin". From the horse's mouth, a CNN article from March of 2022 [1]:
> Officials say their intentions are to squeeze those who have profited from Putin’s rule and potentially apply internal pressure for Russia to scale back or call off the offensive in Ukraine.
That "internal pressure" is mentioned in connection with the bad oligarchs, in fact as an implicit anti-thesis of those bad oligarchs "who have profited from Putin’s rule", the implication being that there were other oligarchs, supposedly the good ones, who would have forced Putin's hand to end the war. That did not happen, was never in the cards to happen, in fact.
It might well have been, but the fact that the West sanctioned the Russian oligarchs first (even before 2022) showed that they really did believe in that wishcasting, i.e. they really did believe that the oligarchs would react "economically rationally" and do something about Putin so the sanctions would go away.
Sanctions on their own don't prove that they believed the oligarchs could do anything about Putin. Arguably Putin's oligarchs are merely his appendages, so hammering them indirectly hammers Putin and the Russian war machine.
Can someone explain to me like I'm 5 how the government would invoke the Defense Production Act and force the company to tailor its model to the military's needs?
For physical goods I understand, but for software, how exactly is this possible? Will the government force them to provide API access for free? It's confusing.
My guess? Require them not to apply the reinforcement learning that implements guardrails when training a custom model. I think Anthropic has some of this built in already and couldn't alter it without retraining, but there's tons more layered on top.
Saw that, too, but at some point one cannot just stand like sheep in the slaughterhouse, the reaction was to be expected (even though it could have happened in a more civilized way, not via personal-ish attacks, I agree with that).
More generally, there are now literally trillions of dollars being invested in this madness/tsunami/whatever-one-wants-to-call-it, which means it has become impossible to follow said money and thus trace the conflicts of interest. (It's easy to assume a conflict of interest for a guy like Karpathy given his past and recent employment history, but I do think Simon is more on the genuine side.) That is why the counter-reaction is now manifesting itself so chaotically, hitting left and right with no necessary logic behind it, which means there are going to be collateral "casualties" along the way (such as Simon in this case).
Not sure if you're correct, as the market is betting trillions of dollars on these LLMs, hoping that they'll be close to what the OP had expected to happen in this case.
The GP’s point was about LLMs generally, no matter the interface. I agree that this particular model is (relatively speaking) ancient in the AI world, but go back 3 or 4 years and this (pretty complex “reasoning” at almost instant speed) would have seemed straight out of a science-fiction book.
As an EU citizen this is damn nice. The US might have some things to still work on/improve, but when it comes to freedom of speech it is still light years ahead of everybody else, and good for them.
The unspoken truth is that tests were never meant to cover all aspects of a piece of software running and doing its thing; that's where the "human mind(s)" that actually built the system and brought it to life were supposed to come in and add the real layer of veracity. In other words, "if it walks like a duck and quacks like a duck" was never enough, no matter how much duck-related testing was in place.
In a capitalistic society (such as ours) I find what you’re describing close to impossible, at least when it comes to large enough organizations. The profit motive ends up conquering all, and that is by design.
It's clearly possible for companies to self-impose safeguards: ESG/DEI, Bcorp, choosing to open source, and so on. If investors squeal, find better investors or tell them to put up with it. You can make plenty of profit without making all the profit that can be made.
Curious if today's Berkeley professors would still wear Alphabet (formerly Google) t-shirts while giving presentations; I now realise that things have changed a lot in the last 10 years.
I've also not gone through the whole presentation, but does he at any point talk about the moral choices one will most definitely have to make during a career in tech? (This relates to the previous paragraph.) Is it a "bad career" if people choose not to work for companies (such as Alphabet) that have gone all in behind AI? Seeing as AI is now used by state entities for very nefarious purposes. Like I said, 2026 is way different compared to 2016.
I’ve been thinking a lot more about this lately. Big Tech today is far more powerful than 1990s Microsoft and 1970s IBM ever were. I’m not anti-AI, but the sheer power that major players like OpenAI, Alphabet, Microsoft, and Meta have makes me very nervous.
The challenge for computer science researchers who have qualms about working for Big Tech is finding an alternative career path. Speaking from an American point of view, academia has always been competitive, and the immediate future of research funding is uncertain given the political climate. This uncertainty also extends to government labs. The challenge with industry research is that there are not a lot of non-Big Tech employers of computer science researchers. This leaves starting a business, but business is very different from research.
I’m a tenure-track professor at a community college in the Bay Area. While I’ll never be able to afford to purchase a home near my job, I am able to live well as a single man renting an apartment. I have a great career teaching and using my long summer breaks for research and side projects. I like not having to worry about “publish or perish,” and I enjoy teaching and mentoring students. While this might not be considered “successful” for some people who are aiming for a professorship at an R1 university or an industry job at a top company’s top lab, I love my job and believe it’s a fantastic route for someone who enjoys teaching and who also wants extended time during the summer for research and side projects.
> Big Tech today is far more powerful than 1990s Microsoft and 1970s IBM ever were.
In aggregate, sure, but no company today comes within an order of magnitude of the power an IBM of the ‘70s and ‘80s or a Microsoft of the ‘90s and ‘00s had over the tech landscape.
1970s IBM and 1990s Microsoft were formidable monopolies, but I was thinking in lines of influence over society and not necessarily in terms of market share. The consequences of social media and centralized Web services are much more impactful on society, for better and for worse, than dominance over 1970s mainframes and 1990s desktop operating systems, Web browsers, and office suites.
That power isn't even a rounding error compared to their power today. In those decades you had commercial dominance over a niche sector. Today you play kingmaker together with the other oligarchs.
Big Tech today is the media. Taken together, they completely control what a majority of the populace knows about the world. It is considered completely impractical and somewhat suspicious not to carry one of their location tracking communication devices at all times.
Orwell's Ministry of Truth could not dream of what Meta, Alphabet, OpenAI and Apple can do at any time, anywhere.
Maybe it's my own bias (after working for a big tech company for 5 years), but it's certainly not the dream job I thought it would be. Soulless corporations with questionable impact on society, lots of turnover, increasing pressure every year, fear of layoffs. Even the tech isn't that exciting: there's lots of tedious work, technical debt, and hacked solutions, and no time for researching and building quality solutions.
That being said, it's certainly different for researchers. I can imagine that being a researcher at Google is more fun than being a median SWE in another FAANG. But still, I find these companies less enticing in general, even the products tend to degrade as they keep pushing the monetization.
I’m an ex-industry researcher with experience at FAANGs, albeit as a software engineering intern (Google) and a production engineer (Facebook).
I think it depends on the interests of the researcher. If a researcher is comfortable being a “brain for hire,” solving research problems that are driven by business needs and that require short-term or medium-term results, then I think there are plenty of opportunities at large companies, including the FAANGs. I find research more fun than software engineering, but researchers are far from immune from pressures to ship.
If a researcher is more interested in curiosity-driven work and wants to work on a longer time frame, I’m afraid that there’s no place in industry, except for maybe Microsoft Research (which I’ve heard changed under Satya Nadella), that supports such work. The days of Bob Taylor-era Xerox PARC and Unix-era Bell Labs ended many decades ago, and while there were still curiosity-driven labs in industry well into the 2010s, I have witnessed the remainder of these old-style labs change their missions to become much more focused on immediate and near-immediate business needs.
My experience as an L5 Google Research research scientist is that I have a lot of freedom as long as I can show I've made progress on one or two things the company cares about at each annual review. IMO, this is about as much research freedom as is reasonable for me to request.
15-20 years ago I was a total Google fanboy: I liked what they did with their search, and I loved their overall ecosystem and perceived culture. I even have a (now orphaned) googlemail email address.
Nowadays? I wouldn't touch anything that comes from Google (granted also not from any other big tech company) with pliers.