I also remembered his post about dropping $50k on the site redesign*
I actually thought it was a big W for him when I saw this post. But I guess, if you consider the opportunity cost of Google employment, it's a financial L.
Based on my reading here, he was just offered a no-interview re-hire at Google and decided not to take it. So calling that an L or a W seems to take too few factors into account.
It's funny, since Stack Overflow has done EXACTLY this since day one (i.e. generated cash from user knowledge provided for free).
The only difference is that SO uses community, gamification, and reputation facades to convince users to participate for free.
With OpenAI it's simply a black box; no credit is given.
So I guess the lesson is people are willing to participate and share things for free, as long as they're given credit, community standing or something along those lines.
For me it is less about credit and more about access. Stack Overflow is public and freely available - I'll give answers for the benefit of the community. ChatGPT is a product; it's locked behind accounts and limited unless you're paying.
They changed the deal on their end? I’ll delete my posts.
But your answers will still be available on SO, unless you remove them. Your answers were free and publicly available until you removed them. Making them also available to paying customers of ChatGPT does not change that at all.
In fact, ChatGPT will probably still be able to answer those questions, so by removing your answers you actually only remove them from the public, thus forcing people to use a paid product instead.
You had one goal, and your actions achieved the opposite.
Siuan Sanche's law of unintended consequences ought to be taught in primary school. Unfortunately it isn't.
Wow, it had been years since I read a Barry Schwartz post; he's been an SEO authority since back in the day. I didn't realize his forum had turned so nasty.
Funny you mention 'No content creator thinks to themselves, "let me go write my next article on Reddit"'. Schwartz and many other SERP/SEO experts talked about writing for Medium, circa 2013, to raise their Google rankings, back when everyone jumped on the Medium bandwagon.
Google is bleeding end users and content creators alike. On one end, search results are getting worse for end users, and AI offerings at many price points (free or $20/month) plus ad-free paid search (Kagi) are eating away at Google's market share. At the other end, content producers, who had a symbiotic revenue-sharing relationship with Google, are also jumping ship.
As you point out, Google will likely never recover; they dropped the ball at both ends: a worse end-user experience and worse ad revenue sharing, both of which were their lifeblood. I think in a few years Google will be like Yahoo search or AOL email before it: they will still have users, but most likely not by free will, rather users landed through OEM/marketing deals.
Regional and national media are swallowing it up. Yeah, the country needs the investment, but at what price? The government is still mum on what it took to 'land' the deal (tax/land breaks).
I remembered the Amazon HQ2 hoopla from years back, when U.S. state/local governments were bending over backwards to land it, offering a lot of incentives. And I just looked yesterday: that deal (HQ2) is still on hold, and it was for just as much ($5 billion).
Conflating the three is easy, because instead of paying for delivered value, the company is doing arbitrage on location (like it could do with gender, race, or anything else).
All companies will maximize profit or reduce costs where they can, but it's a slippery slope once a subjective metric is used to determine value.
I, for one, live in a "low" CoL area, but my AWS bill is just the same as a person's in NY, SF, or Geneva. Should I also expect a discount because "my income is lower"? Or is it only fair to be billed equally, because the value all of us get is the same?
Turn the tables: if a dev in India, Romania, or Mexico is delivering the same value as one in the US or UK, should (s)he be paid any less? Why?
This argument falls apart when you consider that some projects have zero or negative value. Developers who work on these projects still get paid and we obviously don’t have to write a check when we make a mistake that costs the company money. Nobody actually likes “delivered value” compensation except under hypothetical circumstances where they imagine it can only increase their pay.
The hiring market is a market. Supply and demand drives compensation.
Delivered value isn’t one of those forces driving supply and demand. It sets the maximum an employer can pay someone and still get an ROI, but that’s it.
> Turn the tables: if a dev in India, Romania, or Mexico is delivering the same value as one in the US or UK, should (s)he be paid any less? Why?
Because it’s a job market and you’re bidding for candidates against their other options.
If you’re house shopping and you find an identical 3 bed, 3000 sq. ft. house in all of those markets, would you expect to bid the same for it? Of course not.
The sooner we accept the realities of job markets and supply and demand, the sooner this all makes sense.
Are you really asking why companies take risks on new projects that might not work out? Or expecting that companies can perfectly predict which projects will succeed?
Look at it this way: What if the developers were only paid after the product broke even and started "delivering value"? You think you're going to get a lot of developers signing up to work on a new project that might only pay them if they stick with it for a few years and it succeeds for reasons that include things out of their control (like sales cycles, market moves, etc.)?
Replace "delivered value" with "expected delivered value" and the argument goes through mutatis mutandis. Of course there are uncertainties in the value of unrealized work, but the company is paying because they think the expected value of the work is higher than what they are paying in wages.
So if the developer makes a mistake that lowers the delivered value, who pays? E.g. slower than promised development, things that were promised don't work at all or cost more for less results, etc.
The chance that they won't deliver factors into the initial expectation and is reflected in the hiring and salary decision. You can evaluate this chance both globally and for an individual by considering their interview, past experience, recommendations. Presumably if a developer is routinely not doing their work, the employer will revise their expectation downward, and ultimately stop employing that developer if they are a net negative. I see no problem here beyond the inherent uncertainty that comes with working in a complex world where your knowledge is incomplete.
(edit to add: well, no problem except capitalism, but that's a story for another time)
> Are you really asking why companies take risks on new projects that might not work out? Or expecting that companies can perfectly predict which projects will succeed?
Uncertainty is fine; I just mean that if the expected value[0] is below zero, it's a terrible idea to do it.
> Look at it this way: What if the developers were only paid after the product broke even and started "delivering value"? You think you're going to get a lot of developers signing up to work on a new project that might only pay them if they stick with it for a few years and it succeeds for reasons that include things out of their control (like sales cycles, market moves, etc.)?
Empirically, yes; what you've described is close enough to startup employees with low cash and high stock compensation.
Here's your mistake. Companies have never and will never pay for delivered value. They pay the least they can to get an acceptable candidate. The difference is important, and the mentality that companies are paying for delivered value rather than the minimum acceptable amount leads to these mistaken conclusions often.
As for why? Because their goal is to maximize profit of course. Why pay more when less will do? The overseas dev should be paid less because their opportunity costs are much lower, so they accept a lower wage.
I have never loved learning the details of an obscure communication protocol or the convoluted methods of a library written by someone who wants to show how good they are. It seems like "junk knowledge" to me. LLMs save me from all this more and more every day.
This is either depressing or tongue-in-cheek considering who he is -- the Redis creator -- and that he has an older post titled 'In defense of linked lists'. So talking about linked lists in Rust is not "junk knowledge", nor something an LLM can run circles around any human on.
It's the best "coding nihilism as a profession" post I have read, though.
There is a misunderstanding going on here. A linked list is a pure form of knowledge. What we see today is an explosion of arbitrary complexity that is the fruit, mostly, of bad design. If I learn the internals of React, I'm not really understanding anything fundamental. If I get to know the subtleties of Rust semantics and then Rust goes away, I'm left with nothing: it's not like learning Lisp. Think of all the folks who used to master M4 macros in Sendmail, 30 years ago. I was saying the same thing back then: this is garbage knowledge.
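(To make the distinction concrete, here is a minimal sketch of that "pure" knowledge in Rust. The timeless part is a node holding a value plus a pointer to the next node; only the Option<Box<...>> ownership wrapping is Rust-specific:)

    // A singly linked list: each node owns its value and, optionally,
    // the next node. Box means heap allocation; None marks the end.
    struct Node<T> {
        value: T,
        next: Option<Box<Node<T>>>,
    }

    struct List<T> {
        head: Option<Box<Node<T>>>,
    }

    impl<T> List<T> {
        fn new() -> Self {
            List { head: None }
        }

        // Push onto the front: O(1), the classic linked-list operation.
        fn push(&mut self, value: T) {
            self.head = Some(Box::new(Node { value, next: self.head.take() }));
        }

        // Pop from the front, returning the value if the list is non-empty.
        fn pop(&mut self) -> Option<T> {
            self.head.take().map(|node| {
                self.head = node.next;
                node.value
            })
        }
    }

The push/pop idea transfers to any language; the take()-and-reattach ownership dance is the part that evaporates if Rust ever goes away.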
Today we have a great example in Kubernetes, and in all the other synthetic complexity out there. I'm interested, instead, in learning important ML concepts, new data structures, new abstractions. Not the result of some poor design activity. LLMs allow you to offload this memorization out of your mind, to make space for distilled ideas.
Spot on - it is one of the main reasons I haven't enjoyed programming in recent years; so much of it is learning what you call "garbage knowledge". Yet another API, yet another DSL, yet another standard library. Endless reading of internal wiki pages to learn the byzantine deployment system of my current company. Even worse is when I know exactly what I want, but some little dependency or piece of tooling is broken and I spend hours, or days, trying to debug it.
I, too, find LLMs a balm for this pain. They have a kind-of-basic level of knowledge, but about everything.
In short, it allows for a more efficient expenditure of mental and emotional energy!
Much of programming, coding, and developing is done by a person who is a knowledge worker who writes code. A good proportion of the code to be written will be written just once and never again. The one-off code snippet will stay in a file collecting dust forever. There is no point in trying to remember it in the first place, because without the constant repetition of using it, it will be forgotten.
LLMs can help us focus our knowledge where it really matters and discard a lot of the ephemeral stuff. That means we can be more knowledge workers and less coders. I will push it even further and state that we will keep becoming more knowledge workers and less coders until, eventually and gradually, we are just knowledge workers. We will need to know about algorithms, algorithmic complexity, abstractions, and stuff like that.
We will need to know the kinds of subjects that Rust book [1] writes about.
The information super-trash-way: the data is feeding on itself.
We're heading back to the information dark ages. I don't know if I'm glad or sad that the pendulum is swinging the other way, where printed books or face-to-face learning will come back in vogue as the way to get vetted information.
Only now it's more scalable to produce content-farm garbage with AI; it's cheaper SEO on steroids.
I'm optimistic it will one day be possible to sift through AI-generated garbage, but it will take time, just like it did with email/spam. And the most likely outcome will be through paid services, either paid content or paid filtering, which is how email still works best to this day.
I remember the early email days, the early 2000s: pretty much anyone could set up their own email server (qmail/sendmail), there wasn't much spam to worry about, and it required a lot of effort to make spam cost-effective. Fast-forward to today: even though you can still set one up, it requires a crazy amount of effort to ensure delivery in and out due to spam abuse -- that, or paying a transactional provider, which is the easiest way to keep the large providers from flagging your email as spam.
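For a sense of the table stakes: getting self-hosted mail delivered today means publishing at least SPF, DKIM, and DMARC records in DNS. A rough sketch for a hypothetical example.com (the selector name and policy values are illustrative, and the DKIM key pair is generated per domain):

    ; SPF: which hosts are allowed to send mail for the domain
    example.com.                  IN TXT "v=spf1 mx -all"

    ; DKIM: the public key receivers use to verify message signatures
    ; ("mail" is a hypothetical selector name)
    mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

    ; DMARC: what receivers should do when SPF/DKIM checks fail
    _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

And even with all of that in place, a fresh IP range typically needs a warm-up period before the big providers stop junking your mail.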
* https://mtlynch.io/tinypilot-redesign/