I see this argument pattern a lot, so I looked into what it's called.
Apparently it's the Sorites paradox (https://en.wikipedia.org/wiki/Sorites_paradox), also known as the "continuum fallacy": dismissing something continuous as non-existent because we can't divide it into clear categories.
Readability without clarification is a non-concept. You can't say "X should be readable" without giving some context and clarifying who you are targeting. "Code should be readable" is a non-statement, yes.
Add "to most developers" for context and you'll probably get exactly what the original claim meant.
It's not a non-statement. Rich Hickey explains it well: readability is not about the subjective factors, it's mostly about the objective ones. How many things are intertwined? Code that you can read and consider in isolation is readable. Code that behaves differently depending on global state, makes implicit assumptions about other parts of the system, etc., is less readable, with readability decreasing as the number of dependencies grows.
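Hickey's objective criterion can be sketched with a toy contrast (hypothetical function names, purely for illustration):

```python
# Readable in isolation: everything the function depends on is visible
# in its signature, so you can reason about it without reading anything else.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

# Less readable: behavior depends on mutable global state, plus an implicit
# assumption that some other part of the system has called configure() first.
_config = {}

def configure(rate: float) -> None:
    _config["rate"] = rate

def apply_discount_global(price: float) -> float:
    # Raises KeyError if configure() was never run - a hidden dependency.
    return price * (1 - _config["rate"])
```

The second version isn't "subjectively" worse; it objectively requires reading more of the system to understand.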
"to most developers who are most likely to interact with this code over its useful lifetime."
This means accounting for the audience. Something unfamiliar to the average random coder might be very familiar to anyone likely to touch a particular piece of code in a particular organization.
>"Code should be readable" is a non-statement, yes.
Oh, I completely disagree here. Take obfuscation for example, which carries over into things like minified files in JavaScript. If you ever try to debug that crap without the original file (which happens far more often than one would expect), you learn quickly about readability.
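A toy illustration (in Python rather than minified JS, but it's the same idea): both functions below do exactly the same thing, yet only one is debuggable by a human.

```python
# Original, readable source.
def average_word_length(sentence: str) -> float:
    words = sentence.split()
    return sum(len(word) for word in words) / len(words)

# The same logic after the kind of name-mangling minifiers/obfuscators do.
# Behavior is identical; the intent is gone.
def a(b):
    c = b.split()
    return sum(len(d) for d in c) / len(c)
```

If "readability" were a non-concept, there would be no meaningful difference between these two.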
This is an interesting point, but there's slightly more to it than that. When something is simple and does the job well, it has limitations. The problem is that adding each subsequent feature has a small benefit and a small, but immeasurable cost.
Sometimes that cost outweighs the benefit, but knowing that before the fact is very hard, and removing features is almost impossible as people shout disproportionately loudly about losing things.
It's similar to the problem of regulation. Looking at each individual law, it often seems reasonable. It's only when there are 10,000, and everything grinds to a halt, that people realise there's a problem.
IMHO the log services example illustrates this perfectly: the path that leads to their hairball of complexity is perfectly clear, and every decision along the way is logical and obvious.
This is why these services are all so similar.
But the end result is always too complex, and it seems to me that a "perfect" point does not exist for this line of products.
I've been letting Gemini run gcloud and "accept all"ing while I've been setting some things up for a personal project. Even with some limits in place it is nervewracking, but so far no issues and it means I can go and get a cup of tea rather than keep pressing OK.
Pretty easy to see what a rogue AI could do when it can already provision its own infrastructure.
That is because you are replying to two different people.
People can learn across layers of abstraction, but specialisation is generally a good thing and creates wealth, a Scottish guy wrote a good book on it.
I will preface this by saying that I care a lot about climate change and carbon usage. AI usage is not a big issue; it is in fact a distraction from where we should be focusing our efforts.
They've gone downhill in the last few years in my opinion: they've become more overtly partisan and were substantially downgraded on factual reporting by Media Bias/Fact Check: https://mediabiasfactcheck.com/the-guardian/
They've always been left of centre, but they've grown lazy, leaning more into predictable culture-war pandering.
The FT is streets ahead of anyone else, they've become more centrist and less dry in recent years. I don't know what their revenues are like but I'd wager that they're doing better as they're one of the only ones with a business model that allows them to pay for good journalism.
If they were "left of centre" that would be fine; there are few if any major left-wing newspapers in the UK. The pandering, from my perspective, has been to those on the right. They seem to be doing the "well, if both sides hate us, we must be doing something right!" thing, except the right want rivers of blood and the left want public transport, healthcare, and to ensure the more vulnerable among us are treated with dignity and compassion.
The "culture war" people refer to is not "woke ideology" being pushed everywhere as is so often the accusation, but an enormous, orchestrated push against an otherwise fairly organic process where the world had otherwise become more naturally more accepting of immigrants and LGBTQ+ minorities.
If you think the Guardian panders to those on the right you need your head examined. I would love nothing more than a left leaning socialist government and even I shake my head at some of the nonsense the Guardian publishes, especially in their opinion section. It's an effort to get clicks and cause outrage and some of it is no better than the tabloids.
I also disagree that there has been a "fairly organic process where the world had otherwise become more naturally more accepting of immigrants and LGBTQ+ minorities". That's a rewriting of history. Equal rights for the LGBTQ+ community were incredibly hard fought for over many years. It's been anything but organic. It's important not to forget how recently most of the civil rights we take for granted in many areas of life were rights that were denied by a majority.
> And for games like Overwatch, I don't think improving is a moral imperative; there's nothing wrong with having fun at 50%-ile or 10%-ile or any rank. But in every game I've played with a rating and/or league/tournament system, a lot of people get really upset and unhappy when they lose even when they haven't put much effort into improving. If that's the case, why not put a little bit of effort into improving and spend a little bit less time being upset?
Interesting read, but I feel like the author could've spent just one more minute on this sentence. How good you are at a given activity often doesn't matter, because you're mostly going to encounter people around your own level. What I'm saying is, unless you're at the absolute top or the absolute bottom, you're going to have a similar ratio of wins to losses regardless of whether you're a pro or an amateur, simply because an amateur gets paired with other amateurs, while a pro gets paired with other pros. In other words, not being the worst is often all you need, and being the best is pretty much unreachable anyway.
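This falls straight out of the standard Elo model: expected win probability depends only on the rating *difference*, so peers at any absolute level sit near 50%. A quick sketch:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# An amateur vs. a peer amateur and a pro vs. a peer pro both sit at 50%,
# because only the rating gap matters, not the absolute rating.
print(elo_expected_score(1000, 1000))  # 0.5
print(elo_expected_score(2600, 2600))  # 0.5

# Mismatches are lopsided - which is exactly why matchmaking avoids them.
print(round(elo_expected_score(2600, 1000), 4))
```

So improving mostly changes *who* you play, not how often you win.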
This can be very well extended to our discussion about SWEs. As long as you're not the worst nor the best, your skill and dedication have little correlation with your salary, job satisfaction, etc. Therefore, if you know you can't become the best, doing bare minimum not to get fired is a very sensible strategy, because beyond that point, the law of diminishing returns hits hard. This is especially important when you realize that usually in order to improve on anything (like programming), you need to use up resources that you could use for something else. In other words, every 15 minutes spent improving is 15 minutes not spent browsing TikTok, with the latter being obviously a preferable activity.
>Just for example, if you're a local table tennis hotshot who can beat every rando at a local bar, when you challenge someone to a game and they say "sure, what's your rating?" you know you're in for a shellacking by someone who can probably beat you while playing with a shoe brush (an actual feat that happened to a friend of mine, BTW). You're probably 99%-ile, but someone with no talent who's put in the time to practice the basics is going to have a serve that you can't return as well as be able to kill any shot a local bar expert is able to consistently hit.
And it's very easy to forget when you're the guy going to the club just how bad most regular players are.
I'm in a table tennis club, my rating is solidly middle of the pack, and so I see myself as an average player. But the author is correct, I would destroy any casual player. I almost never play casual players, though.
Not sure how applicable this is to software engineering.
Competitive games are complex. It's hard to reach the 95th percentile. There are so many mistakes one can make that even if each individual mistake is unlikely, it's likely that some mistake will be made. I play Dota 2, and literally everyone makes noticeable mistakes, including tier 1 pro players and the top-ranked pub players. I honestly find it amazing how good people are given how complex the domain is.
Now scale that up 10x, because reality is at least an order of magnitude more complex than a video game.
First, I don't want to waste time feeding something into an LLM that should be commented in the first place.
Second, not at all. An LLM can tell you how the regex works (hopefully). It can't tell you what each piece means in terms of the program's logic. Or at least not always and not reliably.
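For what it's worth, a regex can carry those comments itself. Python's `re.VERBOSE` flag, for instance, lets you annotate each piece with what it means in terms of the program's logic (the log format and field names below are made up for illustration):

```python
import re

# Hypothetical log-line format: "2024-01-31 ERROR disk full"
LOG_LINE = re.compile(
    r"""
    (?P<date>\d{4}-\d{2}-\d{2})   # ISO date: used to bucket entries by day
    \s+
    (?P<level>INFO|WARN|ERROR)    # severity: drives the alerting threshold
    \s+
    (?P<message>.+)               # free text: shown verbatim in the UI
    """,
    re.VERBOSE,
)

match = LOG_LINE.match("2024-01-31 ERROR disk full")
print(match.group("level"))  # ERROR
```

An LLM can explain the syntax, but only comments like these capture why each piece matters to the surrounding program.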