This was particularly true for one of the projects I worked on in the past, where Python was chosen as the main language for a monitoring service.
In short, it proved to be a disaster: just the Python process collecting and parsing the metrics of all programs consumed 30-40% of the processing power on the lower-end boxes.
In the end, the project went ahead for a while longer, and we had to apply all sorts of mitigations to make the performance impact less of an issue.
We did consider replacing it all with a few open source tools written in C plus some glue code; the initial prototype used a few MBs of memory instead of dozens (or even hundreds), while barely registering any CPU load. But in the end it was deemed a waste of time when the whole project was terminated.
Ditto for me. I had gotten so used to building web backends in Ruby that run at 700MB minimum. When I finally got around to writing a Rust backend, it registered in the metrics as 0MB, so I thought for sure the application had crashed.
Turns out the metrics just rounded to the nearest 5MB.
It would have taken the same time, if not less, given the extra time for mitigations, trying different optimization techniques, runtimes, etc.
One of the reasons the project was killed was that we couldn't port it to our line of low powered devices without a full rewrite in C.
Please note this was more than a decade ago, way before Rust was the language it is today. I wouldn't choose anything else besides Rust today, since it gives the best of both worlds: a truly high-level language with low-level resource controls.
I would agree except for the python part. Sure, you gotta move fast, but if you survive a year you still gotta move fast, and I’ve never seen a python code base that was still coherent after a year. Expert pythonistas will claim, truthfully, that they have such a code base but the same can be said of expert rustaceans. I would stick to typescript or even Java. It will still be a shitshow after a year but not quite as fucked as python.
If you're writing FastAPI (and you should be if you're doing a greenfield REST API project in Python in 2026), just s/copy/steal/ what those guys are doing and you'll be fine.
> Just pick Python and move fast, kids. It doesn’t matter how fast your software is if nobody uses it.
The reason nobody uses your software could be that it is too slow. As an example, if you write a video encoder or decoder, pure Python might work for postage-stamp-sized video because today's hardware is insanely fast, but even so, it will likely be easier to get the same speed in a language that's better suited to the task.
They were the users, and it was too slow for them, so they switched to Python. Not C++, of course; what they meant was "the libraries we wrote in C++ were so buggy and slow that using them was slower than if we just used Python."
And this is why pretty much all commercial software is terrible and runs slower than the equivalent 20 years ago, despite incredible advances in hardware.
For lots of software there wasn't an equivalent 20 years ago because there wasn't a language that would let developers explore semi-specified domains fast enough to create something useful. Unless it was visual basic, but we can't use that, because what would all the UX people be for?
Most of the business I do is rewriting old working Python prototypes in C++. Python sucks, is slow, and leaks.
The new C++ code does not leak, meets our performance requirements, processes items in 8 hours instead of 36, and so on.
We are also rewriting all the old Python UI in TypeScript. That hasn't gone so smoothly yet.
And where there are still old simple Python helpers, I rewrite them in Perl, because that will keep running for years to come, unlike Python.
Another anecdote: the team couldn't improve concurrency reliably in Python, so they rewrote the service in Go in about a month (ten years ago), and everything ran about 20x faster.
> Stores the user's birth date for age verification, as required by recent laws in California (AB-1043), Colorado (SB26-051), Brazil (Lei 15.211/2025), etc.
The Brazilian law does NOT require this. This is a misconception, and likely based on an understanding of California's law being extrapolated to the Brazilian law.
They are almost complete opposites.
The Brazilian law (Lei 15.211/2025) puts the burden of age verification on *providers* of web platforms, app stores, or dumb terminals. Not on operating systems.
It also mentions "reasonable measures" - which vary according to the type of content, platform, etc. - and which are much less strict than anything written in California's or the UK's laws on the same subject. It is far more based on individual risk assessment and the purpose of the platforms themselves.
In all fairness, the Brazilian law is the most friendly to open source and the status quo. Even though I'm also worried about the long term results of this legislation, I'm somewhat relieved by the way it turned out.
I'm not sure how you would translate sistemas operacionais de terminais which is covered by the law, but to me it reads "terminal operating systems". If a terminal has its own OS, it is probably not "dumb" in any meaningful sense, and no one really uses terminals anyway except for retro enthusiasts. Even people still using, like, VM/MVS on a mainframe are connecting via a PC running a 3270 emulator.
Lei № 12.965 (2014) defines a terminal (which applies in Lei 15.211) as any internet-connected computer or device.
Yes, you are correct. It was meant as "end user OSs", and indeed some requirements are on the end user OS.
However, it is still not the same as California's law, since it only describes the provision of an "age bracket", and only in particular circumstances. In practice, this requirement will likely be limited to certain platforms due to technical feasibility. Proportionality and "technically secure measures" are mentioned in the law, so there is no point requiring this from a desktop computer where someone can just type any birth year.
It is likely most of the responsibility will fall on digital platforms:
> I have a personal aversion to defer as a language feature.
Indeed, `defer` as a language feature is an anti-pattern.
It does not allow abstracting initialization/de-initialization routines and encapsulating their execution within the resource itself; instead, it transfers the responsibility for manually performing the release or de-initialization to the users of the resource - at every use site.
> I also dislike RAII because it often makes it difficult to reason about when destructors are run [..]
RAII is a way to abstract initialization; it says nothing about where a resource is initialized.
Combined with stack allocation, it gives you precise points of construction and destruction.
The same can be said about heap allocation in some sense, though this tends to be more manual and could also involve a dynamic component (i.e., a tracing collector).
> [..] and also admits accidental leaks just like defer does.
RAII is not memory management, it's an initialization discipline.
> [..] what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.
Why would you want to replicate the same cleanup procedure for a certain resource throughout the code-base, instead of abstracting it in the resource itself?
Abstraction and explicitness can co-exist. One does not rule out the other.
I believe any reasonable person could understand the previous comment is about the rules themselves, not about a statement in the CoC saying where they apply or not.
Also, the fact that the website is not covered by the CoC makes it worse, since the leadership is excluding themselves from their own engagement rules.
I used OCaml extensively for a few years, around the time of OCaml 3 and OCaml 4, and I can add a few cents to this discussion.
Some of the points listed here can be considered a matter of taste or opinion, some are indeed pain points, and some are implementation details.
OCaml as a whole is hard to compare directly with most of the other languages you mentioned above as "better" or "worse". It both suffers and benefits from being an academic/research project not directly under the control of a large corporation.
As a language, IMHO, it is miles ahead of nearly all the languages mentioned above. It recently adopted a novel mechanism for modeling concurrency called algebraic effects, together with state-of-the-art multi-core support. This not only abstracts away several features that are usually hardcoded in most languages, but also puts it on another level in terms of abstraction capability. There are other toy languages that implement similar mechanisms, or parts of them, but none with OCaml's level of adoption.
However, since it does not have the same amount of resources and adoption, progress is sometimes slower than one would expect. Documentation can be sparse, the community is smaller, etc.
Regarding OCaml on Windows, I myself used it exactly 20 years ago. It has not just one implementation, but three. There are some tradeoffs, and support is not at the same level as on Linux, but it's still there, and I wouldn't call it mediocre:
You might find it harder to find libraries, for sure. I have not checked the situation recently, though given the smaller community that is likely still the case.
As a tongue-in-cheek comment, I could definitely say "OCaml is certainly not a good language - not as bad as most of all the others though".
As an analogy, it would be equivalent to saying that "contrary to an airplane, a car sidesteps the problem of requiring wings".
Yes, indeed - but it doesn't fly.