> disassembling the stars to make them live a thousand times longer
Why would they do that when there are insane numbers of stars with billions of years of life ahead of them? What purpose would that serve at this point in time?
Modern Teslas can store double the daily usage of an average US household. And that's assuming you have literally zero generation at night (hydro / nuclear / geothermal / pumped storage).
> So people published code that could be referenced and copied on GitHub. There was no ethical problem, the world, society were happy.
This code is under a variety of licenses. You can't just copy code at random without checking the license first.
Copilot serves it stripped of its license to unaware users. Even if a Copilot user wants to reuse only permissively licensed code, Copilot will serve them code under restrictive licenses without their knowing.
The most likely outcome would be a ban on net new CO2 in the atmosphere, enforced by the army of any country whose insurance companies are unwilling to pay for the destruction of half of every city next to a coastline, a 15-m wall around the remainder, most of the agricultural land too hot and dry to be used, the bulk of the rest battered by weekly cat-6 hurricanes, a billion climate refugees…
A lot of people are going to die of heat in the next month. That number will double every year until we address it decisively.
That’s not counting eco-terrorism, which will likely pre-empt a lot of that reaction.
Not the OP, but pretty sure it's not sarcasm. I'm not sure if all of that will happen, but I would suggest that on current trajectory at least the following is likely to happen:
- Sea level rises leading to widespread flooding of vulnerable cities (and some particularly low-lying island countries too).
- Severe droughts and water shortages in arid areas, some of which may not have been arid before.
- General weather changes (both reduced rainfall and increased rainfall) leading to disruptions in food production supply chains.
All of which will likely lead to significant political pressure to mitigate further disruption.
Atmospheric science isn’t a matter of belief: unless we rapidly change energy, transport, and construction practices, most humans will face existential threats in the next five to ten years one way or another. That part is not up for debate.
A lot of activists and governments have started responding (banning ICE cars in 2025, 2030, and 2035). My belief is that most will have increasingly strict rules because they will see increasingly obvious droughts, heatwaves, and hurricanes—that’s the part where I’m less sure about.
There are people and politicians who are delusional, so anything can happen. I expect them to move and be more popular in areas where the consequence will be dire soon.
There are people who protest and see no change. They, and people who lose everyone they care about in climate-related disasters, will most likely resort to violence: this has already started, with people attaching themselves to a highway with concrete or deflating the tires of SUVs in London. That group is organized and they don’t feel heard. That much has happened already, and I can testify that it’s not calming down. I believe this will lead to more violence, notably targeted assassinations.
Well if we're to hit net-zero carbon emissions then we either need to not emit any long-term stored carbon or capture and store carbon from the atmosphere. The former is likely to be much cheaper than the latter.
This article is super overcomplicated.
All it had to say was that Rust tells you when you keep a reference to an on-stack variable after it goes out of scope. The context provided adds nothing.
I must say, as someone who doesn't use Rust, that I haven't had this type of issue in years, and when I did it wasn't hard to debug. You get corrupted data, set a data breakpoint, and in the provided example you will see the data being modified by unrelated operations on the stack. From there, there is only one conclusion. The author's reaction seems a bit exaggerated.
Yeah, no. In my experience, when several people commit to a common C/C++ codebase, this kind of issue becomes really exhausting once it happens more than once, and the symptoms can be so subtle that it's a bitch to debug.
Rust lowers your mental load. You spend more time being creative and far less time debugging "obvious" (or not) mechanical problems (references to variables no longer on the stack, use-after-free, concurrent write access, and all kinds of undefined behavior). That's why garbage-collected languages are so successful (they let you concentrate on the business logic), and for the first time that benefit is available in a systems language.
Everything that can be done by your computer should be done by your computer. You should leave your precious brain cells available for the important stuff.
I work in probably what is considered one of the least "safe" languages: C++
The issues that Rust is supposed to help with are simply not what we spent time on. All the bugs reported are pretty much exclusively root caused to "business logic".
From recent memory I can recall only one bug that was a programming mistake rather than architecture/business-logic related. It was a missing break in a switch that already had some intentional fallthroughs, so it didn't look incorrect at a glance.
I do understand what Rust is supposed to provide but in practice it's simply an extremely minor source of bugs.
I see many C++ programmers say stuff like this, and maybe it is the case that studios exist which can do C++ without mistakes related to memory management or sigils and so on, but I also observe that:
1. High-quality teams, like the Linux kernel team and PostgreSQL's, do periodically have serious security bugs of exactly the kind Rust would have caught.
2. I sometimes see C++ instructors making the same claims you do and then spending half a lesson tracking down a memory mistake (Casey Muratori), or, in the same month, a game-engine developer tweeting that they don't really see the value of garbage collection and then tweeting about spending 48 hours tracking down a memory mistake. (Of course, for a game engine, that's sometimes what you have to do!)
There are, however, valid questions, like: if Rust slows down your development by, say, 5%, would you get more net safety from spending that 5% of time testing/fuzzing C++ code instead?
> High quality teams like the Linux kernel team and PostgreSQL
Neither of those code bases are C++, which significantly dilutes your point. A major benefit of modern C++ is that it is much safer than the language Linux and PostgreSQL are written in.
Chromium _is_ written in C++, has some of the best coders, and is one of the best-tested codebases in the world. They're still finding zero-day exploits rooted in the language's failings.
There's a reason Mozilla has decided that all parser code should be moved to Rust ASAP.
I'd venture that the development slowdown from using Rust is closer to 50% than 5%. Compile times alone probably cost that much, not to mention that you sometimes have to write as much as 10x the code to express yourself.
The specific exercise I have in mind is a lockless thread queue. < 20 lines in C .. ~200 lines in Rust.
I write code much faster in Rust than in C++. Part of it is thanks to the type system – fewer opportunities for dev errors means that I can produce code that is more concise, spends less time handling runtime errors because I can have static guarantees that they have already been handled somewhere, etc. Part of it is #[derive(...)], great documentation, Cargo and other QoL components of Rust.
> The specific exercise I have in mind is a lockless thread queue. < 20 lines in C .. ~200 lines in Rust.
Why is this the exercise you have in mind, though? This is such a bad argument. Like, yes, if you work at doublylinkedlist.com, where your entire job is writing a new linked-list implementation every day of the week, Rust might be a bad choice. But that's not what any kind of commercial enterprise actually looks like. If I saw you writing a lockless thread queue at work, I'd tell you to stop wasting time.
>> Why is this your exercise you have in mind though?
That was an example I could point to where the code size difference between Rust and C or C++ is roughly 10x. I mentioned it to corroborate the claim that sometimes writing Rust is very verbose, on the order of ten times more.
My point was not that average commercial codebases are largely dominated by these types of structures. My point was that, in my experience with Rust, the development slowdown is much more than the 5% the OP suggested.
>> If I saw you writing a lockless thread queue at work, I'd tell you to stop wasting time.
A lockless thread queue (not entirely sure what that would be, sorry; a queue in front of a pool?) seems like the kind of low-level library that could be written with liberal use of unsafe. No need to flagellate yourself on the altar of compile-time safety.
The point is that you can write "primitives" such as a queue in unsafe code and, assuming that code is correct, other code cannot introduce memory bugs through it.
Also, things like queues tend to be implemented in either the stdlib or a very popular library. So they are very well tested and likely to be widely reviewed.
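A minimal sketch of that idea (the `Ring` type and its behavior are made up for illustration): the single `unsafe` block is the only code that needs auditing, because the safe public API makes it impossible for callers to reach an out-of-bounds read.

```rust
// Sketch: a safe wrapper around one small unsafe block.
pub struct Ring {
    buf: Vec<u64>,
}

impl Ring {
    pub fn new(len: usize) -> Self {
        assert!(len > 0, "ring must be non-empty");
        // Fill with 0..len just so reads are observable.
        Ring { buf: (0..len as u64).collect() }
    }

    /// Safe API: the modulo guarantees the index is in bounds, so no
    /// caller can misuse the unchecked read below.
    pub fn get(&self, i: usize) -> u64 {
        let idx = i % self.buf.len();
        // SAFETY: idx < buf.len() by construction of `idx`.
        unsafe { *self.buf.get_unchecked(idx) }
    }
}
```

Whatever callers do with `get`, the safety argument lives entirely inside this file, which is the reviewing advantage being described.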
I'm sure there is a ton of variance. I have done some Rust projects where I ran into absolutely no safety complaints from Rust, because it just wasn't the kind of code that does anything the borrow checker cares about. For those projects, development typically ends up faster than in C/C++ thanks to various syntax and tooling niceties.
Other projects will really get into domains where you have to work hard to satisfy the borrow checker and it can slow you down a lot. In a real application you won't be writing lockless thread queues for a big % of the time. But then for a real application the compile times will start to weigh on you more. (Though, C++ does not always compile fast either unless some care is taken to be sure it does)
And you're running at least one static analysis tool and a linter on your C++ code too, right? You're not just deploying the C++ compiler output as-is, right?
Those take time to run too and should absolutely be added to the compile time metrics when comparing against "cargo build".
On a personal project I run 2 metaprogramming passes, compile the entire project (~2m LoC) _twice_ and run the entire test suite in < 30s on a laptop from 2010.
I don't run a linter because I hate them, and my metaprogramming passes do a bit of extra static checking that clang doesn't. I occasionally run extra static tools (valgrind et al.), but they're painfully slow and very rarely catch anything.
>The specific exercise I have in mind is a lockless thread queue. < 20 lines in C .. ~200 lines in Rust.
Do they have an equivalent API? Rust does have a hard time expressing very, very low-level stuff; you have to reach for unsafe and various magicks. But once you get the unsafe details right, the consumer API for them tends to be extremely rigid and foolproof.
But that is just not true. Rust has a hard time with algorithms where objects don't have a single owner. That's all; it's not about low-level stuff. Rust, for example, has proper SIMD support, while the supposedly low-level C doesn't.
A fair point, though the few security issues in recent years I've looked at were not the kind of thing you would turn to unsafe rust for. But I've certainly not done a broad enough sampling to say what % of cases are like that.
To be fair, I am sure Rust catches a fair amount (compared to C++, which would catch zero). I just think that phrasing it as Rust having zero memory errors can be a tad inaccurate.
Except when something goes wrong, you can typically focus all your attention on the unsafe blocks, which should be a very small portion of your codebase, if it exists at all. (Contrary to popular belief, unsafe Rust code is neither mandatory nor widespread.)
By contrast, and comparing apples to apples, your entire C or C++ codebase is the equivalent of one giant unsafe block. You can bring in third-party tools to perform some of the static analysis the Rust compiler does for you, but call it what it is.
> There are however valid questions, like if Rust slows down your development say, 5%, would you get more net safety from spending 5% more time testing/fuzzing c++ code instead? etc.
I think a similarly valid question is how often you resorted to dynamic allocations just to get the borrow checker off your back. If your Rust version uses 5% more dynamic memory (with the corresponding performance and memory footprint penalty), is it perhaps worth staying with C/C++ and spending more development time on testing/fuzzing?
Some people make the opposite claim: Rust lets you get away with less dynamic allocation (and more data shared between threads) because you can rely on the compiler checking your work.
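One concrete version of that claim, sketched under the assumption that you're on a Rust with `std::thread::scope` (stable since 1.63): threads can borrow stack data directly, with no `Arc` and no clones, because the compiler proves the threads join before the borrowed data goes away.

```rust
// Two threads sum halves of a stack-allocated slice in place.
// No Arc, no heap-allocated channels, no cloning of `data`.
fn parallel_sums(data: &[i32]) -> [i32; 2] {
    let mid = data.len() / 2;
    let mut sums = [0i32; 2];
    let (left, right) = sums.split_at_mut(1);

    std::thread::scope(|s| {
        // Each thread mutably borrows its own half of `sums` and
        // shares `data` immutably; the scope guarantees both threads
        // finish before these borrows end.
        s.spawn(|| left[0] = data[..mid].iter().sum());
        s.spawn(|| right[0] = data[mid..].iter().sum());
    });

    sums
}
```

In C or C++ the equivalent pattern is easy to write but nothing checks that the threads actually join before the stack frame dies; here that's a compile-time guarantee.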
The effect might be positive or negative depending on the circumstances.
If you're willing to jump back to C++ to get around the borrow checker strictness, couldn't you just use Rust's unsafe block where you're sure it won't result in a bug?
Is it harder to fuzz Rust? Honest question, because fuzzing is something I occasionally read about but am not practiced in.
I've found it very easy to get started with fuzzing in Rust using cargo-fuzz. I didn't do anything very advanced, and my closest point of reference is testing Python with Hypothesis, but it did turn up bugs.
It claims that Rust is particularly suitable for it because integer overflow panics in debug builds (and out of bounds indexing always panics), which sounds reasonable.
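For concreteness, a hedged sketch of the kind of code a fuzzer exercises (the function and its format are invented): out-of-bounds indexing would panic rather than read garbage, and the `checked_*` APIs turn silent wraparound into an explicit decision, which gives fuzzers crisp failure signals.

```rust
// Hypothetical length-field parse a fuzzer might stress.
// Checked arithmetic replaces `b[0] as u32 * 256 + b[1] as u32`,
// which would silently wrap in release builds (and panic in debug).
fn parse_len(b: &[u8]) -> u32 {
    let hi = *b.first().unwrap_or(&0) as u32; // no out-of-bounds panic path
    let lo = *b.get(1).unwrap_or(&0) as u32;
    hi.checked_mul(256)
        .and_then(|v| v.checked_add(lo))
        .unwrap_or(u32::MAX) // saturate instead of overflowing
}
```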
> And what did we screw up? Some legit stuff! It’s Rust code, so I am fairly confident none of the issues were security concerns, but they were definitely quality of implementation issues, and could have been used to at very least denial-of-service the minidump processor.
This is so much better than the outcome would have been in any C or C++ project despite the many protestations of "just follow modern best practices" adherents. The author of minidump is no novice, is well versed in best practices in multiple languages including C++, was sure the code was solid, and still got spanked hard by the fuzzer. Denial of service outcomes aren't ideal, but they were likely fewer in number and are unambiguously better than security vulnerabilities.
That is a fair thing to wonder about, but at the moment most of the design decisions Rust pushes you toward tend to be better for performance on modern hardware. For instance using an array based arena to store a graph, instead of the traditional allocation of a chunk of memory per node on the heap. Or just keeping more stuff on the stack.
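The arena pattern mentioned above can be sketched in a few lines (a made-up minimal graph, not a production design): nodes live in one contiguous `Vec` and edges are indices, so there is no per-node heap allocation and no pointer-lifetime puzzle for the borrow checker.

```rust
// Index-based arena for graph nodes: one allocation for all nodes,
// edges stored as indices into `nodes` instead of pointers.
struct Node {
    value: i32,
    edges: Vec<usize>, // indices into Graph::nodes
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn new() -> Self {
        Graph { nodes: Vec::new() }
    }

    // Returns the new node's index, which acts as a handle.
    fn add_node(&mut self, value: i32) -> usize {
        self.nodes.push(Node { value, edges: Vec::new() });
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.nodes[from].edges.push(to);
    }

    fn neighbor_sum(&self, id: usize) -> i32 {
        self.nodes[id].edges.iter().map(|&e| self.nodes[e].value).sum()
    }
}
```

The contiguous layout is also what the cache-friendliness argument rests on, though, as the reply below notes, index lookups and per-type arenas have costs of their own.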
But it takes a lookup hit, and you need an arena per object type, with its memory overhead. You'll get more cache misses.
A lot of the Rust performance talking points just aren't true. Rust is slower than C and C++, not by much, but you can't get a true believer to even recognize this. Rust has turned into a religion.
Rust also disallows some things that you can do fine in C++ if you know your architecture, because they wouldn't work on some machine from 20 years ago (e.g., it has stricter alignment requirements than any machine a consumer will ever see).
Throughput may be only 5% or so slower, but latency is a much bigger issue for Rust. The devs I've talked to don't even try to pretend they have a good latency profile.
I've worked in a variety of languages, and returning to a C++ project recently, I do see that we spend a lot more time thinking about how to write the code in a way that avoids problems. Meaning that there's a lot more architecting required to reach a sane state.
We have a sister product written in a dynamic language, and sometimes we have identical functionality.
I've noticed that when a change is discussed, the C++ gang has architected themselves into the current solution and therefore has a much harder time making changes.
So for that reason I think it's easy to overlook these complexities when you're working in C++ alone; they feel natural and are just part of how you work. You forget that a lot of this architecting just isn't necessary in many other languages.
Maybe I'm wrong, but I believe that most of the architecting that you describe would be effectively what you do with regards to performance as well: Minimizing change of ownership, moving to a system with more static allocations with fewer "objects" that are linked into a variety of subsystems.
Yes, that's true. In a sense, c++ requires good code structuring.
That's also part of why I enjoy returning to c++, the people involved know how to structure code and create clean architecture.
That said, sometimes c++ does get in the way. Creating trees or graphs can be cumbersome, and IMO it's very biased towards virtual methods to solve polymorphism.
Extending lifetimes by pooling or similar is also quite common, and is in my eyes sometimes overdoing it. If you for instance use Rust, you can be a lot more confident that the compiler catches these issues, and be more conservative and efficient in the solution.
This is a function of the type of software you write. There are many large C++ code bases that rely on various types of static polymorphism almost exclusively, rarely having a use case for virtual methods or dynamic polymorphism. There is a similar story with inheritance versus composition; some types of code bases naturally gravitate toward one or the other.
The nice thing about C++ is that it is amenable to any of these models, should that benefit the application.
Sure. I guess template-based polymorphism is alright for code dispatch. But if you want to store the different types of objects involved, you have limited choices. There's no straightforward support for sum types. std::variant is fairly new, has a cumbersome API, and is quite slow (ballpark the same as virtual calls). There's no support for methods on enums, nothing for customizing the fields of different enum constants, and no pattern matching or other really convenient way of deconstructing variants.
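For readers who haven't seen the contrast, here is what the comment is pointing at, sketched in Rust (the `Shape` type is invented): per-variant fields, methods on the enum itself, and exhaustive pattern matching, all of which `std::variant` approximates only awkwardly.

```rust
// A sum type with different fields per variant...
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// ...methods defined directly on the enum...
impl Shape {
    fn area(&self) -> f64 {
        // ...and pattern matching that the compiler checks for
        // exhaustiveness: adding a variant breaks this match loudly.
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }
}
```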
So while it's there, I would say that the OO virtual-method style is much better supported, although storage for those objects usually requires some type of heap allocation.
Okay, yeah, I would broadly agree with this. I think most people use template-based polymorphism, which is pretty flexible in practice. The use of virtual methods is verboten for many common use cases of C++, due to the necessity of being in paged memory, so constructions for dealing with polymorphism without virtual methods are commonplace. And std::variant is a bit of a hot mess.
> The use of virtual methods is verboten for many common use cases of C++, due to the necessity of being in paged memory
I wasn't aware of this. Maybe I'm just out of the loop. Do you know where I can learn more about it? I'm desperately trying to reduce the number of virtual calls in our codebase, but I'm hitting the aforementioned problems.
It is a design problem endemic to database engines and probably file systems. A design requirement for most of the dynamic runtime data structures is that they can be directly paged to storage, either in whole or in part, and be paged from storage in an arbitrarily distant future on different machines with different compilers. In order to make this work, all data types used in pageable data structures must 1) have a size and alignment that is not compiler-dependent so that page types always have a size that is a strict multiple of the I/O page size and 2) not contain any pointers. This precludes vtables.
This has traditionally been managed with CRTP, tagged unions, etc with some scaffolding to make it convenient and compliant with strict aliasing rules. Ideally, almost all of the dynamic polymorphism is pulled up to the level of the page types, an opaque blob of I/O friendly complex data structure, minimizing the amount you have to do. It is also important to note that JIT-ing has replaced many of the use cases for dynamic dispatch e.g. adding user-defined schemas at runtime.
None of which may apply to your use case. Some things inherently require an unfortunate amount of dynamic dispatch.
> and IMO it's very biased towards virtual methods to solve polymorphism.
Is it? These days I'd expect C++ to be very biased towards using templates for polymorphism. After all, templates are a thing that C++ provides with functionality that other languages often lack, whereas in the field of virtual methods/"dynamic dispatch OOP" C++ severely lags behind other languages. Choosing between two features of a language and using exactly the one that is worse in comparison to your competition feels wrong to me.
I do not think C++ "lags severely in the field of virtual methods". I use both OOP and template based polymorphism depending on particular needs and see no significant problems in either.
C++ can't do things today that Smalltalk and CLOS were able to do in the 1980s already; how is it not lagging behind in OOP? Hell, companies like Trolltech had to extend C++ for their own purposes to provide just a subset of the extra features (relative to standard C++) that had already been available in environments like Smalltalk or CLOS.
I was told that deferred_ptr/deferred_heap was supposed to solve these things for C++, so perhaps that would make Rust the option with worse problems in this department. Not sure where it got by now, though.
…and when you do need a large number of dynamically allocced/deallocced objects, then using indices to arrays instead of pointers. Which kinda defeats the purpose of using the Rust borrow checker…
I think this mirrors my day to day experience with C++.
On the other hand, fuzzing large c++ programs will routinely uncover memory safety issues in practically any large codebase that hasn't been absolutely beaten to death by fuzzers already.
The issues are not usually so much "I returned this thing on the stack" they tend to be things like "this (very unexpected) sequence of api calls will result in a UAF in this deeply nested data structure over here on the heap".
> The issues that Rust is supposed to help with are simply not what we spent time on. All the bugs reported are pretty much exclusively root caused to "business logic".
Everyone claims this, probably because business logic bugs are more memorable. But I've never seen it match the real statistics. According to the best published data, null alone is something like 30-70% of bugs, you just don't remember them because they're uninteresting.
It depends on the person's mental categorisation of bugs too. For example I could see someone classifying null bugs as business logic bugs because "the business logic didn't account for that information not always being there"
It's a language bug, really. If you're expected to say what the individual things in your program are allowed to be (like you have to in C++, Java, C# etc.), and you say "foo is a BarBaz object", and the language and compiler allow you to set foo to something that isn't actually a BarBaz object and this is considered OK, then the language is botched.
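As a point of comparison, this is the one case Rust's type system handles well: "might be absent" is part of the type, so forgetting the absent case is a compile error, not a latent null bug. A small illustrative sketch (names are made up):

```rust
// The Option in the return type says "may be absent" explicitly.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

fn greeting(id: u32) -> String {
    // A bare `find_user(id)` is not a &str; the compiler forces us
    // to handle None before we can touch the name.
    match find_user(id) {
        Some(name) => format!("hello, {name}"),
        None => "who are you?".to_string(),
    }
}
```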
There are, and never will be, meaningful statistics for the "N percent of bugs are caused by X" question.
Every org's use cases are different. How do you get (let alone compare) data from different orgs? Who really counts their bugs anyway? (And those that do, at scale and in detail, are probably suffering from some form of myopic management disorder.)
All you can do is ask people their gut take based on their particular experience. For systems engineers, a lot of bugs are due to memory safety. For more consumer-oriented startups (or in most any bigcorp), yeah, it's "business logic" (or people's inability / unwillingness to communicate), etc.
"We found that 70 percent of our bugs could have been prevented by moving to TypeScript", yeah sure.
Counterpoint: When I was on the Windows accessibility team at Microsoft, one of my most brilliant colleagues gave a presentation about what to look for when doing code reviews, and he emphasized three main categories: C++, COM (Microsoft's Component Object Model), and concurrency. Rust eliminates many of the issues in the first and third categories. And yes, we were writing modern C++ as much as we could in that legacy codebase; by 2019 we were using several C++17 features, as well as the latest C++ utility libraries for working with COM and WinRT. Given the state of the Rust windows crate, that team (which both I and that colleague left in late 2020) might even be able to use Rust in new code. I'm sure that would make him happy.
I had to deal with a senior dev who gave me a talk about seniority after I ran valgrind over our software. The guy was so deep into the senior-dev power trip that he blamed third-party libraries for his bugs, blamed dev tools for "incorrectly" identifying his bugs, and wrote more bugs to work around his other bugs.
Finding and fixing issues in C++ code can be easy with the available tools; getting people to use them, on the other hand, can be like talking to a wall.
Unsurprisingly, the types of bugs reported are going to be business-logic errors and not obscure edge cases that users won't run into naturally. The bugs are still there, though.
>The issues that Rust is supposed to help with are simply not what we spent time on.
There could be plenty of bugs in your codebase; just because you don't spend time on them doesn't mean they don't exist. Hundreds of millions of people used OpenSSL every day for two years after the Heartbleed bug was introduced. It didn't cause any obviously broken behavior until someone exploited it to read credit card numbers off a remote server.
Can you elaborate on the proficiency of your dev team? Does it include juniors? Is it a large team? And what is the complexity of the project? I think this is important information.
GPU driver, most devs are senior. Hundreds of thousands of lines of code in the "slice" my team is interested in.
The team for our component on its own probably has over 40 people.
A driver should be even more prone to programming bugs, because most of it is about manipulating data in raw, "untyped" memory.
GPU drivers are also some of the buggiest stuff out there that I've used. When I worked in games, we routinely managed to make GPU drivers crash, which thankfully by that time no longer took the whole machine down.
In practice for you. Where I am, things are very different! Buffer overflows and memory corruption, threading issues, uninitialized variables, etc. appear on a very regular basis and end up being very difficult to debug, mostly because the moment you corrupt memory and the moment it triggers a bug can be very far apart.
I would say it isn't even the debugging. As many of the C++ programmers here have said, as you get good at C++ this becomes something you are vigilant about, and it rarely actually gets written. But just not needing to think about it at all is a huge load off my mind. When I used to write C++, I never realized how vigilant I was. Every time I added code into the middle of a function, I had to double-check all the lifetimes; every time I shortened the lifetime of a variable, I had to search for its uses to the end of the function. Not having to worry about this really frees your mind for other things.
For me it was what lifetimes and types to use to make my program work. Just as it was in C++, but with a static verification step at the end which most of the time got in my way.
Rust makes lots of sense in high-churn projects or projects which have very high security requirements (like browsers). Otherwise I’d think carefully about using it.
I find neither C++ nor Rust to be particularly “creative” programming languages because one has to think all the time about two things which are irrelevant for the features of the program - resource management and complicated types.
Rust is even worse than C++ because it highly encourages encoding logic in types and the community loves doing that.
You seem to have almost come to the same conclusion yourself, but then mistakenly assume that the same kind of productivity as in a GC language is available in a systems language. Nope. Although at least in C++ one can just say "fuck it" during the "creative" prototyping phase, copy most things, and still have decent syntax and performance. In Rust you'd have to pepper everything with clone, boxes, and (A)Rc, so you'd have another mess.
The exhausting part for me was always when the bug escapes detection for a little while, is associated with some new functionality that very much did not escape detection, and it turns out that the wrong implementation is faster than a correct one could possibly be. Now, to fix the bug, I get to be the bad guy and take away the customers' toys (one of many, many reasons to push for making the culprit fix the problem, especially if they don't want to).
It doesn't take many of these to form a coping mechanism that prevents this from happening again, even if it's at great cost. This is also the genesis of many unwinnable arguments that drag on forever.
Nope. In my experience, Rust saves you from the very common, obvious errors that junior programmers make, certainly not the ones made by 10x programmers who have been writing C for a long time. It's great when you're starting out (which is why it's massively popular with new graduates and people who are just learning to program), but at some point it's questionable whether the hand-holding Rust gives you is worth the extra development-time overhead, more complex syntax, etc.
It also depends on what you're writing. If you're writing cryptographic routines, or protocol handshaking, the tradeoffs are heavily weighted in Rust's favor.
Yep, all those kernel, driver, and database devs out there are all junior devs just starting out. No way anyone with decades of experience will introduce a remote exploit or crashing bug in their C programs and libraries. Never happens. No CVEs to see here! Move along!
Why do you think this is a valid argument? Are you saying there are no junior devs working on databases, kernels, drivers? What evidence do you propose to back up that astounding claim?
> All it had to say was that rust tells you when you keep reference to on stack variable after it goes out of scope.
That's the root cause, but it's not the interesting bit.
The code in the article comes from a production compiler. And normally, the AST (abstract syntax tree) is a single data structure output by the parser. Ownership is simple: the entire AST has the same lifetime, and it's managed by a caller. This should be easy, right?
But it turned out that there was a piece of code that sometimes "synthesized" extra, temporary AST nodes. And these nodes had a shorter lifetime than the rest of the AST.
These are vicious bugs. You have some long-standing convention about how things work, but one little piece of code makes an exception (often for excellent reasons). Then another module decides to make an aggressive optimization that relies on the original assumption. But that assumption is now true only 99% of the time.
It's a communication failure, and it might take years to actually turn into a bug. And that bug may manifest as extremely rare memory corruption that shows up in automated crash reports.
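The hazard can be compressed into a few lines (the `Node` type is illustrative, not the compiler's actual AST): because the reference's lifetime is part of the type, a synthesized node that outlives its backing storage is a compile error in Rust rather than a latent crash.

```rust
// An AST node borrowing its text from some longer-lived source.
struct Node<'a> {
    text: &'a str,
}

fn main() {
    let source = String::from("fn main() {}");
    let ast_node = Node { text: &source };

    // A "synthesized" node backed by a temporary string.
    let synthesized = String::from("tmp");
    let temp_node = Node { text: &synthesized };

    // drop(synthesized); // rustc rejects this line: `temp_node.text`
    //                    // would dangle -- exactly the bug described.

    assert_eq!(ast_node.text.len(), 12);
    assert_eq!(temp_node.text, "tmp");
}
```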
Running down this kind of phantom memory corruption is one of the most frustrating things I've ever done. It often involved spending weeks staring at minidumps, looking for interesting patterns in crashes. There's that horrible moment when you realize that 20% of your crashes occur within a thousand instructions after a particular font-rendering function reports an error, accidentally corrupting the exception-unwinding machinery.
And sure, I get it. Maybe your team is simply good enough that nobody ever makes a mistake like this. But if so, they're exceptional. I've worked on amazing teams that still get bitten by subtle miscommunications and misunderstandings.
Reviewing commits for a security-critical project written in C or C++ can be incredibly tedious. I've spent an entire day trying to validate that the assumptions made by a 10-line change are memory-safe in the context of the larger program. These reviews are incredibly mentally draining, and even when I'm done I'm not 100% sure that I didn't miss something and let a vulnerability into the codebase.
Rust is a breath of fresh air in comparison. Memory safety isn't even a concern for the vast majority of commits, which don't touch modules with unsafe code. All assumptions about the lifetimes of references are made explicit in the code and checked by the compiler. On Rust projects I find I have much more mental energy left for the other aspects of the problem.
I agree with you on the first point, but I totally disagree on the second: sure, if you know how to reliably reproduce an issue it isn't complicated to debug, but this kind of issue can be hard to reproduce and can cause silent corruption.
> You get corrupted data, set data breakpoint and in the provided example you will see it being modified by unrelated operations on stack.
That's provided that you can even reproduce the issue well, especially in an instrumented build which might be way slower than the non instrumented one. Often you get bug reports like "crash after one hour of usage" where basically every feature of the app has been heavily used by multiple users. Rust applications might still crash but they crash safely, which means your error messages are more meaningful.
It all depends on the complexity of your application. The type of guarantees Rust offers in my experience becomes exponentially more useful as your application grows in size.
This indeed. Small local console app running on trusted data? Maybe an hour to track down some memory corruption if you're particularly unlucky, in which case shuger's kind of got a point: who cares?
Large network-exposed app? Individual memory corruption heisenbugs have taken me weeks to track down (and weeks before that for QA to create a reliable repro for) - a needle in a huge haystack. They often predate my employment - having lurked semi-silently for who knows how long causing who knows how many unreported crashes. When release dates slip because of bug backlogs filled with memory safety related crash bugs, when ~70% of many vendor CVE reports are down to memory safety issues [1][2][3], and when you personally have to deal with the fallout of all that: shuger's point completely and utterly evaporates.
Just because Rust, in particular, compiles successfully, that's no guarantee that the code isn't complex and difficult to understand. I can write a complex badly designed app in any language. Similarly I can write simple well designed large applications in any language too.
Yeah, but that doesn’t mean all languages are created equal in that department. Let’s take this pseudocode:
    int a = 0;
    if (findindex(mylist, myvalue, &a)) {
        // dostuff with a
    }
Here, findindex returns false if it can’t find the value. The problem is that there’s nothing forcing you to use the if, you can just forget it and you’ll be left with incorrect code. In Rust, this type of error is impossible to make by accident, because the findindex function would return an Option, and you have to explicitly handle both cases (or explicitly say you don’t care about one of the cases).
Things like that, along with the lifetime system, make it easier to write good code. It's like saying you can destroy your foot with a shotgun or with a pencil – both are possible, but it's a lot easier to do by accident with the shotgun.
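For comparison, the Rust equivalent of that hypothetical `findindex` (spelled `find_index` here, my own sketch) returns an `Option`, so the "forgot to check the flag" bug can't even compile:

```rust
// Returns Some(index) on a hit, None otherwise -- there is no out
// parameter that stays at a stale value when the search fails.
fn find_index(list: &[i32], value: i32) -> Option<usize> {
    list.iter().position(|&x| x == value)
}

fn main() {
    let mylist = [10, 20, 30];

    // You must handle both cases before you can touch the index:
    match find_index(&mylist, 20) {
        Some(a) => println!("found at {}", a), // dostuff with a
        None => println!("not found"),
    }

    // Treating the result as a plain index is a type error:
    // let a: usize = find_index(&mylist, 20); // error: expected `usize`,
    //                                         // found `Option<usize>`
}
```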
This fallacy gets repeated over and over, and repetition doesn't make it any less false.
Languages are tools, and some tools are actually better built than others. If we can claim that a language like Brainfuck makes writing clear code extremely difficult, while Python or Rust make it easier, then we've already established that there's a spectrum of expressiveness and clarity across languages.
Eliminating entire classes of bugs makes for better understanding.
> It all depends on the complexity of your application. The type of guarantees Rust offers in my experience becomes exponentially more useful as your application grows in size.
Sure, but the development patterns that get used for larger applications typically don't benefit from Rust's additional safety; for one, you're going to be using a garbage-collected language.
> for one you're going to be using a garbage collected language.
I wish it were true, but I promise you it is not. As a counterpoint, I point to most "AAA" gamedev and OS development.
In gamedev it's even a bit flipped: The smaller indie gamedevs can pay the GC hit for Unity's C#, web JS, actionscript flash back in the day, etc. - not much working data, not much garbage. Larger scale titles start missing vsync and having horrible stuttering when GCs are thrown into the mix too brazenly - they're still used on smaller scales (embedding browser tech for UI, limited scope scripting, etc.) but they have a lot of native, non-GCed code.
Agree; this is not an issue that slows down my development or bughunts.
You can get a long way towards safety without learning Rust. It's those rare cases that will get you.
It's a trade-off; take the time to learn the language and deliver later, or just use what you already have to deliver a product now.[1]
[1] During a Rust discussion some years back, when I was at a different company, on a specialised and large-ish product written in C++03.
I went through about 3 years of tickets (limited to only the bugs reported). No open ticket was older than a few weeks. Out of maybe 1000 bugs, only a single one was something Rust would have prevented. I would think that most mature products will have similar stats, so the trade-off is not as obvious as it looks to be on the surface.
It probably depends on project type and how complex your ownership models are, but that doesn't really track with large projects having a majority of their CVEs be memory safety issues that are far less likely in Rust[1] (e.g., https://www.chromium.org/Home/chromium-security/memory-safet...)
[1] I say far less likely because obviously it's possible with unsafe Rust, but I've never had one happen, seen one happen in real code, or been affected by part of a dependency tree having one.
I'm not saying that a large number of CVEs won't be prevented in Rust, I'm saying that so few bugs are CVEs that the trade-off is not always worth it.
If you have 1000s of bug reports, of which 5 are CVEs, and then have 3 of those 5 be preventable, most dev teams are still going to consider the cost/benefit of going through the pain of developing a long-term product in Rust, or of switching to Rust altogether.
I suppose it comes down to risk assessment; if those CVEs are critical “fix this now or the world catches fire”, then their relative infrequency seems to be outweighed by their impact, no?
> I suppose it comes down to risk assessment; if those CVEs are critical “fix this now or the world catches fire”, then their relative infrequency seems to be outweighed by their impact, no?
>I went through about 3 years of tickets (limited to only the bugs reported)
This statement is meaningless without any insight into how the bug reports were created. If they exclusively dealt with happy-path or business-logic QA, then of course you won't see any CVEs. Did the use of fuzzers or address sanitizers generate bug reports? Were those tools even used? If not, the claim that only one in 1000 bugs was a memory-safety issue isn't credible; you weren't looking for them, so of course you didn't find them.
I think it says a lot when almost every C++ developer claims to have a higher quality code base compared to say Linux or Chromium when it comes to memory safety errors.
What is it about the C++ community that most average devs claim an inhuman level of proficiency, while only the coders with experience in genuinely critical codebases have the humility to admit that, without extremely strict coding practices and extensive use of fuzzers, we're barely smarter than apes at churning out safe code?
> All it had to say was that rust tells you when you keep reference to on stack variable after it goes out of scope.
The author isn't just telling you that Rust is awesome because it tells you something, he's acknowledging the frustration in learning how to listen to the compiler.
It's kind of like Jerry Pournelle describing the ups and downs of USB by documenting an epic journey that all started with trying to scan some handwritten notes for his next novel using a Canon scanner he borrowed from Alex that he's just now getting around to reviewing, because the pins of the parallel port are too bent to use the old Epson. Okay, so maybe it was a little overcomplicated.
Earlier this month we integrated a C++ library written by my team with a server written by another team.
We saw the data corruption, and we knew it was a reference issue, but it took quite a bit of effort to track down. The cause was confusion around string_view and string&, with different behavior when you pass each to a new thread.
Rust would have caught this much earlier and saved 3 days work.
This case might be trivial to debug, but when you start adding concurrency a lot of that ease goes out the window. Right now, our tests are mildly flaky because of ASan crashes from stack-use-after-return issues. Reading the code, it should be joining all the coroutines on destruction, and yet the ASan violations keep happening. It really isn't trivial to debug. Luckily in our case it doesn't affect production, since it only happens on shutdown (probably...?).
As someone considering learning Rust, the article put me off fast with the long preamble before even explaining what the issue was. I really hope it's not that complicated, but even your rebuttal fills me with fear – "corrupted data [......] being modified by unrelated operations on stack". How would you explain that to put a C programmer's mind at ease?
To be fair, YouTube is not really social media in the traditional sense. It's more of a content delivery platform. It's not a place you go to see pictures from your friend's holiday or political hot takes, but more traditional entertainment.
It's funny how closely early YouTube resembles TikTok in the 'average people singing, dancing, vlogging, being funny in front of the webcam' type of video that's entirely missing from modern YouTube (the webcam having been seamlessly replaced by the front-facing phone camera).
Even the median length was similar, around 30 seconds, and the video responses feature for replying to one video with another certainly looks familiar. We can look back at exactly how it was in the Wayback Machine: here's a random snapshot of the 'Most Recent' page from 2006.[1] Remember the stereotypical TikTok feed full of dancing girls?
We can't get the past internet back, of course, but this realization really made me see TikTok in a different way. (though YouTube's propensity for recommending decade-old videos ought to be noted with regard to this--a social media site willing to show you some of its oldest content, that's rare!)