Hacker News

> However, I am not convinced one can make something solving all the things Rust solves which is substantially simpler as language.

I wonder a lot about this too. As far as I know, Rust is alone. It’s the first language to do compile-time memory management in a safe way. Every other language is either memory-unsafe (C, Zig) or garbage collected (Go, JS, etc.).

Rust might be the first language of its kind, but it probably won’t be the last. I’d be very surprised if we’re still using Rust 100 years from now. Subsequent languages can learn from Rust’s accidental complexity and clean up the approach.

Personally I suspect it’s possible to have a borrow-checked language that’s much simpler than Rust. I don’t know what that would look like, but it would surprise me if we found the best programming language in this space on our (collective) first attempt.

I don’t have enough insight to figure out what Rust’s successor would look like either. But luckily for both of us, there are a lot of smart people thinking about the problem now. Let’s give it a decade and see.



There's some exciting work already going on in this space, in fact!

* Andrew Kelley is looking into combining escape analysis with allocators to assist in Zig's memory safety. [0] It could be quite a step forward!

* D is adding a borrow checker! [1]

* Cone is adding a borrow checker on top of user-definable allocators. [2]

* We're currently prototyping a region-based borrow checker [3] for Vale, which should get us most of the benefit of Rust's borrow checker with only a fraction of the complexity. It's opt-in, not upwardly infectious, and can stay near the "computational leaves" of one's program.

[0] https://news.ycombinator.com/item?id=31853964

[1] https://dlang.org/changelog/2.092.0.html#ob

[2] https://cone.jondgoodwin.com/memory.html

[3] https://verdagon.dev/blog/zero-cost-refs-regions


It sounds like the "region borrow checker" is fundamentally based on reference counting, which makes it a garbage-collected system. Also, statically eliminating reference-count traffic from the stack isn't new; it goes back to Deutsch and Bobrow 1976. I'd actually suggest reading the paper; it seems like quite a similar system [1].

[1]: https://www.memorymanagement.org/bib.html#db76


The region borrow checker was originally designed for a reference-counted system, but not any more.

We later discovered generational references, a method for memory-safe single ownership with no borrow checker, reference counting, or garbage collection. To our delight, the region borrow checker works even better with generational references than with reference counting =)


Generational references are best seen as an alternative to RC that makes different tradeoffs. They add overhead to reads/writes through the managed reference, as opposed to the overhead for changes in ownership you see with RC/ARC in something like Rust. (Note that other languages like C++ or Swift cannot cleanly separate ownership changes to the same extent as Rust, so their overhead is ultimately a lot higher; especially Swift, with its ARC-for-everything approach.) The two look like they might be good complementary approaches.
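To make the tradeoff concrete, here's a minimal sketch (in Rust, purely for illustration) of how a generational reference scheme can work. All names here are hypothetical, and a real implementation (like Vale's) stores the generation inline with each allocation; the point is just that every dereference pays a generation comparison, while copying the reference is free:

```rust
// A slot pairs a value with a generation counter. Freeing bumps the
// generation, so any outstanding reference becomes detectably stale.
struct Slot<T> {
    generation: u64,
    value: Option<T>,
}

// A non-owning generational reference: an index plus the generation
// it was created under. Copying this is just copying two integers.
#[derive(Clone, Copy)]
struct GenRef {
    index: usize,
    generation: u64,
}

struct Heap<T> {
    slots: Vec<Slot<T>>,
}

impl<T> Heap<T> {
    fn new() -> Self {
        Heap { slots: Vec::new() }
    }

    // Allocate a value and hand back a generational reference to it.
    fn alloc(&mut self, value: T) -> GenRef {
        self.slots.push(Slot { generation: 0, value: Some(value) });
        GenRef { index: self.slots.len() - 1, generation: 0 }
    }

    // Free the value and bump the generation, invalidating old refs.
    fn free(&mut self, r: GenRef) {
        let slot = &mut self.slots[r.index];
        slot.generation += 1;
        slot.value = None;
    }

    // Dereference: this comparison is the per-access overhead
    // generational references pay instead of RC traffic.
    fn get(&self, r: GenRef) -> Option<&T> {
        let slot = &self.slots[r.index];
        if slot.generation == r.generation {
            slot.value.as_ref()
        } else {
            None // stale reference: the target was freed
        }
    }
}
```

(A real system would trap or abort on a stale dereference rather than return an `Option`, but the check itself is the same.)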


I'd be curious as to whether my comment here hypothesizing an AI wizard strikes you as "just nuts", or not.


Frankly it seems strange to me to be comparing Vale's generational reference system and Rust's borrow checker directly. They have completely different characteristics and are not direct substitutes for one another.

First, Rust's borrow checker incurs zero runtime overhead for any pointer operations (whether dereferencing, copying, or dropping a pointer) and requires no extra storage at runtime (no reference counts or generation numbers); it's entirely a set of compile-time checks. Generational references, on the other hand, require storing an extra piece of data alongside both every heap allocation and every non-owning reference, and they incur an extra operation at every dereference.

Second, since Rust's borrow checker exists entirely at compile time, it doesn't introduce any runtime failures. If a program violates the rules of the borrow checker, it won't compile; if a program compiles successfully, the borrow checker does not insert any conditional runtime panics or aborts. Generational references, in comparison, consist entirely of a set of runtime checks; you won't find out that you violated the rules of generational references until it happens at runtime during a particular execution and your program crashes.

Finally, Rust's borrow checker applies to references of all kinds, whether they point to a heap-allocated object, a stack-allocated object, an inline field inside a larger allocation, a single entry in an array, or even an object allocated on a different heap and passed over FFI. Its checks still apply even in scenarios where there is no heap. Generational references, on the other hand, are entirely specific to heap-allocated objects. They don't work for stack-allocated objects, they don't work for foreign objects allocated on a different heap, and they don't work in a scenario with no heap at all.

All of these are fundamental differences which mean that Vale's generational reference system is not at all a replacement for Rust's borrow checker. It's not zero-overhead, it doesn't catch errors at compile time, and it's fundamentally specific to heap-allocated objects. In these ways it's more comparable to Rust's Rc, which introduces runtime overhead and is specific to heap-allocated objects, or RefCell, which performs checks at runtime that can result in aborting the program.


Borrowing, in a way, looks a bit like the anchored pointers suggested by Henry Baker in [1]. I recall reading someone did a static analysis to detect what pointers could be anchored, but forgot which someone. However the "deferral" suggested is very different to the Deutsch-Bobrow deferral; what makes me think it is more like borrowing is that anchoring avoids destruction due to linear types, like borrowing avoids moving or copying. (Regions in Cyclone would be somewhere in between, too.)

[1] https://plover.com/~mjd/misc/hbaker-archive/LRefCounts.html


It doesn’t have to. If the allocator is a first-class citizen in the language, simply checking the allocated memory when the allocator itself is manually destroyed (exits its region) is enough to detect memory leaks.
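The idea above can be sketched in a few lines. This is a hypothetical illustration (in Rust, with made-up names), not any particular language's API: the region tracks its outstanding allocations, so whatever is still live when the region is torn down is, by definition, a leak:

```rust
use std::cell::Cell;

// A toy region allocator that only tracks allocation counts.
// A real one would hand out memory; here we just model the
// bookkeeping needed for leak detection at region exit.
struct Region {
    live: Cell<usize>, // allocations not yet released
}

impl Region {
    fn new() -> Self {
        Region { live: Cell::new(0) }
    }

    // Record an allocation made from this region.
    fn alloc(&self) {
        self.live.set(self.live.get() + 1);
    }

    // Record that an allocation was explicitly freed.
    fn release(&self) {
        self.live.set(self.live.get() - 1);
    }

    // Called when the region is destroyed: anything still live
    // never got freed, i.e. it leaked.
    fn leaked(&self) -> usize {
        self.live.get()
    }
}
```

A language could run this check automatically in the region's destructor and report (or reclaim) the leftovers.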


D is adding everything, it seems.


I wonder if that could actually be hindering its adoption. I think the 'kitchen sink' C++ style is not so popular anymore.


When it comes to language popularity, we have to keep in mind that D is way more popular than so-called up-and-coming languages (where it's oddly ignored that they came out many years ago) such as Zig, Nim, Crystal, etc. Dlang often floats around the top 30 in the TIOBE rankings.

It really is a very tough hill to climb to crack the top 10 among languages. Relatively speaking, D has done well for itself, and arguably is in a position to climb further and possibly reach the top 20 of language rankings.

Have seen a lot of talk and theories about why D didn't or hasn't become more popular. Some of them have to do with circumstances, not just the number of features. For instance, Dlang's DMD reference compiler was under license restrictions from Symantec, which weren't lifted until 2017 (now it's under the commercially friendly Boost license). A lot of companies and corporate sponsors get leery about licensing issues.

There were also early squabbles about API design, like Tango versus Phobos, which are mostly resolved these days. But, years ago, it was less clear about the direction or how it would get sorted out.

D, as a contender to climb even higher up the language ranking charts, is still a strong possibility. D might even remain more popular than Zig will ever be (especially if Zig wants to fight and die on the more syntactically complex "better C" hill), despite this view of D adding so much. That could even be more of a selling point (kitchen sink included) than a hindrance, particularly for an already established and mature older language.


If D is in the top 30 of languages according to these rankings that just shows how flawed they are.


Fascinating (and heartening). I wonder whether this work takes us any way toward an AI wizard "in between" Rust and Zig (say) as I've described in a longer comment here. Am I nuts to hope for such a thing?


There have been memory-safe non-GC languages before, like Ada SPARK, but they generally disallow heap allocation altogether; if you heap-allocate, you get a GC. There were also some academic languages, most notably ML Kit, that offered a safe no-GC region-based memory management system with a lot in common with Rust. And there are academic languages like Cyclone that have a very Rust-like memory management system, but they all use GCs in the end. Still, if you know Rust, Cyclone's ideas will be very familiar.


Small point of contention, but it’s actually Rust that has a Cyclone-like memory management system.

Cyclone’s initial release was ‘02, and its final release was in ‘06.


Oh, I'm sorry, I didn't mean to imply that Rust invented any of those things. Cyclone was a big influence on Rust actually. We chatted with Dan Grossman early on, and his presentation "Existential Types and Imperative Languages" is the earliest description of the problem the borrow checker is trying to solve (check it out, it's a great presentation!)

We stand on the shoulders of giants :)


You just (accidentally, no doubt!) well-actually’d someone who helped build Rust and has been involved in it since long before 1.0. ;-p He’s aware of the direction of influence and was just saying they’re similar, not saying one influenced the other.


I’m aware of pcwalton’s involvement in Rust, so it wasn’t accidental in any way. It was, however, slightly tongue-in-cheek, though I probably could have used a /s to indicate it wasn’t an attempt to score ‘I’m smart’ points. I was also very aware that, despite anything else, they were almost certain to know which came first.

I may take serious issue with pcwalton’s assessment of the performance characteristics of OS threads vs other concurrency implementations, but fun HN arguments aside, I have never seen or heard of any lack of acknowledgement of the preceding language/implementation work leading to Rust’s development. So I certainly wasn’t attempting to imply any stealing or denial of credit/attribution.


No worries, it's my fault for not being clear. The absolute last thing I want to do is to take credit for things we didn't do. Rust wouldn't exist without that academic work by Tofte, Talpin, Grossman, etc.


I said it in my sibling comment above, but to be clear, I was well aware you almost certainly knew Cyclone’s general age. I don’t want you to think I was implying or hinting anything near taking or denying credit.


Likeness is bidirectional and doesn’t imply derivedness, though.


I feel like I stuck my foot in my mouth with my comment, but separate from that…

This is an interesting perspective, I can’t think of a reason you would be incorrect, but I know I have some default intuition that likeness is most often used to imply derivedness. Also, that’s a great way to phrase what you meant.


I gather that in later versions of the ML Kit, the contents of regions were also garbage collected. The problem they had with the compile-time lifetime / region analysis was that in difficult cases an object would be given a longer lifetime than necessary, leading to space leaks.


There are several languages which use linear types, Rust being the most popular. For example, https://github.com/austral/austral claims to be simpler than Rust and was discussed on Hacker News a few days ago.


Another possibility is that Rust continues to get more advanced in ways that remove constraints over time, making it less complex to use in practice. We already see a lot of (small, incremental) improvements of this type, where a bunch of compiler work goes into proving that new cases are actually safe and therefore allowed.

Basically what I'm wondering is whether Rust has room to become the language that's learned from the ways Rust has been clunky in the past, without introducing dramatically breaking changes.
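Non-lexical lifetimes (stabilized with the 2018 edition) are a good example of this kind of constraint-removing improvement. A minimal sketch: the early, lexical borrow checker considered `first` borrowed until the end of its scope and rejected the `push`, while the current compiler proves the borrow ends at its last use and accepts the same code:

```rust
// Under the original lexical borrow checker, the shared borrow in
// `first` was considered live until the end of the function, so the
// mutation below was an error. Non-lexical lifetimes shrink the
// borrow to its last use, so this now compiles unchanged.
fn nll_example() -> Vec<i32> {
    let mut v = vec![1, 2, 3];
    let first = &v[0];       // shared borrow of v begins
    println!("first = {}", first); // ...and ends here, at its last use
    v.push(4);               // ok today: no live borrow remains
    v
}
```

No source changes were needed when this landed; previously rejected programs simply started compiling, which is the "removing constraints over time" pattern in action.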


Rust is the first to achieve commercial success (so to speak); several others preceded it in the use of affine types.

While we cannot, and should not, take away its glory in achieving this, we should also not talk as if it all belongs to Rust alone.


If people are still typing words with fingers to make computers do things, in 100 years, technology has failed!


Maybe, maybe not. Technologies that are 'good enough' just stick around forever instead of being replaced by something entirely else. Wheels are still round, we still print words on paper, we still use glass lenses when eyesight starts to fail, rifles don't look fundamentally different from when they first appeared, etc etc...


> rifles don't look fundamentally different from when they first appeared, etc etc...

I would claim that a modern rifle flies at hundreds of miles an hour, and is operated remotely.


There is also Vlang [0], which the Zig core maintainer seems both to look up to [1] and to look down upon [2].

[0] https://github.com/vlang/v [1] https://news.ycombinator.com/item?id=19086589 [2] https://news.ycombinator.com/item?id=31795445

EDIT to reflect critical attitude as indicated by [2].


I'm not sure how you're getting the "look up to" vibe. Andy made that comment [1] before vlang was released, based solely on the feature list from vlang's website.

[1] https://news.ycombinator.com/item?id=19086589


I don’t have a real opinion in this particular topic area, but I am certain many people will find the Andrew Kelley reference to be not entirely accurate.


Neither do I. The word "seems" was used as a tentative qualifier rather than an assertion. Feel free to provide context that would indicate otherwise if you deem it salient.



Thanks for pointing this out. At the very least, there are conflicting attitudes. Comment will be edited to reflect that. Initial reply made a valid point.


You did nothing wrong in pointing out the difference in attitude. The change in views is arguably indicative of the extent to which Vlang was perceived as a real threat, as competition, or as something that could become more successful.

Vlang has continually maintained its development pace and popularity (https://github.com/vlang/v/releases), so pretending to dismiss it as if it were nothing, or attempting to smear the author, just does not work. It's not going away. Vlang is also not a one-man show; it has numerous other committed developers, new contributors, and loyal long-time supporters.


I think the reason Vlang gets so much heat is that they dismiss criticism and make claims that PL people think are misleading.

You can see this in the latest Vlang release thread. I suggest those curious take a look at it and judge for themselves.


Actually, what people will find is that the author of Vlang has been shown to welcome constructive and helpful criticism, as opposed to outright trolling or blatant smearing. For example: "What don't you like about V / what would you like to be changed?" (https://github.com/vlang/v/discussions/7610).


It was a failure to deliver on its promises rather than a perceived threat to Zig, and the target audiences for the two differ enough that V doesn't affect the success of Zig, nor does Zig affect the success of V. They are both reaching for success within their own domains, with some but not much overlap.

I know you're very fond of V but misrepresenting criticism or a negative view of the project as "fear", "perceived threat", and so on is rather dishonest and a disservice to the project. The above is a comparison of before release and long after where there's been a chance to evaluate V for what it is rather than for what it advertised.



