It would be interesting to have a `reverse` blog post, "why Go compilation time is fast": e.g. which optimizations are skipped or done quickly by the Go compiler? Could the binary quality be improved by increasing the compilation time?
AFAIK you don't have optimization flags in Go (or the defaults are already optimal).
And to be honest, I have no idea what compiled Go code looks like. Just as I have no idea what actual instructions are executed by the Python interpreter.
Well one big reason is the biggest feature people are complaining about: A lack of generics. According to [1], there's a fundamental trade-off with respect to generics:
"The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?"
At the moment, Go has no generics, which translates to "slow programmers", but not "slow compilers". It's likely that the simplicity of the Go language as a whole -- which leads to many complaints -- is a major factor in the fast compile times.
EDIT: Lest it seem like I'm bashing Go here, the quote above is from one of the core Golang developers (in 2009 no less), and is mirrored in the generics proposal [2] from another core Golang developer. It's meant to be shorthand for, "Generics do actually save programmer time and effort, and Go programmers are working harder than necessary because the language doesn't have generics yet." The point of this post is that generics have a cost (slow compilation time) and that fast compilation time has a cost (no generics -- at least, not without inventing a new way of doing generics).
It's possible to love something while still seeing its deficiencies and wishing for its improvement. In fact, I'd argue that's the only way to truly love anyone or anything.
What I don't like about the "generics dilemma" framing is that the problem exists regardless of whether your language has generics or not.
Here's what I mean. Say you have a Vector class that can operate on ints or floats. You could make that a generic, in which case the compiler can either (a) duplicate the code for each type (monomorphize) or (b) do dictionary passing and get slower runtime. But if your language doesn't have generics, you have exactly the same problem: you as the programmer must (a) duplicate the code for ints and floats or (b) use an interface and get slower runtime. Not having generics doesn't solve anything. It just means that you, the programmer, have to do things that the compiler would otherwise do for you.
Although, one interesting aspect of the trade-off here is that a programmer who manually monomorphizes only has to do so once(^) -- their effort is reused across multiple compilation runs -- whereas the compiler normally has to monomorphize on each compilation run.
(^) Of course, you then have the burden of keeping multiple monomorphized implementations consistent when you make a change that needs to apply to all of them.
Monomorphization tends to happen "on the fly", and even if it didn't, it would still be cheaper than duplicating all of the work of parsing and type-checking (which is exactly what happens when the user monomorphizes manually).
That can't be right though. Java and Kotlin have generics, fast programmers, and fast compile times, and since generics are erased there's no more binary bloat than without them.
I think the assumption implicit in that statement is "generics over value types compiled AOT". But that's not the only way to do it.
Well, sure, in the context of a language that doesn't really have value types and is already doing everything w/ dynamic dispatch, you can have "zero cost" generics, in the sense that there's no code bloat and no perf penalty, vs what you already had.
But that's because your language is already leaving some performance on the table!
If you want a language that's as fast as possible, you want something like
    GenericContainer<Foo> foos = ...;
    for (var f in foos) f.DoSomethingFooish();
to be able to transform into a contiguous chunk of "Foo"s in memory, that the loop is traversing, and doing no dynamic dispatch (and potentially inlining!) in each of the "DoSomethingFooish" calls.
I don't think you can have a generics system capable of achieving that level of performance w/o also bringing w/ it the downside of more code generation & extended compile time.
(P.S. Also, Java is getting support for user-defined value types, right? How will those interact w/ the generics system?)
Yes, that's a good point. I'd counter though that in C++ and similar languages value types and memory management get conflated in ways that hurt performance. Java has a really, really fast heap and allocations get laid out contiguously by the GC in ways that have a measurable + significant impact on cache hits and performance.
In C++ you see std::vector with large-ish values all the time, even when it doesn't really have any memory layout justification because that way you get semi-automatic memory management and with pointers you don't. This can easily lead to large amounts of pointless code bloat, hurting icache hit rates, compile times, binary sizes and more, even in cold paths where memory layout is the least of your concerns.
Not sure yet how generics in Java and value types will interact. There have been some prototypes of generics specialisation so it'll probably end up like C++ but, hopefully, with way less use of value types - restricted only to places where they make a real improvement. That'll be a lot easier to measure in Java because as long as you stay within the value-allowed subset of features you will be able to convert between value and non-value types transparently without rewriting use sites. So you can just toggle it on and off to explore the tradeoffs between code generation and pointer indirection.