Firstly, Java's generics did extend the class file format, quite significantly. Lots of metadata about generics and type variables makes it into the class files which is why you can reflect over them. It requires a gross trick involving creating anonymous objects that subclass a TypeHolder or TypeToken style class but you can do it because the data is there.
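The trick mentioned above can be sketched in a few lines. This is a minimal version of the pattern (Guava's `TypeToken` and Jackson's `TypeReference` are the well-known library forms); the anonymous subclass is what freezes the type argument into the class file, where `getGenericSuperclass()` can read it back:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// Minimal "super type token": subclassing records the type argument
// in the subclass's Signature attribute, so reflection can recover it.
abstract class TypeToken<T> {
    private final Type type;

    protected TypeToken() {
        // The generic superclass of the anonymous subclass is
        // TypeToken<List<String>>, a ParameterizedType we can inspect.
        ParameterizedType superclass =
                (ParameterizedType) getClass().getGenericSuperclass();
        this.type = superclass.getActualTypeArguments()[0];
    }

    public Type getType() {
        return type;
    }
}

public class TypeTokenDemo {
    public static void main(String[] args) {
        // The trailing {} creates the anonymous subclass; without it,
        // the type argument would be erased and unrecoverable.
        TypeToken<List<String>> token = new TypeToken<List<String>>() {};
        System.out.println(token.getType());
    }
}
```

The ugly part is exactly that `{}`: you must create a subclass per type you want to capture, because only the class file of a subclass carries the instantiated type argument.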
What they didn't want to do was break existing code or tie the JVM too deeply to one specific design for generics. The downside is that sometimes you can't do things you'd like to, such as overload a method in ways that differ only by a type argument. The upside is that there are still tons of useful libraries out there that, at least internally, contain pre-generics code: backwards compatibility has real value. It also meant other languages like Scala and Kotlin could try out different approaches to generics. Kotlin in particular improves on Java's existing generics, and that's possible partly because of erasure.
Also, truly reified generics is really hard. Look at Valhalla and the "parametric JVM" documents. Doing it well involves a lot of complex design choices and support infrastructure. Most languages struggle with this. For instance it's a big part of why C++ libraries find it hard to export stable ABIs.
All of what you're saying doesn't actually contradict what I wrote. The generics metadata is not used by the VM during execution. It only exists so that you can compile against .class files as if they were source.
IMHO it was still a major design mistake that has cost everyone else more time in the end. A classic near-term/long-term miscalculation. For a counterpoint, C# introduced generics and rewrote the class library around them. There, generics are in the bytecode itself, not just in metadata. The VM dynamically specializes JIT code as necessary, and shares equivalent specializations to avoid code explosion.
> Also, truly reified generics is really hard.
I think this statement is false. It's not intrinsically hard; it's only hard because of a lot of other constraints. For example, Virgil has had "reified"[1] generics since 2.0, and I just use monomorphization. MLton did the same, 20 years back. Code explosion isn't that bad for medium-sized programs.
[1] "reified" implies there is a runtime representation, which is not really what's going on. With static specialization, it's possible to completely compile away any additional metadata representation, as it either becomes implicitly part of a function specialization or the class metadata for specialized classes.
The only real reason I see to not use monomorphization is if you have polymorphic recursion, or first-class polymorphism. Virgil doesn't allow either of those.
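Polymorphic recursion is worth a concrete illustration of why it defeats monomorphization. In this sketch, `depth()` calls itself at a *different* instantiation (`T` becomes `List<T>`), so a monomorphizing compiler would need an unbounded family of specializations; Java accepts it precisely because erasure compiles it to a single method:

```java
import java.util.List;

public class PolyRec {
    // Each recursive call instantiates T at List<T>: T, List<T>,
    // List<List<T>>, ... -- no finite set of specializations covers it.
    static <T> int depth(T value, int remaining) {
        if (remaining == 0) return 0;
        return 1 + depth(List.of(value), remaining - 1);
    }

    public static void main(String[] args) {
        System.out.println(depth("x", 3)); // prints 3
    }
}
```

A language that bans polymorphic recursion (as Virgil does, per the comment above) can always bound the set of instantiations and monomorphize statically.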
And it is also a big part of why all of Microsoft's attempts at a full AOT story in .NET have not enjoyed much love.
NGEN, around since version 1.0, never went beyond providing faster startup, falling back into the JIT for code that is hard to AOT-compile.
MDIL, brought from Singularity into Windows 8/8.1, adopted mixed IL/native binaries that were linked at installation time.
.NET Native is what .NET 1.0 should have been all along, but again it imposes some restrictions on regular .NET code, and it required COM (now WinRT) to be extended with some kind of lightweight generics ABI. It will most likely be killed when .NET 6 AOT comes out and the whole Reunion project brings core UWP stuff into the regular Win32 stack.
Mono AOT and IL2CPP mostly work by mapping .NET generics onto C++ template semantics, which works most of the time, but with caveats when taking any random .NET library not written with them in mind.
So while the .NET generics model was a much better approach in general, it isn't without its own share of downsides.