
Of course, and dead code elimination can also be improved upon (compared to AOT compilers) because some inputs like command line flags will never change during the execution of the program. This means you can better predict which branches will be taken. These are optimizations that work for almost any program.
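A minimal Java sketch of the flag-speculation idea (class and field names are illustrative, not from any real codebase). The flag is set once at startup from the command line; a profiling JIT like HotSpot's C2 can observe that the branch is never taken and compile the hot path with the cold side reduced to a deoptimization trap, while an AOT compiler has to keep both sides live:

```java
public class FlagSpeculation {
    static boolean verbose; // set once in main from a CLI flag, never changes afterwards

    static int process(int x) {
        if (verbose) {                     // profiled as always-false at runtime
            System.out.println("x=" + x);  // cold side: the JIT can treat it as dead
        }
        return x * 2;
    }

    public static void main(String[] args) {
        verbose = args.length > 0 && args[0].equals("-v");
        int sum = 0;
        for (int i = 0; i < 1_000; i++) sum += process(i);
        System.out.println(sum);
    }
}
```

Whether the branch is actually eliminated depends on the JVM, warmup, and inlining; the point is only that a JIT *can* make this speculation and fall back via deoptimization if the assumption is ever violated.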

OTOH, another way in which JITs "should" generate better-performing code is by tailoring their output to the platform the program is currently running on. With AVX quite prevalent on the server-grade CPUs that many big JVM programs run on, I don't think it would be unreasonable to expect the JVM to have more support for AVX512 in its code generator than it apparently does. Leaving low-hanging fruit like 10x speedups in sorting on the table isn't something you'd expect from a very mature platform like the JVM.

I don't mean to harp on the JVM devs here; JIT development is a Very Hard Problem. It's just that I can understand why GGP is disappointed: JITs in general don't seem to quite deliver on the excitement they generated when they were new.



I don’t know; having almost-native speed with all the benefits of a fat runtime, like attaching a debugger to a prod process to check the number of objects, or streaming runtime logs with almost no overhead, sounds like they do deliver.


> having almost native speed

But it doesn't, unless your "almost" is very generous. Java is pretty consistently 2-10x slower than the major performance-focused AOT offerings (C, C++, Rust).

Now maybe you call 2x "almost", but let's phrase it in terms of CPU performance over time. That's equivalent to 10 years of CPU hardware advancements.

To me that's a lot of overhead. Depending on who is paying for the CPU time vs. the developer time, it's regularly a cost worth paying, but don't pretend it's "almost native speed" either. It is a cost, and a rather significant one. Then again, so are engineers; they aren't cheap either.


It’s not so simple that we could reduce it to a single number. Java is very close to C performance when you operate on primitives only. But it does have an overhead for objects, and it can’t “hide” it as well as languages that give the developer control over stack allocation (a List<Complex> will be an array of pointers in Java, while it can be inlined values laid out sequentially in C/C++/Rust).
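A small Java sketch of the layout difference being described (Complex and the method names are illustrative). A List<Complex> holds a pointer per element, each pointing at a separate heap object, so iteration chases pointers; the usual Java workaround is to flatten the fields into parallel primitive arrays, which is the layout C/C++/Rust get for free from an array of structs:

```java
import java.util.List;

public class Layout {
    // Each element of a List<Complex> is a separate heap object behind a pointer.
    record Complex(double re, double im) {}

    static double sumRePointers(List<Complex> zs) {
        double s = 0;
        for (Complex z : zs) s += z.re();  // one dereference per element
        return s;
    }

    // Flattened "struct of arrays" layout: sequential primitive reads,
    // cache-friendly, no per-element indirection.
    static double sumReFlat(double[] re) {
        double s = 0;
        for (double r : re) s += r;
        return s;
    }
}
```

Both compute the same result; the difference is purely in memory layout and access pattern, which is where the object overhead discussed above comes from.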

Also, which 10 years of CPU advancement do you mean? It is definitely not a linear graph; single-core performance has almost plateaued.


The most recent 10 years, and that's exactly the point: we're plateauing on hardware advancements, so software inefficiencies are more of an issue now than ever before. 2x is now a huge difference in performance. You're not getting that "for free" from just waiting a couple of years anymore.


You did not take into account my point about code operating on primitives definitely not having that kind of overhead.

And even code that does have an overhead is not so simple to judge. Could you write that same code in a lower-level language so that it still remains correct and safe? Is the algorithm actually expressible in Rust’s much more restrictive style (to stay safe)? If you do locks and ref counting everywhere, will that code actually still be faster? For example, a compiler might very well be faster in Java/Haskell/another managed language.


> You did not take into account my point about code operating on primitives definitely not having that kind of overhead.

Because it's not really interesting to debate. Nobody writes Java code like that, and even when they do there's still overhead to it. The exact amount of overhead is kinda irrelevant since the language very obviously doesn't want you to write code like that.

> Could you write that same code in a lower level language that it will still remain correct and safe?

Rust is safer than Java, so yes :)

But you're drifting into the productivity argument anyway, which I already acknowledged is a reasonable reason to pay the runtime overhead.


Plenty of places write Java that way, HFT might be a notable example.

Rust has data-race freedom, while Java has “safe” data races (writes are tear-free, so even in the presence of a data race you won’t corrupt memory, which is not true of Rust once any unsafe code is involved). So I don’t really buy the argument that Rust would be safer.
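A hedged Java sketch of what a "safe" data race means here (names are illustrative). Per the Java Memory Model, reads and writes of int and of references are atomic, so a racy read may return a stale value but is always some value that was actually written, never a torn or corrupted one (non-volatile long/double are the JMM's one exception that may tear on some platforms):

```java
public class SafeRace {
    static int shared;  // deliberately unsynchronized: plain field, no volatile, no locks

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) shared = i;  // racy writes
        });
        writer.start();
        int seen = shared;  // racy read: may be stale or fresh, but never torn
        writer.join();
        System.out.println("seen during race: " + seen + ", final: " + shared);
    }
}
```

The program is a data race by definition, yet every value observed is well-formed; the equivalent race in Rust is simply rejected by the compiler (outside unsafe), which is the distinction the comment above is drawing.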


On the other hand, .NET is much closer, because it supports value types and the CLR was designed to be targeted by C++ as well, so it supports most of the crazy C and C++ stuff (not counting the UB-related parts).

Likewise if you code in C, C++ or Rust with allocations everywhere, bad algorithms or data structures, being AOT won't help.


Java typically elides most allocations in hot loops where objects don't escape.
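A minimal sketch of the escape-analysis case being described (the record and method are illustrative). The Point allocated inside the loop never escapes the method, so HotSpot's C2 can scalar-replace it, turning the fields into registers and eliding the heap allocation entirely; whether that actually happens depends on the JVM, inlining decisions, and flags like -XX:+DoEscapeAnalysis (on by default in HotSpot):

```java
public class Escape {
    record Point(double x, double y) {}

    static double sumDistances(double[] xs, double[] ys) {
        double total = 0;
        for (int i = 0; i < xs.length; i++) {
            // Candidate for scalar replacement: p never escapes this method,
            // so a JIT with escape analysis can avoid allocating it at all.
            Point p = new Point(xs[i], ys[i]);
            total += Math.sqrt(p.x() * p.x() + p.y() * p.y());
        }
        return total;
    }
}
```

The result is identical either way; the optimization only changes whether the loop allocates N short-lived objects or none.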


Yes and no; the effectiveness depends on which JVM we are talking about.

GraalVM does it much better than OpenJDK, then there are OpenJ9, PTC, Aicas, Azul, ART (yeah not really, but close enough), microEJ, and a couple of nameless others from Ricoh, Xerox, Cisco, Gemalto,... on their devices.

Even if we stick to OpenJDK, the distributions based on it aren't all the same, for example Microsoft's fork has additional JIT improvements (-XX:+ReduceAllocationMerges).



