> and the JIT can actually make code faster than C++.

How? Demonstrate with real examples




The JIT can optimize away virtual function calls.

    ParentClass g = ...;   // might actually be a subclass

    for (int i = 0; i < 100000; i++) {
      g.func(i);
    }
The JIT can optimize away the vtable lookups to find func, and sometimes inline the code. Ok, maybe you could do this in C++ too.
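To be fair, if the C++ compiler can see the concrete type it will typically do the same; a minimal sketch (illustrative names), using final so the compiler knows there can be no further override:

    struct Parent { virtual void func(int i) {} };

    struct Child final : Parent {
        void func(int i) override { /* ... */ }
    };

    void run(Child& c) {
        for (int i = 0; i < 100000; i++)
            c.func(i);   // Child is final: the compiler can call (and inline)
                         // Child::func directly, no vtable lookup needed
    }
Now consider the harder case: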

    ParentClass[] g = ...;   // elements might be various subclasses

    for (int i = 0; i < 100000; i++) {
      g[i].func(i);
    }
Suppose 99% of g's are the same class. The JIT can optimize away most of the virtual function lookups (particularly if this is one of your hotspots). I.e. the code becomes:

    ParentClass[] g = ...;   // elements might be various subclasses

    for (int i = 0; i < 100000; i++) {
      if (g[i].getClass() == COMMONCLASS) {
        inlined_func(i);
      } else {
        g[i].func(i);
      }
    }
In the common case, a simple pointer comparison plus inlined code is a lot faster than an indirect call through the vtable.


Devirtualization is mostly an issue for Java, since everything is virtual by default and the language has no support for compile-time monomorphization.

While C++ code does use virtuals, it's nowhere near as much as in Java - there are language constructs that avoid them and move the dispatch selection to compile time.
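One such construct, sketched here with made-up names, is the CRTP: you keep an interface-like shape, but the dispatch is resolved at compile time, so there's no vtable at all:

    #include <cstdio>

    // CRTP: the base "interface" forwards statically to the derived class
    template <typename Derived>
    struct Shape {
        double area() { return static_cast<Derived&>(*this).area_impl(); }
    };

    struct Circle : Shape<Circle> {
        double r = 2.0;
        double area_impl() { return 3.14159 * r * r; }
    };

    int main() {
        Circle c;
        std::printf("%f\n", c.area());   // resolved at compile time, no vtable
        return 0;
    }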


GCC in fact performs this optimization.


GCC can get runtime information at compile time? That's a truly advanced compiler.


It can in fact make predictions in a number of circumstances. The most common case is FDO (https://gcc.gnu.org/wiki/AutoFDO) - with FDO coverage GCC can easily demonstrate a particular vcall is ~always a particular type and emit exactly the code you describe. (To be clear, this isn't just in theory, but is actually happening in a wide variety of binaries I use.)

This is perhaps not fully "ahead of time", granted, but it's extremely easy to deploy and highly effective, and entirely accessible to C++.
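For reference, a rough sketch of the plain instrumented-PGO flow (AutoFDO instead samples a production binary with perf and converts the profile; names and paths here are illustrative):

    // hot.cpp - a hot virtual call that GCC can speculatively devirtualize
    // once a profile shows p is nearly always the same concrete type.
    struct Parent { virtual int func(int i) { return i; } };

    int sum(Parent* p, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += p->func(i);
        return s;
    }

    // Rough build steps (the flags are real GCC flags):
    //   g++ -O2 -fprofile-generate ... -o app     instrument the build
    //   ./app <representative workload>           writes *.gcda profile data
    //   g++ -O2 -fprofile-use ... -o app          recompile using the profile
    // Speculative devirtualization itself is -fdevirtualize-speculatively,
    // which IIRC is already on at -O2.
With the profile in hand, GCC can see the vcall is ~always the same type and emit the guard-plus-inlined-call shape described upthread.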


Snark not warranted - profile guidance is a thing. But more likely, many of those virtual functions wouldn't be virtual at all in idiomatic, performance-oriented C++. I have found I can get away with compile-time polymorphism many a time; for the rest there is compiler devirtualization and profile guidance.


You could, I guess, annotate C++ child classes as likely, and then perform this optimization. That'd be an extension to the standard of course.
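Though you can hand-roll the guard in standard C++ today without any annotation - a rough sketch with made-up names:

    #include <typeinfo>

    struct Parent { virtual void func(int i) {} };
    struct Common : Parent { void func(int i) override {} };

    void run(Parent* objs[], int n) {
        for (int i = 0; i < n; i++) {
            if (typeid(*objs[i]) == typeid(Common))
                static_cast<Common*>(objs[i])->Common::func(i);  // direct, inlinable
            else
                objs[i]->func(i);                                // normal virtual call
        }
    }
The qualified Common::func call bypasses the vtable, so the common case is an exact-type check plus a direct, inlinable call - essentially what the JIT emits for you.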


> the JIT can actually make code faster than C++

> How?

A JIT compiler can sometimes beat an AOT compiler because it has more information.

For example, it is entirely feasible for a JIT to heavily optimise a fast path even if the optimised code wouldn't be correct for every case the source could be called with. If the JIT detects an uncommon case, it can just fall back to the interpreted code.

An AOT compiler will forgo optimisations if it can't be sure they will produce correct code. For example, C was generally considered slower than Fortran for numeric work until restrict was added (in C99; C++ compilers offer it as the __restrict extension), because the compiler had to be conservative about pointer aliasing. However, restrict is, well, restrictive. Conceptually, a JIT could work around this: given a function without restrict arguments that is usually called on distinct memory, it could keep an optimised path that assumes no aliasing and fall back if it detects otherwise.
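A rough illustration of that idea (restrict is C99; most C++ compilers spell it __restrict as an extension; the function names are made up):

    // Optimised path: the compiler may vectorise freely, since the
    // pointers are promised not to alias.
    void add_noalias(float* __restrict dst, const float* __restrict src, int n) {
        for (int i = 0; i < n; i++)
            dst[i] += src[i];
    }

    // "JIT-style" wrapper: a cheap runtime check guards the optimised path
    // (overlap test simplified for the sketch).
    void add(float* dst, const float* src, int n) {
        if (src + n <= dst || dst + n <= src)
            add_noalias(dst, src, n);        // fast path, no aliasing
        else
            for (int i = 0; i < n; i++)      // safe fallback
                dst[i] += src[i];
    }
The difference is that a JIT can add or drop that guard based on what it actually observes at runtime, rather than paying for both paths up front.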

Now, some of this benefit can be had in an AOT compiler with profile-guided optimisation. But usually the AOT compiler will still tend toward the conservative, balancing aggressiveness against code bloat.


In principle yes, but in practice? You see a trend of interpreted, then JITed, languages eventually getting an AOT compiler, but I can't recall any natively compiled language getting a JIT for performance reasons (LLVM was supposed to allow that, but its JIT hasn't seen much love).


Azul's current Zing is trending towards a hybrid: it stores previously compiled machine code and reuses it on restart if the bytecode matches.

I also think the JVM is the only JIT that has had close to the same amount of resources pushed at it as some of the Fortran/C/C++ compilers. But, serving a different market, the JVM focused a lot less on easy-to-benchmark numeric code and rather more on other kinds of code.

Then of course there is the hybrid JVM/LLVM approach using Graal/Sulong [1], which hopes to do better than either JIT or AOT alone.

[1] https://github.com/graalvm/sulong


This mixes two questions: can Java be compiled to faster code than C++, and can a JIT compiler outperform an AOT compiler? Both Java and C++ have problems; C++'s aliasing rules are nasty for an optimizer, and I'd be surprised if anything Java has is as bad.

The second is more dubious... sure, a JIT compiler has information if it spends RAM and cycles on collecting it, but it also has to run quickly and fit in the runtime environment, while an AOT compiler can run arbitrarily slowly, use a whole rackful of servers, and use PGO without incurring any profile-collection cost at (normal) runtime.


Tiered JITs are meant to allow slower and more aggressive optimizations to be done on truly hot code. However, you're right in that they still cannot spend as much time or resources as an AOT compiler.


The JIT must be rerun every time the process is started. In an image-based language like Smalltalk, the JIT state can be saved in the image along with its performance optimizations, so the next time the image is loaded the JIT is already hot.


Yes, Azul has a similar feature (ReadyNow).

This is nontrivial because lots of optimizations depend on class load ordering and runtime profile information.


The JIT has access to the actual runtime class hierarchy and can devirtualize method calls. In C++, if a method is virtual, the compiler can generally never decide "oh, this method has only one implementation, so let's always call that", because it must statically assume multiple implementations may be present.




