41

Is there any just-in-time compiler out there for compiled languages, such as C and C++? (The first names that come to mind are Clang and LLVM! But I don't think they currently support it.)

Explanation:

I think software could benefit from runtime profiling feedback and aggressive recompilation of hotspots at runtime, even for languages compiled to machine code, such as C and C++.

Profile-guided optimization does a similar job, with the difference that a JIT would be more flexible across environments. With PGO you run your binary before releasing it; after release, no environment/input feedback collected at runtime is used. So if the input pattern changes, the binary is prone to a performance penalty. A JIT keeps working well even under those conditions.

However, I think it is debatable whether the performance benefit of JIT compilation outweighs its own overhead.

3
  • 1
    Off-topic: off-site resource.
    – DeadMG
    Commented Apr 12, 2016 at 19:55
  • 2
    Not sure if it fits the question, but from a usability perspective I find the Cxx package in the Julia language useful: it gives you an interactive C++ prompt similar to the ones described in @PhilippClaßen's answer.
    – Antonello
    Commented Aug 8, 2019 at 9:24
  • 1
    GCC 9 now has a jit compiler gcc.gnu.org/onlinedocs/jit/intro/index.html Commented Nov 25, 2019 at 23:45

5 Answers

36

Yes, there are a couple of JIT compilers for C and/or C++.

Cling (as you might guess from the name) is based on Clang/LLVM. It acts like an interpreter: you give it some source code and a command to run, and it runs it. The emphasis here is primarily on convenience and fast compilation, not maximum optimization. As such, although technically an answer to the question itself, it doesn't really suit the OP's intent very well.

Another possibility is NativeJIT. This fits the question somewhat differently. In particular, it does not accept C or C++ source code and compile and execute it. Rather, it is a small compiler that you can compile into your C++ program. It accepts an expression written as an EDSL inside your C++ program and generates actual machine code from it, which you can then execute. This fits much better with a framework where you can compile most of your program with a normal compiler, but have a few expressions that you won't know until runtime, which you want to execute at something approaching optimum speed.

As for the apparent intent of the original question, I think the basic point of my original answer still stands: while a JIT compiler can adapt to things such as data that varies from one execution to the next, or even varies dynamically during a single execution, in reality this makes relatively little difference, at least as a general rule. In most cases, running a compiler at runtime means you have to forgo quite a bit of optimization, so about the best you can usually hope for is code close to what a conventional compiler would produce.

Although it's possible to postulate situations where the information available to a JIT compiler would allow it to generate substantially better code than a conventional compiler, instances of this happening in practice seem to be pretty unusual (and in most cases where I've been able to verify it happening, it was really due to a problem in the source code, not with the static compilation model).

[Also, see edit history for quite a different answer that's now basically obsolete.]

4
  • 2
    Why don't JITs save a cache-like file so they can skip relearning everything from scratch?
    – JohnMudd
    Commented Jun 26, 2015 at 14:47
  • 4
    @JohnMudd: I suspect the reasoning is security. E.g., modify the cached code, then the next time the VM starts, it executes code I put there instead of what it wrote there. Commented Jun 27, 2015 at 4:27
  • 4
    OTOH, if you can modify caches, you can also modify source files. Commented Nov 19, 2015 at 1:06
  • 1
    @user3125367: Yes, but in many cases the compiler does various type checking and such that might be bypassed if you load compiled code directly from the cache. Depends on the JIT, of course--Java does a lot of enforcement work when loading a (compiled) .class file, but many others do a lot less (nearly none, in many cases). Commented Nov 19, 2015 at 3:36
11

Yes, there are JIT compilers for C++. From a pure performance perspective, I think Profile Guided Optimization (PGO) is still superior.

However, that does not mean that JIT compilation is not yet used in practice. For example, Apple uses LLVM as a JIT for their OpenGL pipeline. That is a domain where you have significantly more information at runtime, which can be used to remove a lot of dead code.

Another interesting application of JIT is Cling, an interactive C++ interpreter based on LLVM and Clang: https://root.cern.ch/cling

Here is a sample session:

[cling]$ #include <iostream>
[cling]$ std::cout << "Hallo, world!" << std::endl;
Hallo, world!
[cling]$ 3 + 5
(int const) 8
[cling]$ int x = 3; x++
(int) 3
(int const) 3
[cling]$ x
(int) 4

It is no toy project: it is actually used at CERN, for example, to develop code for the Large Hadron Collider.

7

C++/CLI is JIT-compiled. Granted, C++/CLI is not C++, but it is pretty close. That said, Microsoft's JIT doesn't do the clever runtime-behavior-based optimizations you're asking about, at least not to my knowledge, so this doesn't really help.

http://nestedvm.ibex.org/ turns MIPS into Java bytecode, which is then JIT-compiled. The problem with this approach, with respect to your question, is that a lot of the useful information has been thrown away by the time the code reaches the JIT.

2

Firstly, I assume you'd want a tracing JIT rather than a method JIT.

The best approach would be to compile the code to LLVM IR and then add tracing code, before producing a native executable. Once a block of code is sufficiently heavily used, and once enough information about the values (not the types, as in dynamic languages) of its variables has been collected, the code can be recompiled (from the IR) with guards based on those values.
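The scheme described above can be sketched in plain C++ (all names invented; a real tracing JIT would recompile machine code rather than flip a flag): count executions, and once a block is hot and the observed values are stable, switch to a specialized body protected by a guard.

```cpp
#include <cassert>

static int  hit_count   = 0;     // profiling counter the tracer updates
static bool specialized = false; // whether we have "recompiled" yet

// Generic body: multiply by repeated addition.
int scale_generic(int x, int n) {
    int r = 0;
    for (int i = 0; i < n; ++i) r += x;
    return r;
}

int scale(int x, int n) {
    if (!specialized) {
        // Still tracing: once hot and n is stably 2, "recompile".
        if (++hit_count >= 100 && n == 2)
            specialized = true;          // stand-in for recompilation
        return scale_generic(x, n);
    }
    if (n == 2)                  // guard on the observed value of n
        return x + x;            // specialized body for n == 2
    return scale_generic(x, n);  // guard failed: deoptimize
}
```

Calling `scale(3, 2)` repeatedly eventually takes the specialized path, while a later `scale(3, 5)` fails the guard and falls back to the generic code.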

I seem to remember there was some progress on making a C/C++ JIT in Clang under the name libclang.

2
  • 1
    AFAIK, libclang is most of the clang functionality factored out as a library. so, you can use it to analyze source code to create sophisticated syntax coloring, lints, code browsing, etc.
    – Javier
    Commented Dec 23, 2010 at 21:55
  • @Javier, that sounds about right. I think there was a function in the library that took a const char* of source code and produced LLVM IR, but thinking about it now, it's probably better to JIT based on the IR rather than the source. Commented Dec 24, 2010 at 8:22
0

There is Apple’s bitcode: when you upload software to be distributed on the App Store, you can choose to upload your code compiled to “bitcode”, with the actual machine code generated as the customer downloads the app. This allows optimisation for the specific processor in the customer’s device, including processors that didn’t exist when the developer compiled the code.

There are also compilers that take feedback from profiling. If that optimisation is to be done on the end user’s computer, either it is repeated on every run, making things slower, or the optimised code replaces the original code, which means the code must be writable, with all the associated security risks.
