An Attempt to Catch Up with JIT Compilers

Voultapher

Love to see negative results published, so so important.

Please let's all go towards research procedure that enforces the submission of the hypothesis before any research is allowed to commence and includes enforced publishing regardless of results.

adrian_b

When I was a young student, hearing all the marketing talk from various companies about the valuable intellectual property supposedly incorporated in their products and the valuable trade secrets supposedly guarded from their competitors, I thought that once I started working at such a company I would learn a lot of useful things, well beyond what I was learning as a student.

However, after working at many companies, big and small, I was disappointed to find out that my expectations had been naive. In no such company have I seen any useful secret. There has been only one case where I thought at first that I had learned something not widely known, but then, through a search of the older literature, I found that fact published in an old research paper.

The only really useful information that I found at every such workplace in a successful company was the know-how about a long list of engineering solutions that I might think of when confronted with a new problem, but which the experienced staff knew to be dead ends: solutions they had already tried and which, for various reasons, turned out not to be acceptable.

The know-how about such solutions that do not work, and especially why they do not work, was much more valuable than what was officially considered intellectual property, e.g. patents or copyrights.

Voultapher

Thanks for sharing this fascinating insight.

I'd expect that even if we moved towards preregistration widely, this situation would remain to some degree, because universities lack the resources, pressure and time needed to turn a novel idea into a commercial product. As seen with battery research, being good at one thing is not enough; the solution needs to be bad at nearly nothing to compete with li-ion. In my experience, some seemingly solvable roadblocks can turn into showstoppers very late, and some showstoppers were not on anyone's radar while conceptualizing the solution.

djoldman

Huge upvote from me as well. Think of all the folks out there who have this idea and, instead of searching for it, finding nothing, and implementing it, can now either move on or try to fiddle with this work's output.

RNGesus83

> Love to see negative results published, so so important.
>
> Please let's all go towards research procedure that enforces the submission of the hypothesis before any research is allowed to commence and includes enforced publishing regardless of results.

Grounded theory? https://en.m.wikipedia.org/wiki/Grounded_theory

Voultapher

Not quite, I think the common term for this idea is preregistration https://en.wikipedia.org/wiki/Preregistration_(science)

RNGesus83

Interesting! Thank you for the good link

api

I've never heard that idea before, and it's so obvious. All science should be done this way.

It kind of does happen in areas of science that are capital intensive like space, high energy physics, etc., because people hear about what is to be done before it is done, but it's not formalized. It should be, and it should be done with everything.

Voultapher

If we are talking about reforms to science procedure, I'd also love to see 30% or so of the research funds locked away, to then be given to another team, ideally at another university, that gets access only to the original team's publication and has the goal of reproducing the study. The vast majority of papers released don't contain enough information to actually repeat their work.

pizlonator

I think the missing piece here is that JavaScriptCore (JSC) and other such systems don't just use inline caching to speed up dynamic accesses; they use them as profiling feedback.

So, anytime you have an IC in interpreter, baseline, or lightly optimized code, then that IC is monitored to see how polymorphic it gets, and that data is fed back into the optimization pipeline.

Just having an IC as a dead end, where you don't use it for profiling, is way less profitable than having an IC that feeds into profiling.
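
A minimal sketch of that feedback loop in C++ (every name here is hypothetical, nothing from an actual engine): the IC site caches the last shape it saw for the fast path, and at the same time records which shapes it has encountered, so the optimizing tier can later ask how polymorphic the site is.

    #include <algorithm>
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Shape { std::unordered_map<std::string, int> slotOf; };
    struct Object { const Shape* shape; std::vector<int64_t> slots; };

    struct PropertyIC {
        const Shape* cachedShape = nullptr;
        int cachedSlot = 0;
        std::vector<const Shape*> seenShapes;  // profiling feedback for the optimizer

        int64_t get(const Object& o, const std::string& name) {
            if (o.shape == cachedShape)                    // fast path: one compare + one load
                return o.slots[cachedSlot];
            cachedSlot = o.shape->slotOf.at(name);         // slow path: full lookup, refill cache
            cachedShape = o.shape;
            if (std::find(seenShapes.begin(), seenShapes.end(), o.shape) == seenShapes.end())
                seenShapes.push_back(o.shape);             // remember every distinct shape seen
            return o.slots[cachedSlot];
        }

        // What the optimizing tier asks when it recompiles this site.
        bool looksMonomorphic() const { return seenShapes.size() <= 1; }
    };

A real engine keeps this data per bytecode site and in a far more compact form, but the shape of the feedback is the same.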

kannanvijayan

Well, on dynamic languages the ICs do give a nice order-of-magnitude speed-up by themselves, since the guard eliminates a whole hashtable (or linear) lookup, rather than (as in this case) just a single memory indirection.

But yeah - on spidermonkey we found that orienting our ICs towards being stable and easy to work with, as opposed to just being fast, ended up leading to a much better design.

This is a nice result though. Negative, but good that they published it.

What would be a good next step is some QEMU-style transformation: pull out basic blocks, profile them for hotness, incoming arguments at function starts, and dynamic dispatch targets, then use that to re-compile the whole thing with a method JIT, in particular inlining across call paths with GVN and DCE applied.

I kind of expect the results to be very positive, just based on intuition.. but it'd be cool to see how it actually turned out.

bjoli

A minor nitpick: ICs don't give that much benefit in monomorphic languages like scheme.

kannanvijayan

Apologies if this response seems aggressive - this is just a topic I'm very passionate about :)

I think technically in languages like scheme, the opportunity would be to optimize other sorts of dispatches. The classic dispatch mechanism in scheme is the "assoc" style list-of-pairs lookup.

In this case, the "monomorphization" would be extracting runtime information on the common lookups that are taken. This is doable in a language like scheme, but it requires identifying parts of data structures that are less likely to change over time - where it makes sense to lift them up into hidden types and effectively make them "static".

Imagine if you could designate a particular `(list (cons key value) ...)` value as "optimizable" - maybe even with a macro/function call: `(optimizable ((a 1) (b 2) ...))`

This would build a hidden shape for the association's "backbone" and give you back a shaped assoc list, and then you would be able to optimize all uses of `(assoc ...)` on lists of that kind in the same way you optimize shaped objects.

A plumbing exposed version of this would just let you do `(let my-shape (make-shape '(prop1 prop2 ...)))` and later `(my-shape '(1 2 ...))` to build the shape-optimized association list.

It's kind of neat when you realize that almost everything the runtime type-inference regime in a JIT compiler does is enable eliding lookups across data structures where we can assume that some part of that data structure is "more static" than other parts.

In JS that data structure is a linked-list-of-hashtables, where the hashtable keys and the linked list backbone are expected to be stable.

But the general idea applies to literally any structure you'd want to do lookups across. If you can extract a 'conserved shape', you can apply this optimization.

sitkack

Couldn't PICs and monomorphization be seen as duals? They are both solving the problem of how to make polymorphic code have fewer branches.

titzer

Indeed, this was literally the conclusion of the first paper that introduced polymorphic inline caches.

I'll add that the real benefit of ICs isn't just that compiled code is specialized to the seen types, but the fact that deoptimization guards are inserted, which split diamonds in the original general cases so that multiple downstream checks become redundant. So specialization is not just a local simplification but a global simplification to all dominated code in the context of the compilation unit.
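
A minimal sketch of that effect in C++ (the types and deoptimize() are hypothetical stand-ins): the generic version pays a check/branch diamond per access, while the speculative version pays one guard up front that dominates everything below it, so the second check simply disappears.

    #include <cstdio>
    #include <cstdlib>

    struct Shape { int id; };
    struct Object { const Shape* shape; long slots[4]; };

    long slow_lookup(const Object& o, int i) { return o.slots[i]; }  // stand-in for the generic path
    [[noreturn]] void deoptimize() { std::puts("deopt"); std::exit(0); }

    // Generic tier: every access carries its own check/branch diamond.
    long generic(const Object& o, const Shape* hot) {
        long a = (o.shape == hot) ? o.slots[0] : slow_lookup(o, 0);
        long b = (o.shape == hot) ? o.slots[1] : slow_lookup(o, 1);
        return a + b;
    }

    // Speculative tier: one guard splits the diamond once; the loads it
    // dominates can assume the shape, so no further checks are emitted.
    long specialized(const Object& o, const Shape* hot) {
        if (o.shape != hot) deoptimize();
        return o.slots[0] + o.slots[1];
    }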

mintplant

SpiderMonkey actually ditched most of the profiling stuff in favor of transpiling the ICs generated at runtime into the IR used by the optimizing compiler, inlining them into the functions when they're used, and then sending the whole thing through the usual optimization pipeline. The technique is surprisingly effective.

I don't know what the best reference to link for this would be, but look up "Warp" or "WarpMonkey" if you're interested.

kannanvijayan

WarpMonkey doesn't get rid of the profiling stuff - the profiling is inherent in ICs - we keep hitcounts and other information for various paths taken through code (including ICs) and use that to guide compilation later.

Warp's uniqueness is in how it implements the ICs. The design goal when we built the baseline JIT in SpiderMonkey was to split the code and data components of ICs. At the time, we were looking at V8 ICs, which were basically compiled code blocks with the relevant parameter data (e.g. a pointer to the hidden type to compare against) baked into the code.

We wanted to segregate the data from the code - e.g. so that all ShapedGetProp ICs can have a data stub with a pointer to their own shape, but share a pointer to the code. Effectively your ICs end up looking like small linked lists of C++ pure virtual objects (without the vtable indirection and just a single code pointer hanging off of the stub).

Originally the "shared code" was emitted by a bunch of statically defined methods that emitted a fixed bit of assembly (one for each kind of stub). That became unwieldy as we added more stubs, so CacheIR was designed. CacheIR was a simple bytecode language that the stubs could express their logic in, which would get compiled down to machine code. The CacheIR bytecode would be a key to the compiled stub code.

That let stubs generate arbitrary CacheIR for their logic, but still share code between stubs that emitted the same logic.
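
Roughly, a sketch of that layout in C++ (hypothetical structs, not SpiderMonkey's actual ones): the per-site stub holds only data (which shape to guard, where to go next), while the executable code is shared between all stubs whose CacheIR is identical, looked up with the CacheIR bytes as the key.

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <vector>

    struct Shape;
    using CacheIR = std::vector<uint8_t>;        // the bytecode describing a stub's logic
    struct StubCode { /* handle to jitted machine code */ };

    struct Stub {
        const StubCode* code;   // shared: one compiled copy per distinct CacheIR sequence
        const Shape* shape;     // per-stub data that the shared code reads
        Stub* next;             // ICs end up as small linked lists of these
    };

    // Shared-code cache keyed by the CacheIR bytes.
    std::map<CacheIR, std::unique_ptr<StubCode>> stubCodeCache;

    const StubCode* getOrCompile(const CacheIR& ir) {
        auto it = stubCodeCache.find(ir);
        if (it == stubCodeCache.end())
            it = stubCodeCache.emplace(ir, std::make_unique<StubCode>()).first;  // compile once
        return it->second.get();
    }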

That led to the idea of Warp, where we noticed that one could build the input for an optimized method-jit compiler just by combining the profiling info that stubs produced, and the CacheIR bytecode for those stubs.

Normally you'd start from bytecode, build an SSA, then do a pass where you apply type information.

With Warp, the design simplifies into stitching together a bunch of CacheIR chunks which already embed the optimization information you care about, and then compiling that.

Ultimately it does the same thing as the other JITs, but it goes about it in a really nice and clean way. It kind of expresses some of the ideas that Maxime Chevalier-Boisvert was exploring in their work on basic block versioning.

mintplant

Thanks for the more complete explanation!

> Normally you'd start from bytecode, build an SSA, then do a pass where you apply type information.

> With Warp, the design simplifies into stitching together a bunch of CacheIR chunks which already embed the optimization information you care about, and then compiling that.

This is what I meant by ditching most of the profiling stuff; I suppose I should have said "type inference stuff" to be more precise.

> Originally the "shared code" was emitted by a bunch of statically defined methods that emitted a fixed bit of assembly (one for each kind of stub). That became unwieldy as we added more stubs, so CacheIR was designed.

I remember all too well :) I worked on the first pass at implementing megamorphic caches into the original stub generators that spit out (macro)assembly directly, before we had CacheIR. So much code duplication...

hinkley

My understanding is that branch prediction got better in the ‘10s and a bunch of techniques that didn’t work before do now.

pizlonator

The modern VM technique looks almost exactly like what the original PIC papers talked about in the 90s. There are some details that are different, but I'm not sure that the details come down to exploiting changes in branch prediction efficiency. I think the things that changed come mostly down to the fact that the original PIC paper was a first stab by a small team whereas modern VMs involve decades of engineering by larger teams (so everything that could get more complex as a consequence of tuning did get more complex).

So, while it's true that microarches changed in a lot of ways, the overall implications for how you build VMs are not so big.

gopalv

> that branch prediction got better in the ‘10s and a bunch of techniques that didn’t work before do now.

They got better than they had any right to be, but then we found out that Spectre & Meltdown were vulnerabilities rather than optimizations.

For example, a switch-based interpreter was as fast as a CGOTO one for a brief period between 2012 and 2018, but suddenly got slower again as the CPUs could no longer rely on branch prediction to do prefetching.
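
For anyone who hasn't seen the two dispatch styles side by side, here is a minimal sketch in C++ (computed goto is a GCC/Clang extension): the switch loop funnels every dispatch through one branch site, while the goto version repeats the dispatch at the end of every handler, which is what made it friendlier to older branch predictors.

    #include <cstdio>

    enum Op : unsigned char { OP_INC, OP_DEC, OP_HALT };

    long run_switch(const unsigned char* code) {
        long acc = 0;
        for (;;) {
            switch (*code++) {              // single, shared dispatch point
                case OP_INC: ++acc; break;
                case OP_DEC: --acc; break;
                case OP_HALT: return acc;
            }
        }
    }

    long run_cgoto(const unsigned char* code) {
        static void* const targets[] = { &&inc, &&dec, &&halt };
        long acc = 0;
        goto *targets[*code++];             // each handler ends with its own dispatch
    inc:  ++acc; goto *targets[*code++];
    dec:  --acc; goto *targets[*code++];
    halt: return acc;
    }

    int main() {
        unsigned char prog[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };
        std::printf("%ld %ld\n", run_switch(prog), run_cgoto(prog));
    }

Whether the duplicated dispatch still wins depends heavily on the microarchitecture, which is exactly the point above.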

IainIreland

We talk about this a bit in our CacheIR paper. Search for "IonBuilder".

https://www.mgaudet.ca/s/mplr23main-preprint.pdf

pizlonator

It sounds like you're describing something similar to what the other JS VMs do

IainIreland

The main thing we're doing differently in SM is that all of our ICs are generated using a simple linear IR (CacheIR), instead of generating machine code directly. For example, a simple monomorphic property access (obj.prop) would be GuardIsObject / GuardShape / LoadSlot. We can then lower that IR directly to MIR for the optimizing compiler.

It gives us a lot of flexibility in choosing what to guard, without having to worry as much about getting out of sync between the baseline ICs and the optimizer's frontend. To a first approximation, our CacheIR generators are the single source of truth for speculative optimization in SpiderMonkey, and the rest of the engine just mechanically follows their lead.

There are also some cool tricks you can do when your ICs have associated IR. For example, when calling a method on a superclass, with receivers of a variety of different subclasses, you often end up with a set of ICs that all 1. Guard the different shapes of the receiver objects, 2. Guard the shared shape of the holder object, then 3. Do the call. When we detect that, we can mechanically walk the IR, collect the different receiver shapes, and generate a single stub-folded IC that instead guards against a list of shapes. The cool thing is that stub folding doesn't care whether it's looking at a call IC, or a GetProp IC, or anything else: so long as the only thing that differs is a single GuardShape, you can make the transformation.
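
A rough sketch of what that looks like in C++ (only the op names mentioned above are real; everything else is hypothetical): a monomorphic GetProp stub is a short linear op list, and stub folding collapses stubs that differ only in one GuardShape into a single stub guarding a list of shapes.

    #include <vector>

    enum class Op { GuardIsObject, GuardShape, GuardAnyShapeInList, LoadSlot };

    struct Shape;
    struct CacheOp { Op op; std::vector<const Shape*> shapes; int slot = 0; };
    using StubIR = std::vector<CacheOp>;

    // obj.prop, monomorphic: GuardIsObject / GuardShape / LoadSlot.
    StubIR makeGetPropStub(const Shape* s, int slot) {
        return { {Op::GuardIsObject, {}}, {Op::GuardShape, {s}}, {Op::LoadSlot, {}, slot} };
    }

    // Fold stubs that are identical except for the shape guarded at one position.
    StubIR foldStubs(const std::vector<StubIR>& stubs, size_t guardIndex) {
        StubIR folded = stubs.front();
        folded[guardIndex].op = Op::GuardAnyShapeInList;
        folded[guardIndex].shapes.clear();
        for (const StubIR& s : stubs)
            folded[guardIndex].shapes.push_back(s[guardIndex].shapes.front());
        return folded;
    }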

mintplant

This is unique to SpiderMonkey, as far as I'm aware.

hinkley

One of the last pieces of really good advice I got before I gave up on writing a programming language myself is that if you instrument the paths that are already expected to be slow, you can get most of the value of instrumentation at a fraction of the cost per call. People avoid making the slow calls, and if they don't, the app was going to be slower anyway, so why not an extra couple of percent? Versus the fast path, where the instrumentation may be a quarter or more of the runtime.

sitkack

The answer is always more feedback. I am excited about DNN-powered static profilers. The training data will come from JITs saving the results of their experiments.

mike_hearn

Ask and ye shall receive:

https://www.sciencedirect.com/science/article/abs/pii/S01641...

It's XGBoost-powered rather than DNN-powered, but that might make sense from a runtime throughput perspective.

sitkack

I think your original post turned me on to static profiling. The same researchers have another paper out, https://www.semanticscholar.org/paper/GraalNN%3A-Context-Sen...

This one is Open Access (thanks ACM!)

While the GraalSP paper is paywalled, there is a paper in Serbian by the same author. https://infom.fon.bg.ac.rs/index.php/infom/article/download/...

pizlonator

That's an exciting direction!

sitkack

Profile Guided Optimization without Profiles: A Machine Learning Approach

https://www.semanticscholar.org/paper/Profile-Guided-Optimiz...

c-smile

Slightly orthogonal...

In my Sciter, which uses QuickJS (no JIT), instead of a JIT I've added a C compiler. That means we can add not just JS modules but C modules too:

   import * as cmod from "./cmodule.c"
Such a cmodule will be compiled to native code and executed on the fly. The idea is simple: each language is good for specific tasks. JS is flexible and C is performant - just use the tool that is most optimal for the task.

C modules play two major roles: FFI and number-crunching code execution.

Sciter uses TCC compiler and runtime.

In total, the QuickJS + TCC binary bundle is 500k + 220k = 720k.

For comparison: V8 is about 40 MB.

https://sciter.com/c-modules-in-sciter/ https://sciter.com/here-we-go/

vanderZwan

Interesting project! After clicking around on the website:

> In almost 10 years, Sciter UI engine has become the secret weapon of success for some of the most prominent antivirus products on the market: Norton Antivirus and Internet Security, Comodo Internet Security, ESET Antivirus, BitDefender Antivirus, and others.

What an intriguingly specific niche of customer! How come all these different anti-virus companies decided to use your platform?

c-smile

> anti-virus companies decided to use your platform?

One of the reasons: an AV application should look modern, to give the impression that the app is adequate to modern threats. So while the app backend is relatively stable, its UI should be easily tweakable. CSS/HTML is good for that.

Check this: https://sciter.com/wp-content/uploads/2018/06/n360.png

mathverse

I actually really love it. Typically AV products' UIs feel snappy and lightweight, and it is the backend engine that does most of the work and feels like a horrendous bottleneck. Which I think is an interesting phenomenon when considering modern desktop applications, where typically the backend code does very little and the frontend is the bloated part (Electron).

It's a bit sad that there isn't more discussion, and more reusable components, from these companies for Sciter that could help us create snappy apps!

pjmlp

Even though I am not a big C fan, the idea is rather cool; it is a bit like having C++ on .NET via C++/CLI.

tonnydourado

Tangentially, fuck yeah, negative results, just as good as positive ones

rhelz

Amen. This paper is worth more than all of the fraudulent, unreproducible papers we are inundated with, put together and squared.

VeejayRampay

the people who came up with this are obviously brilliant, but being french myself, I really wonder why no one is proof-reading the english; it gives an overall bad impression of the work imho

rhelz

Being a native English speaker I absolutely love reading and listening to speakers of English as a second language. Speaking is actually a subspecies of singing, and it's always cool to hear the same old lyrics remixed to a new melody and a new beat.

English has no 'correct' way to be written or spoken, nor does it need one, nor would it benefit from one, therefore, nor should it have one.

Speakers of English as a second language: you are what makes English a great language.

davidgay

> English has no 'correct' way to be written or spoken, nor does it need one, nor would it benefit from one, therefore, nor should it have one.

There may be no 'correct' way, but there are plenty of 'incomprehensible' ways. I once encountered a research paper that had clearly [0] been translated word-for-word from French into English and made no sense until I translated it word-for-word back to French...

[0]: actually it was only clear after I realised I should attempt the reverse translation ;)

rhelz

Sure, but frankly, I've heard plenty of people speaking the most flawless King's English who didn't make any sense at all.

re: translated math papers: haha we've all been there. Once I had to read a bunch of 70's-era papers from Russian Mathematicians. The translators, bless their hearts, I'm sure knew everything there was to know about Dickens and Dostoevsky, but it was clear they had no clue what the math was all about :-)

Oh well, Math is the universal language, right? chuckle

tredre3

That's a beautiful way of seeing things! Unfortunately, as you're well aware I'm sure, most people do not share your idyllic view of polyglots and, for better or worse, they will assume that bad english = bad quality work. And bad doesn't have to mean mistakes. Just an unusual wording is enough to throw the average person off, in my experience.

rhelz

I'm not as worried about those who have ears, but don't hear, as I am about the effect LLMs will have on English.

Grammarly was bad enough. One of my oldest friends is from Transylvania, and he could tell such great stories in his Eastern European accent and cadence. When he collected those stories into a book, he ran everything through Grammarly, and the book reads like a soulless newscaster ;-(

When people start en masse to run their prose through LLMs to "correct" it, English will lose one of its main arteries.

vanderZwan

> Speaking is actually a subspecies of singing, and it's always cool to hear the same old lyrics remixed to a new melody and a new beat.

What a lovely take on this topic! :)

(does this imply you're a fellow believer in the hypothesis that singing evolved before language?)

rhelz

Ha, I don't know anything at all about how language evolved. But, when you listen to somebody speaking--if you can bracket the meaning (which tends to soak up all our conscious attention)--you can hear the rhythm and you can hear the melodies. You can hear the music.

In order to understand somebody who speaks English in a different enough dialect, you have to really listen to the rhythm and melody--in order to puzzle out the meanings. The meanings are not hitting you in the face, they are more coy, and you have to seek them out while listening to songs you've never heard before!

Same goes with speaking with somebody who speaks English as a second language. You can hear the music in a way which is hard to do when listening to native speakers. Not impossible--once you realize what is happening, you can learn to pay attention to it.

But think about all the different ways you've heard English spoken... French accents, Nigerian accents, German accents, Russian accents, north Indian and south Indian accents, Mexican accents... It's like tuning into a radio station playing the music of the world.

And unless they all were taking the time to learn English, we would not be hearing their music. And we would not be able to avail ourselves of an inexhaustible supply of new idioms, new ways of emphasizing, new ways of conveying subtle emotional cues...

indolering

It's a preprint.

tsunego

chasing inline cache micro-optimizations with dynamic binary modification is a dead end. modern CPUs are laughing at our outdated compiler tricks. maybe it's time to accept that clever hacks won’t outrun silicon.

saagarjha

JITs typically are too broken for compiler tricks so I don't think it's time to accept that just yet.

andrekandre

what is the better approach?

Sparkyte

You don't; there are equal trade-offs. A JIT might use more memory because of what it does at runtime, but that is also the exact reason it is faster to start. A good trade-off is just using the type of language best suited for the workload.

ErikCorry

It's good that they post negative results, but it's hard to know exactly why their attempt failed, and it's tempting for me to make guesses without doing any measurements, so let me fall for that temptation:

They are patching inline-cache sites in an AOT binary and not seeing improvements.

Only 17% of the inline-cache sites could be optimized to what they call O2 level (listing 7). Most could only be optimized to O1 level (listing 6). The only difference from the baseline (listing 5) to O1 is that they replaced:

    mov 0x101c(%rip), %rax   # load the offset

with

    mov 0x3, %rax            # load the offset

I'm not very surprised that this did not help much. The old load is probably hoisted up and loaded into a renamed register very early, and it won't miss in the cache.

Basically they already have a pretty nice inline cache system at least for the monomorphic case, and messing with the exact instructions used to implement it doesn't help much. A JIT is able to do so much more, eg polymorphic cases, inlining of simple methods, and eliminating repeated checks of the same hidden class. Not to mention detecting at runtime that some unknown object is almost always an integer or a float and JITting code specialized for that.

People new to virtual machines often focus on the compiler, whereas the stuff that moves the needle is often around the runtime. How tagged and typed data is represented, the GC implementation, and the object layout. Eg this paper explores an interesting new tagging technique and makes a huge difference to performance (there's some author overlap): https://www.researchgate.net/figure/The-three-representation...

Incidentally the assembly syntax in the "Attempt to catch up" article is a bit confusing. It looks like the IC addresses are very close to the code, like almost on the same page. Stack overflow explains it:

GAS syntax for RIP-relative addressing looks like symbol + current_address (RIP), but it actually means symbol with respect to RIP.

There's an inconsistency with numeric literals:

[rip + 10] or AT&T 10(%rip) means 10 bytes past the end of this instruction

[rip + a] or AT&T a(%rip) means to calculate a rel32 displacement to reach a, not RIP + symbol value. (The GAS manual documents this special interpretation)

mannyv

I wonder if you could use clang/llvm to do a super-JIT by having it recompile its IR as the program runs, taking advantage of profiling to optimize the hot paths.

SkiFire13

Profile-guided optimizations are already a thing and don't require JITting your program.

mannyv

Profile guided optimization is a static operation that's done after profiling the running app - unless the state of the art has changed in the last few years.

ajross

This seems poorly grounded. In fact almost three decades after the release of the Java HotSpot runtime we're still waiting for even one system to produce the promised advantages. I guess consensus is that V8 has come closest?

But the reality is that hand-optimized AoT builds remain the gold standard for performance work.

noelwelsh

The benchmarks I have seen show Hotspot is ahead of V8. E.g. https://stefan-marr.de/papers/oopsla-larose-et-al-ast-vs-byt...

What makes this very complicated is that 1) language design plays a big part in performance, and 2) CPUs change as well, and this anecdotally seems to have more impact on interpreter performance than on compiler performance.

With regards to 1), consider optimizing JavaScript. It doesn't have machine integers, so you have to do a bunch of analysis to figure out when something is being used as an integer, and then you can make that code fast. There are many other cases. Python is even worse in this regard. In comparison, AOT-compiled languages are usually designed to be fast, so they make tradeoffs that favour performance at the cost of some level of abstraction/expressivity. The JVM is somewhere in the middle, and so is its performance.
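
A minimal sketch in C++ of one common tagging scheme (not any specific engine's) showing what "make that code fast" ends up meaning: once profiling says a site only ever sees small integers, the compiled code is just a tag test plus untagged arithmetic, with a bail-out if the speculation ever fails.

    #include <cstdint>

    using Value = uintptr_t;                 // low bit 1 = small integer, 0 = pointer to a boxed value

    inline bool isSmallInt(Value v)        { return (v & 1) == 1; }
    inline Value boxSmallInt(intptr_t i)   { return (static_cast<uintptr_t>(i) << 1) | 1; }
    inline intptr_t unboxSmallInt(Value v) { return static_cast<intptr_t>(v) >> 1; }

    // Speculatively compiled "a + b" for a site that profiling says is always ints.
    intptr_t addSpeculatingInts(Value a, Value b, bool* deopt) {
        if (!isSmallInt(a) || !isSmallInt(b)) { *deopt = true; return 0; }  // bail to the generic path
        return unboxSmallInt(a) + unboxSmallInt(b);
    }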

With regards to 2) this paper is an example, as is https://inria.hal.science/hal-01100647/file/InterpIBr-hal.pd...

MaxBarraclough

> you have to do a bunch of analysis to figure when something is being used as an integer and then you can make that code fast

It doesn't get much attention now that WASM exists, but asm.js essentially solves this, so a more head-to-head comparison ought to be possible. (V8 has optimisations specific to asm.js.)

https://en.wikipedia.org/wiki/Asm.js

IainIreland

asm.js solves this in the specific case where somebody has compiled their C/C++ code to target asm.js. It doesn't solve it for arbitrary JS code.

asm.js is more like a weird frontend to wasm than a dialect of JS.

ajross

With all respect that sounds like excuse-making. I mean, yeah, Javascript and JVM and .NET are slower runtimes than C or Rust[1]. Nonetheless that's the world we live in, and if you have a performance-sensitive problem to solve you pick up rustc or g++ and not a managed runtime. If that's wrong, someone's got to actually show that it's wrong.

[1] Maybe Go or Swift would be more apples-to-apples. But even then are there clear benchmarks showing Kotlin or C# beating similar AoT code? If anything the general sense of the community is that Go is faster than Java.

noelwelsh

Excuses for what? I'm not the elected representative for JIT compiled languages, sworn to defend them. There are technical reasons they tend to be slower. I was sketching some of them.

wiseowise

https://devblogs.microsoft.com/oldnewthing/20060731-15/?p=30...

https://blog.codinghorror.com/on-managed-code-performance-ag...

And that was 2005. Modern .NET is much, much faster.

> If anything the general sense of the community is that Go is faster than Java.

Faster where?

pca006132

When things are performance-sensitive, you want things to be tunable and predictable. Good luck playing with the JIT if you rely on that for performance...

titzer

> But the reality is that hand-optimized AoT builds remain the gold standard for performance work.

It's considerably more complicated than that. After working in this area for 25 years, I have vacillated between extremes over decades-long arcs. The reality is much more nuanced than a four sentence HN comment. Profile and measure and stare at machine code. If you don't do that daily, it's hand waving and having hunches.

cogman10

I'd also point out that it's an ever-shifting landscape. What was slow yesterday might not be today.

In my experience, while there are some negatives to the runtime selected, the vast majority of performance is won or lost at the algorithm level. It really doesn't matter that Rust can be faster than Ruby if you chose an O(n^3) algorithm. Rust will run the O(n^3) algorithm faster than Ruby, for sure, but Ruby will beat the pants off of Rust if someone converts it into an O(n) algorithm.

It only starts mattering if you already have an O(n) algorithm. However, in my experience, a LOT of programmers are happy writing an n^3 and moving on to the next task without considering what this will do:

    for (var i : foo) {
      for (var j : foo) {
        for (var k : foo) {
          bar(i, j, k);
        }
      }
    }

neonsunset

You may be underestimating the degree of difference in performance between Ruby and Rust.

Here's a comparison of Ruby with JS, and Rust is of course faster still: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

If the code runs 100 times faster, it might just offset even a highly inefficient implementation.

> a LOT of programmers are happy writing a n^3

I have the same experience.

Unfortunately, and this is an issue I keep fighting with in some .NET communities, languages like C, C++ and Rust tend to select for engineers who are more likely to care about writing reasonably efficient implementations.

At the same time, higher-level languages sometimes can almost encourage blindness to the real-world model of computation, the execution implications be damned. In such languages you will encounter way more people who will write an O(n^3) algorithm and fight you tooth and nail to keep it that way, because they have zero understanding of the fundamentals, wasting the heroic effort by the runtime/compiler to keep it running acceptably well.

pjmlp

JVM implementations, especially those with a PGO feedback loop across runs, do quite well.

Likewise modern Android runs reasonably well with its mix of JIT, AOT with JIT PGO metadata, and baseline profiles shared across devices via the Play Store.

The gold standard for anyone who actually cares about ultimate performance is hand-written Assembly, naturally guided by a profiler capable of measuring everything the CPU is doing, like VTune.

IshKebab

I agree, the "JITs can be faster because X Y Z" arguments have never turned into "JITs are actually faster".

Maybe that's because JIT is almost always used in languages that were slow in the first place, e.g. due to GC.

Is there a JITing C compiler, or something like that? Would that even make sense?

sitkack

Binary Translation could be seen as a generalized JIT for native code.

Dynamo: A Transparent Dynamic Optimization System https://dl.acm.org/doi/pdf/10.1145/358438.349303

> We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated by Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HPUX 10.20 operating system.

https://www.semanticscholar.org/paper/Dynamo%3A-a-transparen...

remexre

Maybe the "allocate as little as possible, use sun.misc.Unsafe a lot, have lots of long-lived global arrays" style of Java programming some high-performance Java programs use would get close to being a good stand-in.

o11c

I'm pretty sure the major penalty is the lack of inline objects (thus requiring lots of pointer-chasing), rather than GC. GC will give you unpredictable performance but allocation has a penalty regardless of approach.
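
A small C++ sketch of the layout difference being described (hypothetical types): an array of references forces a pointer chase per element, while a value/struct array keeps the fields inline and contiguous.

    #include <memory>
    #include <vector>

    struct Point { double x, y; };

    // "Array of references": every element is a separate heap allocation.
    double sumChased(const std::vector<std::unique_ptr<Point>>& pts) {
        double s = 0;
        for (const auto& p : pts) s += p->x + p->y;   // one pointer chase per element
        return s;
    }

    // Inline layout: the same data, dense and prefetch-friendly.
    double sumInline(const std::vector<Point>& pts) {
        double s = 0;
        for (const Point& p : pts) s += p.x + p.y;
        return s;
    }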

For purely array-based code, JIT is the only factor and Java can seriously compete with C/C++. It's impossible to be competitive with idiomatic Java code though.

C# has structs (value classes) if you bother to use them. Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.

cogman10

The lack of inline objects is a huge hit that hopefully gets solved soon.

But I'd posit that one programming pattern enabled by a GC is concurrent programming. Java can happily create a bunch of promises/futures, throw them at a thread pool and let that be crunched without worrying about the lifetimes of stuff sent in or returned from these futures.

For single threaded stuff, C probably has java beat on memory and runtime. However, for multithreading it's simply easier to crank out correct threaded code in Java than it is in C.

IMO, this is what has made Go so appealing. Go doesn't produce the fastest binaries on the planet, but it does have nice concurrency primitives and a GC that makes highly parallel processes easy.

cempaka

> Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.

Hahah spicy take, I'd be interested to hear more. It definitely might not bode well that they opened the "Generics Reification" talk at JVMLS 2024 with "we have no answers, only problems."

neonsunset

To be fair, .NET has way more than just structs. But yes, they are a starting point.

azakai

> Is there a JITing C compiler, or something like that?

Yes, for example, compiling C to JavaScript (or asm.js, etc. [0]) leads to the C code being JITed.

And yes, there are definitely benchmarks where this is actually faster. Any time that a typical C compiler can't see that inlining makes sense is such an opportunity, as the JIT compiler sees the runtime behavior. The speedup can be very large. However, in practice, most codebases get inlined well using clang/gcc/etc., leaving few such opportunities.

[0] This may also happen when compiling C to WebAssembly, but it depends on whether the wasm runtime does JIT optimizations - many do not and instead focus on static optimizations, for simplicity.

pjmlp

C++/CLI is one example; it is C++, not C, but the example holds.

do_not_redeem

Now the money question: can anyone come up with a benchmark where, due to the JIT, C++/CLI runs faster than normal C++ compiled for the same CPU?

zabzonk

It is not C++ (or C) but a Microsoft-invented language - which is OK, but don't confuse it with C++ any more than MS have already done.

neonsunset

If you pit virtual-call-heavy code written in C++ against C#, C# will come out on top every single time, especially if you consume dynamically-linked dependencies or if you can't afford to wait until the heat death of the universe when all the LTO plugins finish their job.

Or if you use SIMD-heavy path and your binary is built against, say, X86-64-v2/3 and the target supports AVX512, .NET will happily use the entirety of AVX512 thanks to JIT even when still using 256b-wide operations (i.e. bespoke path that uses Vector256) with AVX512VL. This tends to surpass what you can get out of runtime dispatch under LLVM.

re: Java challenges - those stem from the JVM bytecode being a very difficult optimization target: every call is virtual by default with a complex dispatch strategy, everything is a heap-allocated object by default save for very few primitives, and generics lose type information and are never monomorphized. PGO through tiered compilation, and the resulting guarded devirtualization and object escape analysis, is what reclaims performance in Java and makes it acceptable. C and C++ with templates are a massively easier optimization target for GCC, and GCC does not operate under strict time constraints either. Therefore we have the results that we do.

Also interesting data points here if you'd like to look at AOT capabilities of higher-level languages:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

ForTheKidz

> I guess consensus is that V8 has come closest?

V8 better than the JVM? Insanity, maybe it can come to within an order of magnitude in terms of performance.

edflsafoiewq

Comes closest to realizing the concept of a JIT that is better than AOT.

ForTheKidz

I think that's completely silly framing; you can AOT compile any code better—or at least, just as well—if you already know how you want it to perform at runtime. Any efficiency gain would necessarily need to be in the context of total productivity.

pizlonator

> Java HotSpot runtime we're still waiting for even one system to produce the promised advantages.

What promised advantages are you waiting on?

There are lots of systems that have architectures that are similar to HotSpot, or that surpass it in some way. V8 is just one.

CamouflagedKiwi

There were many, many statements made that JIT compilers could be faster than AOT compilers because they have more information to use at runtime. Originally this was mostly aimed at Java/HotSpot, which has not, in practice, significantly displaced languages like C or C++ (or these days Rust) from high-performance work.

pizlonator

Yeah those statements were overly optimistic and I don’t think they’re representative of what most people in the JIT field think. It’s also not what I as a JIT engineer would have promised you.

The actual promise is just: JITs make dynamic languages faster and they are better at doing that than AOTs. I think lots of systems have delivered on that promise.

mike_hearn

It has in a bunch of places. C# is widely used in video games, and Java is widely used in financial trading including HFT scenarios where every millisecond matters. And obviously in Android it's used to write large parts of the OS.

There are places where it hasn't, but that's more due to missing features than JIT vs AOT. Java only got SIMD support recently and it's still in a preview mode, partly because it's all blocking on Valhalla value types.

PGO can make a big difference to C++ codebases, and as JIT is basically PGO with better deployment/developer ergonomics it could probably also work in C++ too. It's just that the most performance sensitive C++ codebases like Chrome prefer to take the build system complexity hit and get the benefits of PGO without the costs, and most C++ codebases just go without.

pjmlp

I guess distributed systems and OS GUI frameworks aren't it then.

paulddraper

> we're still waiting for even one system to produce the promised advantages

To be clear, successful JIT do runtime profiling+optimization, at significant benefit.

But on net, JIT languages are slower.

It is a valid question to ask whether AOT binaries can selectively use runtime optimizations, making them even faster.

devit

The paper seems to start with the bizarre assumption that AOT compilers need to "catch up" with JIT compilers and in particular that they benefit from inline caches for member lookup.

But the fact is that AOT compilers are usually for well-designed languages that don't need those inline caches because the designers properly specified a type system that would guarantee a field is always stored at the same offset.

They might benefit from a similar mechanism to predict branches and indirect branches (i.e. virtual/dynamic dispatch), but they already have compile-time profile-guided optimization and CPU branch predictors at runtime.

Furthermore, for branches that always go in one direction except for seldom changes, there are also frameworks like the Linux kernel "alternatives" and "static key" mechanisms.

So the opportunity for making things better with self-modifying code is limited to code where all those mechanisms don't work well, and the overhead of the runtime profiling is worth it.

Which is probably very rare and not worth bringing in a JIT compiler for.

pizlonator

AOTs are behind JITs for dynamic languages. It’s super interesting to study how to make AOTs catch up in that space, so I’m glad that these folks made an effort and reported the results!

Sparkyte

The trade offs between them are meaningful. Also Rust ain't bad for an AOT.
