
Reflections on 2 years of CPython's JIT Compiler

eigenspace

It turns out that if you have language semantics that make optimizations hard, making a fast optimizing compiler is hard. Who woulda thunk?

To be clear, this seems like a cool project and I don't want to be too negative about it, but I just think this was an entirely foreseeable outcome, and the number of people excited about this JIT project when it was announced shows how poorly a lot of people understand what goes into making a language fast.

jerf

I was active in the Python community in the 200x timeframe, and I daresay the common consensus was that the language didn't matter and a sufficiently smart compiler/JIT/whatever would eventually make dynamic scripting languages as fast as C, so there was no reason to learn static languages rather than just waiting for this to happen.

It was not universal. But it was very common and at least plausibly a majority view, so this idea wasn't just some tiny minority view either.

I consider this idea falsified now, pending someone actually coming up with a JIT/compiler/whatever that achieves this goal. We've poured millions upon millions of dollars into the task and the scripting languages still are not as fast as C or static languages in general. These millions were not wasted; there were real speedups worth having, even if they are somewhat hard on RAM. But they have clearly plateaued well below "C speed" and there is currently no realistic chance of that happening anytime soon.

Some people still have not noticed that the idea has been falsified and I even occasionally run into someone who thinks Javascript actually is as fast as C in general usage. But it's not and it's not going to be.

amval

> I was active in the Python community in the 200x timeframe, and I daresay the common consensus is that language didn't matter and a sufficiently smart compiler/JIT/whatever would eventually make dynamic scripting languages as fast as C, so there was no reason to learn static languages rather than just waiting for this to happen.

To be very pedantic, the problem is not that these are dynamic languages _per se_, but that they were designed with semantics unconcerned with performance. As such, retrofitting performance can be extremely challenging.

As a counterexample of fast and dynamic: https://julialang.org/ (of course, you pay the price in other places)

I agree with your comment overall, though.

throw10920

What are examples of those semantics? I'm guessing rebindable functions (and a single function/variable namespace), eval(), and object members available as a dict.
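All three guesses are easy to demonstrate; a minimal sketch (the names here are invented for illustration) of rebindable functions, `eval()`/`exec()`, and dict-backed object members:

```python
# A few of the dynamic behaviors a CPython JIT must assume can happen
# at any time, which defeats many ahead-of-time-style optimizations.

def add(a, b):
    return a + b

assert add(1, 2) == 3

# 1. Functions are rebindable: any call site may suddenly mean something else.
add = lambda a, b: a - b
assert add(1, 2) == -1

# 2. Object attributes live in a mutable dict, so field lookups
#    cannot in general be compiled down to fixed offsets.
class Point:
    def __init__(self):
        self.x = 1

p = Point()
p.__dict__["y"] = 2          # attributes can appear at runtime
assert p.y == 2

# 3. exec()/eval() can create or rebind names from strings.
ns = {}
exec("z = 40 + 2", ns)
assert ns["z"] == 42
```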

yxhuvud

While what you say is true, there is still a huge gap between the performance of JavaScript (and even Ruby) and that of Python. The efforts to optimize Python are lagging behind, so there are a lot of things that can still be made faster.

Sesse__

Python is also choosing to play with one hand tied behind its back: the “no extension API changes” rule means any hope of a faster value representation (one of the most important factors in making a dynamic language fast!) goes out the window; the refusal to change the iterator API means that throwing and handling exceptions is something the fast path of basically everything has to deal with; and so on.
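On the iterator point specifically: the protocol signals exhaustion by *raising* `StopIteration`, so even a plain for-loop's fast path must be ready to catch an exception. A small sketch of what the language guarantees:

```python
# Python's iterator protocol ends iteration by raising StopIteration,
# so the hot path of every loop construct involves exception machinery
# when an iterator runs out.
it = iter([1, 2])
assert next(it) == 1
assert next(it) == 2

exhausted = False
try:
    next(it)                 # a drained iterator raises, it doesn't return a sentinel
except StopIteration:
    exhausted = True
assert exhausted
```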

umanwizard

Google has been pouring huge amounts of effort into making their JS interpreter fast for many years at this point. They have a lot more resources than the Python foundation.

beebmam

I don’t understand the sentiment of not wanting to learn a language. LLMs make learning and understanding trivial if the user wants that. I think many of those complaining about strongly typed languages (etc) are lazy. In this new world of AI generated code, strongly typed languages are king

PaulHoule

Javascript and Common Lisp aren't as fast as C but they are faster than Python.

morkalork

I remember this from the early 2010s: "compilation of a dynamic language is a superset of compilation of static languages, ergo we should be able to achieve both the optimizations static languages can do and more, because there are opportunities that only become apparent at runtime". When really it's all about the constraints you can put on the user that set you up for better optimization.

ngrilly

Agreed. I'd like CPython to offer the possibility to opt into semantics that are more amenable to optimization, similar to what Cinder is enabling with its opt-in strict modules and static classes: https://github.com/facebookincubator/cinder.

pjmlp

Especially when one keeps ignoring the JITs of dynamic languages, which were the genesis of all the high-end production JITs in use today, tracing back to Smalltalk, Self, Lisp, Prolog.

All those languages are just as dynamic as Python, more so given the dynamic loading of code with image systems, across the network, with break-into-debugger/condition points and redo workflows.

manypineapples

pypy manages

pjmlp

The black swan of Python JITs, mostly ignored by the community, unfortunately.

almostgotcaught

> It turns out that if you have language semantics that make optimizations hard, making a fast optimizing compiler is hard. Who woulda thunk?

Is this in the article? I don't see Python's semantics mentioned anywhere as a symptom (but I only skimmed).

> shows how poorly a lot of people understand what goes into making a language fast.

...I'm sorry but are you sure you're not one of these people? Some facts:

1. JS is just as dynamic and spaghetti as Python and I hope we're all aware that it has some of the best jits out there;

2. Conversely, C++ has many "optimizing compiler[s]" and they're not all magically great by virtue of compiling a statically typed, rigid language like C++.

o11c

JS is absolutely not as dynamic as Python. It supports `const`ness, and uses it by default for classes and functions.

dontlaugh

More importantly, there's nothing like locals[] or __getattribute__.
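Both hooks are easy to show; a minimal sketch using nothing beyond the standard language:

```python
# Every attribute read on an instance funnels through __getattribute__,
# so a JIT cannot assume obj.x is a simple memory load if any class in
# the hierarchy might override it.
class Logged:
    def __getattribute__(self, name):
        # intercepts *all* attribute access, even for attributes that exist
        return ("intercepted", name)

obj = Logged()
assert obj.anything == ("intercepted", "anything")

# locals() exposes the current frame's variables as a dict at runtime,
# which keeps local variable storage observable.
def f():
    a = 1
    b = 2
    return sorted(locals())

assert f() == ["a", "b"]
```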

serjester

This article doesn't do the best job explaining the broader picture.

- Most of the work up to this point has just been plumbing. Int/float unboxing, smarter register allocation, and free-threaded safety land in 3.15+.

- Most JIT optimizations are currently off by default or only trigger after a few thousand hits, and the JIT skips any bytecodes that look risky (profiling hooks, rare ops, etc.). Stability is their #1 priority.

I recommend this talk with one of the Microsoft JIT developers: https://www.youtube.com/watch?v=abNY_RcO-BU

bgwalter

According to the promises of the Faster CPython Team, the JIT with a >50% speedup should have happened two years ago.

Everyone knows Python is hard to optimize, that's why Mojo also gave up on generality. These claimed 20-30% speedups, apparently made by one of the chief liars who canceled Tim Peters, are not worth it. Please leave Python alone.

notatallshaw

Two years ago was Python 3.11, my real world workloads did see a ~15-20% improvement in performance with that release.

I don't remember the Faster CPython Team claiming JIT with a >50% speedup should have happened two years ago, can you provide a source?

I do remember Mark Shannon proposed an aggressive timeline for improving performance, but I don't remember him attributing it to a JIT, and also the Faster CPython Team didn't exist when that was proposed.

> apparently made by one of the chief liars who canceled Tim Peters

Tim Peters still regularly posts on DPO so calling him "cancelled" is a choice: https://discuss.python.org/u/tim.one/activity.

Also, I really cannot think who you would be referring to as part of the Faster CPython Team; all the former members I am aware of largely stayed out of the discussions on DPO.

ecshafer

Does anyone know why, for example, the Ruby team is able to create performant JITs with comparative ease, while Python, with 10x the developers at this point, cannot? They are in many ways similar languages.

dfox

Ruby, in both its semantics and implementation, is very close to Smalltalk and does not really use Python's object model, which can be summarized as "everything is a dict with string keys". That makes all the tricks discovered over the last 40 years of making Smalltalk and Lisp fast much more directly applicable in Ruby.
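A minimal sketch of what that dict-based model permits (class names here are invented for illustration):

```python
# Classes and instances alike are backed by string-keyed dicts, and
# both can be mutated at runtime, including replacing methods.
class Greeter:
    def hello(self):
        return "hello"

g = Greeter()
assert g.hello() == "hello"

# Monkey-patch the class: every existing instance changes behavior.
Greeter.hello = lambda self: "patched"
assert g.hello() == "patched"

# The instance __dict__ can shadow the class method per-object
# (plain functions are non-data descriptors, so the instance dict wins).
g.__dict__["hello"] = lambda: "per-instance"
assert g.hello() == "per-instance"
```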

abhorrence

My complete _guess_ (in which I make a bunch of assumptions!) is that generally it seems like the Ruby team has been more willing to make small breaking changes, whereas it seems a lot like the Python folks have become timid in those regards after the decade of transition from 2 -> 3.

gkbrk

Python has made many breaking changes after 2->3 as well. They don't even bother to increment the major version number any more.

I haven't checked, but I wouldn't be surprised if more Python versions contained breaking changes than not.

zahlman

> Python has made many breaking changes after 2->3 as well.

Aside from the `async` keyword (experience with which seems like it may have driven the design of "soft keywords" for `match` etc.), what do you have in mind that's a language feature as opposed to a standard library deprecation or removal?

Yes, the bytecode changes with every minor version, but that's part of their attempts to improve performance, not a hindrance.

adgjlsfhk1

I think a major factor is C API prevalence. The Python C API is bad and widely used, so it's very difficult to improve.

pjmlp

Community.

Smalltalk, Self, and Lisp are highly dynamic; their JIT research is the genesis of modern JIT engines.

For some strange reason, the Python community would rather learn C and call it "Python" than look at how languages that are just as dynamic managed this a few decades ago.

cuchoi

Funding?

Seems like the development was funded by Shopify and they got a ~20% performance improvement. https://shopify.engineering/ruby-yjit-is-production-ready

A similar experience in the Python community is that Microsoft funded "Faster CPython" and they made Python 20-40% faster.

ecshafer

The funding is one angle, but the Shopify Ruby team isn't that big (<10 people iirc). Python is used extensively at just about every tech company, and Meta, Apple, Microsoft, Alphabet, and Amazon each have at least 10x as many engineers as Shopify. This makes me think that there must be some kind of language/ecosystem reason that makes Python much harder than Ruby to optimize.

UncleEntity

Probably the methods they use as well.

I may not be completely accurate on this because there's not a whole lot of information on how Python is doing their thing so...

The way (I believe) Python is doing it is to take code templates and stitch them together (copy & patch compilation) to create an executable chunk of code. If, for example, one were to take the py-bytecode and just stitch all the code chunks together, all you can realistically expect to save is the instruction dispatch operations, which the compiler should make really fast anyway. That leaves you at parity with the interpreter, since each code chunk is inherently independent and the compiler can't do its magic across the entire chunk. Basically this is just inlining the bytecode operations.

To make a JIT compiler really excel you'd need to do something like take all the individual operations of each individual opcode and lower that to an IR and then optimize over the entire method using all the bells and whistles of modern compilers. As you can imagine this is a lot more work than 'hacking' the compiler into producing code fragments which can be patched together. Modern compilers are really good at these sorts of things and people have been trying to make the Python interpreter loop as efficient as possible for a long time so there's a big hurdle to overcome here.

I (or, more accurately, Claude) have been writing a bytecode VM, and the dispatch loop is basically just a pointer dereference and a function call, which is about as fast as you can get. Ok, theoretically this is how it works, as there's also a check to make sure the opcode is within range (the compiler part is still being worked on and it's good for debugging), but foundationally this is how it works.

From what I've gleaned from the literature the real key to making something like copy & patch work is super-instructions. You take common patterns, like MULT+ADD, and mash them together so the C compiler can do its magic. This was maybe mentioned in the copy & patch paper or, perhaps, they only talked about specialization based on types, don't actually remember.

So, yeah, if you were just competing against a basic tree-walking interpreter then copy & patch would blow it out of the water, but C compilers and the Python interpreter have both had millions of person-hours put into them, so that's really tough competition.
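As a toy illustration of the dispatch-loop and superinstruction ideas above (hypothetical opcodes, nothing like CPython's actual templates): the dispatch loop is just a table lookup plus a handler call, and a fused MUL+ADD superinstruction does the work of two ops in a single dispatch.

```python
# Toy stack-machine sketch: made-up opcodes, dict-based dispatch.

def op_push(stack, arg): stack.append(arg)
def op_mul(stack, _): b = stack.pop(); stack.append(stack.pop() * b)
def op_add(stack, _): b = stack.pop(); stack.append(stack.pop() + b)
def op_mul_add(stack, _):  # fused superinstruction: a*b + c in one handler
    c = stack.pop(); b = stack.pop(); stack.append(stack.pop() * b + c)

HANDLERS = {"PUSH": op_push, "MUL": op_mul, "ADD": op_add, "MUL_ADD": op_mul_add}

def run(code):
    stack = []
    for opname, arg in code:          # the dispatch loop:
        HANDLERS[opname](stack, arg)  # table lookup + call, nothing else
    return stack.pop()

# 2*3 + 4, first with separate ops, then with the fused superinstruction
plain = [("PUSH", 2), ("PUSH", 3), ("MUL", None), ("PUSH", 4), ("ADD", None)]
fused = [("PUSH", 2), ("PUSH", 3), ("PUSH", 4), ("MUL_ADD", None)]
assert run(plain) == run(fused) == 10
```

The fused version executes one dispatch fewer for the same result, which is the kind of saving superinstructions buy.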

ggm

What fundamentals would make this specific JIT faster? Because if it's demonstrably slower, it raises the question of whether it can be faster, or is inherently slower than a decent optimization path through a compiler.

At this point it's a great didactic tool and a passion project surely? Or, has advantages in other dimensions like runtime size, debugging, and .pyc coverage, or in thread safe code or ...

teruakohatu

The article points out they have only begun adding optimisers to the jit compiler.

Unoptimised jit < optimised interpreter (at least in this instance)

They are working on it presumably because they think there will eventually be speed-ups in general, or at least for certain popular workloads.

taeric

The article also specifically calls out machine code generation as a separate thing. I confess that somewhat surprises me, as I would expect machine code generation to be a main source of speed-up for a JIT? That, and counter-based choices on which optimizations to perform?

Still, to directly answer the first question, I would hope even if there wasn't obvious performance improvements immediately, if folks want to work on this, I see no reason not to explore it. If we are lucky, we find improvements we didn't expect.

MobiusHorizons

The way I understand it, the machine code generator emits machine code for some particular piece of bytecode (or whatever the JIT IR is). This is almost like an assembler and probably has templates that it expands. It is important for this machine code to be fast, but each template is at a pretty low level and lacks the context for structural optimizations. The optimizer works at a higher level of abstraction and can make these structural optimizations. You can get very large speed-ups when you can remove code that isn't necessary, or emit equivalent code that has lower complexity or memory overhead. Typical examples of things optimizers do:

- use registers instead of memory for function arguments

- constant folding

- function inlining

- loop unrolling
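A minimal sketch of one such pass, constant folding over a made-up three-address IR (the IR format here is invented purely for illustration):

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def fold_constants(ir):
    """Evaluate instructions whose operands are all known at compile time."""
    consts, out = {}, []
    for dest, op, a, b in ir:
        a = consts.get(a, a)          # substitute already-folded values
        b = consts.get(b, b)
        if op in OPS and isinstance(a, int) and isinstance(b, int):
            consts[dest] = OPS[op](a, b)   # computed now, not at runtime
        else:
            out.append((dest, op, a, b))   # still needs runtime evaluation
    return out, consts

ir = [
    ("t1", "mul", 3, 4),        # foldable: 12
    ("t2", "add", "t1", 5),     # foldable after substituting t1: 17
    ("t3", "add", "x", "t2"),   # depends on the runtime value of x
]
remaining, consts = fold_constants(ir)
assert consts["t2"] == 17
assert remaining == [("t3", "add", "x", 17)]
```

Three instructions became one, with the folded constant substituted into the survivor; real optimizers chain many such passes.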

I don't know if that's exactly how it works for this particular effort, but that would be my expectation.

adrian17

> I confess that somewhat surprises me, as I would expect getting machine code generated would be a main source of speed up for a JIT?

My understanding is that the basic copy-and-patch approach without any other optimizations doesn't actually give that much. The difference between an interpreter running opcodes A, B, C and a JIT emitting machine code for the opcode sequence A, B, C is very little: the CPU will execute roughly the same instructions in both cases, the only difference being that the JIT avoids doing an op dispatch between each op, and that's already not that expensive due to jump threading in the interpreter. Meanwhile the JIT adds a possible extra cost whenever you need to jump from the JIT back to the fallback interpreter.

But what the JIT allows is to codegen machine code corresponding to more specialized ops that wouldn’t be that beneficial in the interpreter (as more and smaller ops make it much worse for icaches and branch predictors). For example standard CPython interpreter ops do very frequent refcount updates, while the JIT can relatively easily remove some sequences of refcount increments followed by immediate decrements in the next op.

Or maybe I misunderstood the question, then in other words: in principle copy-and-patch’s code generation is quite simple, and the true benefits come from the optimized opcode stream that you feed it that wouldn’t have been as good for the interpreter.

moregrist

A byte code interpreter is, very approximately, a lookup table of byte code instructions that dispatches each instruction to highly optimized assembly.

This will almost certainly outperform a straight translation to poorly optimized machine code.

Compilers are structured in conceptual (and sometimes physically distinct) layers. For a classic statically-typed language with only compile-time optimizations, the compiler front end parses the language into an abstract syntax tree (AST), via a parse tree or directly, and then converts the AST into the first of what may be several intermediate representations (IRs). This is where a lot of optimization is done.

Finally, the last IR is lowered to assembly, which includes register allocation and some other (peephole) optimization techniques. This is kept separate from the IR manipulation so you don't have to write separate optimizers for different architectures.

There are aspects of a tracing JIT compiler that are quite different, but it will still use IR layers to optimize and have architecture-dependent layers for generating machine code.
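CPython happens to expose its own front-end layers from Python code, which makes the layering above easy to poke at; a small sketch using the stdlib `ast`, `compile`, and `dis`:

```python
import ast
import dis

src = "result = 2 * 21"

tree = ast.parse(src)                  # front end: source -> AST
assert isinstance(tree.body[0], ast.Assign)

code = compile(tree, "<demo>", "exec") # lower the AST to a bytecode code object

ns = {}
exec(code, ns)                         # the interpreter runs the bytecode
assert ns["result"] == 42

# dis shows the instruction stream the interpreter actually dispatches on
ops = [ins.opname for ins in dis.get_instructions(code)]
assert "STORE_NAME" in ops
```

(CPython's compiler even folds `2 * 21` to a constant before emitting bytecode, a small taste of the IR-level optimization described above.)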

firesteelrain

We have had really good success using Cython, which makes many calls into the CPython interpreter and the CPython standard library.

throwaway032023

I remember when PyPy was only 25x slower than CPython.
