Python has had async for 10 years – why isn't it more popular?

atomicnumber3

The author gets close to what I think the root problem is, but doesn't call it out.

The truth is that in Python, async was too little, too late. By the time it was introduced, most people who actually needed to do lots of I/O concurrently had their own workarounds (forking, etc.), and people who didn't actually need it had figured out how to get by without it (multiprocessing, etc.).

Meanwhile, Go showed us what good green threads can look like. Then Java did it too. JS, for its part, had better async support the whole time. But all it did was show us that async code just plain sucks compared to green thread code that can just block, instead of having to do the async dances.

So, why engage with it when you already had good solutions?

throw-qqqqq

> But all it did was show us that async code just plain sucks compared to green thread code that can just block, instead of having to do the async dances.

I take so much flak for this opinion at work, but I agree with you 100%.

Code that looks synchronous, but is really async, has funny failure modes and idiosyncrasies, and I generally see more bugs in the async parts of our code at work.

Maybe I’m just old, but I don’t think it’s worth it. Syntactic sugar over continuations/closures basically..

lacker

I'm confused, I feel like the two of you are expressing opposite opinions.

The comment you are responding to prefers green threads managed like goroutines, where the code looks synchronous but is really cooperative multitasking handled by the runtime, over explicit async/await.

But then you criticize "code that looks synchronous but is really async". So you prefer the explicit "async" keywords? What exactly is your preferred model here?

throw-qqqqq

First, I don’t mean to criticize anything or anyone. People value such things subjectively, but for me the async/sync split does no good.

Goroutines feel like old-school, threaded code to me. I spawn a goroutine and interact with other “threads” through well defined IPC. I can’t tell if I’m spawning a green thread or a “real” system thread.

C#’s async/await is different IMO and I prefer the other model. I think the async-concept gets overused (at my workplace at least).

If you know Haskell, I would compare it to overuse of laziness, when strictness would likely use fewer resources and be much easier to reason about. I see many of the same problems/bugs with async/await..

throwaway81523

No, goroutines are preemptive. They avoid async hazards though of course introduce some different ones.

kibwen

> Code that looks synchronous, but is really async, has funny failure modes and idiosyncrasies

But this appears to be describing languages with green threads, rather than languages that make async explicit.

pclmulqdq

Without the "async" keyword, you can still write async code. It looks totally different because you have to control the state machine of task scheduling. Green threads are a step further than the async keyword because they have none of the function coloring stuff.

You may think of the `async` keyword as what makes async code explicit, but that is very much not the case.

If you want to see async code without the keyword, most of the code of Linux is asynchronous.
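For illustration, here is roughly what "async without the keyword" looks like in Python: a toy non-blocking client written as an explicit, hand-rolled state machine over the stdlib selectors module (the names and the ping protocol are made up for the sketch).

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def start_request(host, port, on_done):
        sock = socket.socket()
        sock.setblocking(False)
        sock.connect_ex((host, port))                 # returns immediately
        state = {"buf": b"", "on_done": on_done}
        sel.register(sock, selectors.EVENT_WRITE, (on_connected, state))

    def on_connected(sock, state):
        sock.send(b"ping\n")
        sel.modify(sock, selectors.EVENT_READ, (on_readable, state))

    def on_readable(sock, state):
        chunk = sock.recv(4096)
        if chunk:
            state["buf"] += chunk                     # stay registered, keep reading
        else:
            sel.unregister(sock)
            sock.close()
            state["on_done"](state["buf"])            # "return" happens via callback

    def run_forever():
        # the event loop you get to write yourself (toy version: polls forever)
        while True:
            for key, _ in sel.select(timeout=1):
                callback, state = key.data
                callback(key.fileobj, state)

No `async`/`await` anywhere, but the control flow is every bit as asynchronous; the scheduling state just lives in your own callbacks and dicts.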

markandrewj

I can tell you guys work with languages like Go, so this isn't true for you, but I usually find it is developers who only ever work with synchronous code who find async complicated. Which isn't surprising: if you don't understand something, it can seem complicated. My view is almost that people should learn how to write async code by default now, regardless of the language. Writing modern applications basically requires it, although not all the time obviously.

Yoric

Hey, I'm one of the (many, many) people who made async in JavaScript happen and I find async complicated.

larusso

async is like a virus. I think the implementation in JS and .NET is somewhat OK'ish because your code is inside an async context most of the time. I really hate the red/blue method issues where library functions get harder to compose. Oh, I have a normal method because there was no need for async. Now I change the implementation and need to call an async method. There are ways around this, but more often than not you will end up changing most methods to be async.

To be fair that also happens with other solutions.

DanielHB

It is not nearly as much of a problem in JS because JS only has an event loop: there is no way to mix threads with async code, because there are no threads. That makes everything a lot simpler and a lot of the data structures a lot faster (because no locks are required). But actual parallelization (instead of just concurrency) is impossible[1].

A lot of the async problems in other languages come from not buying into the concept fully, with some third-party code using it and some not. JS went all-in on async.

[1]: Yes I know about service workers, but they are not threads in the sense that there is no shared memory*. It is good for some types of parallelization problems, but not others because of all the memory copying required.

[2]: Yes I know about SharedArrayBuffer and there is a bunch of proposals to add support for locks and all that fun stuff to them, which also brings all the complexity back.

Uptrenda

I'm a person who wrote an entire networking library in Python and I agree with you. The most obvious issue with Python's single-threaded async code is any slow part of the program delays the entire thing. And yeah -- that's actually insanely frigging difficult to avoid. You write standard networking code and then find out that parts you expected to be async in Python actually ended up being sync / blocking.

DESPITE THAT: even if you're doing everything "right" (TM) -- using a single thread and doing all your networking I/O sequentially is simply slow as hell. A very, very good example of this is bottle.py. Let's say you host a static web server with bottle.py. Every single web request for files leads to sequential loading, which makes page load times absolutely laughable. This isn't the case for every Python web framework, but it seems to be a common theme to me. (Cause: single thread, event loop.)

With asyncio, the most consistent behavior I've had seems to come from avoiding multiple processes each running their own event loop, even though that approach (or at least threading) seems necessary to avoid the massive downsides of the event loop. But yeah, you have to keep everything simple. In my own library I use a single event loop and don't do anything fancy. I've learned the hard way how asyncio punishes trying to improve it. It's a damn cool piece of software, just has some huge limitations for performance.
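One thing that at least helps catch the accidental blocking: asyncio's debug mode logs any step that holds the loop longer than slow_callback_duration. A minimal sketch:

    import asyncio
    import time

    async def sneaky():
        time.sleep(0.5)   # a blocking call hiding inside "async" code

    async def main():
        loop = asyncio.get_running_loop()
        loop.slow_callback_duration = 0.1   # seconds (0.1 is also the default)
        await sneaky()    # debug mode logs that this step held the loop too long

    asyncio.run(main(), debug=True)

It won't fix the blocking for you, but it points at the exact step that hogged the loop.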

pnathan

Async taints code, and async/await fall prey to classic cooperative multitasking issues. "What do you mean that this blocked that?"

The memory and execution model for higher level work needs to not have async. Go is the canonical example of it done well from the user standpoint IMO.

hinkley

The function color thing is a real concern. Am I wrong or did a python user originally coin that idea?

throwawayffffas

No, it was a JS dev complaining about callbacks in Node. Mainly because a lot of standard library code back then only came in callback flavour, i.e. no sync file writes, etc.

meowface

gevent has been in Python for ages and still works great. It basically adds goroutine-like green thread support to the language. I still generally start new projects with gevent instead of asyncio, and I think I always will.
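For anyone who hasn't used it, the gevent style looks roughly like this (a minimal sketch; the URL is a placeholder): monkey-patch the stdlib, then write ordinary blocking code and let the greenlets switch whenever a call would block.

    from gevent import monkey
    monkey.patch_all()        # patch sockets etc. before other imports

    import gevent
    import urllib.request

    def fetch(url):
        # looks like plain blocking code; gevent switches to another greenlet
        # whenever this socket read would wait
        return urllib.request.urlopen(url).read()

    jobs = [gevent.spawn(fetch, "https://example.com") for _ in range(5)]
    gevent.joinall(jobs, timeout=10)
    print([len(j.value) for j in jobs if j.value is not None])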

pdonis

I've used gevent and I agree it works well. It has prevented me from even trying to experiment with the async/await syntax in Python for anything significant.

However, gevent has to do its magic by monkeypatching. Wanting to avoid that, IIRC, was a significant reason why the async/await syntax and the underlying runtime implementation were developed for Python.

Another significant reason, of course, was wanting to make async functions look more like sync functions, instead of having to be written very differently from the ground up. Unfortunately, requiring the "async" keyword for any async function seriously detracted from that goal.

To me, async functions should have worked like generator functions: when generators were introduced into Python, you didn't have to write "gen def" or something like it instead of just "def" to declare one. If the function had the "yield" keyword in it, it was a generator. Similarly, if a function has the "await" keyword in it, it should just automatically be an async function, without having to use "async def" to declare it.
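To illustrate the asymmetry: `yield` alone is enough to turn a plain `def` into a generator, but `await` is only legal inside a function already marked `async def`.

    import asyncio

    def counter(n):
        for i in range(n):
            yield i                # plain "def" + yield = generator, no extra marker

    async def delayed(n):
        await asyncio.sleep(0)     # await requires the "async def" marker
        return n

    # A bare "def" containing await is rejected outright:
    #   def delayed(n):
    #       await asyncio.sleep(0)   # SyntaxError: 'await' outside async function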

gen220

As somebody who's written and maintained a good bit of Python in prod and recently a good amount of server-side typescript... this would be my answer.

I'd add one other aspect that we sort of take for granted these days, but affordable multi-threaded CPUs have really taken off in the last 10 years.

Not only does a stack based on green threads "just work" without coloring your codebase with async/no-async, it lets you scale gracefully to one instance with N vCPUs instead of N pods of 2-vCPU instances.

hinkley

Async is pretty good “green threads” on its own. Coroutines can be better, but they’re really solving an overlapping set of problems. Some the same, some different.

In JavaScript async doesn’t have a good way to nice your tasks, which is an important feature of green threads. Sindre Sorhus has a bunch of libraries that get close, but there’s still a hole.

What coroutines can do is optimize the instruction cache. But I’m not sure goroutines entirely accomplish that. There’s nothing preventing them from doing so but implementation details.

6r17

I feel like async is just an easier way to reason about something, but it leaves a lot of cheating open; though sometimes it's just more comfortable to write. That cheating comes with a lot of hidden responsibilities that are just not surfaced in Python (things like ownership), even though it presents tools to properly solve these issues. Anyone who really wants to dive into the technical side wouldn't choose Python anyway.

pkulak

Green threads can be nicer to program in, but it’s not like there’s no cost. You still need a stack for every green thread, just like you need one for every normal thread. I think it’s worth it to figure out a good system for stackless async. Something like Kotlin is about as good as it gets. Rust is getting there, despite all the ownership issues, which would exist in green threads too.

b33j0r

For me, once I wanted to scale asyncio within one process (scaling horizontally on top of that), only two things made sense: Rust with Tokio or Node.js.

Doing async in Python has the same fundamental design. You have an executor, a scheduler, and event-driven wakers on futures or promises. But you're doing it in a fundamentally handcuffed environment.

You don’t get benefits like static compilation, real work-stealing, a large library ecosystem, or crazy performance boosts. Except in certain places in the stack.

Using FastAPI with async is a game-changer. Writing a CLI to download a bunch of stuff in parallel is great.

But if you want to use async to parse faster or make a parallel-friendly GUI, you are more than likely wasting your time using python. The benefits will be bottlenecked by other language design features. Still the GIL mostly.

I guess there is no reason you can’t make tokio in python with multiprocessing or subinterpreters, but to my knowledge that hasn’t been done.

Learning tokio was way more fun, too.

hinkley

I don’t know where Java is now but their early promise and task queue implementations left me feeling flat. And people who should know better made some dumb mistakes around thread to CPU decisions that just screamed “toy solution”. They didn’t compose.

ciupicri

The GIL is not part of the language design; it's just a detail of the most common implementation, CPython.

b33j0r

Fair and accurate. But that’s pretty much what people use, right?

I am happy to hear stories of using pypy or something to radically improve an architecture. I don’t have any from personal experience.

I guess twisted and stackless, a long time ago.

smw

Or just golang?

iknowstuff

Segfaults

gshulegaard

I also think asyncio missed the mark when it comes to its API design. There are a lot of quirks and rough edges to it that, as someone who was using `gevent` heavily before, strike me as curious and even anti-productive.

xg15

I learned about the concept of async/await from JS and back then was really amazed by the elegance of it.

By now, the downsides are well-known, but I think Python's implementation did a few things that made it particularly unpleasant to use.

There is the usual "colored functions" problem. Python has that too, but on steroids: There are sync and async functions, but then some of the sync functions can only be called from an async function, because they expect an event loop to be present, while others must not be called from an async function because they block the thread or take a lot of CPU to run or just refuse to run if an event loop is detected. That makes at least four colors.
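Concretely, a small sketch of what those extra colors look like with real asyncio behavior (the function names are made up):

    import asyncio

    async def coro():                     # color 1: async, must be awaited
        return 42

    def plain():                          # color 2: plain sync, call it anywhere
        return 42

    def needs_loop():                     # color 3: sync, but only works under a running loop
        asyncio.get_running_loop().call_soon(print, "hi")   # RuntimeError otherwise

    def refuses_loop():                   # color 4: sync, but refuses a running loop
        return asyncio.run(coro())        # RuntimeError if called from inside a loop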

The API has the same complexity: In JS, there are 3 primitives that you interact with in code: Sync functions, async functions and promises. (Understanding the event loop is needed to reason about the program, but it's never visible in the code).

Whereas Python has: Generators, Coroutines, Awaitables, Futures, Tasks, Event Loops, AsyncIterators and probably a few more.

All that for not much benefit in everyday situations. One of the biggest advantages of async/await was "fearless concurrency": The guarantee that your variables can only change at well-defined await points, and can only change "atomically". However, python can't actually give the first guarantee, because threaded code may run in parallel to your async code. The second guarantee already comes for free in all Python code, thanks to the GIL - you don't need async for that.

mcdeltat

I think Python async is pretty cool - much nicer than threading or multiprocessing - yet has a few annoying rough edges like you say. Some specific issues I run into every time:

Function colours can get pretty verbose when you want to write functional wrappers. You can end up writing nearly the exact same code twice because one needs to be async to handle an async function argument, even if the real functionality of the wrapper isn't async.
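For example, even a small timing wrapper ends up existing twice (hypothetical `timed` helpers, just to show the shape of the duplication):

    import functools
    import time

    def timed(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
            return result
        return wrapper

    def timed_async(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = await fn(*args, **kwargs)   # the body differs only in this await
            print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
            return result
        return wrapper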

Coroutines vs futures vs tasks are odd. More than is pleasant, you have one but need the other for an API for no intuitive reason. Some waiting functions work on some types and not on others. But you can usually easily convert between them - so why make a distinction in the first place?

I think if you create a task but don't await it (which is plausible in a server type scenario), it's not guaranteed to run because of garbage collection or something. That's weird. Such behaviour should be obviously defined in the API.
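On the last point: asyncio only holds weak references to tasks, so a fire-and-forget task really can be garbage-collected mid-flight. The workaround the docs recommend is to keep a strong reference yourself; a minimal sketch of that pattern:

    import asyncio

    background_tasks = set()

    def fire_and_forget(coro):
        task = asyncio.create_task(coro)
        background_tasks.add(task)                        # strong reference keeps it alive
        task.add_done_callback(background_tasks.discard)  # drop it once finished
        return task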

tylerhou

> You can end up writing nearly the exact same code twice because one needs to be async to handle an async function argument, even if the real functionality of the wrapper isn't async.

Sorry for the possibly naive question. If I need to call a synchronous function from an async function, why can't I just call await on the async argument?

    from typing import Awaitable

    def foo(bar: str, baz: int):
      # some synchronous work
      pass
    
    async def other(bar: Awaitable[str]):
      foo(await bar, 0)

xg15

I think the general idea of function colors has some merit - when done right, it's a crude way to communicate information about a function's expected runtime in a way that can be enforced by the environment: A sync function is expected to run short enough that it's not user-perceptible, whereas an async function can run for an arbitrary amount of time. In "exchange", you get tools to manage the async function while it runs. If a sync function runs too long (on the event loop) this can be detected and flagged as an error.

Maybe a useful approach for a language would be to make "colors" a first-class part of the type system and support them in generics, etc.

Or go a step further and add full-fledged time complexity tracking to the type system.

munificent

> Maybe a useful approach for a language would be to make "colors" a first-class part of the type system and support them in generics, etc.

Rust has been trying to do that with "keyword generics": https://blog.rust-lang.org/inside-rust/2023/02/23/keyword-ge...

nateglims

The API complexity really threw me when I last tried async python. It's very different from other async systems and is incredibly different from gevent or twisted which were popular when I was last writing server python.

gloomyday

I remember trying to use async in Python for the first time in 2017, and I actually found it easier to learn the basics of Go to create a coroutine, export it as a shared library, and create the bindings. I'm not exaggerating.

If I remember correctly, the Python async API was still in experimental phase at that time.

int_19h

Generators are orthogonal to all this. They are the equivalent of `function*` in JS. And yes, they are also coroutines, but experience has shown that keeping generators separate from generic async functions is more ergonomic (hence why C# and JS both do the same thing).

xg15

True. I think the connection is more a historical one, because the first async implementation was done using generators and lots of "yield from" statements AFAIK.

But I think generators are still sometimes mentioned in tutorials for this reason.

int_19h

Implementing what was essentially an equivalent of `await` on top of `yield` (before we got `yield from` even) was a favorite pastime at some point. I worked on a project that did exactly that for WinRT projection to Python. And before that there was Twisted. It's very tempting because it gets you like 90% there. But then eventually you want something like `async for` etc...
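For anyone who missed that era, the home-grown version looked roughly like this (a toy driver, not any particular library's API):

    import collections
    import time

    def sleep(seconds):                    # an "awaitable": a generator that isn't done yet
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            yield "pending"                # hand control back to the driver (busy-polls; it's a toy)

    def task(name):
        print(name, "start")
        yield from sleep(0.1)              # this is the home-grown "await"
        print(name, "done")

    def run(gens):                         # the trampoline / scheduler
        queue = collections.deque(gens)
        while queue:
            gen = queue.popleft()
            try:
                next(gen)                  # run until the next yield point
                queue.append(gen)          # not finished, reschedule
            except StopIteration:
                pass

    run([task("a"), task("b")])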

Retr0id

> some of the sync functions can only be called from an async function, because they expect an event loop to be present

I recognise that this situation is possible, but I don't think I've ever seen it happen. Can you give an example?

xg15

Everything that directly interacts with an event loop object and calls methods such as loop.call_soon() [1].

This is used by most of asyncio's synchronization primitives, e.g. asyncio.Queue.

A consequence is that you cannot use asyncio Queues to pass messages or work items between async functions and worker threads. (And of course you can't use regular blocking queues either, because they would block).

The only solution is to build your own ad-hoc system using loop.call_soon_threadsafe() or use third-party libs like Janus[2].

[1] https://github.com/python/cpython/blob/e4e2390a64593b33d6556...

[2] https://github.com/aio-libs/janus
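A minimal sketch of the call_soon_threadsafe approach (the worker just squares numbers, purely for illustration):

    import asyncio
    import threading

    async def main():
        loop = asyncio.get_running_loop()
        queue = asyncio.Queue()            # not thread-safe on its own

        def worker():
            for i in range(3):
                result = i * i             # blocking/CPU work happens off the loop
                # marshal the put back onto the event loop's thread
                loop.call_soon_threadsafe(queue.put_nowait, result)

        threading.Thread(target=worker, daemon=True).start()
        for _ in range(3):
            print(await queue.get())

    asyncio.run(main())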

svieira

I used to keep plugging Unyielding [1] vs. What Color Is Your Function [2] as the right matrix to view these issues within. But then Notes on structured concurrency [3] was written and I just point to that these days.

But, to sum it all up for those who want to talk here: there are several ways to look at concurrency, but only one question matters. Is my program correct? How long will it take to make my program correct? Structured concurrency makes that clear(er) in the syntax of the language. Unstructured concurrency requires that you hold all the code in your head.

[1]: https://glyph.twistedmatrix.com/2014/02/unyielding.html

[2]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

[3]: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

heisenzombie

I'll second the plug for structured concurrency (and specifically the Trio [1] library that the author wrote).

[1] https://github.com/python-trio/trio

rybosome

I suppose my negative experiences with async fall under #3, that it is hard to maintain two APIs.

One of the most memorable "real software engineering" bugs of my career involved async Python. I was maintaining a FastAPI server that was consistently leaking file descriptors when making outgoing HTTP requests, due to failing to close the socket. This manifested in a few ways: once the server ran out of available file descriptors, it degraded to a bizarre world where it would accept new HTTP requests but then refuse to transmit any information, which made remote debugging all the more exciting. Occasionally the server would run out of memory before running out of file descriptors on the OS, which was a fun red herring that resulted in at least one premature "I fixed the problem!" RAM bump.

The exact culprit was never found - I spent a full week debugging it, and concluded that the problem had to do with someone on the library/framework/system stack of FastAPI/aiohttp/asyncio having expectations about someone else in the stack closing the socket after picking up the async context, but that never actually occurring. It was impenetrable to me due to the constant context switching between the libraries and frameworks, such that I could not keep the thread of who (above my application layer) should have been closing it.

My solution was to monkey patch the native python socket class and add a FastAPI middleware layer so that anytime an outgoing socket opened, I'd add it to a map of sockets by incoming request ID. Then when the incoming request concluded I'd lookup sockets in the map and close them manually.

It worked, the servers were stable, and the only follow-up request was to please delete the annoying "Socket with file descriptor <x> manually closed" message from the logs, because they were cluttering things up. And thus, another brick in the wall of my opinion that I do not prefer Python for reliable, high-performance HTTP servers.

Scramblejams

> it is hard to maintain two APIs.

This point doesn't get enough coverage. When I saw async coming into Python and C# (the two ecosystems I was watching most closely at the time), I found it depressing just how much work was going into it that could have been productively expended elsewhere if they'd gone with blocking calls on green threads instead.

To add insult to injury, when implementing async it seems inevitable that what's created is a bizarro-world API that mostly-mirrors-but-often-not-quite the synchronous API. The differences usually don't matter, until they do.

So not only does the project pay the cost of maintaining two APIs, the users keep paying the cost of dealing with subtle differences between them that'll probably never go away.

> I do not prefer Python for reliable, high-performance HTTP servers

I don't use it much anymore, but Twisted Matrix was (is?) great at this. Felt like a superpower to, in the oughties, easily saturate a network interface with useful work in Python.

lormayna

> I don't use it much anymore, but Twisted Matrix was (is?) great at this.

You must be an experienced developer to write maintainable code with Twisted; otherwise, when the codebase grows a little, it will quickly become a bunch of spaghetti code.

stackskipton

Glad I'm not the only one in this boat. We have a Python HTTP server doing similar. No one can figure it out, containerd occasionally OOM-kills it, and everyone just shrugs and moves on.

PaulHoule

I went through a phase of writing asyncio servers for my side projects. Probably the most fun I had was writing things that were responsive in complex ways, such as a websockets server that was also listening on message queues or on a TCP connection to a Denon HEOS music player.

Eventually I wrote an "image sorter" that I found was hanging when the browser tried to download images in parallel. The image serving should not have been CPU-bound, I was even using sendfile(), but I think other requests would hold up the CPU and block the tiny amount of CPU needed to set up that sendfile.

So I switched from aiohttp to the Flask API and serve with either Flask or Gunicorn. I even front it with Microsoft IIS or nginx to handle the images so Python doesn't have to. It is a minor hassle because I develop on Windows, so I have to run Gunicorn inside WSL2, but it works great and I don't have to think about server performance anymore.

tdumitrescu

That's the main problem with evented servers in general, isn't it? If any one of your workloads is CPU-intensive, it has the potential to block the serving of everything else on the same thread, so requests that should always be snappy can end up taking randomly long times in practice. Basically, if you have any CPU-heavy work, it shouldn't go in that same server.
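One common mitigation, for what it's worth, is to push the CPU-heavy step off the event loop entirely. A rough sketch with a process pool (a thread pool wouldn't help pure-Python CPU work because of the GIL; `render_report` is a made-up stand-in):

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def render_report(data):                       # CPU-bound; would stall the loop if run inline
        return sum(x * x for x in data)

    async def handle_request(pool, data):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(pool, render_report, data)

    async def main():
        with ProcessPoolExecutor() as pool:
            print(await handle_request(pool, range(1_000_000)))

    if __name__ == "__main__":
        asyncio.run(main())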

acdha

Indeed. async is one of those things which makes a big difference in a handful of scenarios but which got promoted as a best-practice for everything. Python developers have simply joined Node and Go developers in learning that it’s not magic “go faster” spray and reasoning about things like peak memory load or shared resource management can be harder.

materielle

Traditionally, there are two strategies:

1) Use the network thread pool to also run application code. Then your entire program has to be super careful to not block or do CPU intensive work. This is efficient but leads to difficult to maintain programs.

2) The network thread pool passes work back and forth between an application executor. That way, the network thread pool is never starved by the application, since it is essentially two different work queues. This works great, but now every request performs multiple thread hops, which increases latency.

There has been a lot of interest lately to combine scheduling and work stealing algorithms to create a best of both worlds executor.

You could imagine, theoretically, an executor that auto-scales, and maintains different work queues and tries to avoid thread hops when possible. But ensures there are always threads available for the network.

PaulHoule

My system is written in Python because it is supported by a number of batch jobs that use code from SBERT, scikit-learn, numpy and such. Currently the server doesn't do any complex calculations but under asyncio it was a strict no-no. Mostly it does database queries and formats HTML responses but it seems like that is still too much CPU.

My take on gunicorn is that it doesn't need any tuning or care to handle anything up to the large workgroup size other than maybe "buy some more RAM" -- and now if I want to do some inference in the server or use pandas to generate a report I can do it.

If I had to go bigger I probably wouldn't be using Python in the server and would have to face up to either dual language or doing the ML work in a different way. I'm a little intimidated about being on the public web in 2025 though with all the bad webcrawlers. Young 'uns just never learned everything that webcrawler authors knew in 1999. In 2010 there were just two bad Chinese webcrawlers that never sent a lick of traffic to anglophone sites, but now there are new bad webcrawlers every day it seems.

nly

OS threads are for CPU bound work.

Async is for juggling lots of little initialisations, completions, and coordinating work.

Many apps are best single threaded with a thread pool to run (single threaded) long running tasks.

Townley

It’s heartening that there are people who find the problem you described “fun”

Writing a FastAPI websocket that reads from a redis pubsub is a documentation-less flailfest

mjd

I haven't read the article yet, but I do have something to contribute: several years ago I was at PyCon and saw a talk in which someone mentioned async. I was interested and wanted to learn to use it. But I found there was no documentation at all! The syntax was briefly described, but not the semantics.

I realized, years later, that the (non-)documentation was directed at people who were already familiar with the feature from Javascript. But I hadn't been familiar with it from Javascript and I didn't even know that Javascript had had such a feature.

So that's my tiny contribution to this discussion, one data point: Python's async might have been one unit more popular if it had had any documentation, or even a crossreference to the Javascript documentation.

notatoad

this was my initial experience with python async as well (which i now use heavily)

the documentation is directed at people who want coroutines and futures, and know what that means. if you don't know what coroutines and futures are, the python docs aren't going to help you. the documentation isn't going to guide anybody into using the async features who aren't already seeking them out. and maybe that's intentional, but it's not going to grow adoption of the async features.

int_19h

FWIW Python got async/await before JavaScript did. I believe at the time the main inspiration was C#.

lyu07282

JavaScript was always single-threaded asynchronous, the added async/await keywords were just syntactic sugar. Node.js became popular before it as well, though I found at the time it was difficult to avoid callback hell similar to using libuv directly in C.

int_19h

async/await was syntactic sugar in C# as well. Callbacks are a natural way to do async so it's no surprise.

And while Python implements async directly in the VM, its semantics is such that it can be treated as syntactic sugar for callbacks there also.

TheCondor

I generally like Python. I'm not a hater but I don't treat it like a religion either.

Async Python is practically a new language. I think for most devs, it's a larger change than 2 to 3 was. One of the things that made Python uptake easy was the vast number of libraries and bindings to C libraries. With async you need new versions of that stuff; you can definitely use synchronous libraries, but then you get to debug why your stuff blocks.

Async Python is a different debugging experience for most Python engineers. I support a small handful of async Python services and think it would be an accelerator for our team to rewrite them in Go.

When you hire python engineers, most don't know async that well, if at all.

If you have a mix of synchronous and asynchronous code in your org, you can't easily intermix it. Well, you can, but it won't behave as you usually desire it to; it's probably better to treat them as different code bases.

Not to be too controversial, but depending upon your vintage and the way you've learned to write software, I think you can come to Python and think async is divine manna. I think there are many more devs who come to Python from data science or scripting, or maybe as a first language, and I think they have a harder time accepting the value and need of async. Like I said above, it's almost an entirely different language.

languagehacker

Wow, didn't even see much about how miserable using the sync_to_async and async_to_sync transformers is.

In general, the architectures developed because of the GIL, like Celery and gunicorn and stuff like that, handle most of the problems we run into that async/await solves, with slightly better horizontal scaling IMO. The problem with a lot of async code is that it tends not to think beyond the single machine that's running it, and by the time you do, you need to rearchitect things to scale better horizontally anyway.

For most Python applications, especially with web development, just start with something like Celery and you're probably fine.
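e.g. the classic shape of a Celery task (the broker URL is a placeholder):

    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def resize_image(path: str) -> str:
        # slow or CPU-heavy work runs in a separate worker process,
        # so the web process never blocks on it
        return path

    # from the web handler, fire and forget:
    # resize_image.delay("/uploads/cat.png")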

operator-name

Not to mention sync_to_async and async_to_sync are also part of a library, asgiref, that the Django developers made to wrap a thread pool runtime!
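For reference, the two helpers look roughly like this in use (a small sketch; `blocking_lookup` is made up):

    import time
    from asgiref.sync import async_to_sync, sync_to_async

    def blocking_lookup(key):
        time.sleep(0.1)                # stands in for a blocking ORM call
        return key.upper()

    async def view():
        # the wrapped call runs in a thread pool instead of blocking the loop
        return await sync_to_async(blocking_lookup)("hello")

    # ...and the reverse direction, driving a coroutine from plain sync code:
    print(async_to_sync(view)())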

kurtis_reed

The premise of the article is wrong. Async in Python is popular. I'd expect most new web backends to use it.

The article says SQLAlchemy added async support in 2023, but actually it was 2020.

KaiserPro

The two issues I have with async are:

1) it's infectious. You need to wrap everything in async or nothing.

2) it has non-obvious program flow.

Even though it is faster in a lot of cases (I had a benchmark-off with a colleague for a web/socket server, multi-threaded vs async, and the async version was faster), for me it is a shit to force into a class.

The thing I like about threads is that the flow of data is there and laid out neatly _per thread_, whereas to me async feels like surprise goto. Async feels like it accepts a request, and then at some point in the future will either trigger more async or crap out, mixing loads of state from different requests all over the place.

To me it feels like a knotted wool bundle, whereas threaded/multi-process feels like a freshly wound bobbin.

Now, this is all viiiiiibes man, so it's subjective.

taeric

Wouldn't this be like asking why bit packing/flipping isn't more popular in python? In general, it just isn't necessary for the vast majority of programs people are likely to write using python.

Which isn't to argue that they did a good or a bad job adding the ability to the language. It just isn't the long pole in performance concerns for most programs.

tonymet

async only helps with I/O-wait concurrency, not CPU-bound concurrency.

async is popular in JS because the browser is often waiting on many requests.

Command-line tools are commonly computing something. Even grep has to do the pattern matching, so concurrent I/O doesn't help a single-threaded pattern match.

Sure, there are applications where async would help a CLI app, but there are fewer of them than in JS.

Plus JS devs love rewriting code every 3 months.