Hard numbers in the Wayland vs. X11 input latency discussion
414 comments · January 26, 2025
ChuckMcM
arghwhat
Display devices (usually part of your GPU) still have "explicit display hardware just for the mouse" in the form of cursor planes. These have since been generalized into overlay planes.
Planes can be updated and repositioned without redrawing the rest of the screen (the regular screen image is on the primary plane), so moving the cursor is just a case of committing the new plane position.
The input latency introduced by GNOME's Mutter (the Wayland server used here) is likely simply a matter of their input sampling and commit timing strategy. Different servers have different strategies and priorities there, which can be good and bad.
Wayland, which is a protocol, is not involved in the process of positioning regular cursors, so this is entirely display server internals and optimization. What happens on the protocol level is allowing clients to set the cursor image, and telling clients where the cursor is.
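For the curious, here is roughly what "committing the new plane position" boils down to at the KMS level. A minimal sketch using libdrm's legacy cursor call (modern compositors do the equivalent with an atomic commit that sets CRTC_X/CRTC_Y on the cursor plane); the device node and CRTC id are placeholders:

```c
/* Minimal sketch: repositioning a hardware cursor via libdrm's legacy cursor
 * call. Device node and CRTC id are placeholders for illustration.
 * Build (roughly): cc cursor.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }

    uint32_t crtc_id = 42;  /* placeholder: normally found via drmModeGetResources() */

    /* Only the cursor plane moves: no client redraw, no recomposite of the
     * primary plane. A display server does essentially this per pointer event. */
    if (drmModeMoveCursor(fd, crtc_id, 640, 360))
        perror("drmModeMoveCursor");

    return 0;
}
```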
smallmancontrov
Protocols can bake in unfortunate performance implications simply by virtue of defining an interface that doesn't fit the shape needed for good performance. Furthermore, this tends to happen "by default" unless there is a strong voice for performance in the design process.
Hopefully this general concern doesn't apply to Wayland; the "shape" you've described doesn't sound bad, but the devil is in the details.
gizmo686
I don't think the Wayland protocol is actually involved in this. Wayland describes how clients communicate with the compositor. Neither the cursor nor the mouse is a client, so nowhere in the path between moving the mouse and the cursor moving on screen is Wayland actually involved.
The story is different for applications like games that hide the system cursor to display their own. In those cases, the client needs to receive mouse events from the compositor, then redraw the surface appropriately, all of which does go through Wayland.
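As a rough illustration of that client-side path, here is a sketch, not a complete program: it assumes a wl_pointer bound elsewhere at version <= 4 and omits the buffer handling and drawing a real game would need.

```c
/* Sketch of the client-side path described above: a Wayland client hides the
 * compositor's cursor and tracks pointer motion itself, so every visible
 * cursor move now costs a protocol event plus a client redraw and commit. */
#include <wayland-client.h>

static double cur_x, cur_y;

static void on_enter(void *data, struct wl_pointer *ptr, uint32_t serial,
                     struct wl_surface *surf, wl_fixed_t x, wl_fixed_t y)
{
    /* Passing a NULL surface hides the system cursor over this surface. */
    wl_pointer_set_cursor(ptr, serial, NULL, 0, 0);
}

static void on_motion(void *data, struct wl_pointer *ptr, uint32_t time,
                      wl_fixed_t x, wl_fixed_t y)
{
    cur_x = wl_fixed_to_double(x);
    cur_y = wl_fixed_to_double(y);
    /* The client must now draw its own cursor and commit a new buffer;
     * that round trip is what the server-drawn cursor avoids. */
}

static void on_leave(void *d, struct wl_pointer *p, uint32_t serial,
                     struct wl_surface *surf) {}
static void on_button(void *d, struct wl_pointer *p, uint32_t serial,
                      uint32_t time, uint32_t button, uint32_t state) {}
static void on_axis(void *d, struct wl_pointer *p, uint32_t time,
                    uint32_t axis, wl_fixed_t value) {}

static const struct wl_pointer_listener pointer_listener = {
    .enter  = on_enter,
    .leave  = on_leave,
    .motion = on_motion,
    .button = on_button,
    .axis   = on_axis,
    /* later events (frame, axis_source, ...) don't arrive at version <= 4 */
};
```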
jchw
Yeah, Wayland isn't designed in such a way that would require any additional latency on cursor updates. The Wayland protocols almost entirely regard how applications talk to the compositor, and don't really specify how the compositor handles input or output directly. So the pipeline from mouse input coming from evdev devices and then eventually going to DRM planes doesn't actually involve Wayland.
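To make the "evdev in, DRM planes out" point concrete, here is a toy reader for the kernel-facing side of that pipeline (a sketch; real compositors go through libinput rather than raw evdev, and the device path is a placeholder):

```c
/* Toy version of the input side: read relative motion straight from an evdev
 * node. The events a compositor ultimately consumes look like this. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event3", O_RDONLY);  /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
        if (ev.type == EV_REL && ev.code == REL_X)
            printf("dx=%d\n", ev.value);
        else if (ev.type == EV_REL && ev.code == REL_Y)
            printf("dy=%d\n", ev.value);
        /* A compositor would accumulate these deltas and commit a new
         * cursor-plane position; no Wayland messages are needed for the
         * server-drawn cursor. */
    }
    return 0;
}
```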
wmanley
Software works best when the developers take responsibility for solving users' problems.
> Wayland, which is a protocol
This is Wayland's biggest weakness. The effect is a diffusion of responsibility.
wmf
You're kind of getting tripped up on terminology. The OP didn't measure Wayland; they measured GNOME Shell which does take responsibility for its performance. Also, I'm not aware of any latency-related mistakes in Wayland/Weston (given its goal of tear-free compositing).
sho_hn
It's no different from X11, which is also a protocol/spec with many implementations.
arghwhat
Is it a weakness of the web that HTTP is just a protocol specification?
The "problem" with this in Wayland is that people used to "run Xorg with GNOME on top"; now they just run GNOME, the same way they run Chrome or Firefox to use HTTP. It will take time for people to get used to this.
guappa
It's the biggest strength!
Every time someone complains about Wayland, there's someone informing you how, actually, it isn't Wayland.
redmajor12
Wayland as just a protocol... Isn't that the same argument they used when it shipped without copy/paste or a screensaver?
arghwhat
It's not an "argument", it's just a description of what Wayland is. But no, the core protocol has had copy-paste since day one, and I don't remember there being issues with screensavers.
In the metaphor of a web server and a web browser, Wayland would be the HTTP specification. What you're usually interested in is which server you're running, e.g. GNOME's Mutter, KDE's KWin, sway, or niri, or which client toolkit you're using, e.g. GTK4, Qt6, etc.
flohofwoe
> they sometimes had explicit display hardware for just the mouse because that would cut out the latency
That's still the case. The mouse cursor is rendered with a sprite/overlay independent of the window compositor's swapchain, otherwise the lag would be very noticeable (at 60Hz at least). The trickery starts when dragging windows or icons around, because that makes the swapchain lag visible. Some compositors no longer care because a high refresh rate makes the lag less visible (e.g. macOS); others (e.g. Windows, I think) switch to a software mouse cursor while dragging.
Lanolderen
> The trickery starts when dragging windows or icons around
Cool. Never really thought about why they don't exactly keep up with the cursor.
dralley
GNOME has unredirection support, so I don't think this test is applicable to actual in-game performance.
A fullscreen app ought to be taking a fast path through / around the compositor.
eqvinox
This isn't about in-game performance, this is about the desktop feeling sluggish.
gf000
Asahi Lina's comment on the topic: https://lobste.rs/s/oxtwre/hard_numbers_wayland_vs_x11_input...
eqvinox
I'm not sure where this absolute "tearing is bad" and "correctness" mentality comes from. To me, the "correct" thing specifically for the mouse cursor is to minimize input lag. Avoiding tearing is for content. A moving cursor appears in various places along the screen anyway, and the biological "afterglow" in one's eyes means there's going to be some ghosted perception of it regardless. Tearing the cursor just adds another ghost. At the same time, this ghosting is exactly why keeping cursor input lag to a minimum is so important: it keeps the brain's hand-eye coordination in sync with the laggy display.
gf000
What about a cursor moving over an element that requires changing its look? (E.g. you go over a link?)
ChuckMcM
That is a great comment and everyone should read it. It also demonstrates a common truism that system goals dictate performance.
hulitu
Tl;dr: X bad and tears but Wayland with 1.5 frame latency (or more) good. Now, if you have a monitor (or TV) with 30i refresh rate, you're screwed.
Edman274
We don't even know if it's actually 1.5 frames of latency, because the test didn't try to establish that. The author said it looked like 1.5 frames, but that could be a coincidence; it could in reality be a constant-time overhead rather than a function of frames, and so would be fine at a lower frame rate.
Tearing would affect everyone that uses a computer with X11 but your proposed example of a TV with 30i refresh rate would only affect the tiny subset of users that use a CRT television as a monitor, right?
Vilian
1.5 is the worst case, and it's implementation-specific.
p_l
Hardware cursor is still a thing to this day on pretty much all platforms.
thayne
On sway, if you use the proprietary Nvidia drivers, it is often necessary to disable hardware cursors. I wonder if something similar is happening here. Maybe GNOME on Wayland doesn't use hardware cursors?
pengaru
Once upon a time XFree86 and Xorg updated the pointer directly in a SIGIO handler. But that's ancient history at this point, and nowadays I wouldn't expect Wayland and Xorg to have a hugely different situation in this area.
IIRC it all started going downhill in Xorg when glamor appeared. Once the cursor rendering path wasn't async-signal-safe to execute from the signal handler (which anything OpenGL-backed certainly wouldn't be), the latency got worse.
I remember when even if your Linux box was thrashing your mouse pointer would stay responsive, and that was a reliable indicator of if the kernel was hung or not. If the pointer prematurely became unresponsive, it was because you were on an IDE/PATA host and needed to enable unmask irq w/hdparm. An unresponsive pointer in XFree86 was that useful of a signal that something was wrong or misconfigured... ah, the good old days.
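For readers who never saw it, the SIGIO arrangement described above looks roughly like this (a sketch, not actual XFree86 code):

```c
/* Rough illustration of the old SIGIO trick: the input fd is switched to
 * O_ASYNC so the kernel raises SIGIO when data arrives, and the handler does
 * only the minimal async-signal-safe work of updating the hardware cursor. */
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static int input_fd;

static void sigio_handler(int sig)
{
    (void)sig;
    /* Only async-signal-safe work belongs here: read() the pending motion and
     * poke the hardware cursor. A GL-backed cursor path (glamor) can't run in
     * this context, which is the regression described above. */
}

static void setup_sigio(int fd)
{
    struct sigaction sa;

    input_fd = fd;

    sa.sa_handler = sigio_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGIO, &sa, NULL);

    fcntl(fd, F_SETOWN, getpid());                     /* deliver SIGIO to this process */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);  /* raise SIGIO on input */
}

int main(void)
{
    setup_sigio(STDIN_FILENO);  /* stands in for the real mouse fd */
    for (;;)
        pause();                /* wait for SIGIO-driven updates */
}
```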
paulryanrogers
Reminds me of how we used the 'dir' command to test computer speed by watching how fast the output would scroll by.
redmajor12
Or later, seeing how long it took to load /usr/bin in a filemanager.
AtlasBarfed
How old is Wayland?
I'll be reading a dream of spring in my grave at this rate.
I understand I'm complaining about free things, but this has been a forced change for the worse for so long. Wayland adoption should have been predicated on near-universal superiority in all input and display requirements.
Intel, AMD, Nvidia, and the Arm makers should be all-in on a viable desktop Linux as a consortium. Governments should be doing the same, because a secure Linux desktop is actually possible. It is the fastest path to showcasing their CPUs, 3D bling, and advanced vector/compute hardware.
Wayland simply came at a time that further delayed the Linux desktop, at a time when Microsoft was busy trying to kill Windows with its horrid tiles and Apple staunchly refused the extra half billion in market cap it could have had by offering OS X on generic x86.
dralley
Wayland is a protocol. The problems people complain about are generally implementation details specific to GNOME or KDE or (in general) one particular implementation.
There's rarely any such thing as "universal superiority", usually you're making a tradeoff. In the case of X vs Wayland it's usually latency vs. tearing. Personally I'm happy with Wayland because there was a time when watching certain videos with certain media players on Linux was incredibly painful because of how blatant and obtrusive the tearing was. Watching the same video under Wayland worked fine.
Early automobiles didn't have "universal superiority" to horses, but that wasn't an inhibitor to adoption.
o11c
"Wayland is a protocol" is exactly the problem. Protocols suck; they just mean multiplying the possibility of bugs. A standard implementation is far more valuable any day.
With X11, it was simple: everybody used Xfree86 (or eventually the Xorg fork, but forks are not reimplementations) and libX11 (later libxcb was shimmed underneath with careful planning). The WM was bespoke, but it was small, nonintrusive, and out of the critical path, so bugs in it were neither numerous nor disastrous.
But today, with Wayland, there is no plan. And there is no limit to the bugs, which must get fixed again and again in every separate implementation.
bawolff
> Wayland is a protocol. The problems people complain about are generally implementation details specific to GNOME or KDE or (in general) one particular implementation.
I feel like at some point this is a cop-out. Wayland is a protocol, but it's also a "system" involving many components. If the product as a whole doesn't work well, then it's still a failure regardless of which component's fault it is.
It's a little like responding to someone saying we haven't reached the year of Linux on the desktop with: well, actually, Linux is just the kernel, and it's been ready for the desktop for ages. Technically true, but also missing the point.
bpfrh
I mean, most of these things are the fault of badly designed or non-existent protocols:
- Problems with non-Western input methods
- Accessibility
- Remote control (took around two years to become stable, I think?)
- Bad color management
Then there are the things that did work in X11 but not in Wayland:
- Bad support for key remapping (the input library says it should be implemented by the compositor, GNOME says it's not in scope, so we have a regression)
- Bad Nvidia support for the first two or three years
While these things are the fault of compositors or hardware vendors, the rush to adopt Wayland, with nearly every distro making it the default, forced major regressions, and Wayland kind of promised to improve on the X11-based experience.
hulitu
> there was a time when watching certain videos with certain media players on Linux was incredibly painful because of how blatant and obtrusive the tearing was
It was because of the crap LCD monitors (5 to 20 ms GtG) and how they are driven. The problem persists today. The (Wayland) solution was to render and display a complete frame at a time without taking into account the timings involved in hardware (you always have a good static image, but you have to wait).
I tried Tails (comes with some Wayland compositor) on a laptop. The GUI performance was terrible with only a Tor browser open and one tab.
If you do not care about hardware, you will, sooner or later, run into problems. Not everybody has your shiny 240 Hz monitor.
Cthulhu_
> How old is Wayland?
About 16 years old; for comparison, X is 40.
hulitu
Hm, X has worked fine for 20 years. Wayland is still a protocol after 16. /s
Vilian
>How old is Wayland?
This argument doesn't make sense, because Wayland started as a hobby project, not as an X11 replacement; it was only after it got traction, and other people and companies started contributing, that it mattered.
BrenBarn
> Wayland adoption should have been predicated on a near universal superiority in all input and display requirements.
Totally agree. The people saying "Wayland is a protocol" miss the point. Wayland is a protocol, but Wayland adoption means implementing stuff that uses that protocol, and then pushing it onto users.
Measure twice, cut once. Look before you leap. All that kind of thing. Get it working FIRST, then release it.
bluGill
You have to release things like this in parts, because it needs too many external people to do things to make it useful. Managing those parts is something nobody has figured out, and so people like you end up using it before it is ready for your use and then complaining.
n144q
> Wayland simply came at a time to further the delay of the Linux desktop
I can't tell if you are serious or not.
paulddraper
Serious
prmoustache
>but this is a forced change for the worse for so long
Can you explain who is forced to do what in that context?
bashkiddie
AMD states that bugs only get fixed for Wayland.
Coincidentally I have got a graphics driver that likes crashing on OpenGL (AMD Ryzen 7 7840U w/ Radeon 780M Graphics)
bsder
> I understand I'm complaining about free things, but this is a forced change for the worse for so long.
Then write code.
Asahi Lina has demonstrated that a single person can write the appropriate shims to make things work.
Vulkan being effectively universally available on all the Linux graphics cards means that you have the hardest layer of abstraction to the GPU taken care of.
A single or small number of people could write a layer that sits above Wayland and X11 and does it right. However, no one has.
yencabulator
You're arguing against Wayland, but for a more secure Linux desktop? I recommend you spend more time getting to know the X11 protocol then, because it has plenty of design decisions that simply cannot be secured. The same people who used to develop XFree86 designed Wayland to fix things that could not be fixed in the scope of X11.
wmanley
> the X11 protocol [...] has plenty of design decisions that simply cannot be secured.
I've been hearing this for over a decade now. I don't get it. Just because xorg currently makes different clients aware of each other and broadcasts keypresses and mouse movements to all clients and allows screen capturing doesn't mean it has to. You could essentially give every application the impression that they are the only thing running.
It might seem difficult to implement, but compare it to the effort that has gone into wayland across the whole ecosystem. Maybe that was the point - motivating people to work on X was too difficult, and the wayland approach manages to diffuse the work out to more people.
I was really bullish on Wayland 10 years ago. Not so much any more. In retrospect it seems like a failure in technical leadership.
sxp
For anyone who uses ffmpeg for this type of per frame analysis, `ffmpeg -skip_frame nokey -i file -vsync 0 -frame_pts true out%d.png` will get the "presentation time" of each frame in the video. That's more precise than just dumping frames and calculating timestamps. You can also do something similar in a web browser by playing a <video> and using `requestVideoFrameCallback()`. Though, you might need to set `.playbackRate` to a low value if the computer can't decode all the frames fast enough.
> With my 144Hz screen,....Wayland, on average, has roughly 6.5ms more cursor latency than X11 on my system...Interestingly, the difference is very close to 1 full screen refresh. I don't know whether or not that's a coincidence.
The fact that the latency is almost 1/144th of a second means that it might become 1/60th of a second on standard 60Hz monitors. This is hard to notice consciously without training, but most people can "feel" the difference even if they can't explain it.
mlyle
> The fact that the latency is almost 1/144th of a second means that it might become 1/60th of a second on standard 60Hz monitors.
My guess: the "true" numbers are close to 2.5 (half a frame of random phase of when the mouse is touched vs. refresh, plus 2 frames to move cursor) and 3.5. If you throw out the low outlier from each set you get pretty close to that.
(of course, the 125Hz mouse poll rate is another confound for many users, but this guy used a 1KHz mouse).
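For anyone who wants to sanity-check those guesses, the arithmetic works out like this (a quick sketch; the 2.5- and 3.5-frame pipeline depths are the hypothesis above, not measured values):

```c
/* Quick arithmetic for the guessed pipeline depths. */
#include <stdio.h>

int main(void)
{
    const double refresh_hz[] = { 144.0, 60.0 };
    const double frames[]     = { 2.5, 3.5 };   /* X11 guess vs Wayland guess */

    for (int r = 0; r < 2; r++) {
        double frame_ms = 1000.0 / refresh_hz[r];
        double x11 = frames[0] * frame_ms;
        double wl  = frames[1] * frame_ms;
        printf("%5.0f Hz: %4.1f ms vs %4.1f ms (delta %.1f ms)\n",
               refresh_hz[r], x11, wl, wl - x11);
    }
    /* 144 Hz: 17.4 vs 24.3 ms, delta ~6.9 ms -- close to the measured ~6.5 ms.
     * 60 Hz:  41.7 vs 58.3 ms, delta ~16.7 ms, i.e. one full 60 Hz frame. */
    return 0;
}
```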
> This is hard to notice consciously without training, but most people can "feel" the difference even if they can't explain it.
Yeah. A 7ms difference is not bad; 16.6ms is starting to be a lot.
IMO, we should be putting in effort on computers to reach 1.6 frames of latency -- half a frame of random phase, plus one frame, plus a little bit of processing time.
mananaysiempre
To have a compositor not introduce a frame of latency more or less requires it to race the beam, which has definitely been suggested[1], but you can see how it would be difficult, and so far no operating system has tried, as far as I know. And as for good frame pacing support in UI toolkits (and not just game engines), well, one can dream. Until both of those are in place, 2.5±0.5 frames seems to be the hard limit; the question here is more about where Mutter is losing another frame (which, as even the greats tell us[2], is not hard to do by accident).
[1] https://raphlinus.github.io/ui/graphics/2020/09/13/composito...
[2] http://number-none.com/blow/john_carmack_on_inlined_code.htm...
amluto
I’ve read these arguments quite a few times and always found them a bit questionable. Sure, if everything is driven by the vblank time (or any other clock that counts in frames), it makes sense. But that’s a silly approach! There is nothing whatsoever special about allocating one full frame interval to the compositor to composite a frame — if it takes 16ms to composite reliably, it will take 16ms to composite reliably at 30Hz or 60Hz or 144Hz. So shouldn’t the system clock itself on a time basis, not a frame basis?
Put another way, a system fast enough to composite at 144Hz should be able to composite at 60Hz while only allocating 1/144 seconds to the compositor, which would require offsetting the presentation times as seen by the compositor’s clients by some fraction of a frame time, which doesn’t actually seem that bad.
It gets even better if variable refresh rate / frame timing works well, because then frames don’t drop even if some fraction of compositing operations are a bit too slow.
I assume I’m missing some reason why this isn’t done.
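One way to picture the "clock on time, not frames" idea is a frame loop that sleeps until a fixed budget before the predicted vblank instead of waking right after the previous one. This is only a sketch; predict_next_vblank_ns() and composite_and_commit() are hypothetical placeholders.

```c
/* Sketch: composite "just in time" on an absolute deadline rather than
 * allocating a whole refresh interval to the compositor. */
#include <time.h>

#define NSEC_PER_SEC 1000000000LL

extern long long predict_next_vblank_ns(void);   /* hypothetical: from KMS feedback */
extern void composite_and_commit(void);          /* hypothetical: render + atomic commit */

static void frame_loop(long long budget_ns)      /* e.g. ~7 ms even on a 60 Hz panel */
{
    for (;;) {
        long long deadline = predict_next_vblank_ns() - budget_ns;
        struct timespec ts = {
            .tv_sec  = deadline / NSEC_PER_SEC,
            .tv_nsec = deadline % NSEC_PER_SEC,
        };
        /* Sleep on an absolute deadline, so input is sampled as late as possible. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
        composite_and_commit();
    }
}
```

If I recall correctly, sway's max_render_time output option exposes roughly this knob, so at least some compositors have experimented with the idea.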
goalieca
I've found that low-latency terminals make a big difference even for simple tasks like typing.
XorNot
I was pretty shocked by the eyestrain difference I felt going from 30hz to 60hz with a 4K monitor while only doing coding tasks (i.e. text and mouse, no real graphics or animations).
daef
What would you count as a low-latency terminal?
layer8
See https://beuke.org/terminal-latency/. Single-digit milliseconds I’d say. These numbers are minus the keyboard and display latency.
rcxdude
It's almost certainly because of one extra frame of buffering between the mouse move and the screen. Vsync can cause this, but it should be possible to get vsync with just double buffering.
xbar
Cursor input latency is the continuous galling impediment to my satisfaction with an application or operating system.
IDE text cursor delay? Unacceptable. Shell text cursor delay? Unacceptable. GUI mouse cursor delay? Unacceptable.
All are immediate deal-breakers for me.
mcv
Yeah, I'm baffled this is so bad. Especially in Linux. Surely moving the mouse should take precedence over everything else? Even highlighting the thing you move the mouse over shouldn't slow down the mouse cursor itself.
I recently had a problem with extreme mouse slowdowns on KDE/Plasma 6 on X11 with NVidia. I noticed the slowdown was particularly extreme over the tabs of my browser.
The fix, in case anyone else also has this problem, is to disable OpenGL flipping. I have no idea what OpenGL flipping does, but disabling it (in nvidia-settings and in /etc/X11/xorg.conf) fixed the problem.
I'd love it if someone could explain why. Or why this isn't the default.
torginus
I have another, somewhat unrelated pet peeve: when I started using Linux, the desktop was software-rendered. Essentially, what I assume was going on was that the UI was drawn by the CPU writing into RAM, which got blitted into the GPU's framebuffer, as it had been done forever.
When compositors (even with X11's Composite extension) and GPU renderers became fashionable, the latency became awful, literally hundreds of milliseconds. Which was weird considering I grew up playing DOS games that didn't have this issue. Things improved over time, but it's not clear to me if we ever got back to the golden days of CPU rendering.
jchw
Note that the results will differ between compositors, different GPUs, and different configurations. This is somewhat less the case with X11 since there is only one X server implementation (that Linux desktop systems use, anyhow.)
I think there may still be the issue that many compositor/GPU combinations don't get hardware cursor planes, which would definitely cause a latency discrepancy like this.
Cthulhu_
TIL Wayland is 16 years old already; in a few years (I can't math) it'll be as old as X was when Wayland came out, but it seems it's still considered mediocre or not as good as X was.
jzb
Initial release of X was June 1984, and Wayland was first released in 2008 -- so it won't be until 2032 that Wayland is the same age. When people complain about Wayland being mediocre or "as good as X was" what they often mean is "Wayland doesn't support $very_specific_feature" or "my video card vendor's proprietary drivers aren't well-tested with Wayland and it makes me sad".
Approximately 99.999% of those complaints are uttered by people who 1) are not doing the work, 2) are not interested or capable of doing the work, and 3) do not understand or refuse to acknowledge that the Wayland and X developers are _mostly the same folks_ and they _do not want to work on X11 anymore_.
I don't really have a stake in this argument, except I'm bone-tired of seeing people whine about Wayland when they're using software somebody else gave them for free. There are enough Wayland whiners that, by now, they could've banded together and started maintaining / improving X and (if their complaints and theories were correct) left Wayland behind.
Strangely -- even though X is open source and eminently forkable (we know this, because XFree86 -> X.org) it gathers dust and none of its proponents are doing anything towards its upkeep.
When someone shows up with "Wayland isn't as good as X, so here is my modernized fork of X anyone can use" -- I'll be quite interested.
piotr-yuxuan
I enjoy reading the whinging around Wayland by people who feel so entitled and so outraged. They never fail to remind me of a previous coworker who always used the same tactics: « this breaks my existing workflow, I will not adapt, I am all for trying new things but only if they are exactly equal to what I already know and then what's the point, this is not how I have always worked for the past ten years and I will not budge ».
Not talking about the technical points of the whole X/Wayland debate here, but in such debates there is always a group of people who are as vocal as they are unwilling to do anything themselves, and who put unreasonably high expectations on other people to listen to them and have no choice but to agree. The fact that X.org's core team initiated Wayland and stopped X.org development is disregarded: these people Will Be Right against literally everything; they can't be reasoned with.
My previous coworker was fired for gross incompetence in the end, yet he never acknowledged anything wrong. He always knew better than literally the whole world that bash scripts run manually were superior to Terraform for the whole company's infrastructure. Yes, he was that stupid. I guess this is why he ended up lying like hell on his LinkedIn profile, concealing that he had had four employers in four years.
Seeing technical debates with such poor quality is worrying. If people with quantitative minds fail to entertain a fact-based discussion, how can we expect our political life to acknowledge there is only one reality and stop lying?
Before you voice your outrage, here are some facts that won't dis-exist just because you disagree with them:
- https://www.phoronix.com/review/x_wayland_situation
- https://ajaxnwnk.blogspot.com/2020/10/on-abandoning-x-server...
mcv
> Approximately 99.999% of those complaints are uttered by people who 1) are not doing the work, 2) are not interested or capable of doing the work, and
Are you saying that Wayland is only for developers? Are people not allowed to complain when The New Thing turns out to be less good than the supposedly Obsolete Thing?
> 3) do not understand or refuse to acknowledge that the Wayland and X developers are _mostly the same folks_ and they _do not want to work on X11 anymore_.
I'm fully aware of that. I understand X11 has its limitations, and some of the goals of Wayland sound very appropriate for the modern PC environment, but if after 16 years, there are still many situations where Wayland does worse than X, that's not a great sign, and it will make people continue to use X.
lmm
> There are enough Wayland whiners that, by now, they could've banded together and started maintaining / improving X and (if their complaints and theories were correct) left Wayland behind.
They already have. X is already more full-featured and stable than Wayland (yes it is missing certain niche features that Wayland has). Sometimes the most important thing you can do with a piece of software is not screw around with it.
braiamp
And yet, X is as flawed as it can be. There's a problem with global shortcuts: Xorg only fires an event on key-up/key release, rather than on the first match on key press. That is a protocol limitation, and fixing it means breaking a bunch of stuff. The complaints about Wayland are of that nature, but at least they are fixable.
eadmund
> what they often mean is "Wayland doesn't support $very_specific_feature"
My primary complaint with Wayland is that it is a textbook example of Brooks’s second-system effect. It forsook backwards compatibility (yes, there’s an X11 server but to my knowledge there is no way to just run Wayland and one’s X11 desktop environment).
> Strangely -- even though X is open source and eminently forkable (we know this, because XFree86 -> X.org) it gathers dust and none of its proponents are doing anything towards its upkeep.
I suspect that is because the X/Wayland guys have sucked all the oxygen out of that particular room. A newbie shows up and is told that X.org is legacy and he shouldn’t work on it, so … he doesn’t.
And of course X.org really is a bit of a disaster due to being written in C.
arp242
The core Wayland and X.org don't exist in a vacuum. I have written a lot of code over the years that works only on X, as have many others. I have not directly contributed to the X.org server, but have to the "wider X ecosystem". I will have to rewrite some of that. This is the case for many people.
jzb
"I have not directly contributed to the X.org server, but have to the "wider X ecosystem". I will have to rewrite some of that. This is the case for many people."
This is a fair point, but the quality of Wayland isn't really at issue for this -- Wayland could be much better than X by all accounts and it would still require you to rewrite software for it. (Assuming XWayland doesn't suit your needs, anyway.)
yjftsjthsd-h
> When people complain about Wayland being mediocre or "as good as X was" what they often mean is "Wayland doesn't support $very_specific_feature" or "my video card vendor's proprietary drivers aren't well-tested with Wayland and it makes me sad".
...Yes? Wayland being a regression in terms of features and bugginess is kinda a sticking point.
> Approximately 99.999% of those complaints are uttered by people who 1) are not doing the work, 2) are not interested or capable of doing the work, and 3) do not understand or refuse to acknowledge that the Wayland and X developers are _mostly the same folks_ and they _do not want to work on X11 anymore_.
If X works for someone and Wayland doesn't, none of that matters. It doesn't matter how much you insult X users, it won't make their usecases invalid or make Wayland good enough.
> I don't really have a stake in this argument, except I'm bone-tired of seeing people whine about Wayland when they're using software somebody else gave them for free.
It cuts both ways: Users aren't entitled to free work, and developers aren't entitled to their work being well-regarded. Giving software away for free has never meant that people can't point out its problems.
bluGill
X also isn't working for some. If you care about tearing, for instance, X doesn't work.
FooBarWidget
> It cuts both ways: Users aren't entitled to free work, and developers aren't entitled to their work being well-regarded. Giving software away for free has never meant that people can't point out its problems.
While true, this statement is also useless. On a meta level, what you're essentially saying is that users' right to feel resentful is more valuable than achieving an outcome.
craftkiller
> 16 years old [...] in a few years [...] it'll be as old as X was when Wayland came out
Lol no. X is from 1984: https://www.talisman.org/x-debut.shtml
That means Wayland is only 66% of the age X was when Wayland came out, or you'd need 50% more of Wayland's life before it's as old as X was.
OvbiousError
I'm getting much higher framerates in Civ VI on linux since I switched to Wayland, so there is that. For the rest it just works for me, use it both professionally and privately.
tmtvl
If X is 40 and Wayland is 16, that means a difference of 24 years. Hence Wayland compositors have 8 more years to work out the kinks. I am currently using Wayland via Plasma 6 and it works well enough, but I don't have special needs, so I don't know how well... say... screen readers work.
DonHopkins
That means in two more years it will be legal for X11 and Wayland to have a baby!
josefx
We can call it V to signify another step backwards and make it the default, people can just use the terminal until the first implementation crops up.
guappa
Maybe what we did with PulseAudio (throw it away and make PipeWire) needs to be done with Wayland.
tannhaeuser
Another way to look at these figures is that in a few years a Wayland successor is due, born out of the same motivation as Wayland, i.e. a lack of people with a desire to maintain legacy software. My expectation is that browser frontend stacks will eat the desktop completely at that point; it's not like there were many new desktop apps on Linux anyway.
ein0p
16 years old and there's still video tearing in the browsers. I try it with every Ubuntu LTS release, and then switch back to X in a few days. It solves problems I just don't have.
dist-epoch
Wayland came out at the same time as the Windows compositor in Vista. Let's be generous and consider the next Windows version, 7, as having a "good/stable" compositor. So Wayland is 13 years behind Windows.
gf000
Based on what exactly? On a single frame of difference, which is easily explained by the fundamental point of Wayland: no tearing? It's simply that X places the cursor asynchronously, potentially tearing it, while Wayland always renders a correct frame, but the result of a mouse movement has to wait for the next frame.
That's literally it.
SaintSeiya
Why does this not surprise me? Every attempt to rewrite a working solution to make it more "modern, easy to maintain and future-proof" rarely does so. It always ends up slower, with more latency, and lifted by faster hardware, not by faster software. Every 20 years a new generation comes along weaker, pampered with the abstractions of the previous generations who did the grunt work for them. After three generations of software developers, all we have is library/framework callers, and nobody really knows about performance and optimization.
kombine
This is not my experience. When I upgraded several of my computers to KDE Plasma 6, switching to Wayland, the overall responsiveness and snappiness of the system increased very visibly. There are still features that Wayland lacks compared to X11, but I am willing to compromise in favor of its other benefits.
dvdkon
Same here, switching from i3 to sway resulted in a noticeably more responsive experience on my aging hardware. Of course, this is just an anecdote, and I could probably get the same results on X with some fiddling, but I do think the value of simpler and more modern systems is demonstrated here.
WhyNotHugo
I’d be interested in seeing similar benchmarks done on x11+i3 vs sway.
There's nothing Wayland-specific that would introduce this latency, so I wonder if wlroots/sway add any additional lag.
guappa
????
After upgrading to plasma6 from 5, all the desktop animations have started stuttering. Probably your hardware is too new.
mtlmtlmtlmtl
On the other hand, I just did a fresh install of Artix Linux. Installed KDE just to have something functional while I get my tiling setup working. Boot into Plasma(Wayland) and it utterly shits itself, turning my main monitor on and off repeatedly until it finally crashes. So I pick Plasma(X11) instead and that just works.
In fact, in almost 2 decades of using Linux every day, I can't remember X doing anything similar. It's always just worked.
gf000
> In fact, in almost 2 decades of using Linux every day, I can't remember X doing anything similar. It's always just worked.
Well, we have very different memories then. Sure, X worked reliably once configured. But configuring it was a marathon in hell, as per my memory and the litany of forum posts crying out for help all across the internet. At one point I had to rescue an install by reverting the config file without a working screen at all!
prmoustache
Old computers had less latency, but on the other hand, on many OSes a single app crashing meant the whole OS was unresponsive and you had to reboot the whole system.
Avamander
Lower latency matters little if it just means waiting behind some other operation like disk I/O or the network.
mikenew
I get major lag spikes when the GPU is under heavy load (like doing Stable Diffusion inference or something). To be fair, I haven't A/B tested with X11, but I don't ever remember it being like that. An extra frame of latency isn't great on its own, but the occasional spikes in lag are really irritating.
cma
May still happen especially if it is thrashing vram in and out of system memory or something, but have you tried lowering priority of the stable diffusion process?
cloudwalk9
I can also attest to horrific lagspikes on an Optimus laptop even if Intel is driving the desktop. Memory pressure is definitely the problem here. Lagspikes actually lessened when I switched to Wayland Gnome. I think they lessened further with PREEMPT_RT on kernel 6.12. Nvidia requires an environment variable to build on real time kernels but it plays surprisingly nice as of driver 570. But if you have this config, you need at least 11th gen Intel iGPU or AMD APU, because i915 does not build for real-time kernels. Only the Xe driver works and only if you force_probe the ID if it's Tiger Lake.
...Which I don't get because the Xe driver is said to explicitly support, at minimum, Tiger Lake. I played Minecraft on the iGPU with Xe and it was perfectly fine. It... drew 3D graphics at expected framerates.
sapiogram
Beautiful work. Could it be worth repeating the experiment with the monitor running at a very low refresh rate, i.e. 30hz? If Wayland is always a frame slower than X11, it should be much easier to observe.
adeon
I used to be a NetHack speedrunner. Pressing the keys real fast. All of it in terminals. I had the fastest run for almost 10 years. Input lag was a thing I thought sometimes. Everything below is based on pure gut feelings because I never measured any of it.
I've long wondered why some terminals and operating systems felt like they were pretty laggy. In the sense that if I press my character to move, how long until they actually move?
iTerm2 on macOS is the worst offender. But I can't tell if it's iTerm2 or macOS itself. I remember trying other terminals on the Mac, and it was mostly the same. iTerm2 itself also had a bunch of rendering options, but I couldn't get any of them to do anything I could actually feel affecting input lag. One thought I had: maybe iTerm2 added one frame of lateness, and macOS itself added another? But that would be ~30ms on a 60fps monitor, which I can already easily feel on a server-connection NetHack.
I also have no idea if measuring "N frames late" is actually a sensible way to think about this. I assume computers know when a key is pressed at a higher frequency?
Linux xterm and rxvt were the fastest that I remember, on X11 (I've never used Wayland like in the article, well not for NetHack anyway).
I have no idea how compositors work on any of these operating systems or X11/Wayland to say anything smart about them or why they would be slower or faster.
Reading the article... I realized I have that same iPhone 15 Pro with 240fps recording. I could replicate that experiment, but instead test the terminals on the various operating systems I used to play on. So I could now find out whether my gut feeling was right all along, or whether it lied to me all these years.
I wrote the idea from the article down in my notes; maybe when I'm bored enough I'll try it :) and I can stop spreading propaganda about iTerm2 being slow if it turns out I was completely wrong. Maybe I had an easier time because all my NetHack days were at 60fps, so I have more leeway in measurements.
I'm not active in NetHack playing anymore, although I sometimes play it as a comfort game.
patal
iTerm2 comes in at 45ms and up in this measurement: https://danluu.com/term-latency/
Notably, the other terminals on the same system have lower latency.
oskenso
I'm glad more people are looking into this. Maybe one day we'll discover the cause of the Laggy Mouse bug which has been a problem since 2015
https://bugzilla.gnome.org/show_bug.cgi?id=745032 https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/749
phkahler
Since this is all mouse-pointer related and there is hardware for that, I can only assume it's a fixable problem.
OTOH, the GNOME developers have been talking about implementing triple buffering in the compositor. Triple buffering is only needed when your renderer can't complete a frame faster than the scanout of a frame. Given that any modern GPU should be able to composite hundreds of frames per second, this makes me think something about Wayland, or the GNOME compositor in particular, isn't designed as well as it could be.
This is excellent. Too often people guess at things when they could be more empirical about them. Ever since I learned the scientific method (I think 3rd or 4th grade) I was all about the 'let's design an experiment' :-).
Let me state up front that I have no idea why Wayland would have this additional latency. That said, having been active in the computer community at the 'birth' of X11 (yes I'm that old) I can tell you that there was, especially early on, a constant whine about screen latency. Whether it was cursor response or xterm scrolling. When "workstations" became a thing, they sometimes had explicit display hardware for just the mouse because that would cut out the latency of rendering the mouse in the frame. (not to mention the infamous XOR patent[1])
As a result of all this whinging, the code paths that were between keyboard/mouse input and their effect on the screen, were constantly being evaluated for ways to "speed them up and reduce latency." Wayland, being something relatively "new" compared to X11, has not had this level of scrutiny for as long. I'm looking forward to folks fixing it though.
[1] https://patents.google.com/patent/US4197590