
'Attention is all you need' coauthor says he's 'sick' of transformers

dekhn

The way I look at transformers is: they have been one of the most fertile inventions in recent history. Originally released in 2017, in the subsequent 8 years they completely transformed (heh) multiple fields, and at least partially led to one Nobel prize.

realistically, I think the valuable idea is probabilistic graphical models - of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to continue to be a valuable area for research exploration for the foreseeable future.

samsartor

I'm skeptical that we'll see a big breakthrough in the architecture itself. As sick as we all are of transformers, they are really good universal approximators. You can get some marginal gains, but how much more _universal_ are you realistically going to get? I could be wrong, and I'm glad there are researchers out there looking at alternatives like graphical models, but for my money we need to look further afield. Reconsider the auto-regressive task, cross-entropy loss, even gradient descent optimization itself.

kingstnap

There are many many problems with attention.

The softmax has issues regarding attention sinks [1]. The softmax also causes sharpness problems [2]. In general, the decision boundary being Euclidean dot products isn't actually optimal for everything; there are many classes of problems where you want polyhedral cones [3]. Positional embeddings are also janky af and so is RoPE tbh, I think Cannon layers are a more promising alternative for horizontal alignment [4].

I still think there is plenty of room to improve these things. But a lot of focus right now is unfortunately being spent on benchmaxxing using flawed benchmarks that can be hacked with memorization. I think a really promising and underappreciated direction is synthetically coming up with ideas and tests that mathematically should not work well and proving that current architectures struggle with them. A great example of this is the "ViTs need glasses" paper [5], or belief state transformers with their star task [6]. The Google one about the limits of embedding dimensions is also great and shows how the dimension of the QK part actually matters for good retrieval [7].

[1] https://arxiv.org/abs/2309.17453

[2] https://arxiv.org/abs/2410.01104

[3] https://arxiv.org/abs/2505.17190

[4] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5240330

[5] https://arxiv.org/abs/2406.04267

[6] https://arxiv.org/abs/2410.23506

[7] https://arxiv.org/abs/2508.21038

ACCount37

If all your problems with attention are actually just problems with softmax, then that's an easy fix. Delete softmax lmao.

No but seriously, just fix the fucking softmax. Add a dedicated "parking spot" like GPT-OSS does and eat the gradient flow tax on that, or replace softmax with any of the almost-softmax-but-not-really candidates. Plenty of options there.
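For the curious, here is a minimal numpy sketch of the "parking spot" idea: append one extra logit that attention mass can drain into, then drop that column after the softmax, so rows no longer have to sum to exactly 1. This is only an illustration of the general attention-sink trick with an arbitrary fixed sink logit, not a claim about how GPT-OSS or any particular model implements it.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention_with_sink(q, k, v, sink_logit=0.0):
        # Plain softmax forces every query to spend exactly 1.0 of attention
        # mass on real tokens; the extra sink column lets it spend less.
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)                     # (n_q, n_k)
        sink = np.full((scores.shape[0], 1), sink_logit)  # the "parking spot"
        w = softmax(np.concatenate([scores, sink], axis=-1))
        return w[:, :-1] @ v                              # drop the sink column

    rng = np.random.default_rng(0)
    q, k, v = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
    print(attention_with_sink(q, k, v).shape)             # (4, 8)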

The reason we're "benchmaxxing" is that benchmarks are the metrics we have, and the only way we can sift through this gajillion of "revolutionary new architecture ideas" and get at the ones that show any promise at all. Of which there are very few, and fewer still that are worth their gains once you account for the fact that compute isn't unlimited - especially not when it comes to frontier training runs.

Memorization vs generalization is a well known idiot trap, and we are all stupid dumb fucks in the face of applied ML. Still, some benchmarks are harder to game than others (guess how we found that out), and there's power in that.

eldenring

I think something with more uniform training and inference setups, that is otherwise equally hardware friendly, just as easy to train, and equally expressive, could replace transformers.

krychu

BDH

tim333

Yeah that thing is quite interesting - baby dragon hatchling https://news.ycombinator.com/item?id=45668408 https://youtu.be/mfV44-mtg7c

jimbo808

Which fields have they completely transformed? How was it before and how is it now? I won't pretend like it hasn't impacted my field, but I would say the impact is almost entirely negative.

isoprophlex

Everyone who did NLP research or product discovery in the past 5 years had to pivot real hard to salvage their shit post-transformers. They're very disruptively good at most NLP tasks.

edit: post-transformers meaning "in the era after transformers were widely adopted" not some mystical new wave of hypothetical tech to disrupt transformers themselves.

dingnuts

Sorry, but you didn't really answer the question. The original claim was that transformers changed a whole bunch of fields, and you listed literally the one thing language models are directly useful for: modeling language.

I think this might be the ONLY example that doesn't back up the original claim, because of course an advancement in language processing is an advancement in language processing -- that's tautological! every new technology is an advancement in its domain; what's claimed to be special about transformers is that they are allegedly disruptive OUTSIDE of NLP. "Which fields have been transformed?" means ASIDE FROM language processing.

other than disrupting users by forcing "AI" features they don't want on them... what examples of transformers being revolutionary exist outside of NLP?

Claude Code? lol

rootnod3

So, unless this went r/woosh over my head....how is current AI better than shit post-transformers? If all....old shit post-transformers are at least deterministic or open and not a randomized shitbox.

Unless I misinterpreted the post, render me confused.

jimmyl02

in the super public consumer space, search engines / answer engines (like chatgpt) are the big ones.

on the other hand it's also led to improvements in many places hidden behind the scenes. for example, vision transformers are much more powerful and scalable than many of the other computer vision models which has probably led to new capabilities.

in general, transformers aren't just "generate text"; they're a new foundational model architecture that enables a leap in many things that require modeling!

ACCount37

Transformers also make for a damn good base to graft just about any other architecture onto.

Like, vision transformers? They seem to work best when they still have a CNN backbone, but the "transformer" component is very good at focusing on relevant information, and doing different things depending on what you want to be done with those images.

And if you bolt that hybrid vision transformer to an even larger language-oriented transformer? That also imbues it with basic problem-solving, world knowledge and commonsense reasoning capabilities - which, in things like advanced OCR systems, are very welcome.
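As a rough PyTorch sketch of the hybrid shape being described, with made-up layer sizes (a toy, not any particular production model): a small CNN backbone turns the image into a grid of feature vectors, which are then fed to a transformer encoder as tokens.

    import torch
    import torch.nn as nn

    class TinyHybridViT(nn.Module):
        """Toy CNN-backbone + transformer-encoder image model (illustrative only)."""
        def __init__(self, d_model=64, nhead=4, num_layers=2, num_classes=10):
            super().__init__()
            # CNN backbone: 3 x 224 x 224 -> d_model x 14 x 14 feature map
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(14),
            )
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            self.head = nn.Linear(d_model, num_classes)

        def forward(self, x):                       # x: (batch, 3, H, W)
            f = self.backbone(x)                    # (batch, d_model, 14, 14)
            tokens = f.flatten(2).transpose(1, 2)   # (batch, 196, d_model)
            enc = self.encoder(tokens)              # attention mixes the patch tokens
            return self.head(enc.mean(dim=1))       # pool tokens -> class logits

    logits = TinyHybridViT()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])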

dekhn

Genomics, protein structure prediction, various forms of small molecule and large molecule drug discovery.

thesz

No neural protein structure prediction paper I've read has compared transformers to SAT solvers.

As if this approach [1] does not exist.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7197060/

CHY872

In computer vision transformers have basically taken over most perception fields. If you look at paperswithcode benchmarks it’s common to find like 10/10 recent winners being transformer based against common CV problems. Note, I’m not talking about VLMs here, just small ViTs with a few million parameters. YOLOs and other CNNs are still hanging around for detection but it’s only a matter of time.

thesz

Can it be that transformer-based solutions come from the well-funded organizations that can spend vast amounts of money on training expensive (O(n^3)) models?

Are there any papers that compare predictive power against compute needed?

Profan

hah well, transformative doesn't necessarily mean positive!

econ

All we get is distraction.

blibble

> but I would say the impact is almost entirely negative.

quite

the transformer innovation was to bring down the cost of producing incorrect, but plausible looking content (slop) in any modality to near zero

not a positive thing for anyone other than spammers

jonas21

Out of curiosity, what field are you in?

warkdarrior

Spam detection and phishing detection are completely different from what they were 5 years ago, as one can no longer rely on typos and grammar mistakes to identify bad content.

walkabout

Spam, scams, propaganda, and astroturfing are easily the largest beneficiaries of LLM automation so far. LLMs are exactly the 100x rocket-boots their boosters keep promising for other areas (where such results haven't shown up outside a few tiny, but sometimes important, niches) when what you're doing is producing throw-away content at enormous scale and you have a high tolerance for mistakes, as long as the volume is high.

onlyrealcuzzo

The signals might be different, but the underlying mechanism is still incredibly efficient, no?

epistasis

> I think the valuable idea is probabilistic graphical models - of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to continue to be a valuable area for research exploration for the foreseeable future.

As somebody who was a biiiiig user of probabilistic graphical models, and felt kind of left behind in this brave new world of stacked nets, I would love for my prior knowledge and experience to become valuable for a broader set of problem domains. However, I don't see it yet. Hope you are right!

cauliflower2718

+1, I am also a big user of PGMs, and also a big user of transformers, and I don't know what the parent comment is talking about, beyond that for e.g. LLMs, sampling the next token can be thought of as sampling from a conditional distribution (of the next token, given previous tokens). However, this connection of using transformers to sample from conditional distributions is about autoregressive generation and training with a next-token prediction loss, not about the transformer architecture itself, which mostly seems to be good because it is expressive and scalable (i.e. can be hardware-optimized).

Source: I am a PhD student, this is kinda my wheelhouse
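To make the "sampling from a conditional distribution" point concrete, here is a small sketch where the model is just a stand-in function producing logits (the names, like logits_fn and toy_logits, are hypothetical; nothing in the loop is transformer-specific):

    import numpy as np

    def sample_sequence(logits_fn, vocab_size, prefix, steps, temperature=1.0, seed=0):
        # Autoregressive generation: repeatedly sample x_t ~ p(x_t | x_<t).
        # The architecture only enters through logits_fn; the loop is the same
        # whether the conditional comes from a transformer, an RNN, or a PGM.
        rng = np.random.default_rng(seed)
        tokens = list(prefix)
        for _ in range(steps):
            logits = logits_fn(tokens) / temperature
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()                    # conditional distribution over next token
            tokens.append(int(rng.choice(vocab_size, p=probs)))
        return tokens

    # Hypothetical stand-in "model": prefers the token after the last one, mod vocab.
    toy_logits = lambda toks: np.eye(16)[(toks[-1] + 1) % 16] * 3.0
    print(sample_sequence(toy_logits, vocab_size=16, prefix=[0], steps=5))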

AaronAPU

I have my own probabilistic hyper-graph model which I have never written down in an article to share. You see people converging on this idea all over if you’re looking for it.

Wish there were more hours in the day.

rbartelme

Yeah I think this is definitely the future. Recently, I too have spent considerable time on probabilistic hyper-graph models in certain domains of science. Maybe it _is_ the next big thing.

hammock

> I think the valuable idea is probabilistic graphical models - of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to continue to be a valuable area

I agree. Causal inference and symbolic reasoning would be SUPER juicy nuts to crack, even more so than what we got from transformers.

cyanydeez

Cancer is also fertile. It's more addiction than revolution, I'm afraid.

pigeons

Not doubting in any way, but what are some fields it transformed?

eli_gottlieb

> probabilistic graphical models- of which transformers is an example

Having done my PhD in probabilistic programming... what?

dekhn

I was talking about things inspired by (for example) hidden markov models. See https://en.wikipedia.org/wiki/Graphical_model

In biology, PGMs were one of the first successful forms of "machine learning": given a large set of examples, train a graphical model's probabilities with EM, and then pass many more examples through the model for classification. The HMM for proteins is pretty straightforward, basically just a probabilistic extension of using dynamic programming to do string alignment.

My perspective- which is a massive simplification- is that sequence models are a form of graphical model, although the graphs tend to be fairly "linear" and the predictions generate sequences (lists) rather than trees or graphs.
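For readers who never touched these biology-flavored PGMs, here is a toy forward-algorithm sketch with made-up transition/emission tables (real profile HMMs for proteins have match/insert/delete states and are trained with EM, e.g. Baum-Welch, rather than written down by hand):

    import numpy as np

    # Toy 2-state HMM: hidden states {0, 1}, observations over a 4-letter alphabet.
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3],
                      [0.2, 0.8]])            # trans[i, j] = P(next state j | state i)
    emit  = np.array([[0.5, 0.2, 0.2, 0.1],
                      [0.1, 0.1, 0.3, 0.5]])  # emit[i, o] = P(observe o | state i)

    def forward_loglik(obs):
        # Forward algorithm: log P(observation sequence) under the HMM.
        # This dynamic-programming core is what EM (Baum-Welch) builds on, and
        # what gets run per example when the trained model is used as a classifier.
        alpha = start * emit[:, obs[0]]
        c = alpha.sum(); log_p = np.log(c); alpha = alpha / c   # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            c = alpha.sum(); log_p += np.log(c); alpha = alpha / c
        return log_p

    print(forward_loglik([0, 0, 2, 3, 3]))   # more negative = less HMM-like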

pishpash

It's got nothing to do with PGMs. However, there is the flavor of describing graph structure by soft edge weights vs. hard/pruned edge connections. It's not that surprising that one does better than the other, and it's a very obvious and classical idea. For a time there were people working on NN structure learning, and this is a natural step. I don't think there is any breakthrough here, other than that computation power caught up to make it feasible.

stephc_int13

The current AI race has created a huge sunk-cost issue: if someone found a radically better architecture it could not only destroy a lot of value but also reset the race.

I am not surprised that everyone is trying to make faster horses instead of combustion engines…

bangaladore

> Now, as CTO and co-founder of Tokyo-based Sakana AI, Jones is explicitly abandoning his own creation. "I personally made a decision in the beginning of this year that I'm going to drastically reduce the amount of time that I spend on transformers," he said. "I'm explicitly now exploring and looking for the next big thing."

So, this is really just BS hype talk - an attempt to get more funding and VCs.

brandall10

Attention is all he needs.

osener

Reminds me of the headline I saw a long time ago: “50 years later, inventor of the pixel says he’s sorry that he made it square.”

LogicFailsMe

Sadly, he probably needs a lot more or he's gonna go all Maslow...

elicash

Why couldn't this be both an attempt to get funding and him wanting to do something new? Certainly if he wanted to do something new he'd want it funded, too.

energy123

It's also how curious scientists operate, they're always itching for something creative and different.

cheschire

Well he got your attention didn't he?

htrp

anyone know what they're trying to sell here?

gwbas1c

The ability to do original, academic research without the pressure to build something marketable.

aydyn

probably AI

IncreasePosts

It would be hype talk if he said "and my next big thing is X."

bangaladore

Well, that's why he needs funding. Hasn't figured out what the next big thing is.

password54321

If it was about money it would probably be easier to double down on something proven to make revenue rather than something that doesn't even exist.

Edit: there is a cult around transformers.

ivape

He sounds a lot like how some people behave when they reach a "top": suddenly that thing seems unworthy. It's one of the reasons you'll see your favorite music artist totally go a different direction on their next album. It's an artistic process almost. There's a core arrogance involved - that you were responsible for the outcome and can easily create another great outcome.

dekhn

Many researchers who invent something new and powerful pivot quickly to something new. That's because they're researchers, and the incentive is to develop new things that subsume the old things. Other researchers will continue to work on improving existing things and finding new applications to existing problems, but they rarely get as much attention as the folks who "discover" something new.

ASalazarMX

Also, not all researchers have the fortune of doing the research they would want to. If he can do it, it would be foolish not to take the opportunity.

moritzwarhier

Why "arrogance"? There are music artists that truly enjoy making music and don't just see their purpose in maximizing financial success and fan service?

There are other considerations that don't revolve around money, but I feel it's arrogant to assume success is the only motivation for musicians.

ivape

Sans money, it's arrogant because we know talent is god-given. You are basically betting again that your natural given trajectory has more leg room for more incredible output. It's not a bad bet at all, but it is a bet. Some talent is so incredible that it takes a while for the ego to accept its limits. Jordan tried to come back at 40 and Einstein fought quantum mechanics unto death. Accepting the limits has nothing to do with mediocrity, and everything to do with humility. You can still have an incredible trajectory beyond belief (which I believe this person has and will have).

dmix

That’s just normal human behaviour to have evolving interests

Arrogance would be if he explicitly chose to abandon it because he thought he was better than it

toxic72

It's also plausible that the research field attracts people who want to explore the cutting edge, and now that transformers are no longer "that"... he wants to find something novel.

ambicapter

Or a core fear, that you'll never do something as good in the same vein as the smash hit you already made, so you strike off in a completely different direction.

Mistletoe

Sometimes it just turns out like Michael Jordan playing baseball.

bigyabai

When you're overpressured to succeed, it makes a lot of sense to switch up your creative process in hopes of getting something new or better.

It doesn't mean that you'll get good results by abandoning prior art, either with LLMs or musicians. But it does signal a sort of personal stress and insecurity, for sure.

ivape

It's a good process (although, many take it to its common conclusion which is self-destruction). It's why the most creative people are able to re-invent themselves. But one must go into everything with both eyes open, and truly humble themselves with the possibility that that may have been the greatest achievement of their life, never to be matched again.

I wonder if he can simply sit back and bask in the glory of being one of the most important people during the infancy of AI. Someone needs to interview this guy, would love to see how he thinks.

Xcelerate

Haha, I like to joke that we were on track for the singularity in 2024, but it stalled because the research time gap between "profitable" and "recursive self-improvement" was just a bit too long, so now we're stranded on the transformer model for the next two decades until every last cent has been extracted from it.

ai-christianson

There's a massive hardware and energy infra build-out going on. None of that is specialized to run only transformers at this point, so wouldn't that create a huge incentive to find newer and better architectures to get the most out of all this hardware and energy infra?

Mehvix

>None of that is specialized to run only transformers at this point

isn't this what [etched](https://www.etched.com/) is doing?

imtringued

Only being able to run transformers is a silly concept, because attention consists of two matrix multiplications, which are the standard operation in feed forward and convolutional layers. Basically, you get transformers for free.
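That structure is easy to see in a few lines: scaled dot-product attention is a matmul, a row-wise softmax, and another matmul (a single head, no masking, shown as a plain numpy sketch here).

    import numpy as np

    def attention(q, k, v):
        # softmax(Q K^T / sqrt(d)) V: matmul, row-wise softmax, matmul.
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)                 # matmul #1
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)            # row-wise softmax
        return w @ v                                  # matmul #2

    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(5, 16)) for _ in range(3))
    print(attention(q, k, v).shape)  # (5, 16)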

Davidzheng

how do you know we're not at recursive self-improvement but the rate is just slower than human-mediated improvement?

nabla9

What "AI" means for most people is the software product they see, but only a part of it is the underlying machine learning model. Each foundation model receives additional training from thousands of humans, often very lowly paid, and then many prompts are used to fine-tune it all. It's 90% product development, not ML research.

If you look at AI research papers, most of them are by people trying to earn a PhD so they can get a high-paying job. They demonstrate an ability to understand the current generation of AI and tweak it; they create content for their CVs.

There is actual research going on, but it's a tiny share of everything, and it does not look impressive because it's not a product or a demo but an experiment.

tippytippytango

It's difficult to do because of how well matched they are to the hardware we have. They were partially designed to solve the mismatch between RNNs and GPUs, and they are way too good at it. If you come up with something truly new, it's quite likely you have to influence hardware makers to help scale your idea. That makes any new idea fundamentally coupled to hardware, and that's the lesson we should be taking from this. Work on the idea as a simultaneous synthesis of hardware and software. But, it also means that fundamental change is measured in decade scales.

I get the impulse to do something new, to be radically different and stand out, especially when everyone is obsessing over it, but we are going to be stuck with transformers for a while.

danielmarkbruce

This is backwards. Algorithms that can be parallelized are inherently superior, independent of the hardware. GPUs were built to take advantage of that superiority and handle all kinds of parallel algorithms well - graphics, scientific simulation, signal processing, some financial calculations, and on and on.

There’s a reason so much engineering effort has gone into speculative execution, pipelining, multicore design etc - parallelism is universally good. Even when “computers” were human calculators, work was divided into independent chunks that could be done simultaneously. The efficiency comes from the math itself, not from the hardware it happens to run on.

RNNs are not parallelizable by nature. Each step depends on the output of the previous one. Transformers removed that sequential bottleneck.
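A sketch of the contrast (numpy, toy sizes, assuming a vanilla tanh RNN cell): the RNN loop has to walk the sequence one step at a time because each hidden state feeds the next, while the attention version covers all positions with a few big matrix products.

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 128, 32
    x = rng.normal(size=(T, d))

    # RNN flavor: each step needs the previous hidden state -> inherently serial.
    W_h, W_x = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
    h, hs = np.zeros(d), []
    for t in range(T):                     # this loop cannot be parallelized across t
        h = np.tanh(h @ W_h + x[t] @ W_x)
        hs.append(h)

    # Attention flavor: every position attends to every other in one shot.
    W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    s = q @ k.T / np.sqrt(d)
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    out = w @ v                            # all T positions computed at once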

vagab0nd

It's pretty common I think. A thing is useful, but not intrinsically interesting (not anymore). So.. "let's move on"? Also, burn out is possible.

janalsncm

I have a feeling there is more research being done on non-transformer based architectures now, not less. The tsunami of money pouring in to make the next chatbot powered CRM doesn’t care about that though, so it might seem to be less.

I would also just fundamentally disagree with the assertion that a new architecture will be the solution. We need better methods to extract more value from the data that already exists. Ilya Sutskever talked about this recently. You shouldn’t need the whole internet to get to a decent baseline. And that new method may or may not use a transformer, I don’t think that is the problem.

marcel-c13

I think you misunderstood the article a bit by saying that the assertion is "that a new architecture will be the solution". That's not the assertion. It's simply a statement about the lack of balance between exploration and exploitation. And the desire to rebalance it. What's wrong with that?

tim333

The assertion, or maybe idea, that a new architecture may be the thing is kind of about building AGI rather than chatbots.

Like, humans think about things and learn, which may require something different from "feed the internet in to pre-train your transformer."

fritzo

It looks like almost every AI researcher and lab who existed pre-2017 is now focused on transformers somehow. I agree the total number of researchers has increased, but I suspect the ratio has moved faster, so there are now fewer total non-transformer researchers.

janalsncm

Well, we also still use wheels despite them being invented thousands of years ago. We have added tons of improvements on top though, just as transformers have. The fact that wheels perform poorly in mud doesn’t mean you throw out the concept of wheels. You add treads to grip the ground better.

If you check the DeepSeek OCR paper it shows text based tokenization may be suboptimal. Also all of the MoE stuff, reasoning, and RLHF. The 2017 paper is pretty primitive compared to what we have now.

alyxya

I think people care too much about trying to innovate a new model architecture. Models are meant to create a compressed representation of their training data. Even if you came up with a more efficient compression, the capabilities of the model wouldn't be any better. What is more relevant is finding more efficient ways of training, like the shift to reinforcement learning these days.

marcel-c13

But isn't the max training efficiency naturally tied to the architecture? Meaning other architectures have a different training efficiency landscape? I've said it somewhere else: it is not about "caring too much about new model architecture" but about having a balance between exploitation and exploration.

alyxya

I didn't really convey my thoughts very well. I think of the actually valuable "more efficient ways of training" as paradigm shifts between things like pretraining for learning raw knowledge, fine-tuning for making a model behave in certain ways, and reinforcement learning for learning from an environment. Those are all agnostic to the model architecture, and while there could be better model architectures that make pretraining 2x faster, that won't make pretraining replace the need for reinforcement learning. There isn't as much value in trying to explore this space compared to finding ways to train a model to be capable of something it wasn't before.

einrealist

I ask myself how much the focus of this industry on transformer models is informed by the ease of computation on GPUs/NPUs, and whether better AI technology is possible but would require much greater computing power on traditional hardware architectures. We depend so much on traditional computation architectures, it might be a real blinder. My brain doesn't need 500 Watts, at least I hope so.

nashashmi

Transformers have sucked up all the attention and money. And AI scientists have been sucked in to the transformer-is-prime industry.

We will spend more time in the space until we see bigger roadblocks.

I really wish energy consumption were a big enough roadblock to force them to keep researching.

tim333

I think it may be a future roadblock quite soon. If you look at all the data centers planned and the speed of it, it's going to be a job getting the energy. xAI hacked it by putting about 20 gas turbines around their data center, which is giving locals health problems from the pollution. I imagine that sort of thing will be cracked down on.

dmix

If there’s a legit long term demand for energy the market will figure it out. I doubt that will be a long term issue. It’s just a short term one because of the gold rush. But innovation doesn’t have to happen overnight. The world doesn’t live or die on a subset of VC funds not 100xing within a certain timeframe

Or it’s possible China just builds the power capabilities faster because they actually build new things

teleforce

>The project, he said, was "very organic, bottom up," born from "talking over lunch or scrawling randomly on the whiteboard in the office."

Many of the breakthrough, game-changing inventions were done this way, with back-of-the-envelope discussions; another popular example is the Ethernet network.

Some good stories of a similar culture at AT&T Bell Labs are well described in Hamming's book [1].

[1] The Art of Doing Science and Engineering, Stripe Press:

https://press.stripe.com/the-art-of-doing-science-and-engine...

CaptainOfCoit

All transformative inventions and innovations seem to come from similar scenarios, like "I was playing around with these things" or "I just met X at lunch and we discussed ...".

I'm wondering how big an impact work from home will really have on humanity in general, when so many of our life-changing discoveries come from the odd chance of two specific people happening to be in the same place at some moment in time.

fipar

What you say is true, but let's not forget that Ken Thompson did the first version of Unix in 3 weeks while his wife had gone to California with their child to visit relatives, so deep focus is important too.

It seems, in those days, people at Bell Labs did get the best of both worlds: being able to have chance encounters with very smart people while also being able to just be gone for weeks to work undistracted.

A dream job that probably didn’t even feel like a job (at least that’s the impression I get from hearing Thompson talk about that time).

DyslexicAtheist

I'd go back to the office in a heartbeat provided it was an actual office. And not an "open-office" layout, where people are forced to try to concentrate with all the noise and people passing behind them constantly.

The agile treadmill (with PMs breathing down our necks) and features getting planned and delivered in 2-week sprints have also reduced our ability to just do something we feel needs getting done. Today you go to work to feed several layers of incompetent managers - there is no room for play, or for creativity. At least in most orgs I know.

I think innovation (or even joy of being at work) needs more than just the office, or people, or a canteen, but an environment that supports it.

entropicdrifter

Personally, I try to under-promise on what I think I can do every sprint specifically so I can spend more time mentoring more junior engineers, brainstorming random ideas, and working on stuff that nobody has called out as something that needs working on yet.

Basically, I set aside as much time as I can to squeeze creativity and real engineering work into the job. Otherwise I'd go crazy from the grind of just cranking out deliverables.

dekhn

We have an open office surrounded by "breakout offices". I simply squat in one of the offices (I take most meetings over video chat), as do most of the other principals. I don't think I could do my job in an office if I couldn't have a room to work in most of the time.

As for agile: I've made it clear to my PMs that I generally plan on a quarterly/half year basis and my work and other people's work adheres to that schedule, not weekly sprints (we stay up to date in a slack channel, no standups)

tagami

Perhaps this is why we see AI devotees congregate in places like SF - increased probability

liuliu

And it has always felt to me that it has lineage from the neural Turing machine line of work as prior. The transformative part was 1. find a good task (machine translation) and a reasonable way to stack (encoder-decoder architecture); 2. run the experiment; 3. ditch the external KV store idea and just use self-projected KV.

Related thread: https://threadreaderapp.com/thread/1864023344435380613.html

atonse

True in creativity too.

According to various stories pieced together, the ideas of 4 of Pixar’s early hits were conceived on or around one lunch.

Bug’s Life, Wall-E, Monsters, Inc

emi2k01

The fourth one is Finding Nemo

bitwize

One of the OG Unix guys (was it Kernighan?) literally specced out UTF-8 on a cocktail napkin.

dekhn

Thompson and Pike: https://en.wikipedia.org/wiki/UTF-8

"""Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. In the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout,[11] and then communicated their success back to X/Open, which accepted it as the specification for FSS-UTF.[9]"""