
Stack Overflow is almost dead


146 comments

May 15, 2025

louison11

My heart goes out to the Stack Overflow community, which has always been very kind and helpful, essentially working for free. As a self-taught developer since the age of 8, I literally grew up learning how to code through SO, asking hundreds of questions and answering many more. So many bugs that would have taken 2-3 days to fix eventually found their answer through it. But now ChatGPT does that in minutes… so it’s for the best!

moribvndvs

The presumption is that things will improve over time, but the big difference in my experience is that the assistance I got from SO _worked_ the vast majority of the time, whereas the various LLMs I have used generate unusable, misleading, or unreliable results pretty regularly, increasingly so as complexity or rarity rises. As human-driven knowledge bases backed by actual experience are replaced by inference from models that rely on such inputs, I am concerned about the medium-to-long-term impact. A lot of people grew frustrated with SO for various reasons and went back to the unhelpful behaviors that SO had resolved at its zenith (rather than dead ends and flame wars on newsgroups and IRC channels, they do it in random subreddits and Discord servers instead). Now what if we circle back after degenerative LLM experiences only to find there’s nothing to circle back to?

myvoiceismypass

Personally, I am getting extremely tired of ChatGPT hallucinating npm packages that don't exist, or package imports that do not exist.
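One cheap sanity check before trusting a suggested import (sketched here in Python; `npm view <pkg>` would be the rough npm-side equivalent) is to ask the interpreter whether the module even resolves, before wiring it into a project:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` resolves to an importable module, without importing it."""
    return importlib.util.find_spec(name) is not None

print(module_available("json"))                     # True: stdlib module
print(module_available("totally_made_up_pkg_xyz"))  # False: a hallucination fails here
```

This only proves the module exists, not that the hallucinated function inside it does, but it catches the most blatant cases quickly.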

datavirtue

Get a good service. Running it on your old gaming machine?

Gigachad

Mixed feelings on SO. It was helpful, but it was also a website you dreaded having to post on, because it was filled with some of the most intolerable people on the internet, whose abuse you simply had to endure if you wanted help.

Now ChatGPT gives you the same help without the abuse.

awesome_dude

The next AI totally needs to be more snarky to make it feel more like we're dealing with actual "thinksperts", people that think they are experts even if their answers are demonstrably wrong.

NBJack

"You are Comic Book Guy from The Simpsons. You are very knowledgeable in the _____ language; in fact, you believe you are the foremost expert on it. You have taken time from your busy schedule to help the unwashed masses by answering the following question..."

kulahan

I think this is the first time I’ve ever heard someone describe the stack overflow community as “kind”. Usually it’s the exact opposite: “I asked a question and just got 30 questions asking why I’m trying to do this” or “my question was closed for seemingly no reason”.

It’s literally the most blunt and aggressive website I’ve ever been on that wasn’t a straight-up troll site like 4chan.

NBJack

Question closed; here's a link to another one that sounds vaguely related but doesn't actually address your problem.

But seriously, I'd love to see some sentiment analysis of the SO corpus classifying tone by tag.
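As a toy sketch of what such an analysis could look like (the word lists here are entirely made up; a real pass would run a trained sentiment model over the actual SO dump):

```python
import re
from collections import defaultdict

# Toy lexicon; a real analysis would use a proper sentiment model.
POSITIVE = {"thanks", "great", "helpful", "welcome"}
NEGATIVE = {"duplicate", "closed", "unclear", "rtfm"}

def score(text):
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def tone_by_tag(posts):
    """posts: iterable of (tag, comment_text) pairs -> mean score per tag."""
    scores = defaultdict(list)
    for tag, text in posts:
        scores[tag].append(score(text))
    return {tag: sum(s) / len(s) for tag, s in scores.items()}

sample = [
    ("python", "Thanks, very helpful answer!"),
    ("c++", "Closed as duplicate of an unclear question."),
]
print(tone_by_tag(sample))  # {'python': 2.0, 'c++': -3.0}
```

Scaled up over the real comment corpus, the per-tag averages would be exactly the "tone by tag" comparison asked for above.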

zahlman

"Having your problem addressed" is not a valid reason to post on Stack Overflow. You are expected before posting to have done enough analysis to the point where if your question is answered, you can solve the underlying problem yourself. When you are linked to a duplicate, it's because the person doing so believes in good faith that, to the extent that you have a question that meets the site's standards, answers to the other question will answer yours as well. This also means you are responsible for overlooking irrelevant details, reading the answers, making your own attempts to apply them, etc.

If the other question is actually different, you are expected to edit the question to make this clear - not by adding an "Edit:" section like in a forum post, but by fixing the wording such that it's directly clear what you're looking for and how it's different. This might mean fixing your specification of input or desired output.

It's difficult sometimes, and curators do make mistakes. Most frustratingly, it's entirely possible for two completely different problems to be reasonably described with all the same keywords. I personally had a hell of a time disentangling https://stackoverflow.com/questions/9764298 from https://stackoverflow.com/questions/18016827, while also explaining that https://stackoverflow.com/questions/6618515 really is the same as the first problem despite different phrasing.

But curators much more often get it right. Not only that, a few of us go out of our way to create artificial Q&A (https://meta.stackoverflow.com/questions/426205) for beginner issues that beginners never know how to explain, and put immense effort into both the question and answer. Some popular examples in the Python tag:

"I'm getting an IndentationError (or a TabError). How do I fix it?" (https://stackoverflow.com/questions/45621722) was written to replace "IndentationError: unindent does not match any outer indentation level, although the indentation looks correct" (https://stackoverflow.com/questions/492387) and a few others, with reasoning stated there.

"Asking the user for input until they give a valid response" (https://stackoverflow.com/questions/23294658)

"Why does "a == x or y or z" always evaluate to True? How can I compare "a" to all of those?" (https://stackoverflow.com/questions/20002503) was written largely as an alternative to the organic "How to test multiple variables for equality against a single value?" (https://stackoverflow.com/questions/15112125) after the latter was found not to help beginners very well (the original example was quite unclear, although it's since been improved).

mixmastamyk

Yep, I believe it's a direct result of Atwood's iron-fisted no-bullshit policy. To some extent that is great... we don't want it turning into Yahoo Answers, do we? I think folks forget about that part.

But, as you mention, they just went too damn far with the medicine.

No, you can't fix this misspelling, isn't there something else (with more characters) that you can improve as well? WTF, for realz? :-/

zahlman

>No, you can't fix this misspelling, isn't there something else (with more characters) that you can improve as well? WTF, for realz? :-/

I agree this complaint is legitimate. The problem is that the system expects unprivileged users to have their edits reviewed by three privileged users in a queue (so that people actually pay attention and vandalism doesn't just go unnoticed for months), so this is meant to limit the drain on that resource.

You may be interested in my answer to "Reviewer overboard! Or a request to improve the onboarding guidance for new reviewers in the suggested edits queue" on the meta site (https://meta.stackoverflow.com/a/420357/523612).

bitbasher

Are you talking about Stack Overflow? Every time I asked a detailed question, it was closed within minutes.

I'm not surprised it's on the way out.

bryanlarsen

I've asked dozens of questions on SO, and never had a single one closed. I hear your sentiment often, but have no idea whether my experience or yours is more common.

I've had 3 deleted by Community bot as abandoned, but since they were over a year old when that happened, I couldn't care less.

aquafox

Until there is a radically new version of {popular programming language} with breaking changes and no new and correct answers to train on.

1123581321

These models can figure out syntax and language features they haven’t seen before. Try it with a few code snippets of your own made-up language. It’s a little freaky.

zahlman

They can implicitly assume that your made-up language is designed to be intuitive to people who already read existing languages, and thus apply their existing understanding of "code" to it, sure.

TremendousJudge

> But now ChatGPT does that in minutes

But it's trained on stackoverflow data? What happens in a few years when the data gets more and more outdated? Where will it get its knowledge then?

staircasebug

They're learning from working code in GitHub, IDE "co-pilots"...

TremendousJudge

But a priori you don't know whether the code you find on GitHub is "good", plus it doesn't come with a handy explanation. The quality of the data is much, much worse.

MarcelOlsz

It will steal our own data, and we'll have a big "oopsie! didn't mean to!" moment 5-10 years later.

TremendousJudge

My point is that there won't even be any data to steal! The novel human-written and human-rated answers just won't exist anymore. Where will it get its answers on C++26 features from? Not from a non-existent Stack Overflow, that's for sure.

kulahan

Why does any LLM need new information to do fundamentally the same thing?

And what makes the data outdated? New code? It can train on that. That, or there is simply nothing new to learn, just new ways to express the same thing.

swat535

> Why does any LLM need new information to do fundamentally the same thing?

What makes you think we will be doing fundamentally the same thing in the future? Languages grow and change, systems change, operating systems change, hardware and specs change.

Nothing in computing is ever static.

awesome_dude

Closing this as it's a statement, not a question

threatofrain

I just hope that we can continue to find sources of high quality training data like SO. If people don't publish their mutual learnings somewhere then there's no data to train on.

irrational

> 2014: questions started to decline, which was also when Stack Overflow significantly improved moderator efficiency. From then, questions were closed faster, many more were closed, and “low quality” questions were removed more efficiently. This tallies with my memory of feeling that site moderators had gone on a power trip by closing legitimate questions. I stopped asking questions around this time because the site felt unwelcome.

I also felt around that time that it became unwelcoming. I didn’t realize they had revamped the moderator tools. That is the time period when I stopped using it too. Now I know why.

How many other websites have also shot themselves in the foot by tweaking things?

Buttons840

For a time the "let's interact with people and talk about cool things" group and the "let's build the ultimate knowledge base" group had their incentives aligned.

Then, with better moderator tools, the "ultimate knowledge base" group set out to achieve the ultimate knowledge base by reducing the amount of people who were just there to talk.

mnky9800n

Yes I felt the same around then. It seemed like stack overflow was sending the message to not ask questions anymore. It was really weird.

D13Fd

Same. I've never been a huge StackOverflow user, but it is so irritating to search and find your exact question on StackOverflow, often as the top result, only to see that it was instantly shut down two years ago as a duplicate of some other question in another context with inapplicable and useless answers.

It is frustrating not only because you can't get instant help, but also because it shows the futility of even trying to post on there.

Funes-

How ironic. "AI" feeds off structured knowledge, artistic creations and otherwise any human production to generate its output. As a consequence of its widespread adoption, people start to lean even more towards consuming rather than producing, a tendency which was already increasing before the advent of LLMs and modern machine-learning. This, in turn, leaves "AI" implementations with no new human content to feed off of. Now what? The whole process folds onto itself. Are we entering the dark ages of cultural (in the widest sense of the word) production? Not that I don't think that we're already there, in any case, but for other, somewhat related causes...

bloppe

Perhaps the next step is having the LLMs ask questions on SO when they routinely fumble particular topics. I could see a system of knowledge bounties where people are compensated for providing accurate, in-depth training data on niche topics.

zahlman

LLM content is banned everywhere on Stack Overflow, in both questions and answers, by policy, since mere days after the public announcement of ChatGPT (because it was immediately causing a huge problem): https://meta.stackoverflow.com/questions/421831

Moderators (actual elected moderators, the two dozen or so that exist for ~29 million user accounts and ~24 million non-deleted questions) went on strike in mid 2023, largely because the site staff/owners interfered with their ability to remove such content (an overwhelmingly popular policy with strong community consensus): https://meta.stackoverflow.com/questions/425000 and this decision propagated across the Stack Exchange network (as most SE sites had adopted similar policies): https://meta.stackexchange.com/questions/389811/

A large fraction of the userbase is explicitly opposed to helping LLMs out in any way whatsoever. I personally have ceased contributing new question or answer content, and only edit existing posts. I contribute new content on Codidact (https://software.codidact.com/) instead (disclosure: I have recently become a moderator there).

trial3

you’re one or two additional sentences away from the plot to The Matrix

dinfinity

> This, in turn, leaves "AI" implementations with no new human content to feed off of. Now what?

You seem to be under the impression that AI needs more than all recorded human knowledge up until 2024 to reach the same level as an average SO contributor. It doesn't. Because none of the average SO contributors did.

It is unclear what algorithmic improvements are required to leverage the available data to get AI to AGI, but a lack of data is definitely not the bottleneck.

One could say that these AI systems aren't sharing their solutions (or questions) with other AI systems and that the world would benefit from it if they did, though. Perhaps it's a good idea to have some shared space for AI systems where they share the validated solutions they synthesized.

dragonwriter

> You seem to be under the impression that AI needs more than all recorded human knowledge up until 2024 to reach the same level as an average SO contributor.

Replacing the average SO contributor isn't adequate to replace SO, and AI has been able to “replace” SO effectively only since major models have gotten not only SO-as-training-data but web search (including SO) for immediate grounding.

And without SO, or something like it with active human contributions, it'll have even more trouble replacing the value SO would provide for new questions and new domains, where it will have neither SO training data nor SO query-time search results to use to synthesize answers.

palata

I find it interesting that the current StackOverflow moderators tend to say "in the past we used to accept too many questions but it was never the goal, so now we are doing it as it was meant to be".

Sure, but in the past, StackOverflow was growing, and now it's dying. Maybe something was better before, when "it was not done correctly"?

bawolff

I think these sorts of things are just an unfortunate side effect of scaling. The bigger you get the more people get lost in the bureaucracy. However if you don't build up the bureaucracy the system collapses under its own popularity.

Wikipedia has a similar issue: editing declined around 2007, which is often blamed on stricter enforcement of rules, more complex rules, etc. I think it's just a natural stage of growth. You can't be a free-for-all forever.

Karrot_Kream

The "good" thing is, they're back to 2009 levels of postings. Now obviously that's what the mods let through but my guess is that traffic to the site is down precipitously as well. They can roll back their bureaucracy and head back to a lean path that worked for them in the past.

But I don't really think that's the problem. Reading zahlman's responses in this thread makes me think that the mods fell into the age-old trap that has existed since Usenet and IRC, and still happens to this day wherever there are mods: they got tired of doing unpaid labor and, instead of deciding to quit, decided to become meaner and stricter. The age-old mod trip.

motorest

> Sure, but in the past, StackOverflow was growing, and now it's dying. Maybe something was better before, when "it was not done correctly"?

You're presuming that the current volume of questions represents novel, unique posts instead of things you could find over and over again with a decent query.

goobie

AFAICR they've always said these lines: that it's now about better moderation of the slop. The reality is that the rule of thumb behind that moderation was already out of date with advances that preceded LLMs. Even with the beginnings of computer-aided flows, we didn't need to alienate most people in order to get the best content and develop the few. Content can be triaged from someone who may be human to others who may be human; maybe there's value there, and maybe you just didn't alienate anyone, and some people will still climb to producing higher levels of content worth condensing.

zahlman

> Even with the beginnings of computer aided flows we didn't need to alienate most to get the best content and develop the few.

The large majority of new questions from new accounts are from people who are clearly there only to solve a personal problem, who show no interest in considering the value of their question to third parties, and rarely put any effort into attempting to even diagnose or specify a problem.

Even after it became possible for most of these people to get an instant answer from an LLM. Which is actively preferable from the standpoint of Stack Overflow curators. Before LLMs, the point was for them to use a search engine to find an existing question that lets them figure out the problem. But for the Q&A to help such users, they need to apply at least basic problem-solving and debugging skills. (It is explicitly out of scope for the Stack Overflow community to do that for others; and attempting to do this in an answer actively degrades the site for everyone else.) If an LLM can fill in some hypotheses for those users to test, then the LLM is doing what it's best at, and Stack Overflow is doing what it's best at.

Stack Overflow is not there to troubleshoot or debug anything for you, nor to reason about a multi-step problem and break it down into its natural logical steps. It's there to give a direct, objective answer to how to do each individual step, and to explain why the specific point of failure in a failing program fails, after you have identified it and made the problem reproducible.

So yes, we absolutely do need to "alienate most", because "most" are there for a reason that has absolutely nothing to do with getting the best content.

palata

> So yes, we absolutely do need to "alienate most", because "most" are there for a reason that has absolutely nothing to do with getting the best content.

How many of the "desirable" contributors did you alienate in the process?

I may be naive, but when people say "I have been using SO for 10 years but it has become toxic so I left", it doesn't sound like new accounts asking for their homework to be done.

zahlman

>the current StackOverflow moderators

Overwhelmingly, the people you're talking about are not moderators. I explained this to someone else a week ago (https://news.ycombinator.com/item?id=43927665) and you replied to that comment.

> Sure, but in the past, StackOverflow was growing

So what? Stack Overflow users get $0.00 for this, whether they're moderators, active curators or just signed up. For users, growing the site isn't the goal. Growing interaction with the site is not the goal. The goal is building a useful artifact (https://meta.stackoverflow.com/questions/254770). This frequently entails removing questions, closing them, or marking them as duplicates, for the same reason that building a useful program frequently entails removing lines of code, deprecating parts of the API, and refactoring.

> and now it's dying

Why should a reduction in incoming questions mean that it's "dying"?

> Maybe something was better before

Who do you think should get to decide what's "better" here? More importantly, why?

If the YC team decided to prioritize increasing site traffic on HN (and introduced ads to capitalize on that) and maximizing the rate of new submissions, at the expense of the quality of the discussion, that would clearly be bad, right? You'd leave, right? I would.

The same principle applies to sites that aren't about having a discussion. Bigger is not better.

palata

> Overwhelmingly, the people you're talking about are not moderators.

I was actually thinking about you. You keep saying everything is great. My observation is that I used to be on SO every day, and I completely stopped contributing even though I would have plenty of stuff to add (more than ever, actually).

> Why should a reduction in incoming questions mean that it's "dying"?

There is "a reduction", and there is "being back to the amount of questions SO had in 2009 when it launched".

zahlman

>You keep saying everything is great.

I say it's fine, because it is. I say that a reduction in question volume has advantages in terms of accomplishing the site's goals, because it does.

There are many things about the site that I'm unhappy with, mainly to do with initiatives the staff are taking that are also very much not true to the site's goals or purpose.

> My observation is that I used to be on SO every day, and I completely stopped contributing even though I would have plenty of stuff to add

... And?

> There is "a reduction", and there is "being back to the amount of questions SO had in 2009 when it launched".

If the amount of questions went to zero per day I would still not consider this a problem. It would be an opportunity to refine the existing publicly visible questions.

As a reminder: there are already more than three times as many of those as there are articles on Wikipedia. You say it's a problem that we don't see thousands more per day like we used to. I say it's a problem that we already have so many; and that if we had perhaps a tenth as many, it would become easier to find what you want.

billy99k

The timing on the sale was genius. Similar to Mark Cuban with Broadcast.com. I guess it's best to sell something before the value plummets to 0.

As far as its demise? AI ate its lunch. I used to use Stack Overflow all the time and haven't even visited the site in a couple of years.

zahlman

The new owners have been trying very hard with the "if you can't beat 'em, join 'em" approach. They know this is radically against community consensus (it's been shown to them on the meta site over and over) - so they just get sneakier about it.

Notably, after getting completely humiliated with https://meta.stackoverflow.com/questions/425081 in June 2023 (right after a moderator strike had just started, protesting the staff trying to prevent them from removing AI content from the site), and getting embarrassing feedback on the feature (https://meta.stackoverflow.com/questions/425162), they came back last November with https://meta.stackoverflow.com/questions/432154 and have been forcing it through.

jsheard

Where's the AI of tomorrow going to learn from if nobody is posting Q&As online anymore?

bawolff

From the official docs? AI is good at summarizing, after all.

Buttons840

I've been trying to get ChatGPT to write some Emacs Lisp for me and it sucks. Few things are better documented than Emacs. There's several hundreds of pages of documentation, and several million lines of Elisp, but apparently that's not enough.

And I'm not asking for some beautiful architecture from ChatGPT, I'm just asking for simple hacks that get the job done. Elisp is designed to make simple hacks easy, but not easy enough for ChatGPT I guess.

Like, I asked it to make a command which would move the "mark" and the "point" so that the full line was selected. If a selection covered only part of a line, I wanted the selection to expand to cover the full line. To do this, all you have to do is move the mark and the point so that they surround a line. ChatGPT couldn't do it. It would only move the point, never touching the mark. I explicitly told ChatGPT "no, you have to set both the point and the mark correctly", and then it wrote even more code that only adjusts the point: it would move the point to the beginning of the line, then move the point to the end of the line, never touching the mark. It's stupid.

tedunangst

Good luck with that. The last thing I used SO for was getting answers for SwiftUI and I can assure you the official docs did not contain the needed information.

stego-tech

That’s a tomorrow problem.

(As someone who is all too often hired tomorrow, at a fraction of the before rates, to clean up this mess)

chickenzzzzu

Information will be repackaged like credit default swaps in the mid-2000s.

nlawalker

Other people's code on GitHub.

jsheard

The other people's code, which will have been AI-generated by older, dumber models in many if not most cases? Possibly even written and committed by AI agents with no human review at all? European royalty tried this kind of thing and it didn't end well.

mosdl

AI just stole all its content; I wonder if they will choose to sue.

wanderingstan

Stack Overflow never owned the content; it is and was Creative Commons: https://stackoverflow.com/help/licensing

zahlman

That license is granted to the community, but per the linked Terms of Service, the company gets a slightly different license (https://stackoverflow.com/legal/terms-of-service/public#lice... ; scroll to "Subscriber Content"). Emphasis on:

> ... and you grant Stack Overflow the perpetual and irrevocable right and license to ...

amanaplanacanal

That license comes with obligations that the AI companies aren't following.

panstromek

It doesn't feel like AI is the main driver. Many things changed over time: dev tools got better, editors got smarter, compilers got better error messages, various primary resources improved, and tutorial websites, courses, and YouTube boomed.

Another point, of course, is that each new question is more and more likely to have already been answered. At some point the site pretty much covers most of what there is to answer.

jayflux

One aspect I haven’t seen anyone mention contributing to the decline is GitHub (part of your “improved tooling”)

These days you can go to the repo and there’s usually already an issue open with the problem and a workaround. Or if someone has a question on how to use the tool/software they ask there.

Before GH boomed it was often SO doing this job.

makeitdouble

It could be the most impactful factor, IMHO.

This, and first-party developer forums. iOS questions go directly to Apple's community forums. Same for Salesforce, Elasticsearch, etc.

There's just a better signal-to-noise ratio, a real chance to get answers from experts, and it makes for a stepping stone if the issue needs to be bumped to paid support.

zahlman

There was a noticeable inflection in the question-rate-vs-time curve around the time that ChatGPT was released.

Which is fine. If your question is not answered by `site:stackoverflow.com how to do the thing` but it is answered by an LLM taking `how do I do the thing?` as a prompt and synthesizing existing Stack Overflow content, then it is inherently not a suitable Stack Overflow question, because anyone else could put `how do I do the thing?` into the same LLM. It's not any different from using a traditional search engine.

(And when the LLM fails by producing a wrong synthesis, then blessing that result by putting it on Stack Overflow is actively harmful - which is why it's banned by policy.)

imglorp

> each new question is more and more likely to be already answered

Yeah, except for when there should be current answers. Most of computing is in constant flux. There's a mountain of 10+ year old answers that simply don't apply any more.

ahofmann

I answered one question 13 years ago that I still get points for. Computing as a whole isn't in constant flux; it is only JavaScript that changes so much. Hell, I learned to use Apache 30 years ago, and I haven't needed to learn anything new in the last 25 of those years.

zahlman

The change to Python 3 effectively either invalidated or deprecated tons of Python questions, or else required new answers to be written. In many cases we ended up with an annoying pair of popular questions to capture major 2->3 differences, because you'd get clueless users who thought they were running a 2.x interpreter but were actually running a 3.x interpreter, and also the other way around.
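Integer division is a concrete example of the kind of silent 2-to-3 behavior change that produced those paired questions:

```python
# Under Python 2, 1 / 2 evaluated to 0 (floor division on ints).
# Under Python 3, / is true division and // is floor division, so the
# "same" expression gives a different answer depending on the interpreter
# the asker is unknowingly running.
print(1 / 2)   # 0.5 on Python 3
print(1 // 2)  # 0 (explicit floor division, same result on both)
```

A question titled "why does my division return 0?" thus needs one answer for each interpreter, which is exactly the annoying duplication described above.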

nthingtohide

> Another point of course is that each new question is more and more likely to be already answered. At some point the site pretty much covers most of what is to be answered.

AI has the ability to combine answers from multiple sources and tailor them to your exact prompt details. That is what we'd call the glove fitting the hand. Plus, it can explain its answers.

ashirviskas

Yes, it can combine multiple sources and make up an answer that makes no sense. Even if it explains how "it works", that does not help when the API or function it cites has never existed.

oliwarner

Woof. Looking at a single metric and extrapolating "LLMs killed the radio star".

Stack Exchange sites are designed to nuke duplicates and to help people find answers before they post a new question. It seems a natural conclusion that the number of original questions decreases over time.

I won't deny that some people now live their lives inside an LLM, but many of us still use search engines and SO.

bfung

Where will training data come from for new tech & programming languages if SO dies?

tevon

All the same places it comes from for human programmers before a language has many answers on SO.

- Documentation
- Open source projects using it
- GitHub issues
- Source code
- Blogs
- YouTube videos

The list goes on

simonsarris

I used to answer questions a lot; by around 2013 I had answered maybe ~12% of all HTML canvas questions ever asked. To me it declined a lot sooner, and 2014 really does feel like the right inflection point.

There was a belief, sometimes unstated but often explicit, that no more (serious) discussion was really to be had, along with a further wondering of how one could stop people from asking. It became difficult to discuss anything if something even vaguely related had been asked before. It was not possible to discuss something you knew the answer to but did not know why, or to hear arguments for which of 5 ways might be best. All (to me) very worthwhile technical discussions. Totally shut down.

h4kunamata

I have mixed feeling about this.

It has helped me in the past, but I could not reply or post anything back to help others when I knew the solution, because of the way it works.

To make matters worse, while working in IT I worked with a guy who didn't know anything: if there was no SO post about the problem, he couldn't fix it.

I have been using Perplexity AI and it has been awesome; it provides all the sources it used, making it easy to cross-check the answers. It has helped speed up my Python learning curve. I am not using a search engine anymore, and SO has the problems mentioned above, so I have zero interest in using it.

Also, the website layout is a mess; I have to use uBlock Origin with a ton of element-picker filters to stop half of its cruft from loading.

bloppe

This will have interesting implications for the LLMs as well, since SO is a wealth of training data. In my experience, LLMs are pretty useless when it comes to helping me with newer, faster-evolving, experimental tools and libraries, which is not surprising. But if the SO community really atrophies to the point that a lot fewer people bother to answer questions, there won't be another centralized resource for answers. Perhaps that just means balkanized communities like random Slack channels will fill the gap, but those aren't search-indexed, and I'd bet getting them all into training corpora won't be as easy either.

Maybe the future involves LLMs asking questions on something like SO when it routinely fumbles a particular topic. People could get paid to answer them and provide more training data. Who knows at this point

kurtis_reed

A sociological case study. Legit founders, a fruitful niche, immense value. Growth, politics, corporatization. They did so many things right, then so many things wrong.

If it were up to me, moderation would have been overhauled. But it wasn't up to me.