An AI bubble threatens Silicon Valley, and all of us
March 25, 2025
simonw
> Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity
I'm pretty suspicious of that survey (the one that always gets cited as proof that Copilot makes developers less productive, which then inevitably gets used to argue that all generative AI makes developers less productive): https://resources.uplevelteam.com/gen-ai-for-coding
If I was running a company like https://uplevelteam.com/ that sells developer productivity metrics software, one of the smartest marketing moves I could make would be to put out a report that makes a bold, contrarian claim about a hot topic like AI coding productivity. Guaranteed to get a ton of press coverage from that.
Is the survey itself any good? I filled in the "request a copy" form and it's a two-page infographic! Precious few confirmable details on how they actually ran it: https://static.simonwillison.net/static/2025/uplevel-genai-p...
Here's what they say about their methodology:
> Metrics were evaluated prior to implementation of Copilot from January 9 through April 9, 2023 versus after implementation from January 8 through April 7, 2024. This time period was selected to remove the effects of seasonality.
> Data on Copilot access was provided to Uplevel Data Labs across several enterprise engineering customers for a total of 351 developers in the TEST group (with Copilot access) and 434 in the CONTROL group (without Copilot access). The developers in the CONTROL group were similar to those in the TEST group in terms of role, working days, and PR volume in each period.
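In other words, the headline number is a before/after comparison between two cohorts. A minimal sketch of that difference-in-differences arithmetic (all numbers here are invented, since the infographic doesn't publish the underlying data):

    # Difference-in-differences on a productivity metric (e.g. merged PRs per
    # developer per quarter): the change in the Copilot TEST group minus the
    # change in the CONTROL group. All inputs below are hypothetical.
    def diff_in_diff(test_before, test_after, control_before, control_after):
        test_delta = (test_after - test_before) / test_before
        control_delta = (control_after - control_before) / control_before
        return test_delta - control_delta

    effect = diff_in_diff(test_before=12.0, test_after=13.1,
                          control_before=11.8, control_after=13.0)
    print(f"Copilot effect vs. control: {effect:+.1%}")  # -> -1.0%

Even granting that design, a two-page infographic gives you no way to check how they handled confounders like team composition or codebase changes between the two years.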
strict9
I said it in a confusing way, but I do believe it increases productivity, at least for me.
But it's hazy and hard to measure. I am very rarely stuck on hard problems like I was 3+ years ago. But I lose time in other ways. I've never measured it so it's really just a feeling.
I am also skeptical of a company sponsoring or putting out a report directly related to what they're selling.
rightbyte
> I am very rarely stuck on hard problems like I was 3+ years ago.
I am stuck on easy problems. Either bugs that are obvious afterwards or cooperation problems. Almost never hard problems.
pcthrowaway
If I add in all the time I spend reading about AI now, or wading through AI slop while researching something, any productivity gains I may see from actually using AI effectively are more than cancelled out.
Interestingly, AI tooling existing in its current form may be making people collectively less productive, even if individually it might make one somewhat more productive at very specific tasks.
intended
Additional example -
Had to review submissions to a conference. You had to pry open a thick rind of words, to get seeds of meaning spread all over the place, and then reconstruct the points being made. Wordy, complex and tiring to analyze.
Dumping it into ChatGPT to get answers was an act of frustration, and the output made you more frustrated. It gave you more words, but didn’t help with actual meaning, unless you just gave in and assumed it was accurate.
It’s making the job of verification harder, and the job of content creation easier. This is not to society’s larger benefit, since the more challenging job is verification.
I shudder to think what is happening with teachers and college at this point.
nerdponx
This has been my complaint about AI from the beginning, and it hasn't gotten better. In the time I spend figuring out how to explain to the AI what I need it to do, I can just sit down and figure it out myself. AI, for me, has never been a useful assistant for writing code.
Where AI has really improved my productivity is in acting like a colleague I can talk to. "I'm going to start working on X: how would you approach it? what should I know that isn't obvious from the beginning?" or "I am thinking about using Y approach for X problem but I don't see a lot of literature about it. Is there a reason it's not more common?".
These "chats" only take 10-30 minutes and have already led me to learn a bunch of new things, and helps keep me moving on projects where in the past I'd have spent 2-3x as long working through ideas, searching for literature, and figuring things out.
The combination of "every textbook and journal article in existence" with "more or less understands every topic in its training data" is incredibly powerful, especially for people who either didn't do a lot of formal school training or haven't had time to keep up with new developments.
Beginners can benefit from this kind of interaction too, they'll just be talking about simpler topics (which the bot would do even better with) instead of esoterica.
jbreckmckye
From my own (limited) exploration with them, that's how I use them too. As a way to summarise basics and accelerate my progress with new technologies.
I don't need anything esoteric here, I just need a source for the essentials that isn't SEO spam and doesn't assume I'm an absolute moron.
jbreckmckye
I'm sure I've read somewhere - and it annoys me immensely that I can't recall the source - that SWEs perceive they are more productive with AI, but the measurements say they aren't.
h4ny
Tangentially related, I feel that SWEs who claim they are more productive with AI haven't actually demonstrated, with real examples, how they are more productive.
Nobody I follow (including some prominent bloggers and YouTubers) claiming a productivity increase is recording or detailing any real-world, non-hobby (scalable, maintainable, readable, secure, etc.) workflow of how to do it. It's like everyone who "knows what they are doing" is hiding the secret sauce for a competitive edge, or they are all just mediocre SWEs hyping AI up and lying because it makes them more money.
Even real SWEs in large companies I know can't really seem to tell me how their productivity is increasing, and when you dig deeper it always seems to be well-scoped and well-understood problems (which is great, but doesn't match the level of hype and productivity increase that everyone else is claiming) -- and they still have to be very careful with reviewing (for now).
It's almost like AI makes SWE brains go mush and forget about logic and data.
simonw
I wrote 4,800 words about how I'm using LLMs to help me code here, because I was frustrated at how little detailed information there was on that topic: https://simonwillison.net/2025/Mar/11/using-llms-for-code/
MattSayar
In fairness, it's also still a NEW technology on the timescale of tech. For comparison, it takes years after a gaming console is released for teams to optimize and squeeze every last ounce of performance out of the hardware.
We're just getting started with AI, and we're still "stuck" in the chat interfaces because of the storming success of ChatGPT a few years ago. Cursor, GitHub Copilot etc. are cool but they're still "launch titles" to continue my analogy from above.
New models are still coming out (but slowing down) with increased capabilities, context windows, etc., and I'm sure the killer app is still waiting to be unearthed. In the meantime, I'm having a lot of fun building my hobby code. Collectively, we're going to morph that into something more scalable and enterprisey; it's just a matter of time.
nerdponx
I'm a "data scientist", but I have absolutely improved my productivity in the last year or so by conversing with LLM chatbots to work through tough problems, get ideas, figure out project plans, etc. I can see the effect in my list of completed projects, the overall speed isn't that much higher, but the quality has definitely gone up, because I'm able to work through things more quickly and get to good solutions faster, so I can spend the more time iterating on good ideas and less time trying figure out which ideas are even good.
For programming, meh, it helps when I'm really tired and don't want to read documentation. Can't imagine using it in a serious capacity for writing code except in a huge codebase, where I might want it to explain to me how things fit together in order to make some change or fix a bug.
abalashov
Interestingly, this is the conclusion reached by the major militaries, Axis and Allied alike, at the end of extensive experiments with amphetamines in WWII. They certainly made pilots and soldiers feel more confident, engaged and attentive, but the quality of the output was at best unchanged and at worst markedly inferior.
bobbiechen
I can believe that, based on personal experience with a non-AI tool! A few years back, I wrote a puzzle-solving tool (a semaphore decoder) that felt faster than using a lookup table manually, but was actually very similar in time.
Those notes: https://bobbiechen.com/blog/2020/5/28/the-making-of-semaphor...
Regardless of the speed, it certainly felt easier because I didn't have to think as hard, and maybe that extra freshness would improve productivity for later tasks. I wonder if there's any effect like that for AI coding tools - it makes you happier to be less tired.
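(For the curious: the core of a tool like that is just a lookup table and a loop. A toy sketch; the table covers only a few letters, and the angle convention is illustrative rather than checked against a real semaphore chart.)

    # Toy semaphore decoder. Keys are pairs of flag angles in degrees,
    # measured clockwise from straight down; illustrative subset only.
    SEMAPHORE = {
        (0, 45): "A", (0, 90): "B", (0, 135): "C", (0, 180): "D",
    }

    def decode(flag_pairs):
        return "".join(SEMAPHORE.get(tuple(sorted(p)), "?") for p in flag_pairs)

    print(decode([(45, 0), (0, 90), (0, 180)]))  # -> "ABD"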
bwestergard
This seems right, intuitively, but I'd love to see a source.
I've noticed that when I detect chatbot-style code in a coworker's PR, I often find subtle problems. But it's harder for me to spot the problems in code I got out of a chatbot because I am primed by the very activity of writing the prompt to see what I desire in the output.
bluefirebrand
This is my observation. Not a scientific measurement by any means but from what I can see it isn't speeding anyone up
If I had to guess, people feel more productive because they are doing less of the work they are used to and more review / testing, but to reach the same level of confidence the review takes much longer
And the people who are not doing thorough review are producing absolute garbage, and are basically clueless
spacemadness
I perceive quite the opposite. Rarely do I see it producing workable solutions, and it often just creates noise. What's worse, the mistakes it makes are sometimes nuanced, and not the kind of mistakes a human coder would make, causing me to waste a lot of time finding the mistake. I think it's more useful to get ideas from, or to treat like a trainer when learning a new language, but code generation seems really poor to me still. The only ones I see arguing that it's not the case are junior coders making slop apps that do nothing all that interesting.
tartoran
More productive with AI or more productive in general?
ohgr
It depends how you measure productivity and value. And who is measuring it. And who tells the story.
If the developer writes 6,000 lines of utter dog shit with AI that causes your customers to leave, well.
formerphotoj
Gary is harsh, or even extreme, but the point largely stands: https://garymarcus.substack.com/p/what-if-generative-ai-turn...
Personally I think there will be things that gen-AI is useful for, such as rapid education for beginners and learning or performance feedback mechanisms. Those use cases are promising, and still in development. Hopefully they'll be cost-effective as well.
cmrdporcupine
Deskilling labour in order to do an end-run around conflict in the workplace is as old as capitalism. It's not just about automating to reduce expenses, but also about reducing both bargaining power and the "specialness" of the worker.
Uppity Google employees raising a stink and not doing their job because of CEOs sexually harassing employees? Or passing around petitions about making weapons or unethical AI? Sharing their compensation package #s to improve bargaining position or deal with perceived injustices around salary differentials?
They only get away with that because they have bargaining power that management would dearly like them not to have.
The aggressive pivot to "AI" LLM tools is the desperate move of a managerial class sick of our uppity shit.
bhouston
I do worry about the viability of OpenAI in particular. So much of its talent went to other firms, which then built up amazing capabilities, like Anthropic with Claude. And then they also have the threat of open-source models like DeepSeek v3.1 and soon DeepSeek R2, while at the same time OpenAI is raising its prices to absurd levels. I guess they are trying to be the Apple of the AI world... maybe...
That said, I expect protectionist policies will be enacted by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese.
diggan
> worry about the viability of OpenAI in particular [...] they also have the threat of open-source models
It's a real shame OpenAI didn't succeed with their core and most fundamental mission of being open and improving humanity as a whole, where new FOSS models would have been seen as a success, not as a competitor or threat. We're all worse off because of it.
aiono
> It's a real shame OpenAI didn't succeed with their core and most fundamental mission of being open and improving humanity as a whole
You frame it like they sincerely had this mission at all, which I seriously doubt. Why would anyone who funded them have such an aim?
diggan
Well, I mean, if we take what they themselves said (at the time) as truth, then that was their sincere mission:
> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
https://openai.com/index/introducing-openai/
Of course, they could have been lying at that point in order to gain interest, I suppose, but they were pretty outspoken about that idea, at least in the beginning.
neilv
Some of the parties involved are known for long-cons, shameless backstabbing, and generally ruthless self-interest.
So I wouldn't be surprised if the "open" and "non-profit" branding was just a thin PR veneer from the start.
It would also explain how values consistent with their supposed mission seem to have been given zero consideration in hiring. (Given that, during what looked like a coup, almost all of those hires lined up to effectively say they would discard the values in exchange for more money.)
simonw
US tax law is designed to help with this: the tax status of nonprofits is evaluated based on whether their activity supports their published mission.
Those mission statements (ostensibly) have teeth!
xnx
> OpenAI is raising its prices to absurd levels
When customers don't know how to differentiate on quality, they use price as a signal.
diggan
Although I dislike the pricing, OpenAI does sit on some of the best models/processes. o1 Pro mode is a literal beast even compared to the newer R1 and Sonnet 3.7. I'm not sure I'd call it 10x better (than R1 specifically), but it certainly is better (and slower).
i_love_retros
A literal beast
olalonde
Worried? Their mission is to make sure that AI benefits all of humanity. Surely they must be thrilled that there is a ton of competition undercutting their prices and eating their market share. I bet Sam Altman is popping champagne as we speak.
> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
Source: https://openai.com/about/
y1n0
In some ways it resembles the drug industry: heavy investment in what looks like a promising line of development, only to have it flop, like 4.5 with its marginal improvements.
root_axis
It also seems OpenAI is struggling to scale. I'm a premium subscriber and the site is totally unusable for multiple hours every week. Now that Claude has web search I may switch away from OpenAI permanently.
bhouston
This is also happening to Anthropic. The adoption of AI is accelerating right now, or at least token usage is accelerating, with agentic workflows coming online.
infecto
> OpenAI is raising its prices to absurd levels
When did they raise prices? I don't recall them ever raising prices.
> That said, I expect protectionist policies will be enacted by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese
People love to say this, but it's hard to imagine which American or European businesses would be actively using models that are being run and hosted within mainland China. The risks are too great. Protectionist policies can be entirely ignored.
bhouston
> When did they raise prices? I don't recall them ever raising prices.
I am referring to their new models having absurdly high prices. GPT-4.5 is $75/150 and o1 is $15/60, whereas GPT-4o is $2.5/10.
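Those figures are USD per million input/output tokens. Back of the envelope, for a single hypothetical request of 10k tokens in and 1k tokens out:

    # Per-million-token (input, output) prices in USD, as quoted above.
    PRICES = {"gpt-4.5": (75, 150), "o1": (15, 60), "gpt-4o": (2.5, 10)}

    def request_cost(model, input_tokens, output_tokens):
        inp, out = PRICES[model]
        return (input_tokens * inp + output_tokens * out) / 1_000_000

    for model in PRICES:
        print(f"{model}: ${request_cost(model, 10_000, 1_000):.3f}")
    # gpt-4.5: $0.900, o1: $0.210, gpt-4o: $0.035 -- roughly a 26x spread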
> models that are being run and hosted within mainland China.
The models are open source so one can run them locally.
infecto
> I am referring to their new models having absurdly high prices. GPT-4.5 is $75/150 and o1 is $15/60, whereas GPT-4o is $2.5/10.
That is a new product, not a price increase.
> The models are open source so one can run them locally.
Again, I'm not sure how any policies will stop open-source code from being run. At any rate, those models still don't compare to o1 Pro and the full tool suite.
radicalbyte
> I guess they are trying to be the Apple of the AI world...
They're looking more like Alta Vista.
bionhoward
Anthropic didn’t build shit, it’s got the same closed output rules as ClosedAI and xAI and Perplexity and the Gemini API. Send them your precious questions and code and what do you get back? Output saddled with a prohibition-era policy telling you not to use it to compete with their thing that does everything. That’s such a dumb deal; I immediately think people are dumb when I hear them mention using closed-output AI services. Government protections for explicitly anticompetitive services? What a joke!
bhouston
> Anthropic didn’t build shit, it’s got the same closed output rules as ClosedAI and xAI and Perplexity and the Gemini API
Anthropic was founded by ex-OpenAI employees and they built an effective competitor to OpenAI, as evidenced by their valuation of $60B+ USD. If building a company with a $60B+ valuation is considered shit, well, I guess I want to know what you built that is better.
bpt3
Using a company's valuation as a way to measure the effectiveness of the company or product is not a good idea.
See anything related to NFTs as an example; WeWork immediately springs to mind as another.
Did Adam Neumann build anything noteworthy, or was he charismatic and connected enough to dupe many high net worth individuals and investment firms?
timcobb
> I immediately think people are dumb when ... What a joke!
lukev
It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
But ultimately I think Satya Nadella is right. We can speculate about the potential of these technologies all we want, but they are now here. If they are of value, then they'll start to significantly move the needle on GDP, beyond just the companies focused on creating them.
If they don't, then it's hype.
mjr00
> It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology. Bitcoin increased in value so dramatically and so quickly that it just felt like there had to be something valuable in the underlying technology, and corporations collectively threw billions at it because they wanted to be the ones to exploit it first.
It's very easy to look back and say "well of course it's different this time," but the exuberance for blockchain at the time really was very close to current AI hype levels. It's quite possible that AI becomes a useful tool in some relatively niche scenarios like software development while never delivering on the promise of replacing large swathes of skilled labor.
hatefulmoron
>> It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
> I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology.
Not saying you're doing this, just a random thought: it's funny seeing how much effort is spent trying to predict the trajectory of LLMs by comparing them to the trajectories of other technologies. Someone will say that AI is like VR or blockchain, and then someone else will chime in that we're actually in the early days of something like the internet, etc, etc..
It's like, imagine I wanted to know if my idea was any good. Thinking about the idea itself is hard, instead I'll explain it to you and look at your reaction. Oh no, that's the face of someone who just had NFTs explained to them, I should halt immediately.
Malcolmlisk
Of course it's totally different this time. We are solving a lot of problems right now with LLMs, and pushing forward a lot of stagnant areas, like computer vision.
So, yeah, this time is different. The chatbots are just the tip of the iceberg.
keiferski
I can't say for all possible implementations, but IMO (from industry experience) the content and consumer-focused benefits of AI/LLMs have been very much over-hyped. No one really wants to watch an AI-generated video of their favorite YouTuber, or pay for an AI-written newsletter. There is a ceiling to the direct usefulness of AI in the media industry, and the successful content creators will continue to be personality-driven and not anonymously generic.
Whether that also applies to B2B is a different question.
whiplash451
That might be true, but GenAI for banner optimization is getting huge investment at Big Tech. Any company that's not doing it will likely bite the dust. You may say it's a sad state of affairs, but this will pull hard on the tech you're talking about (albeit with much less romantic outcomes).
keiferski
What do you mean by banner optimization specifically? Ad images and video thumbnails, etc?
In that case, I don't disagree – genAI is still useful for this "side" work like thumbnails, idea generation, and so on. But I don't think it will be used much for content directly, as has been suggested by a ton of big tech companies.
whiplash451
Yes, I mean images/thumbnails/background optimization inside banners/YouTube videos. "Our" ads are about to get a lot nicer.
jbreckmckye
This journalist, Ed Zitron, is very skeptical of AI and his arguments border on polemic. But I find his perspective interesting - essentially, that very few players in the AI space are able to figure out a profitable business model:
formerphotoj
Also, from Sequoia: https://www.sequoiacap.com/article/ais-600b-question/
Sort of surprised this hasn't been taken down, actually.
beezlebroxxxxxx
A big problem is the extent to which future business models disrupt the public perception that AI is "objective" or unbiased, a perception that is often itself the product of these companies' own marketing. When the very results of a prompt to an agent/LLM such as ChatGPT are suspected of serving profit/corporate interests over accuracy, why would anyone use it over the dozen other players in the field? This is precisely the thing that has soured Google search results.
alephnerd
What does Zitron define as "AI" companies though? (Actually curious).
A lot of "AI" investments in public markets were basically a bundle of fabless players (Nvidia), data center adjacent players (Broadcom), and some applications, whereas private/VC investment was bifurcated between foundational model players (OpenAI, Anthropic), application or AIOps oriented players (ScaleAI, Harness, etc), and rebranded hardware vendors (Cerebras, CoreWeave, Groq, Lambda).
If by AI he means foundational model companies, then I absolutely agree that there is a need to be bearish about their valuations, because they can constantly be outcompeted by domain specific or specialized models.
Furthermore, a lot of the recent "AI" IPO hype is around functionally fabless hardware companies like CoreWeave or Cerebras, which were previously riding high because of the earlier boom in semiconductor and chip investments.
I've been fairly open about this opinion at my day job.
jbreckmckye
I think he sees the weakness more in the model/product/IP holding companies than the infra/hardware/commodity compute side
alephnerd
Interesting.
I personally think there is more risk on the commodity compute side, because the projected multiples for a Cerebras or CoreWeave are imo unrealistic in such a margins-driven market, and there was a DC funding boom that has started lagging over the past couple of months, as there appears to be a bit of an overcapacity issue in compute - imo very similar to the mid-2000s with servers and networking players like Dell, EMC, Cisco, etc.
That said, I have a similar opinion about foundational model companies as well.
MrBuddyCasino
Ed Zitron is trying to be a wittier Kara Swisher, which means there is near zero signal in the anti-tech noise. Which is a shame because he has the talent to be something else.
jbreckmckye
Would it not be the opposite? I always had the impression Swisher was quite boosterist, very enthusiastic about the industry. I don't know her work too well though
MrBuddyCasino
I've only ever seen work of hers that was the left-populist, anti-Big-Tech fare that the whole legacy media had picked up in the 2010s, but it might not be representative.
fullshark
The end result of this wave looks increasingly like it will get us an open-web blogspam apocalypse, better search / information retrieval, and better autocomplete for coders. All useful (well, useful to bloggers/spammers at least), though not trillions of dollars in value generated.
Until a new architecture / approach takes root at least.
boringg
"better search / information retrieval" - this is only temporary until people figure out either the SEO equivalent to get into models or AI agents start charging for product placement etc as they will absolutely need to add to their revenue model.
It's the equivalent of all new tech that comes out of the valley: amazing for early adopters, then, as it starts having to move past the user-growth phase and into the monetizing phase, the product loses its lustre and creates space for new competitors/tech following the same paradigm. Rinse and repeat.
alabastervlog
It's looking a lot like it'll mostly be tiny bits of value scattered all over, amounting to quite a bit altogether, but little in the way of acute disruption of... anything. Aside from ruining some things, as you note.
The supposed Jobs quip about Dropbox being a "feature, not a company" comes to mind.
Ekaros
The question to me really is how much value there will be for the big players creating these models. And how much value for smaller players using some open-source model to sell some project, which the customers get something out of, and hopefully even the end users...
Maybe the extraction will be with AWS, Azure and GCP... After all, hosting is the most realistic place to generate costs that must be paid.
mountainriver
The software industry as a whole generates trillions of dollars every year. The current state of AI makes coders significantly more efficient and it’s only getting better.
It’s easily worth trillions with just normal speculation.
jjulius
>The current state of AI makes coders significantly more efficient and it's only getting better.
Do you have any data that supports this assertion and the associated upward trend?
mountainriver
There is a lot of data to support this, most of it showing 20-50% efficiency gains. For a multi-trillion-dollar industry, that equates to potentially over a trillion dollars in efficiency in a single year.
https://github.blog/news-insights/research/research-quantify...
https://www.infoq.com/news/2024/09/copilot-developer-product...
https://www.mckinsey.com/capabilities/mckinsey-digital/our-i...
https://intuitive.cloud/blog/delve-into-the-depths-of-amazon...
https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-...
https://arxiv.org/pdf/2303.17125
This isn't slowing down either; if anything it's accelerating with reasoning models. Which means it's likely under-valued, actually. Let's not forget those numbers are just for software!
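Back of the envelope with those figures (the industry size here is an assumption, and the gain range is the claim from the studies above, not my measurement):

    # Implied annual value if a multi-trillion-dollar software industry
    # becomes 20-50% more efficient. Inputs are assumptions, not data.
    industry_usd = 3e12  # assume roughly $3T/year of software output
    for gain in (0.20, 0.50):
        print(f"{gain:.0%} gain -> ${industry_usd * gain / 1e12:.1f}T/year")
    # 20% gain -> $0.6T/year; 50% gain -> $1.5T/year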
pwndByDeath
Billions to develop something less clever than an intern. Nobody could see that imploding.
biophysboy
I tend to agree with the article, but I do wonder if the operating costs of AI companies will decrease if they incorporate the more efficient methods of R1 and stop building so many fucking data centers.
I also expect AI to incorporate ads at some point, once they exit the dreamy phase that early tech products always go through. I know Sam says he doesn't want to, but they only have so much runway. Eventually they will rationalize their ads as fundamentally different - a consumer assistant, if you will.
bionhoward
Epictetus would hate this trend to outsource mental work
nntwozz
Just an anecdote, my father when he was younger used to be a great driver. He would drive all around Europe on our vacations, in capital cities (e.g. Paris) with nothing but a tourist map (mother holding the map) and his eyes and ears.
Places I wouldn't really like to drive myself today because of the difficulty (Italy etc.).
For many years he's been using GPS to assist him, first TomTom then iPhone and now his new car has CarPlay.
He's a much worse driver today, constantly distracted, constantly looking at the screen instead of his surroundings and actually learning, using his brain.
I myself like to ride MTB in the woods, I know all the trails and small features in the back of my head. I'm aware of my surroundings and it just sticks without any effort over the years.
I have friends I ride with who use a trail map on their phone or bike computer, many years later they still don't know the trails because they're always looking at their devices.
Sometimes we stop to "have a look" where the fork in the trail goes, drives me mad when it interrupts the flow of the ride.
This is what AI feels like to me in many ways.
It's a double-edged sword.
aoeusnth1
I agree, but GPS for knowledge work does sound like something that will pan out to be a big industry, right?
dest
Do you have more details on which aspects of Epictetus' philosophy relate to hating the outsourcing of mental work? I'm not familiar enough with Stoicism. Thanks
(I could ask chatgpt, I ask you instead =p)
PS: Claude 3.5 Haiku take on it: "For Epictetus, the process of thinking is more important than the result. By outsourcing mental work, people surrender their most valuable tool for self-improvement - their own mind."
apples_oranges
Knowledge work. Part of it is the ability to reason and part of it is just knowing many things and certain fundamental principles.
tim333
>Marcus ... bet Anthropic CEO Dario Amodei $100,000 that AGI would not be achieved by the end of 2027.
Has anyone seen any mention of this? I couldn't find it googling.
softwaredoug
I worry about the cultural shift in Tech to "what have you done for me lately" over patient innovation. Due to no more ZIRP, a shift to very top-down management, narcissistic CEO bros, and the new focus on pleasing investors over all else... There's little appetite for actual innovation, which would require IMO a different culture and much more trust between management and employees. So instead, there are top-down AI death marches to "innovate" because that's the current trend.
But who is DEFINING the trend? Who is actually trying to stand out and do something different?
There are glimmers of hope in tiny bootstrapped startups now. That seems to be the sweet spot: not needing to obsess about investor sentiment, and instead focusing on being lean and having a small team with the trust to actually try new things. Though this time with a focus on early profitability, where they can dictate terms to investors, not the other way around.
Havoc
I still don't buy that it is a bubble. Unlike other bubbles like crypto, I see real-life impact and utility.
I do think OpenAI is in deep trouble. They’re ahead but not nearly enough to justify their lofty position.
lizknope
The dot com bubble from around 1995 to 2000 was huge. I started working in 1997 and we thought it would just keep going but then the bubble popped. Thousands of people lost jobs. Stock market dropped way down. But that doesn't mean that the Internet was useless.
I see AI like that. In my view there is absolutely an AI bubble and it does have real world uses but it is way over hyped right now. I say that as someone working on AI chips.
Spartan-S63
I generally agree. I think we're in a hype cycle and there will be a correction. I'm hopeful that people will be more realistic about it being a tool and not a panacea. For OpenAI, though, they have to talk up AGI and Universal Basic Compute because of the capital they've raised and their unrealistic valuation.
Curious to see how open-weight international models eat into OpenAI's first-mover moat. Until hardware is cheaper and more commoditized (which requires crossing the CUDA moat), open weight will be a fun toy, but not a serious production commodity.
strict9
I'm usually skeptical of doomer articles about new technology like this one, but reluctantly find myself agreeing with a lot of it. While AI is a great tool with many possibilities, I don't see the alignment between what many of these new AI startups are selling and what the tools actually deliver.
It makes my work more productive, yes. But it often slows me down too. Knowing when to push back on the response you get is often difficult to get right.
This quote in particular:
>Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity by increasing the volume of content and steps needed to complete a given task, and by frequently introducing errors that have to be checked and corrected.
This sort of mirrors my work as a SWE. It does increase productivity and can reduce lead times for task completion. But it requires a lot of checking and pushback.
There's a large gap between increased productivity in the right field in the right hands vs copying and pasting a solution everywhere so companies don't need workers.
And that's really what most of these AI firms are selling. A solution to automate most workers out of existence.