Microsoft Cancels Leases for AI Data Centers, Analyst Says
112 comments
· February 24, 2025
toomuchtodo
Microsoft CEO Admits That AI Is Generating Basically No Value - https://futurism.com/microsoft-ceo-ai-generating-no-value
Bit of a clickbait title, but it certainly seems like the realization is setting in that the hype has exceeded realistic near-term expectations, and some are walking back claims (for whatever reason: honesty, derisking against investor securities suits, poor capital allocation, etc.).
Nadella appears to be the adult in the room, which is somewhat refreshing considering the broad overexuberance.
mrtksn
IMHO anyone who started using AI seriously:
1) Wouldn't want to go back
2) Wouldn't believe that it's about to replace human intellectual work
In other words, AI got advanced enough to do amazing things, but not $500B- or trillion-dollar-level amazing, and the people with the money are not convinced that it will be anytime soon.
rsynnott
Lining up for whatever the next thing is. "Look, we know we said AR/VR was the next big thing in the late 2010s and LLMs were the next big thing in the early 2020s, but quantum is the next big thing now. For real, this time!"
(Not entirely sure what the next fad will be, but some sort of quantum computing thing doesn't feel unlikely. Lot of noise in that direction lately.)
sigmoid10
Curiously, all three of these (VR/AI/QC) are limited by hardware. But AI is the only one that has seen meaningful progress from just throwing more contemporary hardware at it. Sure, future hardware might bring advancements to all of them. But if you're making an investment plan for the next quarter, the choice is pretty obvious. This is why AI rules the venture capital sector instead of fusion or other long-term stuff.
unsupp0rted
Nadella is looking for the world to grow at 10% due to AI enhancement, like it did during the industrial revolution.
That seems like a low bar, because it already is; it's just not equally distributed yet.
My own productivity has grown far more than 10% thanks to AI, and I don't just mean in terms of dev. It reads my bloodwork results, speeds up my ability to repair a leak in my toilet tank, writes a concise "no I won't lend you money; I barely know you" message... you name it.
Normally all of those things would take much longer and I'd get worse results on my own.
If that's what I can do at the personal level, then surely 10% is an easily-achievable improvement at the enterprise level.
geuis
All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
For the last year, I've tried all sorts of models both as hosted services and running locally with llama.cpp or ollama. I've used both the continue.dev vscode extension and cursor more recently.
The results have been frustrating at best. The user interface of the tools is just awful. The output of any model, from DeepSeek to Qwen to Claude to whatever else, is mediocre to useless. I literally highlight some code that includes comments about what I need, and I even include long explicit descriptions in the prompts, and it's just unrelated garbage out every time.
The most useful thing has just been ChatGPT when there's something I need to learn about. Rubber ducking basically. It's alright at very simple coding questions or asking about obscure database questions I might have, but beyond that it's useless. Gotta keep the context window short, or it starts going off the rails every single time.
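For what it's worth, the "keep the context window short" trick can be mechanized. Here's a minimal sketch assuming the OpenAI Python client; the `ask` helper and the `MAX_TURNS` cap are illustrative, not anything these tools actually ship:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MAX_TURNS = 6      # arbitrary cap on prior messages sent with each request

    history = []  # full log of {"role": ..., "content": ...} messages

    def ask(question: str) -> str:
        history.append({"role": "user", "content": question})
        # Send only the last few turns instead of the whole log, so a long
        # session doesn't drag the model off the rails.
        trimmed = history[-MAX_TURNS:]
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "system", "content": "Be concise."}] + trimmed,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer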
strangescript
I think it's about scope and expectations. I have had some form of AI code completer in my neovim config for 3 years. It works flawlessly and saves me tons of keystrokes. Sure, sometimes it suggests the incorrect completion, but I just ignore it and keep coding as if it didn't exist. I am talking about line-by-line, not entire code blocks, but even those it does well at times.
From what I have seen, the people that have the most success have AI building something from scratch using well-known tooling (read: old tooling).
The problem is that doesn't immediately help most people. We are all stuck in crap jobs with massive, crusty codebases. It's hard for AI because it's hard for everyone.
alexvitkov
If LLM chatbots are making you vastly more productive in a field, you are in the bottom 20% of that field.
They're still useful tools for exploring new disciplines, but if you're, say, a programmer and you think ChatGPT or DeepSeek is good at programming, that's a good sign you need to start improving.
infecto
Just in the past couple of months there have been a number of "I am a senior/principal engineer and this is how I use LLMs" posts. I would agree that the tools are not optimal yet, but every iteration has improved for me.
Maybe whatever language you are coding in or whatever project you are working on is not a good fit? It is an equally perplexing situation for me when I hear anecdotes like yours which don't align with my experience. The fact that you say everything is garbage calls into question either how you are using the tool or something else.
I can reliably use Cursor's composer to reference a couple of files, give a bullet list of what we are trying to do, and point it to one of the better models, and the output is junior-engineer level or better. When I say junior, I mean a junior who has experience with the codebase.
ta1243
> All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
It's very difficult to measure the productivity of most people, certainly most people in office jobs. So while you can have a gut feeling that you're doing better, it's no more measurable than individual productivity was pre-AI.
andy24
I have a similar experience. I tried to use it for real work and got frustrated by the chat's inability to say "I don't know". It's okay for code snippets demonstrating how something can be used (Stack Overflow, essentially), and code reviews can be helpful if you're doing something for the first time. But they fail to answer the questions I'm interested in, like "what's the purpose of X".
thagsF
And Henry Ford would reply: "Who is going to buy the cars?"
We have been living in a fake economy for quite some time where money is printed and distributed to the "tech" sector. Which isn't really "tech", but mostly entertainment (YouTube, Netflix, Facebook, ...).
Growth of the economy means nothing. The money that has been printed goes to shareholders. What the common man gets is inflation and job losses.
If you want to grow the real economy, build houses and reduce the cost of living.
toomuchtodo
You should be curious why Nadella is looking for the world to grow at that rate. It's because he wants Microsoft to grow to $500B/year in revenue by 2030, and without that economic growth it will be challenging to grow into that target. You can grow into a TAM, try to grow or broaden the TAM, or some combination of both. Without AI, it is unlikely the growth target can be met.
https://www.cnbc.com/2023/06/26/microsoft-ceo-nadella-said-r...
jononor
What is your personal productivity metric by which you have more than a 10% increase? More money earned, less money spent, fewer working hours for the same income, more leisure time? It needs to be something in aggregate to mean something related to what Nadella meant. There are many individual tasks which LLM systems can help with. But there are also many ways for those gains to fail to aggregate into large overall gains, both at the personal level and at the corporate and economy-wide levels.
suraci
I think the 'grow at 10%' refers to the incremental part of the entire world/market.
During the industrial revolutions (steam/electricity/internet), the world was growing: there were trains, cars, Netflix.
Business grew alongside productivity; even so, we lived through two world wars and dozens of economic crises.
But now it's very different: when you repair the tank with an LLM's help, the labour value of repairers is decreased, and no additional value is produced.
There's a very simple thought experiment about the result of productivity growing alone:
Let's assume robotics reaches an extremely high level, and everything humans work on can be reduced to 1/100 with the help of robots. What will happen next?
ta1243
> Let's assume robotics reaches an extremely high level, and everything humans work on can be reduced to 1/100 with the help of robots. What will happen next?
We work 35-hour years instead of 35-hour weeks?
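(Back-of-the-envelope: 35 hours/week × 52 weeks ≈ 1,820 hours/year, and 1/100 of that is roughly 18 hours, so "35-hour years" is about the right order of magnitude.)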
threeseed
Going to safely assume you've never worked at an enterprise.
Because improving the productivity of every employee by 10% does not translate to the company being 10% more productive.
Processes and systems exist precisely to slow employees down so that they comply with regulations, best practices etc rather than move fast and break things.
And from experience with a few enterprise LLM projects now, they are a waste of time, because the money/effort to fix up the decades of bad source data far exceeds the ROI.
You will definitely see them used in chat bots and replacing customer service people though.
bbarnett
It also gets all of these things wrong, like not paying attention to models of toilets and the quirks of their repair, often speaking in an authoritative voice and deceiving you about the validity of its instructions.
All of the things you cite are available via search engines, or better handled with expertise, so you know how much of the response is nonsense.
Every time I use AI, it's a time waste.
unsupp0rted
Every time I contact an enterprise for support, the person I'm talking to gets lots of things wrong too. It takes skepticism on my part and some back and forth to clean up the mess.
On balance AI gets more things wrong than the best humans and fewer things wrong than average humans.
bognition
Time to sort NVDA?
toomuchtodo
A fellow degenerate gambler I see. The market can remain irrational longer than you can remain solvent, trade with caution. Being early is the same as being wrong.
bognition
A common hypothesis for why Nvidia is so hot is that they have an effective monopoly on the hardware to train AI models, and training requires a crap ton of hardware.
With DeepSeek it's been demonstrated that you can get pretty damn far for a lot cheaper. I can only imagine that there are tons of investors thinking it's better to invest their dollars in undercutting the costs of new models vs investing billions in hardware.
The question is, can Nvidia maintain their grip on the market in the face of these pressures? If you think they can't, then a short position doesn't seem like that big of a gamble.
short_sells_poo
Highly regarded people unite :D
More seriously though: unless you have privileged information or have done truly extensive research, do not short stocks. And if you do have privileged information, still don't short stocks, because unless you have enough money to defend yourself against insider trading charges like Musk and his ilk, it's not going to be worth it.
It's perfectly reasonable to determine that a particular high growth stock is not going to perform as well going forward, in which case I'd shift allocation to other, better candidates.
Generally, being long equities is a long term positive expected value trade. You don't have to time the market, just be persistent. On the other hand, as you correctly alluded to, shorting equities requires decently precise timing, both on entry and exit.
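To make that asymmetry concrete, here's a toy sketch in Python; the entry price and scenarios are made up, only the payoff shapes matter:

    # Long: buy at $100. Loss is capped at $100/share (stock goes to zero),
    # upside is unbounded, and you can usually wait out a drawdown.
    def long_pnl(price):
        return price - 100

    # Short: sell at $100. Gain is capped at $100/share, loss is unbounded,
    # and a margin call can force you out mid-squeeze, before you're proven right.
    def short_pnl(price):
        return 100 - price

    for price in (0, 50, 100, 200, 400):
        print(f"price {price:>3}: long {long_pnl(price):+5}, short {short_pnl(price):+5}")
    # At $400 the short is down $300/share, i.e. 3x the original position.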
dijit
I think it's probably foolish to short Nvidia until there are at least echoes of competition.
AMD wants it to be them, but the reality is that the moat is wide.
The closest for AI is Apple, but even then, I’m not certain its a serious competitor; especially not in the datacenter.
For gaming there's practically no worthwhile competition. Unreal Engine barely even fixes render bugs for Intel and AMD cards, and I know this for a fact.
FD: I recently bought shares in Nvidia due to the recent fluctuation and my own belief, as mentioned, that the moat is wider than we care to believe.
dpflan
Using “bubble” sort? ;)
jeyoor
The combination of high and climbing price to earnings ratios for a smaller subset of tech firms, outsize retail investment in tech (cloaked by people buying crypto), and macro environment factors like high interest rates stimulating risky lending has me swapping this bubble toward the top of the list.
See further: https://www.morningstar.com/news/marketwatch/20250123167/six...
HPsquared
Bubble sort is very resource-hungry...
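For anyone who hasn't had the pleasure, a minimal sketch of the classic algorithm behind the pun; in the worst case it does O(n^2) comparisons and swaps:

    def bubble_sort(xs):
        xs = list(xs)  # don't mutate the caller's list
        for i in range(len(xs)):
            # After pass i, the i largest elements have "bubbled" to the end.
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    print(bubble_sort([42, 7, 99, 3]))  # [3, 7, 42, 99]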
suraci
It's very dangerous; shorting in a market where a gamma squeeze can occur is extremely dangerous.
Other markets, like Taiwan, are preferable.
bsenftner
The "elephant in the room" is that AI is good enough, it's revolutionary in fact, but the issue now is the user needs more education to actually realize AI's value. No amount of uber-duper AI can help an immature user population lacking in critical thinking, which in their short shortsightedness seek self destructive pastimes.
geuis
It's not "good enough", it's mostly overhyped marketing garbage. LLM models are mostly as good as they're going to get. It's a limitation of the technology. It's impressive at what has been done, but that's it.
It doesn't take billions of dollars and all human knowledge to make a single human level intelligence. Just some hormones and timing. So LLMs are mostly a dead end. AGI is going to come from differenst machine learning paradigms.
This is all mostly hype by and for investors right now.
infecto
It's pretty good for a whole class of problems that humans currently do.
literalAardvark
LLM direct-response models are quite mature, yes (4o).
LLM-based MoE architectures with some kind of reasoning process (Claude 3+, the o series, R1, Grok 3 with thinking) are the equivalent of v0.2 atm, and they're showing a lot of promise.
rsynnott
"You're holding it wrong" only goes so far.
qgin
A few things from the Dwarkesh interview with Satya:
* He sees data center leases getting cheaper in near future due to everyone building
* He's investing big in AI, but every year there needs to be a rough balance between capacity and the need for capacity
* He doesn’t necessarily believe in fast takeoff AGI (just yet)
* Even if there is fast takeoff AGI, he thinks human messiness will slow implementation
* It won’t be a winner-take-all market and there will be plenty of time to scale up capacity and still be a major player
djtango
> Even if there is fast takeoff AGI, he thinks human messiness will slow implementation
Five years ago during covid, all these clunky legacy businesses couldn't figure out a CSV, let alone APIs, but suddenly with AGI they are going to become tech masters and know how to automate and systematize their entire operations?
I found it very amusing that at the turn of the decade "digitalisation" was a buzzword as Amazon was approaching its 25th anniversary.
Meanwhile, huge orgs like the NHS run on fax and were crippled by Excel row limits. Software has made only a very slow dent in these old, important, slow-moving orgs. AI might speed up the transition, but I don't see it happening overnight. Maybe 5 years, if we pretend smartphone adoption is indicative of AGI and humanoid-robot rollout.
qgin
I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.
You click a button on Microsoft Teams and hire “Bob” who joins your team org, gets an account like any other employee, interacts over email, chat, video calls, joins meetings, can use all your software in whatever state it’s currently in.
It has to be a brownfield solution because most of the world is brownfield.
Delphiza
As a first order of business, a sufficiently advanced AGI would recommend that we stop restructuring and changing to a new ERP every time an acquisition is made or the CFO changes, and stop allowing everyone to have their own version of the truth in Excel.
As long as we have complex manual processes that even the people following them can barely remember the reason for, we will never be able to get AI to smooth them over. It is horrendously difficult for real people to figure out what to put in a TPS report. The systems that you refer to need to be engineered out of organisations first. You don't need AI for that, but getting rid of millions of Excel files is needed before AI can work.
markus_zhang
In fact, most of the industries out there are still slow and inefficient. Some physicians only accept phone calls for making appointments. Many primary schools only take phone calls, and an email could go either way, just not their way.
It's just us programmers who want to automate everything.
QuadmasterXLII
"Only takes phone calls for appointments" is a huge selling point for a physician's office. People are very tired of apps.
graemep
Given how bad some of the apps and websites are I am not sure phone calls are any worse! They are also less prone to data breaches and the like.
mattlondon
But why are these sorts of orgs slow and useless? I don't think it is because they have made a conscious decision to be so; I think it is more that they do not have the resources to do anything else. They can't afford to hire in huge teams of engineers and product managers and researchers to modernize their systems.
If suddenly the NHS had a team of "free", genuinely PhD-level AGI engineers working 24/7, they'd make a bunch of progress on the low-hanging fruit and modernize and fix a whole bunch of stuff pretty rapidly, I expect.
Of course the devil here is the requirements and integrations (human and otherwise). AGI engineers might be able to churn out fantastic code (some day at least), but we still need to work out the requirements and someone still needs to make decisions on how things are done. Decision making is often the worst/slowest thing in large orgs (especially public sector).
typewithrhythm
It's not a resource problem; no one inside the system has any real incentive to do anything innovative. Improving something incrementally is more likely to be seen as extra work by your colleagues and be detrimental to the person who implemented it.
What's more likely is that a significantly better system is introduced somewhere, the NHS can't keep up, and it is rebuilt by an external party. (Or, more likely, it becomes an inferior system of a lesser nation as the UK continues its decline.)
djtango
IMO it comes from inertia. People at the top are not digital-native. And they're definitely not AI-native.
So you're retrofitting a solution onto a legacy org. No one will have the will to go far enough fast enough. And if they didn't have the resources to engineer all these software migrations who will help them lead all these AI migrations?
Are they going to go hands off the wheels? Who is going to debug the inevitable fires in the black box that has now replaced all the humans?
qwertox
> Meanwhile huge orgs like the NHS run on fax
I thought this was a German-only thing?
ghaff
The healthcare system I largely use in the US was bought up by another larger system. They are now primarily on an electronic system (which for me is welcome) but some of the offices still use faxes as well.
ta1243
Not convinced.
In 2018:
https://www.gov.uk/government/news/health-and-social-care-se...
> Matt Hancock has banned the NHS from buying fax machines and has ordered a complete phase-out by April 2020.
The NHS is quite federated; hell, many parts of it are independent companies. Some trusts have decent, modern systems though. I had to go for a test just before Christmas: I phoned my GP in the morning, got an appointment for half an hour later, he ordered a test and said go to one of these 8 centres, so I went to one about half an hour away (I live a fair way from a major town). I had the test, and by the time I'd had lunch and driven back home I had another call from the GP asking me to come in that evening. The appointment was created by the GP and read seconds later at the hospital, the test was done there, and the results were reported back to the GP's system seconds after that.
But that's just my local trust. Go 10 miles west and it's another trust, and they have different systems. And I had to go to one of the test centres in my trust; I couldn't go to one in a neighbouring trust, as they have different systems and there are no (or limited) interconnects.
llm_trw
Counterpoint: the PC.
The minicomputer revolution was meant to replace all the big iron in the basement with computers shared by teams. This lasted a decade until we got computers meant for one person and we've used that format of computation for nearly 50 years now.
Everyone is talking about how AI will replace people.
It won't.
AI without HI is useless. However, HI with AI is much better at handling all the bullshit office work that needs to get done—anyone who doesn't understand this is either an idiot, a troglodyte, or already has an expendable army of minions doing their work.
The surge in AI demand won't be top-down but bottom-up. That stupid office birthday wish card being passed around? I can Shakespeare the motherfucker. And that email to the big boss telling him the project is doomed and that he needs a compass to find his own ass? Let's ask R1.
>My boss is a moron who can't find his ass without a compass. His latest tech initiative will cost us millions and make the best staff leave. Write a professional letter that covers those points without being harsh.
---
Subject: Request for Review of Upcoming [Tech Initiative]
Dear [Recipient Name/Team Name],
I hope this message finds you well. I wanted to share some thoughts regarding our upcoming [Tech Initiative Name], as I believe open dialogue will ensure its success while aligning with our team’s strengths and long-term goals.
While the initiative’s intent to innovate is commendable, I’ve observed several potential challenges that may impact its effectiveness:
Financial Impact: Based on preliminary analysis, the projected costs appear to be significantly higher than initially estimated. A detailed cost-benefit review or phased rollout might help mitigate financial risk while allowing us to test feasibility.
Team Retention Concerns: Many of our top performers have expressed reservations about the initiative’s current structure, particularly around [specific pain point, e.g., workflow disruption, lack of clarity]. Retaining their expertise will be critical to execution, and their insights could refine the plan to better address on-the-ground needs.
To ensure alignment with both our strategic objectives and team capacity, I respectfully suggest:
Conducting a collaborative risk assessment with department leads.
Piloting the initiative in a controlled environment to gather feedback.
Hosting a forum for staff to voice concerns/solutions pre-launch.
I’m confident that with adjustments, this project can achieve its goals while preserving morale and resources. Thank you for considering this perspective—I’m eager to support any steps toward a sustainable path forward.
Best regards,
lolinder
This is a very refreshing take.
Our current intellectual milieu is largely incapable of nuance—everything is black or white, on or off, good or evil. Too many people believe that the AI question is as bipolar as every other topic is today: Will AI be godlike or will it be worthless? Will it be a force for absolute good or a force for absolute evil?
It's nice to see someone in the inner circles of the hype finally acknowledge that AI, like just about everything else, will almost certainly not exist at the extremes. Neither God nor Satan, neither omnipotent nor worthless: useful but not humanity's final apotheosis.
heresie-dabord
Is this the transcript of the interview (podcast) with Dwarkesh?
https://www.dwarkeshpatel.com/p/satya-nadella
Because if so,
> He doesn’t necessarily believe in fast takeoff AGI (just yet)
the term "fast takeoff AGI" does not appear in the transcript.
orzig
Remember that there is a lot of nuance to these sorts of deals.
I don't have any domain knowledge, but I recently saw an executive put in restaurant reservations at five different places for the night of our team offsite, so he would have optionality. An article could accurately claim that he later canceled 80% of the team's eating capacity!
ZeroGravitas
But if it was reported in the press that your team was going to eat 5 meals at the same time, before it was revealed that it was just an asshole screwing over small businesses, then that correction in eating capacity should be reported.
FrustratedMonky
"should "
But often not.
That was the point of the parent: how this is being reported is a bit skewed.
And also there is the problem that nobody reads corrections. Lies run around the globe before the Truth has tied its shoelaces, or some quote like that.
laserbeam
I've read the first 2 paragraphs 5 times and I still can't tell if Microsoft was renting datacenters and paying for them, or if Microsoft was leasing out datacenters and decided "no more AI data centers for you, 3rd parties".
And digging further into the article didn't help either.
walrus01
The first one: they were acquiring datacenter space.
helsinkiandrew
Meanwhile: "Apple Says It Will Add 20k Jobs, Spend $500B, Produce AI Servers in US" https://www.bloomberg.com/news/articles/2025-02-24/apple-say...
numbsafari
This has nothing to do with supply/demand, and everything to do with geopolitics.
FrustratedMonky
You think they would spend $500B without thinking there will be any demand?
rsynnott
You're missing the key phrase. "Says it will". Companies, of course, say all sorts of things. Sometimes, those things come to pass. But not really all that often.
turnsout
If Apple can pull off "Siri with context," it will completely annihilate Microsoft's first mover advantage. They'll be left with a large investment in a zero-margin commodity (OpenAI).
rs186
If history is our guide, that's never going to happen.
asadhaider
Unfortunately Siri remains near useless at times even with Apple Intelligence™®
helsinkiandrew
The "LLM Siri" hasn't been rolled out even in beta, estimates reckon 2026
CuriousSkeptic
Apple will not beat Microsoft in any capacity here
Microsoft has all the context in the world just waiting for exploitation: Microsoft Graph data, Teams transcripts and recordings, Office data, Exchange data, Recall data(?), and, while not context per se, even the Xbox gaming data.
helsinkiandrew
> Apple will not beat Microsoft in any capacity here
I'm sure MS will provide AI to business, but if Apple get things right, they'll be the biggest provider of AI to the masses.
With a Siri that knows your email, calendar, location, history, and search history, with the ability to get data from and do things in 3rd-party apps (via App Intents), and if it runs on your phone for security, it could be used by billions of consumers, not a few hundred million MS Office users.
"What was that restaurant I went to with Joan last fall?" "Send LinkedIn requests to all the people I've had emails from at company X."
Of course they could take too long or screw things up.
svnt
I'm sorry, but what are you saying?
How are any of these unique competitive advantages over iCloud, the App Store, Safari, and generally having more locked-in, high-margin mobile platform users than anyone else?
blonderoast
A lot of this sounds like the normal course of business and stuff that MSFT does all the time. I don't understand the OpenAI drama speculation on here. MSFT continues to have right of first refusal on OpenAI training and exclusivity on inferencing. If someone else wants to build up OpenAI capacity to spend money on MSFT for inferencing, MSFT would be thrilled. They recognize revenue on inferencing, not training, at the moment, so it's all upside to their revenue numbers.
feverzsj
Seems consumers just hate every product with AI functionality.
zkmon
The hype starts to head down towards reality.
smetj
No, it's one level deeper: the exclusive claim to the "hype" is heading down towards reality.
fny
Here’s the TD Cowen research note:
https://www.threads.net/@firerock31/post/DGbK1VkyKlp/in-late...
strangescript
OpenAI is pivoting away from MS. MS also has its own internal AI interests. They need to frame this for investors in a way that doesn't look like they're losing out. "Nadella doesn't believe in AI anymore." Done and done.
jimmySixDOF
On the Dwarkesh podcast this week, Nadella commented on how they expected to benefit from reduced DC rental pricing and were preparing for Jevons paradox to max out capacity. I guess they are calculating a ceiling now.
https://archive.is/dWo55