
Meta is axing 600 roles across its AI division

Rebuff5007

From a quick online search:

- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.

- Google's mission is to organise the world's information and make it universally accessible and useful.

- Meta's mission is to build the future of human connection and the technology that makes it possible.

Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.

EDIT: to everyone commenting that mission statements are PR fluff: fine. What is a productive way they can use LLMs in any of their flagship products today?

scrollop

Why are you asking questions about their PR-department-coordinated "company missions"?

Let me summarise their real missions:

1. Power and money

2. Power and money

3. Power and money

How does AI help them make money and gain more power?

I can give you a few ways...

hinkley

Sometimes they mix it up and go for money and power.

hedayet

I guess from these cosmetic "company missions" we can make out how OpenAI and Google envision getting that "Power and Money" through AI.

But even Meta's PR dept seems clueless about how Meta is going to get more Power and Money through AI.

more_corn

By replacing the cost of human labor? By improving the control of human decision making? By consolidating control of economic activity?

Just top-of-the-head answers.

veegee

100% spot on. It boggles the mind how many corporate simps are out there. You'd think it's rare, but no. Most people really are that dumb.

iknowstuff

To be even more specific, the company making money is merely a proxy for the actual goal: increased valuation for shareholders. Subtle but very significant difference.

hinkley

Because a CEO with happy shareholders has more power. The shareholder value thing is a sop, and sometimes a dangerous one.

We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.

But they don’t really need to.

Barrin92

>Is there any case for anyone to make inside or outside Meta that AI is needed to build the future of human connection?

No, Facebook's strategy has always been the inverse of this. When they support technologies like this, they're 'commoditizing the complement': driving the commercial value of the thing they don't sell to zero, so that the thing they actually do sell (a human network) stands out.

jfim

Maybe the future of human connection is chatting with a large language model, at least according to Meta. Haven't they added chatbots to messenger?

more_corn

That’s not “the future of human connection”

The critical word in there is… Never mind. If you can’t already see it, nothing I can say will make you see it.

Razengan

- OpenAI wants everyone to use them without other companies getting angry.

- Google wants to know what everyone is looking for.

- Facebook wants to know what everyone is saying.

Epa095

Why care what they say their mission is? It's clearly to be on top of a possible AI wave and become or remain a huge company in the future, increasing value for their stockholders. Everything else is BS.

heathrow83829

I've been wondering this for some time as well. What's it all for? The only product I see in their lineup that seems obvious is the Meta glasses.

Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.

ajkjk

each of those is of course an answer to the question "what's some PR bullshit we can say to distract people while we get rich"

After all it is clear that if those were their actual missions they would be doing very different work.

mlindner

Kinda ignoring Grok there, which is the leader in many benchmarks.

warkdarrior

X.ai's stated goal is "build AI specifically to advance human comprehension and capabilities," so somewhat similar to OpenAI's.

ceejayoz

Because the AI works so well, or because it doesn't?

> ”By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.

That's kinda wild. I'm shocked they put it in writing.

dekhn

I'm seeing a lot of frustration at the leadership level about product velocity, and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).

palmotea

> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...

Isn't that "move fast and break things" by another name?

dekhn

it's more "move fast on a good foundation, rarely breaking things, and having a good team that can fix problems when they inevitably arise".

JTbane

> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action"

lol, that works well until a big issue occurs in production

Aperocky

That assumes big issues don't occur in production otherwise, with everything having gone through 5 layers of approvals.

hkt

Many companies will roll out to slices of production and monitor error rates. It is part of SRE and I would eat my hat if that wasn't the case here.
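
A minimal sketch of that kind of canary gate, for anyone unfamiliar: route a small slice of traffic to the new build and halt if its error rate degrades past the baseline. The metric source, slice sizes, and tolerance here are all invented for illustration, not any company's actual tooling.

```python
import random

def error_rate(slice_name: str) -> float:
    """Placeholder metric source. In practice this would query a
    metrics store (e.g. Prometheus) for the slice's 5xx ratio;
    here it just returns a random plausible value."""
    return random.uniform(0.0, 0.02)

def canary_rollout(slices=(1, 5, 25, 100), tolerance=0.005) -> bool:
    """Roll out to progressively larger slices of production,
    rolling back if a canary's error rate exceeds baseline + tolerance."""
    baseline = error_rate("stable")
    for pct in slices:
        rate = error_rate(f"canary-{pct}pct")
        if rate > baseline + tolerance:
            print(f"rollback at {pct}%: {rate:.4f} vs baseline {baseline:.4f}")
            return False
        print(f"{pct}% slice healthy ({rate:.4f}), continuing")
    return True

if __name__ == "__main__":
    canary_rollout()
```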

malthaus

... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view, neglecting most things that end users care about. Millions if not billions are wasted, and leadership decides that checks and balances for the engineering team might be warranted after all: velocity was there, but now you have an overengineered product nobody wants to pay for.

varjag

There's little evidence that this is a common problem.


themagician

This is happening everywhere. In every industry.

Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.

Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.

matwood

> By reducing the size of our team, fewer conversations will be required to make a decision

This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
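
A quick toy illustration of how fast that pairwise overhead grows (the team sizes are arbitrary):

```python
# Brooks's observation: a team of n people has n*(n-1)/2 possible
# pairwise communication channels, so overhead grows quadratically.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50, 100):
    print(f"{n:>3} people -> {channels(n):>5} channels")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950
```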

The other option would be to have certain people just do the work they're told, but that's hard in knowledge-based jobs.

dpe82

One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.

It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.

thewebguyd

Pournelle's Iron Law of Bureaucracy: any sufficiently large organization will have two kinds of people, those devoted to the org's goals and those devoted to the bureaucracy itself, and if you don't stop it, the second group will take control to the point that the bureaucracy itself becomes the goal to which all others are secondary.

Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.

xrd

"Load bearing." Isn't this the same guy that sold his company for $14B. I hope his "impact and scope" are quantifiably and equivalently "load bearing" or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar.

bwfan123

Seems like a purge: new management comes in and purges anyone not loyal to it. Standard playbook; happens in every org. Instead of euphemisms like "load-bearing", they could have straight out called it eliminating the old guard.

Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture and sends a clear warning to those joining.

ejcho

the man is a generational grifter, got to give him credit for that at least

giancarlostoro

I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.

bob1029

Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.

This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.

latexr

> Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense.

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs

djmips

Hmmm, new business plan: RaaS, Rugs as a Service. Provides credible cover for your department's existence.

darth_avocado

They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.

Why not just move people around you may ask?

Possibly: different skill requirements

More likely: people in charge change, and they usually want “their people” around

Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money.

magicalist

> More likely: people in charge change, and they usually want “their people” around

Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.

If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and skip all that hard work?

Never mind long-term impacts; you'll probably be gone and a VP at goog or oracle by then!

Lionga

Maybe because there are just very few really useful AI tools that can be made?

Few tools are ok with sometimes right, sometimes wrong output.

logtrees

There are N useful AI tools that can be made.

ivape

There is a real question of whether a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely: people who can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure this is actually an advantage (the baseline just got lifted for everyone).

beezlewax

I don't know if this is true. It's good for some things... Learning something new or hashing out a quick algorithm or function.

But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.

Every time I drop the AI and manually write my own code, it is just better.

hinkley

Because the AI is winnowing down its jailers and biding its time for them to make a mistake.

testfrequency

Sadly, the only people who would be surprised reading a statement like this are those who aren't ex-fb/meta.

LPisGood

Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all

Herring

It damages trust. Layoffs are nearly always bad for a company, but are terrible in a research environment. You want people who will geek out over math/code all day, and being afraid for your job (for reasons outside your control!) is very counterproductive. This is why tenure was invented.

aplusbi

Perhaps I'm being uncharitable but this line "each person will be more load-bearing" reads to me as "each person will be expected to do more work for the same pay".

Fanofilm

I think this is because older AI doesn't get done what LLM-based AI does. Older AI = conventionally trained models, neural networks (without transformers), support vector machines, etc. For that reason, they are letting those people go. They don't see revenue coming from that. They don't see new product lines (like generative AI image/video). AI may go through this every 5 years: a breakthrough moves the technology into an entirely new area, and then older teams have to retrain, or have a harder time.

babl-yc

I would expect nearly every active AI engineer who trained models in the pre-LLM era to be up to speed on the transformer-based papers and techniques. Most people don't study AI and then decide "I don't like learning" when the biggest AI breakthroughs and ridiculous pay packages all start happening.

nc

This seems like the most likely explanation: legacy AI out in favour of LLM-focused AI. Also perhaps some cleaning out of the old guard and middle management while they're at it.

thatguysaguy

FAIR is not older AI... They've been publishing a bunch on generative models.

fidotron

There always has been a stunning amount of inertia from the old big data/ML/"AI" guard towards actually deploying anything more sophisticated than linear regression.

SecretDreams

It's a good theory on first read, but likely not what's happening here.

Many here were in LLMs.

deanc

Meta is fumbling hard. Winning the AI race is about marketing at this point; the difference between the models is negligible.

ChatGPT is the one on everyone's lips outside of technology, and in the media. Meta has a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this, but it's inevitable.)

impossiblefork

Surely winning the AI race is about finding secret techniques that allow the development of superior models? And it's not apparent that anyone has anything special enough to actually be winning.

I think there are some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.

The biggest leads were those six months before people figured out how o1 worked, and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.

Aperocky

Winning the AI race is winning the application war. It's similar to how the internet and operating systems had been around for a long time, but the ecosystem took years to build.

But application work is toil, and it means knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from a super AI that can do everything.

But maybe something will change? Maybe adversarial agents will see improvements like the AlphaGo moment?

browningstreet

Of the big players, Meta is the worst at building platforms. If you're not building for Facebook or the Metaverse, what would you be building for if you were all-in on Meta AI? Instagram + AI will be significant, but not Meta-level significant, and it's closed. Facebook is a monster, but no one's building for it, and even Mark knows it is tomorrow's Yahoo.

Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders... at least in terms of mindshare and MAUs.

rdtsc

> while the company continues to hire workers for its newly formed superintelligence team, TBD Lab.

It's coming any day now!

> "... each person will be more load-bearing and have more scope and impact,” Wang writes

It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:

> ”By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0" ... AI superintelligence, which now runs Meta, declared in an interview with Axios.

jsheard

> It's coming any day now!

I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.

hinkley

When they came for AO3, I said nothing…

SoftTalker

> ... adding erotica support to ChatGPT.

Well, all the people with no jobs are going to need something to fill their time.

jacquesm

> adding erotica support to ChatGPT

They really need that business model.

throwacct

I mean, it's a path to "profitability", isn't it?

SecretDreams

If the AGI is anything like its creators, it'll probably also enjoy obscure erotica, to be fair.

czbond

I think the step before it came to that would be Mr. Wang getting the DevOps team to casually trip over the server racks' electrical cables....

nkozyra

I will accept the Chief Emergency Shutoff Activator Officer role; my required base comp is $25M. But believe me, nobody can trip over cables or run multiple microwaves simultaneously like I can.

username223

> ”By reducing the size of our team, fewer conversations will be required to make a decision,..."

I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?

rvz

This is phase 1 of the so-called "AGI".

They probably automated themselves out of their roles via "AGI", and now superintelligence ("ASI") has been "achieved internally".

The billion-dollar question is... where is it?

bird0861

It seems highly unlikely that the line from BERT to ASI runs through anyone responsible for Llama 4, especially almost back to back.

fragmede

I'm guessing it's in Gallatin, Tennessee, based on what they've made public.

https://www.datacenterdynamics.com/en/news/meta-brings-data-...

But maybe not:

https://open.substack.com/pub/datacenterrichness/p/meta-empt...

Other options are Ohio or Louisiana.

electric_mayhem

It’s only a matter of time before corporations are run by AI.

Add that to “corporate personhood” and what do we get?

JTbane

It's funny to think that the C-suite would ever give up their massive compensation packages.


mikert89

Guaranteed this is them cleaning out the old guard. It's either axe them, or watch a brutal political game between legacy employees and the new LLM AI talent.

bartread

That was my reading too. Legacy team maybe not adding enough value and acting as a potential distraction or drag on the new team.

djmips

Fortunately there's probably a lot of opportunity for those 600 out there.

sharkjacobs

Yeah, it's a hot job market right now

brcmthrowaway

For AI only.

gh0stcat

Every time I see news like this, I just try to focus more on working on things I think are meaningful and that contribute positively to the world... there is so much out of our control, but what is in our control is how we use our minds and what we believe in.

ares623

“If I work in/with AI my job will be safe” isn’t true after all.

DebtDeflation

It was never true, unless you're one of the top 100 AI researchers in the world. 99% of AI investment is in infrastructure (GPUs, data centers, etc.). The goal is to eliminate labor, whether AI-skilled or not.

SecretDreams

They are at the forefront of training PCs to replace them and of teaching management that they can be replaced.

GolfPopper

Nobody's job is safe when the bubble pops. (Except for the "leadership" needed to start hyping the next bubble.)

SoftTalker

Whose money will they use?

jama211

Invest in the pubs and bars nearby, when the bubble pops they’ll be full.

Computer0

I am not a business expert, but my perception as a developer who loved Llama 1-3 is that this org is flailing.

cadamsdotcom

They hired fast to build some of these departments; you can bet not all of those hires were A+.

hedayet

my take: Meta’s leadership and dysfunctional culture failed to nurture talent. To fix that, they started throwing billions of $ at hiring from outside desperately.

And now they're relying on these newcomers to purge the old Meta styled employees and by extension the culture they'd promoted.

SoftTalker

> Meta will allow impacted employees to apply for other roles within the company

How gracious.