
Meta's Vision for Superintelligence

156 comments · July 30, 2025

hnthrow90348765

This stuff is naive. There's a bunch of people who want a large income (wealth) disparity, and they will fight to preserve it unless you give them an equivalent station in the 'new world'.

But you will still need to sustain ex-workers if they can't get normal jobs, and those same people at the top will not tolerate the taxes required to sustain a basic standard of living for a much wider population. They already can't tolerate the idea of a much smaller population using food assistance or healthcare from the government.

That leads me to think this is not really a visionary statement, but just a signal that Mark isn't intentionally trying to bring about a new dystopia, and here's his proof. And if a dystopia happens to come about, you can't blame him because he had pure intentions; clearly it was everyone else who just didn't agree with him and it's their fault.

Maybe make Meta a not-for-profit and there might be some credibility here.

lvl155

Last time I commented on Zuck on HN I got a warning. That said, this stuff confirms for the 100th time that he is out of touch with reality. Perhaps that's why he wanted to make VR work so badly. I think he might try to change the company's name again since Meta doesn't fit the bill anymore. How about they buy Intel and reverse merge just for the name?

benterix

> he might try to change Co’s name again since Meta doesn’t fit the bill anymore.

If so, the logical choice would be to change the name from "Meta" to "AGI".

lvl155

I was thinking they buy Intel and say “we are going to make Intel super. SuperIntel.” Zuck seems to like the term superintelligence over AGI.

deepfriedchokes

I think perhaps it would be useful to completely ignore the nice words people use and just judge everyone based on their behavior.

From Zuckerberg's behavior since the beginning, it's clear that what he wants is power, and if you have the kind of mental health disorder where you believe you know better than everyone and deserve power over others, then none of this seems dystopian at all.

Everything he says is PR virtue signaling. Judge the man on his actions.

lcnPylGDnU4H9OF

> completely ignore the nice words people use

Kind of an unrelated topic but I'm reminded of a video essay in which the creator talks about this. They put it very kindly, IMO:

> Rich and powerful people have quite a different attitude and approach to truth and lies and games compared to ordinary people.

Which sounds like a really nice way of saying that rich and powerful people are dishonest by ordinary standards.

https://youtu.be/m6lObdE3s10?t=245

FirmwareBurner

>There's a bunch of people who want a large income (wealth) disparity

Apart from you, of course. So I'm sure you'd be OK if the government taxed your higher-than-average tech wage until your take-home pay matched that of a train conductor or bus driver, like in Western Europe, and thereby fixed the wage gap you hate so much. Would you like that solution?

Caption this: it's only a problem when the people who earn more than me are greedy, but my greed is fine; it's OK for me to out-earn others because "I've earned it", not like Zuckerberg, he didn't earn it.

benterix

> the government would tax your higher than average tech wage till your take home pay would match that of a train conductor's or bus driver's, like in Western Europe

I live in Europe and earn ca. 6 times more than my friend who is a bus driver in the same city. We both have access to free education and, if we wish, also free healthcare, for which I am paying slightly more, but I really don't mind.

Gud

If you earn 6 times more than your friend who is a bus driver, you live in a place that has an unusually high income disparity for Europe.

queenkjuul

I could comfortably live on a bus driver's wage here; obviously I'd prefer they made as much as tech workers, given their job is much harder. Your solution is fine with me too, though.

hnthrow90348765

Sure I would, as long as they tax billionaires even more and guarantee it. I do CRUD app development, I'm not even responsible for anything as potentially dangerous as a train. Superintelligence would very likely take my job anyway, so I won't get taxed for long.

blinkbat

The top tax bracket starts around $600k; there are people who make many times that, and the percentage they owe does not go up. They are also uniquely positioned to avoid paying the comparatively low amount they do owe. Let's start there.

tanduv

Perhaps we can create a sort of bracket that scales based on income?
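
A minimal sketch of how marginal brackets work may help here (Python, with purely illustrative thresholds and rates, not actual tax law): every dollar above the top threshold is taxed at the same flat marginal rate, so the effective rate plateaus no matter how far income climbs past it, which is the flattening described above.

```python
# Toy marginal-bracket calculator. Thresholds and rates are hypothetical.
BRACKETS = [                 # (upper bound of bracket, marginal rate)
    (50_000, 0.10),
    (200_000, 0.25),
    (600_000, 0.35),
    (float("inf"), 0.37),    # top bracket: the rate never rises past this
]

def tax_owed(income: float) -> float:
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        # Tax only the slice of income that falls inside this bracket.
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

for income in (100_000, 600_000, 6_000_000, 60_000_000):
    print(f"${income:>11,}: effective rate {tax_owed(income) / income:.1%}")
```

Running this, the effective rate climbs toward the top marginal rate and then flattens: someone earning 100x the top threshold pays roughly the same percentage as someone earning a few multiples of it.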

DanHulton

Unless you are yourself a robber baron the likes of Zuck, you should look up this little concept called "class solidarity."


FirmwareBurner

I'm just pointing out the hypocrisy here of people seeing greed only in those with more income than them, but never in themselves when they accept those generous big-tech, big-finance, big-ad-tech, big-4, big-pharma compensation packages from the evil robber barons they claim to hate. If you hate them so much, why are you taking their blood money?

Also, there is no class solidarity the way you imagine it in your fantasy, because to the average person on the street putting fries in the bag at McDonald's, or stacking shelves at Walmart, or tearing up roads with a jackhammer in the summer heat, the big-tech worker is closer to the robber baron Zuckerberg than to them. So when you get laid off from your big-tech job, they won't have solidarity for you; they might even crack a smile as the spoiled, pampered tech workers are brought down from their kombucha-sipping ivory towers.

Class solidarity, as applied in Europe, means bringing the income of tech workers in line with unskilled labor until everyone is equally lower-middle class, not making the super-wealthy robber barons contribute more to society, because no society does that; that's just fantasy. Look at the IKEA owner's complex tax avoidance scheme: https://www.greens-efa.eu/legacy/fileadmin/dam/Documents/Stu... Do you think he has any class solidarity? He has more in common with Musk, Zuckerberg or Xi Jinping than with his average Swedish countrymen.

The more class solidarity you wish and vote for, the higher the tax burden will be on skilled and ambitious middle-class workers and small businesses, not on Zuckerberg or the elites with inherited wealth. So be careful what you wish for. My country already went through communism once, and everyone has had enough of "class solidarity" for the next lifetime, but there are always some Westerners out there clinging to "this time it will be different". Sure, buddy.


Voloskaya

> We'll need to be rigorous about mitigating these risks and careful about what we choose to open source.

Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.

Also,

> As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.

Yeah, about that... Sure, Mark can choose to just fly to his private Hawaiian island, or his Tahoe bunker, and mess around with the metaverse and AI and whatever he chooses. 99.9% of the population has a plain regular job that they go to for subsistence. Michael from North Dakota has not been doing bookkeeping for SMEs because it was always the pursuit of his dreams. I also see no reason at all to believe we spend more time on creativity, culture, relationships or enjoying life than before. Especially that last point, which has been in free fall over the last 50 years by the look of every single mental well-being metric around.

[1]: https://www.nytimes.com/2025/07/14/technology/meta-superinte...

simonsarris

> Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.

That's not pulling a trick; that's doing precisely what Zuck said he would do. In April 2024, on Dwarkesh's podcast, Zuck said that models are a commodity right now, but that if models became the biggest differentiator, Meta would stop open sourcing them.

At the time he also said that the model itself was probably not the most valuable part of an ultimate future product, but he was open to changing his mind on that too.

You can whine about that anyway, but he's not tricking anyone. He has always been frank about this!

Voloskaya

July 2024:

> Open Source AI is the Path Forward.

> Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.

> We need to control our own destiny and not get locked into a closed vendor.

> We need to protect our data.

> We want to invest in the ecosystem that’s going to be the standard for the long term.

> There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives.

> I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors [...] As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.

> The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.

> I hope you’ll join us on this journey to bring the benefits of AI to everyone in the world.

> Mark Zuckerberg

Pulling the "closed source for safety" card once it makes economic sense for you, after having clearly outlined why you think open source is safer, and how you are "committed" to it "for the long term" and for the "good for the world", is mainly where my criticism is coming from. If he were upfront in the new blog post about closing the source for competitive reasons, I would still find it a distasteful bait and switch, but much less so than trying to just put the safety sticker on it after having (correctly) trashed others for doing so.

https://about.fb.com/news/2024/07/open-source-ai-is-the-path...

DanHulton

> The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.

Oh, is it now? So you know for a fact that intelligence comes from token prediction, do you, Mark?

Look, multi-bit screwdrivers have been improving steadily as well. I've got one that stores all its bits in the handle, and one with over three dozen bits in a handy carrying case! But they're never going to suddenly, magically become an ur-tool, capable of handling any task. They're just going to get better and better as screwdrivers.

(Well, they make a handy hammer in a pinch, but that's using them off-spec. The analogy probably fits here, too, though.)

My POINT, to be crystal clear, is that Mark is saying that A is getting better, so eventually it will turn into B. It's ludicrous on its face, and he deserves the ridicule he's getting in the comments here.

But I also want to go one step further and maybe turn the mirror around a bit. There's also an odd tendency here to do a very similar thing: to observe critical limitations that LLM tools have, that they have always had, and that are very likely baked into the technology and science powering these tools, and then to do the same thing as Mark, to just wave our hands and say "But I'm sure they'll figure it out/fix it/perfect it soon."

I dunno, I don't see it. I think we're all holding incredible screwdrivers here, which are very impressive. Some people are using them to drive nails, which, okay, sure. But acting like a screwdriver will suddenly turn into precision calipers (and a saw, and a level, and...) if we just keep adding on more bits, I think that's just silly.

tim333

That's not really what he said.

cootsnuck

Isn't it though? He's provided zero evidence to suggest otherwise. So of course we are all going to assume he's talking about the current, popular, SOTA architectures still as the foundational piece.

jameskilton

Facebook / Meta and Mark in particular are an amazing case of someone who is, at least at this point, incapable of learning from past mistakes, or even recognizing the mistakes they have made and are continuing to make.

Facebook's mission of "connecting the world" turned out to be the absolute worst thing anyone should ever try to do. Humans are social creatures, yes, but every connection we make costs energy to maintain, and beyond a certain point (Dunbar's number) we apply only the minimal amount of energy and effort. With Internet anonymity, that means we are actually incapable of treating each other as people on the Internet, leading to the rise of toxicity and much, much worse.

Mark has never understood this, and as his fortune is built around not understanding this, he never will.

There is nothing good that will come from Meta's "superintelligence" and this vision is proof.

dinfinity

I don't think "connecting the world" was the problem. IRC also has tons of toxicity and connects people all over the world.

The core problem is gamification of social interaction. The 'Like' button and everything like it for things people say or show is hands down the worst thing to happen on the internet. Everywhere they can, people whore for karma (unless they spend a lot of mental effort to fight back that urge). How primitive the related moderation systems are directly affects how much primitive shit gets rewarded, and alas, most moderation systems are ridiculously primitive.

So, dopamine hits for saying primitive shit.
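
To make "primitive moderation systems" concrete, here is a toy Python sketch, purely illustrative and not any real platform's algorithm, contrasting ranking by raw like count with a simple time-decayed score loosely in the spirit of HN/Reddit-style ranking. The posts, numbers, and gravity constant are made up.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    age_hours: float

def raw_score(p: Post) -> float:
    # Pure gamification: whatever racked up the most likes wins, forever.
    return p.likes

def decayed_score(p: Post, gravity: float = 1.8) -> float:
    # Dividing by a power of age means older posts need far more likes to
    # stay on top, so the feed isn't dominated by whatever peaked yesterday.
    return p.likes / (p.age_hours + 2) ** gravity

posts = [
    Post("thoughtful new write-up", likes=40, age_hours=1),
    Post("day-old outrage bait", likes=900, age_hours=26),
]

print(sorted(posts, key=raw_score, reverse=True)[0].title)      # outrage bait wins
print(sorted(posts, key=decayed_score, reverse=True)[0].title)  # fresher post wins
```

Even this small change shifts what gets rewarded; how crude the scoring is largely determines how crude the content that wins is.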

9rx

> that means we are actually incapable of treating each other as people on the Internet

Well, that's because there aren't people on the internet! I mean, yes, we technologists understand that there are often people pulling knobs and levers behind the scenes as an implementation detail, so technically they are there. But they are only implementation details, not what makes it what it is. If you replaced the implementation with another algorithm that functions just as well, nobody would notice. In that sense, it is just software.

> leading to the rise of toxicity and much, much worse.

It is not so much that it has led to anything different, but that those who used to be in the forest yelling at animals as if they were human moved into civilized areas when they started yelling at computers as if they were human. That has taken their mental disorders to where they are much more visible.

pbrum

What if superintelligence isn't even a thing? I was watching an interview with a Chinese-American specialist the other day (I'm sure it's been shared here on HN at some point), and she explained that in the Chinese AI community they don't operate under the assumption that something such as AGI or superintelligence exists, and therefore don't work toward that goal. I'm sure people in this community can comment on this to a much more informed extent than I can, though.

mrcwinn

>Personal superintelligence that knows us deeply, understands our goals, and can help us achieve.

>We believe the benefits of superintelligence should be shared with the world as broadly as possible.

So... ads.

ankit219

That is the model everyone gravitates towards. OpenAI's Fidji also started with a note about how superintelligence is for everyone.

I think it would be back to income-based tiers, though. You want more assistance, pay $200 per month. Even more, maybe $2,000 (for companies). Then, if you don't want to pay, you get contextual ads (which would work here because LLMs can contextualize far better) and a lower quality of service.

saubeidl

Not just ads. Psyops, a propaganda machine unlike anything the world has ever seen. There's a reason Zuck and the US government are real cozy lately.

qprofyeh

Reads as RIP Metaverse to me.

Any time a CEO publishes such an empty, wordy essay, it's probably earnings-report time. I can't shake the feeling it's a public sub-reply aimed at one investor, or a cluster of them, who have started to doubt the CEO's vision for the company, or find the lack of one on a certain topic concerning.

laweijfmvo

FWIW, today is Meta's earnings report.

lvl155

That’s exactly why he published this. To justify his insane investment spend. I am not sure if investors will continue to give him a blank check. He’s not spending his money. He’s spending shareholder money to pursue personal projects and endeavors.

9rx

> I am not sure if investors will continue to give him a blank check.

What are they going to do, exactly? They explicitly invested in the company knowing that Zuckerberg would retain full control.

If they can show gross negligence there may be a legal avenue, but it would be pretty hard to justify chasing potentially profitable business ventures, even if they end up failing, as negligence. Controversial business decisions are not negligence in the eyes of the law.

Sure, they can sell their interest in the company — if someone else wants to buy it — but that just moves who the investor is around. That doesn't really change anything.

xnx

Looks like after-hours investors like what they see. Stock at an all-time high.

subpixel

"more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be."

Meanwhile I can't properly find items that are listed on FB marketplace.

orochimaaru

It’s a fluff piece for investors. I think the same may have been said for the metaverse.

aeon_ai

I read Careless People recently.

I don't think the author of that book is unbiased, and after some healthy debate with friends, I imagine there are a number of different perspectives on the facts. But it seems clear that, well before it was public knowledge outside the company, there was clear visibility of, and willful ignorance of, harms being caused by the platform inside it.

Facebook (now Meta) turned human attention into a product. They optimized for engagement over well-being, knew that their platforms were amplifying division, and did it anyway because the metrics looked good.

It's funny, because I aspire to many of the same things cited in this vision -- helping realize the best in each individual, giving them more freedom, and critically, helping them be wise in a world that very clearly would prefer them not to be.

But the vision is being pitched by the company that already knows too much about us and has consistently used that knowledge for extraction rather than empowerment.

twoodfin

If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting.

Does the average American worker today spend a ton of time in productivity software?

I know, and Zuckerberg surely knows, that the impact on labor will be much more pervasive than that, so it seems like an odd way to frame the future.

reaperducer

Does the average American worker today spend a ton of time in productivity software?

"Average?" No. But many millions of people, yes.

The majority of people in my company spend their day tied to Microsoft Office.

Which brings its own problems when managers don't understand that building a computer program isn't the same speed, complexity, and skill level as making a PowerPoint presentation.

ctippett

I know your comparison to PowerPoint was probably not meant to be taken literally, but I'll just add that a good presentation takes just as much time, skill and effort as any creative endeavour (including programming).

K0balt

I’d love to see a PowerPoint presentation that has a million man-hours of work in it. Oh, never mind, I probably have.

But seriously, this comment can easily be true, and if it is, then it is an excellent example of a human endeavour that we invented to improve efficiency but that has become a bottomless sink of talent, effort, and cost directed away from generating any value whatsoever.

I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer-related.

Presentations are a great example of an activity that has become an end unto itself that delivers no value, and only serves as a kind of internal preening behaviour, signalling a person's value to the organisation without actually delivering any.

meindnoch

>and more time creating and connecting.

Creating what? AI slop?

HDThoreaun

I mean, yeah, Mark's vision here is that genAI creates a special personalized space in his metaverse for literally everyone in the world. Slop for the masses.

reaperducer

more time creating

Considering that the most common use for "AI" is to take jobs away from creators like artists, musicians, illustrators, writers, and such, I find this statement hard to believe.

So far, all I've seen is AI taking money away from the least-paid workers (artists, et al.) and giving it to tech billionaires.

K0balt

But people have to keep creating to feed the AI! AI is extractive, not creative, so without people toiling away and adding actual creativity, current paradigm AI will become increasingly derivative and uninspiring… so the obvious answer is to put people into nutrient filled VR pods so they can imagine actually new things to power the AI hive-mind.


osti

Looks like no more open source models :(

"We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible."

adrianbooth17

Meta's business model requires:

- Maximum data extraction
- Behavioral modification for profit
- Attention capture and addiction maintenance

"Personal superintelligence" serves all three perfectly while appearing to do the opposite.