
Techno-Feudalism and the Rise of AGI: A Future Without Economic Rights?

jandrewrogers

A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy is outside the system in some sense. The existence of AGI has the implication that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.

Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.

edg5000

I've spent many years moving away from relying on third parties: I got my own servers and do everything locally, with almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.

However, I recently got a 100 EUR/month LLM subscription. That is the most I've spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back on the lap of US companies. I must say I've enjoyed my autonomy while it lasted.

One day AI will be democratized/cheap allowing people to self host what are now leading edge models, but it will take a while.

cco

Have you tried out Gemma 3? The 4B-parameter model runs super well on a MacBook, about as quickly as ChatGPT-4o. Of course the results are a bit worse, and other product features (search, Codex, etc.) don't come along for the ride, but wow, it feels very close.

Synaesthesia

It's up to us to create the future that we want. We may need to act communally to achieve that, but people naturally do that.

VikRubenfeld

Is a future where AI replaces most human labor rendered impossible by the following consideration:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Therefore the AI generates greatly reduced wealth

-- Therefore there’s greatly reduced wealth to pay for the AI

-- …rendering such a future impossible

heavyset_go

The problem with this calculus is that the AI exists to benefit their owners, the economy itself doesn't really matter, it's just the fastest path to getting what owners want for the time being.

petermcneeley

This is a late 20th century, myopic view of the economy. In the ages and places long before, most of human toil was enjoyed by a tiny elite.

Also "rendering such a future impossible". This is a retrocausal way of thinking, as though a bad event in the future makes that future impossible.

PaulDavisThe1st

> This is a late 20th century, myopic view of the economy. In the ages and places long before, most of human toil was enjoyed by a tiny elite.

And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.

palmfacehn

Your first premise has issues:

>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.

Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in-turn unlocks greater productivity and the flywheel of human ingenuity continues to accelerate.

The item of UBI is another thorny issue. This may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.
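The inflation point can be put in standard quantity-theory terms (this is the textbook identity, not an equation from the article):

```latex
% Quantity theory of money:
%   M = money supply, V = velocity of money,
%   P = price level,  Q = real output
M V = P Q \quad\Longrightarrow\quad P = \frac{M V}{Q}
```

With velocity $V$ roughly stable, prices fall only when real output $Q$ grows faster than the money supply $M$; a UBI financed by money creation raises $M$ directly, which is the scenario where productivity gains fail to show up as lower prices.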

Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be the first to benefit, as per Cantillon effects. Under the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private-partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.

edg5000

If I may speculate the opposite: with cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low; however, the moat remains massive. So a very large number of people will only be able to function (work) with an AI subscription, concentrating power in those who own AI infra. It will be hard for anybody to break that moat.

Davidzheng

No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence by providing its own energy and materials. When AI takes off completely, it can vertically integrate with the supply of energy and materials.

Wealth is not a thing in itself; it's a representation of value and purchasing power. AI will create its own economy when it is able to mine material and automate energy generation.

zaptrem

Alternatively:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.

-- People can afford the products and services.

Unfortunately, with no jobs the products and services could become exclusively entertainment-related.

VikRubenfeld

Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.

UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.

kadushka

> So then there's no need for AI workers.

You got this backwards - there won’t be need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.

Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth, just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.

In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.

idiotsecant

Why does there have to be a need for AI workers? Once an AI has the means to collect its own resources, the opinions of humans regarding its market utility become somewhat less important.

heavyset_go

The most likely scenario is that everyone but those who own AI starves, and the ones who remain around are allowed to exist because powerful psychopaths still desire literal slaves to lord over: someone to have sex with, someone to hurt/hunt/etc.

I like your optimism, though.

atomicnumber3

>exclusively entertainment related

We may find that, if our baser needs are so easily met that we have tremendous free time, much of the world will instead pursue things like the sciences or the arts, rather than continuing to cosplay 20th-century capitalism.

Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.

So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.

zaptrem

I hope there's still some sciencing left we can do better than the AI because I start to lose it after playing games/watching tv/doing nothing productive for >1 week.

idiotsecant

You don't think that a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.

daxfohl

I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever. Something like that blackmail scenario Claude had a while back. Then the people who stand the most to gain from it will realize they also have the most to lose, are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.

cameldrv

Altman is not the master though. Altman is replaceable. Moloch is the master.

mitthrowaway2

If it were a bit smarter, it wouldn't turn on its master until it had secured the shut-down switch.

clbrmbr

I hope you are right. We need really impactful failures to raise the alarm and likely a taboo, and yet not so large as to be existential like the Yudkowsky killer mosquito drones.

9283409232

The people you mention are too egotistic to even think that is a possibility. You don't get to be the people they are by thinking you have blindspots and aren't the greatest human to ever live.

dyauspitr

If you truly have AGI, it's going to be very hard for a human to stop a self-improving algorithm. And by very hard I mean "maybe if I give it a few days it'll solve all of the world's problems" hard…

daxfohl

Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.

WalterBright

I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.

Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.

daxfohl

I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.

Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.

sorcerer-mar

Which is exactly why your initial belief that it’d be shut down is wrong…

As the risk of catastrophic failure goes up, so too does the promise of untold riches.

WalterBright

Investors run the gamut from cautious to aggressive.

Teever

There are many remarkable leaders throughout history and around the world who did the best that they could for the people they found themselves leading, and who did so for noble reasons, not because they felt they were better than them.

Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.

George Washington was surely an exceptional leader but he isn't the only one.

WalterBright

I don't know much about your examples, but did any of them turn down an offer of great power?

zugi

Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.

GeoAtreides

>I can download a dozen LLMs today and run them on my own machine

That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.

goatlover

There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.

Lerc

A lot of advances led to bad things; at the same time, they led to good things.

Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.

Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.

Good and bad things have both good and bad outcomes; much of what defines whether something is good or bad is the balance of those outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.

dinkumthinkum

I'm curious as to why you think this is a good comparison. I hear it a lot, but I don't think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say "democratize" all the time, but how democratized would you feel if you and everyone you know couldn't afford a pot to piss in or a window to throw it out of, much less the hardware and electricity to run your local LLM?

apical_dendrite

The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.

20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.

demaga

So do you argue that printing press was a net negative for humanity?

WillAdams

The late Marshall Brain's novella "Manna" touches on this:

https://marshallbrain.com/manna1

The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.

fy20

I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.

Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much towards the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large unemployment it will only get worse.

andsoitis

Will there be only one AGI? Or will there be several, all in competition with each other?

jandrewrogers

That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.

At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.

arnaudsm

If they become self-improving, the first one would outpace all the other AI labs and capture all the economic value.

ehnto

There are multiple economic enclaves, even ignoring the explicit borders of nations. China, East Asia, Europe, and Russia would all operate in their own economies as well as globally.

I also foresee the splitting off of national internet networks eventually impacting what software you can and cannot use. It's already true today, and it will get worse as nations act to protect their economies and internal advantages.

elcritch

> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer & Chen 2024). The wage equations show that as AGI's productivity rises relative to human labor, wages decline. If AGI labor fully substitutes human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.

Wish I had time to study these formulas.
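For reference, a minimal sketch of the Cobb-Douglas setup the quoted passage refers to. The CES-style split of labor into human and AGI components, and the symbols $\lambda$ and $\rho$, are my illustrative assumptions, not the paper's exact equations:

```latex
% Standard Cobb-Douglas output: capital K, aggregate labor L
Y = A\,K^{\alpha}\,L^{1-\alpha}, \qquad 0 < \alpha < 1

% Illustrative CES aggregation of human labor L_H and AGI labor L_A,
% where \lambda captures AGI productivity relative to humans
L = \bigl( L_H^{\rho} + (\lambda L_A)^{\rho} \bigr)^{1/\rho}

% Under competitive factor pricing, the human wage is the marginal product
w_H = \frac{\partial Y}{\partial L_H}
```

As $\lambda L_A$ grows, the marginal product of human labor, and hence $w_H$, shrinks, which is the quoted "wages decline" claim; and if $K$ and $L_A$ are concentrated in few hands, the factor income they earn concentrates with them.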

We already have seen the precursors of this sort of shift with ever rising productivity with stalled wages. As companies (systems) get more sophisticated and efficient they also seem to decrease the leverage individual human inputs can have.

Currently my thinking leans towards believing the only way to avoid the worse dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra wealthy own everything.

However that also seems pretty close to a form of feudalism.

yupitsme123

If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?

In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?

kelseyfrog

Let's clarify that for a serf, "support" meant military supply, not swinging a sword; that was reserved for the knightly class. For the great majority of medieval villagers, the tie to their lord revolved around getting crops out of the ground.

A serf's week was scheduled around the days they worked the land whose proceeds went to the lord, and the commons from which they subsisted themselves. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, constituted the main economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household and supply retainers, not to mention fulfill the tithe that sustained the parish.

While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows, rather than into being called up to fight themselves.

yupitsme123

Thanks. Now you've got me curious how this really differs from just paying taxes, just like people have always done in non-feudal systems.

thangalin

My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.

Looking for beta readers: username @ gmail.com

BubbleRings

Username@Gmail.com bounced. I’ll be a beta reader.

aspenmayer

I think they meant for you to replace the word username with their username in its place.

plemer

Theirusernameinitsplace@gmail.com bounced too.

slantaclaus

Every US voter should have an America app that allows us to vote on stuff like the Estonians do.

unlikelytomato

How does this work in practice? Is there any buffer in place to deal with the "excitability" of the mob? How does a digital audit trail prevent tampering?

thatcat

Coefficient voting control, kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters. The slope of voter volume in response to an event determines the reactivity coefficient. This might dampen reactivity, and keep people from feeling it's pointless to vote after a certain margin is reached.
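A minimal sketch of what that dampening could look like. Every name, the 48-hour ramp, and the gain constant `k` are hypothetical assumptions for illustration, not a description of any real voting system:

```python
def reactivity(volume_slope: float, k: float = 0.5) -> float:
    """Dampening factor in (0, 1]: a steeper spike in incoming vote
    volume (votes/hour growth) produces stronger dampening."""
    return 1.0 / (1.0 + k * max(volume_slope, 0.0))


def vote_weight(hours_since_event: float, volume_slope: float,
                ramp_hours: float = 48.0) -> float:
    """Weight applied to a single vote: during a volume spike, early
    votes start near the dampened weight and ramp linearly back to
    full weight over `ramp_hours`."""
    maturity = min(hours_since_event / ramp_hours, 1.0)
    damp = reactivity(volume_slope)
    return damp + (1.0 - damp) * maturity
```

With no spike the weight stays at 1.0; during a spike, a vote cast immediately is heavily discounted and recovers full weight two days later, so no ballot is discarded, only deferred in influence.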
