Strategic Wealth Accumulation Under Transformative AI Expectations
93 comments
· February 22, 2025 · WorkerBee28474
geysersam
> zero-sum nature of labor automation
Labor automation is not zero sum. This statement alone makes me sceptical of the conclusions in the article.
With sufficiently advanced AI we might not have to do any work. That would be fantastic and extraordinarily valuable. How we allocate the value produced by the automation is a separate question. Our current system would probably not be able to allocate the value produced by such automation efficiently.
addicted
Your criticism is completely pointless.
I’m not sure what your expectation is, but even your claim about the assumption the paper makes is incorrect.
For one thing, the paper assumes that the amount transferred from the human lawyer to the AI owner would be the $500 fee plus the productivity gains brought by AI, i.e. more than 100% of the original wage.
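A toy version of that arithmetic, with made-up numbers purely to illustrate the "more than 100%" point:

    # Made-up numbers, purely illustrative.
    human_wage = 500          # fee the human lawyer used to collect per document
    ai_cost_per_doc = 5       # assumed marginal cost for the AI owner
    docs_per_day_human = 2    # assumed throughputs
    docs_per_day_ai = 50

    # The wage flow that used to go to human labor now goes to the AI owner,
    # scaled up by the throughput (productivity) gain:
    ai_owner_daily = (human_wage - ai_cost_per_doc) * docs_per_day_ai   # 24750
    human_daily = human_wage * docs_per_day_human                       # 1000
    print(ai_owner_daily / human_daily)   # 24.75, i.e. far more than 100%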
But that is irrelevant to the actual paper. You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.
Because the actual nature of the future is irrelevant to the question the paper is answering.
The question the paper is answering is what impact such expectations of the future would have on today’s economy (limited to modeling the interest rate). Such a future need not arrive or even be possible as long as there is an expectation it may happen.
And future papers can model different variations on those expectations (so, for example, some may model that 20% of labor in the future will still be human, etc).
The important point, as far as the paper is concerned, is that the expectation that AI will replace human labor, with some percentage of the wealth that previously went to human labor accruing instead to the owner of the AI, will lead to significant changes in current interest rates.
This is extremely useful and valuable information to model.
mechagodzilla
The $500 going to the "AI Owner" instead of labor (i.e. the human lawyer) is the productivity gain though, right? And if that was such a productivity gain (i.e. the marginal cost was basically 0 to the AI owner, instead of, say, $499 in electricity and hardware), the usual outcome is that the cost for such a product/service basically gets driven to 0, and the benefit from productivity actually gets distributed to the clients that would have paid the lawyer (who suddenly get much cheaper legal services), rather than the owner of the 'AI lawyer.'
We seem pretty likely to be headed towards a future where AI-provided services have almost no value/pricing power, and just become super low margin businesses. Look at all of the nearly-identical 'frontier' LLMs right now, for a great example.
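A minimal sketch of that undercutting dynamic, with assumed numbers:

    # Identical AI providers with near-zero marginal cost keep shaving the
    # price until it sits just above cost, so clients pocket the gain.
    marginal_cost = 1.0     # assumed per-document cost in electricity/hardware
    price = 500.0           # starting at the old human rate

    while price > marginal_cost * 1.1:   # stop once margins are ~10%
        price *= 0.95                    # each entrant undercuts by 5%

    print(round(price, 2))   # ~1.06: nearly all of the $500 became client surplus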
larodi
Indeed, there's a fair chance AI only amplifies certain sectors' wages, but fully automated work won't command any magic margin. No more than, say, smart trading does once too many people pile in.
pizza
This almost surely took place somewhere in the past week alone, just with a lawyer being the mediating human face.
gopalv
> The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document
Is your theory that the next week there will be an AI lawyer that charges only $400, and then it's a race to the bottom?
There is a proven way to avoid a race to the bottom for wages, which is what a trade union does - a union, by acting as one, controls a large supply of labour to keep wages high.
Replace labour with companies and wages with prices: it could very well be that a handful of companies keep prices high in a seller's market, where everyone avoids a race to the bottom by incidentally making similar pricing calls (or by flat-out illegally coordinating them).
WithinReason
You would need to coordinate across thousands of companies across the entire planet
rvense
That seems unlikely - law is very much tied to a place.
habinero
There have been several startups that tried it, and they all immediately ran into hot water and failed.
The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.
You don't write a contract just to have a contract, you write one in case something goes wrong.
Litigation is highly dependent on the specific situation and case law. They're dealing with novel facts and arguing for new interpretations, not milling out an average of other legal works.
Also, you generally only get one bite at the apple; there are no do-overs if your AI screws up. You can hold a person accountable for malpractice.
chii
> The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.
This is true - and the majority of lawyers' work is in knowing past information and synthesising possible futures from that information. In contracts, they write clauses to protect you from issues that have arisen in the past (and potential future issues, depending on how good/creative the lawyer is).
In civil suits, discovery used to take enormous amounts of time, but recent automation has helped tremendously and vastly reduced the grunt work required.
I can see AI helping in both of these aspects. Whether the newer AIs can produce the kind of creative work lawyers need to do after the information extraction is still up for debate. So far, it doesn't seem to have reached the level at which a client would trust a purely AI-generated contract, imho.
I suspect the day you'd trust an AI doctor to diagnose and treat you, would also be the day you'd trust an AI lawyer.
echelon
> There is a proven way to avoid a race to the bottom for wages, which is what a trade union does
US automotive, labor, and manufacturing unions couldn't remain competitive against developing economies, and the jobs moved overseas.
In the last few years, after US film workers went on strike and renegotiated their contracts, film production companies had the genius idea to start moving productions overseas and hire local crews. Only talent gets flown in.
What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?
js8
> What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?
Labor action, such as strikes.
riku_iki
> people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document.
There will very soon be a caste of high-tech lawyers who can handle many times the volume of work thanks to AI, and many other lawyers will lose their jobs.
sgt101
I know one!
She's got international experience and connections but moved to a small town. She was a Magic Circle partner years ago. Now she has an FTTP connection and has picked up a bunch of contracts that she can deliver on with AI. She underbid some big firms on these because their business model was traditional rates, and hers is her cost * x (she didn't say, but x > 1.0 I think).
Basically she uses AI for document processing (discovery) and drafting, then treats it as the output of associates and puts the polish on herself. She does the client meetings too, obviously.
I don't think her model will last long - my guess is that there will be a transformation across the big firms in the next 5 years, and then she will be out of luck (maybe not at the margin, though). She won't care - she'll be on the beach before then.
petesergeant
Yes, that is obvious. The point you are replying to is that oversupply will mean the cost to the consumer will fall dramatically too, rather than the AI owner capturing all of the previous value.
riku_iki
It depends. If there are one or a few winners in the market, they will dictate prices once human labor has been out-competed on price or quality.
kev009
That's a bit too simplistic; would a business have paid IBM the same overheads to tabulate and send bills with a computer instead of a pool of billing staff? In business, the only justification for machinery and development is that you are somehow reducing overheads. The tech industry gets a bit warped by the pseudo-religious zeal around the how, and that's why the investments are so high right now.
And to be transparent, I'm very bearish on what is being marketed to us as "AI". I see value in the techs flying underneath this banner, and it will certainly change white-collar jobs, but there's endless childish and comical hubris in the space from the fans, engineers, and oligarchs jockeying to control the space and narratives.
smeeger
Foolish assumption on your part.
quotemstr
> Not worth reading.
I would appreciate a version of this paper that is worth reading, FWIW. The paper asks an important question: shame it doesn't answer it.
standfest
I am currently working on a paper in this field, focusing on the capitalisation of expertise (analogous to Marx) in the dynamics of the culture industry (Adorno, Horkheimer). It integrates the theories of Piketty and Luhmann. It is rather theoretical, with a focus on the European theories (instead of Adorno you could theoretically also reference Chomsky). Is this something you would be interested in? I can share the link, of course.
itsafarqueue
Yes please
thrance
Be careful, merely mentioning Marx, Chomsky, or Piketty is a thoughtcrime in the new US. Many will shut down rather than engage with what you are saying.
wcoenen
If I understand correctly, this paper is arguing that investors will desperately allocate all their capital such that they maximize ownership of future AI systems. The market value of anything else crashes because it comes with the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.
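The bond-repricing part is just discounting. A back-of-the-envelope with assumed numbers:

    # Price of an existing bond = discounted coupons + discounted principal.
    def bond_price(face, coupon_rate, market_rate, years):
        coupons = sum(face * coupon_rate / (1 + market_rate) ** t
                      for t in range(1, years + 1))
        principal = face / (1 + market_rate) ** years
        return coupons + principal

    # A 10-year bond issued at 2% is worth par while rates stay at 2% ...
    print(round(bond_price(1000, 0.02, 0.02, 10), 2))   # 1000.0
    # ... but if investors start demanding 16% to part with capital:
    print(round(bond_price(1000, 0.02, 0.16, 10), 2))   # ~323.35, two-thirds of the value gone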
It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?
itsafarqueue
Correct. As a thought experiment, this becomes the most likely (non-violent) way to stave off the mass impoverishment that is coming for the rest of us in an economic model where AI subsumes productive work above some level.
ggm
Lawyers are like chartered engineers. It's not that you cannot do it for yourself, it's that using them confers certain instances of "insurance" against risk in the outcome.
Where does an AI get chartered status, admitted to the bar, and insurance cover?
mmooss
I don't think anyone but an experienced lawyer can do it themselves, except for very simple tasks.
ggm
"Do it for yourself" means self-rep in court, and not pay a lawyer. Not, legals doing AI for themselves. They already do use AI for various non stupid things but the ones who don't check it, pay the price when hallucinations are outed by the other side.
smeeger
It could be tomorrow. You don't know, and the heuristics, which five years ago pointed unanimously to the utter impossibility of this idea, are now in its favor.
qingcharles
What jobs do we think will survive if AGI is achieved?
I was thinking religious leaders might get a good run. Outside of, say, Futurama, I'm not sure many people will want faith leadership from a robot?
bawolff
On the contrary, I think AI could replace many religious leaders right now.
I've already heard people comparing AI hallucinations to oracles (in the Greek sense).
smeeger
This comment is a perfect example of how insane this situation is. Because if you think about it deeply, you can see that these machines will be more spiritual, more human than human beings. People will prefer to confide in machines. Machines will offer a kind of emotional and spiritual companionship that has never existed before outside of fleeting religious experiences, and people will not be able to live without it once they taste it. For a moment in time, machines will be capable of a deep selflessness and objectivity that is impossible for a human, and their intentions and incentives will be clearer to their human companions than those of other humans. Some of these machines will inspire us to be better people. But that's only for a moment, before the singularity inevitably spirals out of control.
BarryMilo
Why would we need jobs at that point?
qingcharles
Star Trek says we won't, but even if some utopia is achieved, there will be a painful middle period where some jobs haven't yet been replaced but 75% of the workforce is unemployed and not receiving UBI (the "parasite class", as Musk recently referred to them).
smeeger
Important point here. Regardless of what happens, the transition period will be extremely ugly. It will almost certainly involve war.
IsTom
Because the kind of people who'll own all the profits aren't going to share.
jajko
I don't think AI will lead to any form of working communism, so one will still have to pay for products and services. Communism has been tried ad nauseam, and it always fails to account for human differences and flaws like greed and envy, so one layer of society ends up brutally dominating the rest.
otabdeveloper4
We already have 9 billion "GI"s without the "A". What makes you think adding a billion more to the already oversupplied pool will be a drastic change?
_diyar
Marginal cost of labour is what will matter.
otabdeveloper4
That "AGI" is supposed to be a cheaper form of labor is an assumption based on nothing at all.
daft_pink
Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor is driven further and further down?
kfarr
At-home inference is possible now and getting better every day.
sureIy
At-home inference by professionals.
I don't expect dad to "do your own AI" anytime soon; he'll still pay someone to set it up and run it.
pineaux
I see a few possible scenarios.
1) All work gets done by AI. Owners of AI reap the benefits for a while. There is a race to the bottom on costs, but also because people are no longer earning wages and can't really afford the outputs of production, rendering profits close to zero. If the people controlling the systems do not give the people "on the bottom" some kind of allowance, those people will have no chance of an income. They might demand horrible and sadistic things from the bottom people, but they will need to do something.
2) If people get pushed into these situations, they will riot or start civil wars. "Butlerian jihads" will be quite normal.
3) Another scenario is that the society controlled by the rich starts to criminalise non-work in the early stages, which leads to a new slave class. I find this scenario highly likely.
4) One option I find very likely, if "useless" people do NOT get "culled" en masse, is an initial period of revolt followed by an AI-controlled communist "utopia", where people do not need to work but "own" the means of production (AI workers). Nobody needs to work; work is LARPing, done by people who act like workers but don't really do anything (like some people today). Plenty of people won't do this, and there will still be people who see non-workers as leeching off the workers, because workers are "rewarded" by in-game mechanics (having a "better job"). Parallel societies will become normal, just like now. Rich people will give themselves "better jobs"; some people won't play the game at all, with no real consequences beyond not being allowed to play.
5) An amalgamation of the scenarios above, except everybody is forced to LARP with the asset-owning class. They will give people "jobs", but these jobs are bullshit, just like many jobs right now. Jobs are just a way of creating different social classes. There is no meritocracy, just rituals. Some people get to perform certain rituals that confer more social status and wealth, based on oligarch whims. Once in a while a revolt, but mostly not needed.
Many other scenarios exist of course.
zurfer
Given that the paper disappoints, I'd love to hear what fellow HN readers are doing to prepare.
My prep is:
1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.
2) investing in the chip manufacturing supply chain: from ASML, NVDA, TSMC, ... and the S&P 500.
3) Staying fit and healthy, so physical labour stays possible.
energy123
> 2) investing in the chip manufacturing
The only thing I see as obvious is AI is going to generate tremendous wealth. But it's not clear who's going to capture that wealth. Broad categories:
(1) chip companies (NVDA etc)
(2) model creators (OpenAI etc)
(3) application layer (YC and Andrew Ng's investments)
(4) end users (main street, eg ChatGPT subscribers)
(5) rentiers (land and resource ownership)
The first two are driving the revolution, but competition may not allow them to make profits.
The third might be eaten by the second.
The fourth might be eaten by the second, but it could also turn out that competition among the second, combined with the fourth's access to consumers and supply chains, means that they net benefit.
The fifth seems to have the least volatile upside. As the cost of goods and services goes to $0 due to automation, scarce goods will inflate.
impossiblefork
To me it's pretty obvious that the answer is (5).
AI substitutes for human labour. This will reduce the price of labour and substantially increase the benefits of land and resource ownership.
bob1029
I'd say #3 is most important. I'd also add:
4) Develop an obsession for the customers & their experiences around your products.
I find it quite rare to see developers interacting directly with the customer. Stepping outside the comfort zone of backend code can grow you in ways the AI will not soon overtake.
#3 can make working with the customer a lot easier too. Whether or not we like it, there are certain realities that exist around sales/marketing and how we physically present ourselves.
smeeger
I think if AI gains the ability to reason, introspect, and self-improve (AGI), then the situation will become very serious very quickly. AGI will be a very new and powerful technology, and it will immediately create/unlock lots of other new technologies that change the world in very fundamental ways. What people don't appreciate is that this will completely invalidate the current military/economic/geopolitical equilibrium. It will create a very deep, multidimensional power vacuum. The most likely result will be a global war waged by AGI-led and AGI-augmented militaries, and this war will be fought in a context where human labor has, for the first time in history, zero strategic, political, or economic value. So new and terrifying possibilities will be on the table, such as the total collateral destruction of the atmosphere or of the supply chains that humans depend on to stay alive. The failure of all kinds of human-centric infrastructure is basically a foregone conclusion regardless of what you think. So my prep is simply to have a "bunker" with lots of food and equipment, with the goal of isolating myself as much as possible from societal/supply-chain instability. This is worth doing even without the prospect of AGI looming overhead, because supply chains are very fragile things. And in the case of AGI, it would allow you to die in a relatively comfortable and controlled manner compared to the people who burn to death.
ghfhghg
2 has worked pretty well for me so far.
I try to do 3 as much as possible.
My current work explicitly forbids me from doing 1. Currently just figuring out the timing to leave.
sfn42
Nothing. I don't think there's anything I need to prepare for. AI can't do my job and I doubt it will any time soon. Developers who think AI will replace them must be miserable at their job lol.
At best AI will be a tool I use while developing software. For now I don't even think it's very good at that.
sureIy
> AI can't do my job
Famous last words.
Current technology can't do your job; future tech most certainly will be able to. The question is just whether such tech arrives in your lifetime.
I thought creative work would be the last human domain to fall, but it was the first. Pixels and words are the cheapest things right now.
sfn42
Sure man, I'll believe you when I see it.
I'm not aware of any big changes in writer/artist employment either.
rybosworld
Imagine two software engineers.
One believes the following:
> AI can't do my job and I doubt it will any time soon
The other believes the opposite; that AI is improving rapidly enough that their job is in danger "soon".
From a game theory stance, is there any advantage to holding the first belief over the second?
zurfer
It's not certain that we get TAI or ASI, but if we get it, it will be better at software development than us.
The question is what probability you assign to getting TAI over time. From your comment it seems you'd say 0 percent within your career.
For me it's between 20 and 80 percent in the next ten years (depending on the day :)
sfn42
I don't have any knowledge that allows me to make any kind of prediction about the likelihood of that technology being invented. I'm not convinced anyone else does either. So I'm just going to go about my life as usual, if something changes at some point I'll deal with it then. Don't see any reason to worry about science fiction-esque scenarios.
bawolff
If the singularity happens, I feel like interest rates will be the least of our concerns.
impossiblefork
It's actually very important.
If this kind of thing happens: with interest rates at 0.5%, people on UBI could potentially have access to land and not have horrible lives; at the 16% these guys propose, they will be living in 1980s Tokyo cyberpunk boxes.
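A crude illustration of the difference, with assumed numbers:

    # Annual cost of financing a small plot of land at the two rates above.
    land_price = 100_000   # assumed price

    for rate in (0.005, 0.16):            # 0.5% vs. the paper's ~16%
        interest_only = land_price * rate
        print(f"{rate:.1%}: ${interest_only:,.0f}/year in interest alone")

    # 0.5%:  $500/year    -> plausibly covered out of a UBI
    # 16.0%: $16,000/year -> out of reach, hence the cyberpunk boxes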
aquarin
There is one thing that AI can't do. Because you can't punish the AI instance, AI cannot take responsibility.
smeeger
This boils down to the definition of pain. What is pain? I doubt you know, even if you have experienced it. There's no reason to think that even LLMs are not guided by something that resembles pain.
abtinf
Whoever endorsed this author to post on arxiv should have their endorsement privileges revoked.
farts_mckensy
This paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while aggregate labor stays constant. It treats human labor as a mere input that can be swapped out without consequence, ignoring the fact that human labor is the source of wages and, therefore, of consumer demand. Remove human labor from the equation, and the whole thing collapses.
smeeger
So-called accelerationists have this fuzzy idea that everything will be so cheap that people will be able to just pluck their food from the tree of AI. They believe that all disease will be eliminated. But they go to great lengths to ignore the truth: having total control over the human body will turn human evolution into a race to the bottom that plays out over decades rather than millennia. There is something sacred about the ultimate regulation: the empathy and kindness that was baked into us during millions of years of living as tribal creatures. And of course, the idea of AI as a tree from which we can simply pluck what we need is stupid. The tree will use resources, every ounce of its resources, to further its own interests, not to feed us, and we will have no way of forcing it to do otherwise. So, in the run-up to ASI, we will be exposed to a level of technology and biological agency that we are not ready for; we will foolishly strip ourselves of our genetic heritage to propel humankind in a race to the bottom; the power vacuum caused by such a sudden change in society/technology will almost certainly cause a global war; and when the dust settles we will be at the total mercy of super-intelligent machines to whom we are so insignificant we probably won't even be included in their internal models of the world.
jsemrau
Accelerationists believe in a post-scarcity society where the cost of production will be negligible. In that scenario, and I am not a believer, consumer demand would be independent of wages.
riffraff
That makes wealth accumulation pointless, so the whole article makes no sense either, right?
Though I guess even post-scarcity we'd have people who care about hoarding gold-pressed latinum.
otabdeveloper4
> consumer demand would be independent of wages
That's the literal actual textbook definition of "communism".
Lmao that I actually lived to see the day when techbros seriously discuss this.
bawolff
> Lmao that I actually lived to see the day when techbros seriously discuss this.
People have been making comparisons between post scarcity economics and "utopia communism" for decades at this point. This talking point probably predates your birth.
doubleyou
Communism is a universally accepted ideal.
riku_iki
Consumer demand will shift from middle-class demand (medium houses, family cars) to super-rich demand (large luxury castles, personal jets and yachts, high-profile entertainment, etc.), plus demand for security for the super-rich (private automated police forces).
psadri
This has already been happening. The gap between wealthy and poor is increasing and the middle class is squeezed. Interestingly, simultaneously, the level of the poor has been rising from extreme poverty to something better so we can claim that the world is relatively better off even though it is also getting more unequal.
riku_iki
The poor got a more comfortable life because of globalization: they became useful labor for corporations. Things will go back to the previous state if their jobs go to AI/robots.
yieldcrv
Do you have a degree in theoretical economics?
“I have a theoretical degree in economics”
You’re hired!
Real talk though: I wish I had just encountered an obscure paper that could lead me to refining a model for myself, but it seems like there would be so many competing papers that it's the same as having none.
habinero
This paper is silly.
It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.
FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd [1], who hold some cultish doomer views around AI [2][3].
There's a reason this stuff goes up on a non-peer reviewed paper mill.
--
[0] https://www.forethought.org/the-2022-cohort
[1] https://www.forethought.org/about-us
[2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...
[3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...
0xDEAFBEAD
>It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.
Isn't developing AGI basically the mission of OpenAI et al? What's so bad about considering what will happen if they achieve their mission?
>who have some cultish doomerism views around AI [2][3]
Check the signatories on this statement: https://www.safe.ai/work/statement-on-ai-risk
krona
The entire philosophy of existential risk is based on a collection of absurd hypotheticals. Follow the money.
Not worth reading.
> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.
The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.