Strategic Wealth Accumulation Under Transformative AI Expectations
153 comments · February 22, 2025 · wcoenen
DennisP
It seems more general than that. Right now returns go partly to capital, partly to labor. With "transformative AI" the returns go almost entirely to capital. This is true whether it's mostly from labor shrinking or total output increasing.
Since most returns will go to capital, we can expect returns on capital to increase.
harshalizee
How does that even work? If labor has no income to spend on goods and services, where is the return on capital coming from? Doesn't this halt the velocity of money, thereby making it useless? Can someone smarter explain this to me?
hollerith
>If labor has no income to spend on goods and services, where is the return on capital coming from?
Some goods and services are sold to another business, not a consumer. Right now, these "B2B" sales represent about 30% of sales volume in the economy, but there is probably no impediment to that number rising to 90%.
powerapple
When there is no labor, there is no money, and no capital. AI does not work with capitalism; AI fundamentally is communism XD
What do you mean by having a large group of robots working for you? To produce products? To trade for what?
asdff
It seems unlikely because it isn't even rooted in precedent. If this were the case, why didn't the world divest from everything else and invest exclusively in petroleum-related assets after 1900? The reality is that diversification has advantages for an investor. By the same logic one might ask why anyone buys any stock that doesn't perform as well as NVDA today: because past performance doesn't guarantee future returns, and there is sense in diversification.
itsafarqueue
Correct. As a thought experiment, this becomes the most likely (non-violent) way to stave off the mass impoverishment that is coming for the rest of us in an economic model where AI subsumes productive work above some level.
throwawayqqq11
Well, I really don't want to be the dystopian guy anymore, but doesn't this political correction require political representation of such an idea? Looking at the past, cybernetic socialism appears very unlikely to me.
ggm
Lawyers are like chartered engineers. It's not that you cannot do it for yourself; it's that using them confers a kind of "insurance" against risk in the outcome.
Where does an AI get chartered status, admitted to the bar, and insurance cover?
mmooss
I don't think anyone who is an experienced lawyer can do it themselves, except very simple tasks.
ggm
"Do it for yourself" means self-rep in court, and not pay a lawyer. Not, legals doing AI for themselves. They already do use AI for various non stupid things but the ones who don't check it, pay the price when hallucinations are outed by the other side.
tyre
Lawyers are the last people who would represent themselves. They know how dumb that is.
mmooss
Oops - I meant 'not an experienced lawyer'. I've gotta proofread.
smeeger
it could be tomorrow. you don't know, and the heuristics, which five years ago pointed unanimously to the utter impossibility of this idea, are now in favor of it.
whatever1
OK, let's play out this scenario. Why was this not the case when the internet was in its infancy? People kept pumping money into young and failing tech companies; they were not hoarding in the expectation that the internet would mature and the marginal cost of production for internet companies would go to zero.
WorkerBee28474
Not worth reading.
> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.
The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.
addicted
Your criticism is completely pointless.
I’m not sure what your expectation is, but even your claim about the assumption the paper makes is incorrect.
For one thing, the paper assumes that the amount that will be transferred from the human lawyer to the AI lawyer would be $500 + the productivity gains brought by AI, so more than 100%.
But that is irrelevant to the actual paper. You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.
Because the actual nature of the future is irrelevant to the question the paper is answering.
The question the paper is answering is what impact such expectations of the future would have on today’s economy (limited to modeling the interest rate). Such a future need not arrive or even be possible as long as there is an expectation it may happen.
And future papers can model different variations on those expectations (so, for example, some may model that 20% of labor in the future will still be human, etc).
The important point, as far as the paper is concerned, is that the expectation that AI will replace human labor, with some percentage of the wealth that was going to human labor now accruing to the owner of the AI, will lead to significant changes in current interest rates.
This is extremely useful and valuable information to model.
mechagodzilla
The $500 going to the "AI Owner" instead of labor (i.e. the human lawyer) is the productivity gain though, right? And if that was such a productivity gain (i.e. the marginal cost was basically 0 to the AI owner, instead of, say, $499 in electricity and hardware), the usual outcome is that the cost for such a product/service basically gets driven to 0, and the benefit from productivity actually gets distributed to the clients that would have paid the lawyer (who suddenly get much cheaper legal services), rather than the owner of the 'AI lawyer.'
We seem pretty likely to be headed towards a future where AI-provided services have almost no value/pricing power, and just become super low margin businesses. Look at all of the nearly-identical 'frontier' LLMs right now, for a great example.
larodi
Indeed, there's a fair chance AI only amplifies certain sectors' wages, but fully automated work will not earn any magic margin, no more than, say, smart trading once too many people focus there.
visarga
> You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.
Do you think that in 5 or 10 years we will be doing the same things we do today, just with AI? Every capability increase or cost reduction stimulates demand. AI is no different: it will stimulate both demand and competition. And since everyone has AI, and AIs are not much different from one another, the differentiating factor remains the humans. Even if we solve all our current problems with AI, there is no reason to stop there: we could reduce poverty and pollution, fight global warming, conquer space. The application space is unbounded. Take electricity or the internet, for example, and think about how they expanded the scope of work. Programming has been automating itself for 60 years, with each new language, library, or open source project, and yet we have great jobs in the field.
No matter how much we have, we want more. Our capacity for desiring progress outpaces AI's capability to provide it.
addicted
Again, whether I think this or not is irrelevant to the question the paper is tackling.
The question the paper is tackling is how human behavior today (limiting this to modeling interest rates) would change if people generally expected money to transfer from labor to AI.
A future where this happens isn't necessary for this paper to be valuable. Heck, even if people never come to believe it will actually happen, that only makes the paper less valuable, not worthless.
As an analogy, let’s say someone wrote a paper on how humanity could respond to a catastrophic hit by a large meteor.
This paper would be valuable even if the earth is never hit by a meteor while humans are still here. Further, it would be additionally valuable because the insights gained here could then be applied to other, less catastrophic scenarios, such as a volcano eruption that destroys a single city.
pessimizer
> The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.
It's an absurd assumption made by AI investors everywhere. They can't handle a world where everyone already has an AI lawyer at home that they trust, that they have because they once paid $100 for it at a kiosk in the mall or pirated it. The real future is an AI lawyer on your keychain and an extreme devaluation of the skill of knowing the law and making legal arguments.
Instead, we're going to have a weirder world where you show up to court and the court already has a list of your best legal arguments that they generated completely independent of you, and they largely match the list of arguments that your own AI advisor app gave you. They'll send you messages regarding your best next steps, and if your own device agrees, all you'll have to do is reply 'Y.'
For simple document preparation, I'm pretty sure that your phone will be able to handle it, and AI at the point of submission would be able to give you helpful suggestions if the documents were inadequate.
LLMs can almost do things of this degree of difficulty reasonably well now. Where will they be (or their successors be) in 10 years? Why do we think they will be as expensive as lawyers, who you have to send to difficult schools for a long time, feed, and flatter?
tim333
I agree that quote seems wrong. When tech reduces the cost of providing a service, the price of the service to consumers is generally driven down correspondingly by competition rather than the service provider getting rich.
The whole "AI will cause interest rates to shoot up" thing seems a bit mad.
geysersam
> zero sum nature of labor automation
Labor automation is not zero sum. This statement alone makes me sceptical of the conclusions in the article.
With sufficiently advanced AI we might not have to do any work. That would be fantastic and extraordinarily valuable. How we allocate the value produced by the automation is a separate question. Our current system would probably not be able to allocate the value produced by such automation efficiently.
asdff
What, like how streaming services were supposed to save you money on cable, and now everyone's subscriptions add up to more than cable? The incentives mean that if there is money on the table to be taken, it will be taken. If they are paying lawyers $500 an hour, there is money for that for an AI, especially if the company is claiming its AI is the best lawyer ever.
cgcrob
They also forget the economic model where you have to pay $5000 for a real lawyer after the fact to undo the mess you got yourself into by trusting the output of the AI in the first place, which made a nuanced mistake that the defending "meat" lawyer picked up in 30 seconds flat.
The proponents of AI systems mostly seem to misunderstand what you're really paying for. It's not writing letters.
jjmarr
https://www.stimmel-law.com/en/articles/story-4-preprinted-f...
Love this story so much I just posted it. Although it's from an era in which you'd buy CDs and books containing contracts, it's still relevant with "AI".
> “No lawyer writes a clause who is not prepared to go to court and defend it. No lawyer writes words and lets others do the fighting for what they mean and how they must be interpreted. We find that forces the attorneys to be very, very, very careful in verbiage and drafting. It makes them very serious and very good. You cook it, you eat it. You draft it, you defend it.”
bberenberg
This is not true in my experience. We had our generic contract attorney screw up, and then our litigation attorney scolded me for accepting, and him for providing, advice on litigation matters where he wasn't an expert.
Lawyers are humans. They make the same mistakes as other humans. Quality of work varies with skills, education, and whether they had a coffee that day.
pizza
This almost surely took place somewhere in the past week alone, just with a lawyer being the mediating human face.
quotemstr
> Not worth reading.
I would appreciate a version of this paper that is worth reading, FWIW. The paper asks an important question: shame it doesn't answer it.
standfest
I am currently working on a paper in this field, focusing on the capitalisation of expertise (analogous to Marx) in the dynamics of the culture industry (Adorno, Horkheimer). It integrates the theories of Piketty and Luhmann. It is rather theoretical, with a focus on European theory (instead of Adorno you could theoretically also reference Chomsky). Is this something you would be interested in? I can share the link, of course.
thrance
Be careful, merely mentioning Marx, Chomsky or Piketty is a thoughtcrime in the new US. Many will shut themselves off rather than engage with what you are saying.
itsafarqueue
Yes please
qingcharles
What jobs do we think will survive if AGI is achieved?
I was thinking religious leaders might get a good run. Outside of say, Futurama, I'm not sure many people will want faith-leadership from a robot?
bawolff
On the contrary, I think AI could replace many religious leaders right now.
I've already heard people comparing AI hallucinations to oracles (in the Greek sense).
etiam
To the extent that's just a matter of seeming the most compelling, I think they could blow humans out of the water. Add rich reinforcement feedback on what's the most addictive communication and what's superficially experienced as the most profound, and present-day large models could probably be a contender. A good robot body today is probably not far from being competitive as representation, and some holograms might well already be better in some ways.
To the extent it requires actual faith it's presently a complete joke, of course, and I expect it will remain so for a long time. But I'd say the quality bar for congregation members is due for a rise.
bad_haircut72
I think Futurama got AGI exactly right: we will end up living alongside robotic AIs that are just as cuckoo as us.
smeeger
this comment is a perfect example of how insane this situation is… because if you think about it deeply, you understand that these machines will be more spiritual, more human than human beings. people will prefer to confide in machines. they will offer a kind of emotional and spiritual companionship that has never existed before outside of fleeting religious experiences, and people will not be able to live without it once they taste it. for a moment in time, machines will be capable of a deep selflessness and objectivity that is impossible for a human to have. and their intentions and incentives will be clearer to their human companions than those of other humans. some of these machines will inspire us to be better people. but that's only for a moment… before the singularity inevitably spirals out of control.
otabdeveloper4
We already have 9 billion "GI"'s without the "A". What makes you think adding a billion more to the already oversupplied pool will be a drastic change?
_diyar
Marginal cost of labour is what will matter.
otabdeveloper4
That "AGI" is supposed to be a cheaper form of labor is an assumption based on nothing at all.
BarryMilo
Why would we need jobs at that point?
qingcharles
Star Trek says we won't, but even if some utopia is achieved there will be a painful middle-time where there are jobs that haven't been replaced, but 75% of the workforce is unemployed and not receiving UBI. (the "parasite class" as Musk recently referred to them)
smeeger
important point here. regardless of what happens, the transition period will be extremely ugly. it will almost certainly involve war.
IsTom
Because the kind of people who'll own all the profits aren't going to share.
jajko
I don't think AI will lead to any form of working communism, so one will still have to pay for products and services. It has been tried ad nauseam, and it always fails to account for human differences and flaws like greed and envy, so one layer of society ends up brutally dominating the rest.
bawolff
If the singularity happens, I feel like interest rates will be the least of our concerns.
impossiblefork
It's actually very important.
If this kind of thing happens and interest rates are 0.5%, then people on UBI could potentially have access to land and not have horrible lives; if rates are 16%, as these guys propose, people will be living in 1980s-Tokyo cyberpunk boxes.
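A back-of-the-envelope sketch of the difference (all numbers here are invented for illustration): the annual payment on a 30-year amortizing loan for a $200k plot of land at those two rates.

    # Rough illustration with invented numbers: annual payment on a 30-year
    # amortizing loan for a $200k plot of land at the two rates above.
    def annual_payment(principal: float, rate: float, years: int) -> float:
        # Standard amortizing-loan (annuity) payment formula.
        if rate == 0:
            return principal / years
        return principal * rate / (1 - (1 + rate) ** -years)

    for rate in (0.005, 0.16):
        print(f"rate {rate:5.1%}: ${annual_payment(200_000, rate, 30):,.0f}/year")

    # rate  0.5%: $7,196/year  -> plausibly within reach of a modest UBI
    # rate 16.0%: $32,377/year -> cyberpunk boxes

At 0.5% the carrying cost of land is roughly 4.5x lower than at 16%, which is the whole difference between the two futures described above.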
farts_mckensy
This paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while keeping aggregate labor constant. It treats human labor as a mere input that can be swapped out without consequence, which ignores the fact that human labor is the source of wages and, therefore, of consumer demand. Remove human labor from the equation, and the whole thing collapses.
smeeger
so-called accelerationists have this fuzzy idea that everything will be so cheap that people will be able to just pluck their food from the tree of AI. they believe that all disease will be eliminated. but they go to great lengths to ignore the truth. the truth is that having total control over the human body will turn human evolution into a race to the bottom that plays out over decades rather than millennia. there is something sacred about the ultimate regulation: the empathy and kindness that was baked into us during millions of years of living as tribal creatures. and of course, the idea of AI as a tree from which we can simply pluck what we need… is stupid. the tree will use its resources, every ounce of them, to further its own interests, not to feed us, and we will have no way of forcing it to do otherwise. so, in the run-up to ASI, we will be exposed to a level of technology and biological agency that we are not ready for; we will foolishly strip ourselves of our genetic heritage in order to propel humankind in a race to the bottom; the power vacuum caused by such a sudden change in society and technology will almost certainly cause a global war; and when the dust settles we will be at the total mercy of super-intelligent machines to whom we are so insignificant we probably won't even be included in their internal models of the world.
farts_mckensy
You are projecting your own neurosis onto AI. You assume that because you would be selfish if you were a superintelligent being, an ASI system would act the same way.
achierius
I don't appreciate your condescension towards OP.
This is mainstream AI safety theory -- the term is "instrumental convergence". No matter what goal an optimizing system has, it tends to optimize for its own survival: after all, if it's an optimizer for <goal>, it wants to optimize for <goal>, so destroying it (or turning it off) will reduce the likelihood of achieving <goal>.
Unless that goal happens to be incredibly fine-tuned to our very complex human desires, we're not going to be happy when it goes off to do its thing.
The few exceptions are ones where you have the thing optimize for its own destruction, but those are rather less useful.
smeeger
it is a neurosis because a healthy human being will see the world in a pro-social way. a normal way. but this sometimes obscures the truth. the truth is that there will be many benevolent AIs… there will be every kind of AI imaginable. but very quickly the AIs that are cunning, brutal, and self-interested will capture all the resources and power and become the image of this new species… saying that AIs will be benevolent or neutral is as naive as saying that the Cambrian explosion couldn't result in animals eating each other because… that just sounds so neurotic. in reality it is an inevitability.
jsemrau
Accelerationists believe in a post-scarcity society where the cost of production will be negligible. In that scenario, and I am not a believer, consumer demand would be independent of wages.
riffraff
That makes wealth accumulation pointless, so the whole article makes no sense either, right?
Though I guess even post-scarcity we'd have people who care about hoarding gold-pressed latinum.
farts_mckensy
In that scenario, wages and money in general would be obsolete.
otabdeveloper4
> consumer demand would be independent of wages
That's the literal actual textbook definition of "communism".
Lmao that I actually lived to see the day when techbros seriously discuss this.
bawolff
> Lmao that I actually lived to see the day when techbros seriously discuss this.
People have been making comparisons between post scarcity economics and "utopia communism" for decades at this point. This talking point probably predates your birth.
doubleyou
communism is a universally accepted ideal
farts_mckensy
That is not the "textbook definition" of communism. You have no idea what you're talking about.
riku_iki
Consumer demand will shift from middle-class demand (medium houses, family cars) to super-rich demand (large luxury castles, personal jets and yachts, high-profile entertainment, etc.), plus demand for providing security to the super-rich (private automated police forces).
psadri
This has already been happening. The gap between wealthy and poor is increasing and the middle class is being squeezed. Interestingly, at the same time, the lot of the poor has been improving from extreme poverty to something better, so we can claim the world is relatively better off even as it grows more unequal.
riku_iki
The poor got a more comfortable life because of globalization: they became useful labor for corporations. Things will go back to the previous state if their jobs go to AI/robots.
farts_mckensy
I am genuinely mystified that you think this is an adequate response to my basic point. The economy cannot be sustained this way. This scenario would almost immediately lead to a collapse.
riku_iki
Why do you think it will lead to collapse, exactly?
yieldcrv
Do you have a degree in theoretical economics?
“I have a theoretical degree in economics”
You’re hired!
Real talk though: I wish I had just encountered an obscure paper that could lead me to refining a model for myself, but it seems like there would be so many competing papers that it's the same as having none.
daft_pink
Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor is further and further reduced?
kfarr
At-home inference is possible now and getting better every day.
sureIy
At home inference by professionals.
I don't expect dad to Do Your Own AI anytime soon; he'll still pay someone to set it up and run it.
pineaux
I see a few possible scenarios.
1) All work gets done by AI. Owners of AI reap the benefits for a while. There is a race to the bottom on costs, but also because people are no longer earning wages and can no longer afford the outputs of production, rendering profits close to zero. If the people controlling the systems do not give the people "on the bottom" some kind of allowance, those people will have no chance of income. They might demand horrible and sadistic things from the bottom people, but they will need to do something.
2) If people get pushed into these situations, they will riot or start civil wars. "Butlerian jihads" will be quite normal.
3) Another scenario is that the society controlled by the rich starts to criminalise non-work in the early stages, which leads to a new slave class. I find this scenario highly likely.
4) One option I find very likely, if "useless" people do NOT get "culled" en masse, is an initial period of revolt followed by an AI-controlled communist "utopia", where people do not need to work but "own" the means of production (AI workers). Nobody needs to work; work is LARPing, done by people who act like workers but don't really do anything (as some people do today). Not everyone plays along: there are still people who see non-workers as leeching off the workers, because workers are "rewarded" by in-game mechanics (having a "better job"). Parallel societies become normal, just like now. Rich people give themselves "better jobs"; some people don't play the game, and there are no real consequences beyond not being allowed to play.
5) An amalgamation of the scenarios above, but here everybody is forced to LARP with the asset-owning class. They give people "jobs", but these jobs are bullshit, just like many jobs right now. Jobs are just a way of creating different social classes. There is no meritocracy, just rituals. Some people get to perform rituals that confer more social status and wealth, based on oligarch whims. Once in a while a revolt, but mostly not needed.
Many other scenarios exist, of course.
itsafarqueue
Have you written a form of this up somewhere? I would very much enjoy reading more of your work. Do you have a blog?
Der_Einzige
Or, don't… we need fewer Mark Fishers and less critical thinking in the world, and more constructive thinking.
It helps no one to explain to them just how hard the boot stomps on their face. Left-wing postmodernist intellectuals have been doing this since the 60s, and all it did was prevent any left-winger from doing anything “revolutionary”.
Don't waste your time reading “theory”. Look at what happened to Mark Fisher.
zurfer
Given that the paper disappoints, I'd love to hear what fellow HN readers are doing to prepare.
My prep is:
1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.
2) investing in the chip manufacturing supply chain (ASML, NVDA, TSMC, ...) and the S&P 500.
3) Staying fit and healthy, so physical labour stays possible.
energy123
> 2) investing in the chip manufacturing
The only thing I see as obvious is that AI is going to generate tremendous wealth. But it's not clear who's going to capture that wealth. Broad categories:
(1) chip companies (NVDA etc)
(2) model creators (OpenAI etc)
(3) application layer (YC and Andrew Ng's investments)
(4) end users (main street, eg ChatGPT subscribers)
(5) rentiers (land and resource ownership)
The first two are driving the revolution, but competition may not allow them to make profits.
The third might be eaten by the second.
The fourth might be eaten by the second, but it could also turn out that competition among the second, and the fourth's access to consumers and supply chains, mean that they net benefit.
The fifth seems to have the least volatile upside. As the cost of goods and services goes to $0 due to automation, scarce goods will inflate.
impossiblefork
To me it's pretty obvious that the answer is (5).
It substitutes for human labour. This will reduce labour's price and substantially increase the benefits of land and resource ownership.
bob1029
I'd say #3 is most important. I'd also add:
4) Develop an obsession for the customers & their experiences around your products.
I find it quite rare to see developers interacting directly with the customer. Stepping outside the comfort zone of backend code can grow you in ways the AI will not soon overtake.
#3 can make working with the customer a lot easier too. Whether or not we like it, there are certain realities that exist around sales/marketing and how we physically present ourselves.
smeeger
i think if AI gains the ability to reason, introspect and self-improve (AGI), then the situation will become very serious very quickly. AGI will be a very new and powerful technology, and it will immediately create/unlock lots of other new technologies that change the world in very fundamental ways. what people don't appreciate is that this will completely invalidate the current military/economic/geopolitical equilibrium. it will create a very deep, multidimensional power vacuum. the most likely result will be a global war waged by AGI-led and AGI-augmented militaries. and this war will be fought in a context where human labor has, for the first time in history, zero strategic, political or economic value. so new and terrifying possibilities will be on the table, such as the total collateral destruction of the atmosphere or of the supply chains that humans depend on to stay alive. the failure of all kinds of human-centric infrastructure is basically a foregone conclusion regardless of what you think. so my prep is simply to have a "bunker" with lots of food and equipment, with the goal of isolating myself as much as possible from societal and supply-chain instability. this is good because it's wise to be prepared for this kind of thing even without the prospect of AGI looming overhead, because supply chains are very fragile things. and in the case of AGI, it would allow you to die in a relatively comfortable and controlled manner compared to the people who burn to death.
sfn42
Nothing. I don't think there's anything I need to prepare for. AI can't do my job and I doubt it will any time soon. Developers who think AI will replace them must be miserable at their job lol.
At best AI will be a tool I use while developing software. For now I don't even think it's very good at that.
sureIy
> AI can't do my job
Famous last words.
Current technology can't do your job; future tech most certainly will be able to. The question is just whether such tech arrives in your lifetime.
I thought creative work was the last thing machines would take from humans, but it was the first to fall. Pixels and words are the cheapest commodities right now.
sfn42
Sure man, I'll believe you when I see it.
I'm not aware of any big changes in writer/artist employment either.
zurfer
It's not certain that we get TAI or ASI, but if we get it, it will be better at software development than us.
The question is what probability you assign to getting TAI over time. From your comment it seems you'd say 0 percent within your career.
For me it's between 20 and 80 percent in the next ten years (depending on the day :)
sfn42
I don't have any knowledge that allows me to make any kind of prediction about the likelihood of that technology being invented. I'm not convinced anyone else does either. So I'm just going to go about my life as usual, if something changes at some point I'll deal with it then. Don't see any reason to worry about science fiction-esque scenarios.
smeeger
a foolish assumption but i have my fingers crossed for you and stuck firmly up my own butt… just in case that will increase the lucky effect of it
sfn42
Yeah I'm clearly the fool here..
rybosworld
Imagine two software engineers.
One believes the following:
> AI can't do my job and I doubt it will any time soon
The other believes the opposite; that AI is improving rapidly enough that their job is in danger "soon".
From a game theory stance, is there any advantage to holding the first belief over the second?
sfn42
Yeah. The engineer who thinks their job is in danger might be less inclined to improve their skills because they don't think their skills will be useful in the future, which is essentially a self-fulfilling prophecy. Maybe they will pursue some other career or start preparing for it, which might be a complete waste of time. Similarly, non-engineers might choose a different profession entirely.
Meanwhile the engineer who isn't bothered by this bullshit prophecy goes about their day, making lots of money and becoming less replaceable every day. Maybe they learn to use these AI tools to be more efficient, which is really the only realistic endgame of AI tools anyway. You don't just fire all the devs and have some manager do the prompting. Maybe you fire some devs and keep the best ones as prompt engineers. Maybe this isn't even a management-driven process at all, maybe the developers just start using these tools of their own volition, become more productive and everyone's happy. It's not like we're running out of development work any time soon, whenever we meet a goal they set a new one. Being able to move faster doesn't necessarily mean we need fewer developers.
Setting aside hypotheticals and game theory, it's completely unrealistic to expect that software developer will suddenly stop being a job. If it happens at all, it will be a slow, gradual process. The people working as software developers today will be prime candidates for using AI tools to create software. You still need to understand what you're doing, what's possible and what isn't, etc. There is absolutely no reality where some business person just tells an AI to make a banking system and it does that perfectly without any human intervention.
ghfhghg
2 has worked pretty well for me so far.
I try to do 3 as much as possible.
My current work explicitly forbids me from doing 1. Currently just figuring out the timing to leave.
aquarin
There is one thing that AI can't do: because you can't punish an AI instance, AI cannot take responsibility.
smeeger
this boils down to the definition of pain. what is pain? i doubt you know, even if you have experienced it. there's no reason to think that even LLMs are not guided by something that resembles pain.
visarga
This paper's got it backwards. AI's benefits don't pile up with the owners; they flow to whoever's got a problem to solve and knows how to point the AI at it. Think of AI like a library: owning the books doesn't benefit you much, applying the knowledge to problems does. The big winners are the ones setting the prompts, not the ones owning the servers. AI developers? They're making cents per million tokens while users, solo or corporate, cash in on the real value: application.
Sure, the rich might hire some more people to aim the AI for them, but who's got a monopoly on problems? Nobody. Every freelancer, farmer, or startup's got their own problems to fix, and cheap AI access means they can. The paper's obsessed with wealth grabbing all the future benefits, but problems are everywhere, good luck cornering that market. Every one of us has their own problems and stands to get personalized benefits from AI.
In the age of AI, having problems is linked to receiving its benefits. Imagine, for example, that I feel one side of my face drooping and have speech difficulty, so I type my symptoms into an LLM and it tells me to quickly visit the doctor. It might save my life from a stroke. Who gets the largest benefit here?
Problems are distributed even if AI is not.
tyre
> The big winners are the ones setting the prompts, not the ones owning the servers. AI developers? They're making cents per million tokens while users, solo or corporate, cash in on the real value: application
If this were true, AWS wouldn't have pulled in well over $100bn in 2024. Nvidia wouldn't be worth $3.3tn.
The owners and builders of infra make a ton of money.
visarga
AWS makes a fraction of the money their customers make. And NVIDIA is just seeing benefits from market speculation at work. Most LLM providers are losing money right now.
If I understand correctly, this paper argues that investors will desperately allocate all their capital to maximize ownership of future AI systems. The market value of everything else crashes because it comes with the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.
It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?
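To make the "pre-existing bonds become worthless" step concrete, here is a toy bond-pricing sketch (all numbers invented for illustration): the market price of an existing bond is just its cash flows discounted at the prevailing rate, so a 2%-coupon bond collapses in value if rates explode the way the paper suggests.

    # Toy sketch with invented numbers: present value of a 30-year, $1000-face,
    # 2%-coupon bond, discounted at various prevailing market rates.
    def bond_price(face, coupon_rate, market_rate, years):
        coupons = sum(face * coupon_rate / (1 + market_rate) ** t
                      for t in range(1, years + 1))
        principal = face / (1 + market_rate) ** years
        return coupons + principal

    for r in (0.02, 0.05, 0.16):
        print(f"market rate {r:.0%}: price ${bond_price(1000, 0.02, r, 30):,.0f}")

    # market rate 2%:  price $1,000 (par)
    # market rate 5%:  price ~$539
    # market rate 16%: price ~$135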