
Three Observations

182 comments · February 9, 2025

smokel

> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. ... Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

Moore waited at least five years [1] before deriving his law. On top of that, I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.

[1] http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm...

wwwtyro

> Moore waited at least five years [1] before deriving his law.

OpenAI has been around since 2015. Even if we give them four years to ramp up, that's still five years worth of data. If you're referring to the example he gave of token cost, that could just be him pulling two points off his data set to serve as an example. I don't know that's the case, of course, but I don't see anything in his text that contradicts the point.

> I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.

How about Kurzweil's plot [1]?

[1] https://scx2.b-cdn.net/gfx/news/hires/2011/kurzweilfig.1.jpg

tim333

That Kurzweil plot is a bit ancient, up to 1998 or something

There's a better one that goes to 2023 https://www.bvp.com/assets/uploads/2024/03/Price_Computation...

The rate of progress there is more like Moore's law's 18-month doubling - though that's compute per dollar rather than Moore's transistor density.

I think 10x per year is a bit questionable - it's way out of line with the Kurzweil trend.
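As a quick sanity check on those rates, here's a back-of-the-envelope comparison (my own illustrative numbers, compounded in Python):

    # Compare the claimed 10x/12mo decline with Moore's-law-style doublings.
    # All figures are illustrative, not measured.
    def factor_per_year(factor: float, months: int) -> float:
        """Improvement factor compounded over 12 months."""
        return factor ** (12 / months)

    rates = {
        "10x/12mo claim": factor_per_year(10, 12),  # 10.0x/year
        "Moore 2x/18mo": factor_per_year(2, 18),    # ~1.59x/year
        "~2x/12mo trend": factor_per_year(2, 12),   # Kurzweil-like doubling
    }
    for name, r in rates.items():
        print(f"{name}: {r:.2f}x/year, {r ** 5:,.0f}x over 5 years")

    # The 10x/year claim implies ~100,000x over 5 years, vs ~10x for a
    # 2x/18mo law - which is why it looks so far out of line with the trend.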

bibanez

The "10x every 12 months" is pure salesman from sama. An engineer wouldn't do these extrapolations from little data in good faith.

signatoremo

He is talking about cost. Are you saying the price didn't go down 10x in the last 12 months? How much data is too little?

NewJazz

One year of data is indeed too little if you are trying to forecast one year ahead. Also, the pricing is set by OpenAI: we don't know that their actual costs decreased by that factor, only that they cut their prices.

benterix

I know this is subjective, but he is comparing GPT-4 to 4o. The new model definitely felt lighter and faster, so probably cheaper for them to serve, but at the same time it very often gave worse answers than GPT-4.

bfdm

The retail price, or the actual cost to deliver? Those are not the same thing. Cost to deliver could actually mean something. Retail pricing is approximately meaningless.

42lux

Compute costs are pretty much the same for high-VRAM cards?

avs733

Yeah that was my first thought, don’t sully the name of Gordon Moore with this.

This sounds more like an insight into how things are working at OpenAI than anything else. And I'm not sure DeepSeek and others are going to follow his nice rules.

More generally, transistors are a technical phenomenon: they are either smaller (and work) or they aren't. The thing I really don't feel enough folks appreciate about AGI is that it's a social phenomenon - not in the making of it, but in the pragmatic reality of it.

To a sufficient number of folks the current version already is AGI; I see students every day trust it more than themselves. To bosses it also might be: if it's more intelligent than your average employee, then that's sufficiently general intelligence to replace them. So far I've really tried, but beyond generating the most basic outline I have yet to find a model that helps with my work, so it's not intelligent for me.

I'm aware of the benchmarks, but they don't matter outside of places like HN. Intelligence is, and likely always will be, social before it is technical, and that makes these laws... not useful?

0xDEAFBEAD

>Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.

I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.

See the "OpenAI has a history of broken promises" section of this webpage: https://www.safetyabandoned.org/

In my view, state AGs should not allow them to complete their transition to a for-profit.


noch

> I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.

I don't see what "our" trust has to do with anything. Perhaps you're an investor in OpenAI and your trust matters to OpenAI and its plans? But for the rest of us, our trust doesn't matter. It would be like me saying, "I don't see why we should trust Saudi Aramco."

h0l0cube

> It would be like me saying, "I don't see why we should trust Saudi Aramco."

It's a completely fair response if the CEO of Saudi Aramco performatively pens an article on how to mitigate the effects of global warming while profiting from it and taking no tangible action to fix the problem.

Animats

> 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

> 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

First, if the cost is coming down so fast, why the need for "exponentially increasing investment"? One could make the same exponential growth claim for, say, the electric power industry, which had a growth period around a century ago and eventually stabilized near 5% of GDP. The "tech sector" in total is around 9% of US GDP, and relatively stable.

Second, only about half the people with college degrees in the US have jobs that need college degrees. The demand for educated people is finite, as is painfully obvious to those paying off college loans.

This screed comes across as a desperate attempt to justify OpenAI's bloated valuation.

jb_rad

First, it requires exponential investment because

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Incremental improvements in intelligence require exponentially more resources. Only once that step is achieved can costs be reduced.
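A minimal sketch of what that relationship implies (base 10 is my assumption; the post doesn't specify a base or units):

    import math

    def intelligence(resources: float) -> float:
        # Observation 1: intelligence ~ log of the resources used.
        return math.log10(resources)

    def resources_needed(target: float) -> float:
        # Inverted: each +1 of "intelligence" costs 10x the resources.
        return 10 ** target

    for level in range(1, 5):
        need = resources_needed(level)
        assert abs(intelligence(need) - level) < 1e-9  # round-trip check
        print(f"intelligence {level}: {need:>8,.0f} resource units "
              f"({need / resources_needed(level - 1):.0f}x the previous level)")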

Second, intelligence is not the same as education. Education is specialized, intelligence is general. Every modern convenience around you is the result of intelligence.

esperent

> The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Didn't DeepSeek disprove this? They trained a roughly equal model with an order of magnitude less compute.

Lerc

No - advances in efficiency were always expected (and indeed required because of this 'law').

The principle is that if DeepSeek had spent 10 or 100 times as much as they did, their model would have been a few times better.

This rule is intended to be applied on top of all of the other advances.

panarky

"cost to use" != "cost to train"

Elsewhere he says that model intelligence is determined by the log of the resources used to train it, and this relationship has been constant for many orders of magnitude.

The implication is that it takes exponentially increasing investment to achieve a linear increase in intelligence.

Exponentially increasing costs sound like a bad thing.

But then he says the linear increase in intelligence generates exponentially increasing economic benefits.

Those exponentially increasing benefits sound like they might justify the exponentially increasing costs.
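A toy model of that asymmetry, with functional forms I've made up purely for illustration (the post doesn't give any):

    # Toy model: cost grows exponentially in intelligence (observation 1
    # inverted), value grows super-exponentially (observation 3).
    def cost(i: float) -> float:
        return 10 ** i            # exponential in intelligence

    def value(i: float) -> float:
        return 10 ** (i ** 1.5)   # one possible "super-exponential" shape

    for i in [1, 2, 3, 4]:
        print(f"I={i}: cost {cost(i):.0e}, value {value(i):.0e}, "
              f"ratio {value(i) / cost(i):.0e}")

    # If the value curve really does outpace the cost curve, the ratio keeps
    # improving, and ever-larger investment looks rational.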

frotaur

OK, but according to this, linear investment generates a linear increase in economic benefits. So why the need to go exponential?

So why the need to go exponential?

choilive

The costs are coming down fast because of the investment; they would not come down independently of it. The driver behind Moore's law generalizes as the learning/experience curve. You could also argue that demand for AI grows faster than the learning rate: e.g. if AI becomes 10x more cost-efficient and demand grows 100x as a result, you would still need exponential growth in inference investment.
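A minimal sketch of that experience-curve idea (the standard Wright's-law form; the 20% learning rate and costs are invented for illustration):

    import math

    # Wright's law: unit cost falls by a fixed fraction each time
    # cumulative production doubles.
    def unit_cost(cumulative_units: float, first_unit_cost: float,
                  learning_rate: float) -> float:
        # learning_rate = fraction of cost retained per doubling
        # (0.8 means a 20% cost drop per doubling of volume).
        b = math.log2(learning_rate)
        return first_unit_cost * cumulative_units ** b

    for n in [1, 2, 4, 8, 16]:
        print(f"{n:>2} cumulative units: cost {unit_cost(n, 100.0, 0.8):6.1f}")

    # If demand grows faster than unit costs fall (the 10x -> 100x example
    # above), total spend on inference still rises as unit cost collapses.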

choilive

You could also make an argument that the bulk of college degrees are not economically useful. The demand for certain kinds of education and knowledge is certainly finite - but the demand ceiling for general intelligence seems much much higher, and appears to be limited only by the cost of hiring ever smarter people.

tunesmith

We've just gotten through a big second generation of available models, but at least for the projects I'm on, I still feel as far as ever from being able to trust the responses. I spent a few hours this weekend on a side project that involved setting up a GraphQL server with a few basic query resolvers and field resolvers, and my experience with 4o, R1, and o3-mini-high was akin to arguing with overconfident junior engineers who were confused, without realizing it, about what they thought they knew. And this was basic stuff about simple GraphQL resolvers. I did have my first experience of o3-mini-high (finally, after much arguing) answering something that R1 couldn't, though.

It's weird, because it's still wildly useful and my ideas for side projects are definitely more expansive than they used to be. And yet, I'm really far from having any fear of replacement. Almost none of the answers I'm getting are truly nailing the experience of teaching me something new, while also having perfect accuracy. (I'm on ChatGPT+, not pro.)

ilrwbwrkhv

Same. I think the tools have gotten a little better with agent mode and all that, but they still can't handle anything even slightly complicated involving an API they haven't seen before. It's as if they haven't gotten more intelligent since maybe GPT-3.5.

I quite like them as somebody who has a huge problem with procrastination at the start of a task. I immediately ask them to write something, criticize the result, and then write it myself, because whatever they come up with is so damn stupid.

cma

Have you tried putting the API in context? Gemini has a 2M-token context window, where the original 3.5 had something like 4K and wasn't great with it.

While some large APIs may not fit, few APIs it hasn't seen before are that large. For updated large APIs you can put the changelog in context, though it's more work to gather all the detailed API changes and updated documentation on your own.

sunpazed

Even though I have a reasonable intuition and understanding of how an LLM works, I am still awe-struck each and every time I use one. The fact that I have a junior developer at my convenience is a huge efficiency gain. I’ve been able to automate the rudimentary elements and focus my time on the stuff that counts.

guybedo

At first AI models/systems/agents help employees be more productive.

But I can't imagine a future where this doesn't lead to mass layoffs or hiring freezes, because these systems can replace tens or hundreds of employees; the end result is more and more unemployed people.

Sure, there was the industrial revolution, and the argument usually is: some people will lose their jobs, but many other jobs will be created. I'm not sure this argument is going to hold this time, given the magnitude of the change.

Is there any serious study of the impact of AI on society and employment, and most importantly, is there any solution to this problem?

mitthrowaway2

In terms of solutions: UBI is frequently proposed. Depending on the degree to which AI and automation obsolete human labor and increase output, UBI could become arbitrarily redistributive. Increased output would keep inflation at bay, and the reduced incentive to work would be mitigated by the fact that there would hardly need to be any incentives left for wealth creation.

Such an AGI scenario might also raise some old questions about the degree to which wealth disparity-as-an-incentive-for-success remains useful and justifiable. One wonders how Altman would receive this proposal.

kadushka

I wonder what will happen to millions of people with $3k+/mo mortgages when they’re all unemployed and UBI is only $2k/mo.

mitthrowaway2

Sounds like the mortgage market would go through a correction? Or UBI would be bumped up to $4k. Lots of possible outcomes.

lmm

They'll probably go bankrupt and have to move to flyover country. But if you've got a steady income and don't need to work then living in flyover country isn't so bad.

guybedo

Yes, UBI is the solution usually proposed, but I see a few problems.

The first problem is that UBI would be paid by governments, while it's (mega-)corporations making (disproportionate) profits thanks to AI and no longer employing humans.

Something would need to compensate for this. Maybe a new tax scheme based on the number of AI agents a company employs?

mitthrowaway2

If you ask me? A very large consumption tax would probably handle most of it, as well as some form of asset taxation (e.g. on land and voting-rights shares, in terms of their equivalent rental value).

This would basically amount to a leveling function across society: people consuming less than the mean earn (much) more in UBI than they pay in sales taxes. Wealthier people who consume more than average would pay in more than they get out. With wealth being power-law distributed, the wealthy would end up paying a lot more.

Then you just tune the UBI value to achieve any desired balance of equality vs incentive to work, while adjusting the tax take to control inflation.

The megacorp wealth flows back to the shareholders, who eventually will want to spend it on something.
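As a minimal sketch of that leveling function, with a UBI level and flat consumption-tax rate I've invented purely for illustration:

    # Hypothetical parameters, not from the comment above.
    UBI = 24_000   # annual UBI per person
    TAX = 0.30     # flat consumption tax rate

    def net_transfer(annual_consumption: float) -> float:
        """UBI received minus consumption tax paid; positive = net recipient."""
        return UBI - TAX * annual_consumption

    for spend in [20_000, 80_000, 250_000]:
        print(f"consumes ${spend:,}: net {net_transfer(spend):+,.0f}")

    # Break-even consumption is UBI / TAX = $80,000 here: below it you come
    # out ahead, above it you pay in, so big spenders fund the scheme.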

shridharxp

It would be interesting to consider where humans would find fulfillment when what they have done every day for years is rendered worthless.

JohnnyMarcone

I like to think about what I did before I had to work. Young me would never worry about finding things to do and none of those things were economically valuable.

lamename

Some would pursue less lucrative passions. Some would spiral into lazy Sunday behavior every day.

s__s

UBI is nonsensical. Give everyone 5k a month, and everyone is not 5k richer. All you’ve done is decrease the value of the dollar.

Would love to be proven wrong.

mitthrowaway2

For a reductio ad absurdum that might illustrate where your analysis falls down, consider how someone with a net worth of $0 and someone with a net worth of $100k would each be affected differently by getting a $5k handout accompanied by a 10% decrease in the purchasing power of the dollar. The end result is not the same as the status quo.
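Worked through with concrete numbers (the $5k handout and 10% purchasing-power loss are the figures from the example above):

    HANDOUT = 5_000
    INFLATION = 0.10  # dollar loses 10% of its purchasing power

    for net_worth in [0, 100_000]:
        nominal = net_worth + HANDOUT
        real = nominal * (1 - INFLATION)  # measured in pre-handout dollars
        print(f"started with ${net_worth:,}: now worth ${real:,.0f} real "
              f"({real - net_worth:+,.0f})")

    # $0 -> $4,500 real (a gain); $100,000 -> $94,500 real (a $5,500 loss).
    # The same handout plus inflation redistributes; it doesn't cancel out.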

The intent is not to make everyone richer, but to redistribute wealth in an equalizing direction. (In fact this might also make everyone slightly richer on average in real terms, because of second-order effects related to economies of scale that make it slightly easier to meet the growth in demand when wealth is distributed more evenly).

Of course, the decreasing value of the dollar is an undesired side-effect of any government spending; this is why you counterbalance it with taxes to pull money back out of the system and control price growth. These taxes don't exactly cancel the UBI if they disproportionately fall on the wealthy; this is why I like consumption taxes, which fall more heavily on people who have more money to spend.

tim333

Right now you are correct that giving everyone 5k would not produce more stuff so the dollars would devalue.

The idea however is that at the same time as giving everyone 5k, AI workers would produce more than an extra 5k/head of stuff.

As an alternative, having the AIs produce all the cars and houses we need but giving all the money to Sam Altman wouldn't make sense: no one would have the money to buy them, apart from Sam, who couldn't use them all.

lxgr

Your assumption holds in exactly one scenario: one where everybody currently earns exactly the same amount of money and has the same net worth.

Otherwise, such a UBI would indeed cause substantial redistribution, even after accounting for inflation and prices recalibrating.

nick111631

High interest rates, maybe a wealth tax, ought to do it. Cause deflation while giving UBI.

fractaled

96% of us are already given ~5k a month for 40hr/week of work.

If UBI is just printed, then sure there would be economic problems; but I think the idea is you redistribute it via taxation.


neom

Oh, there have been LOADS over the years. The old ones say this is going to happen; the new ones say this is happening; the newest ones say, oh, by the way, our measurement systems for the labour market and society during this transition period are probably not accurate, so we need new ones ASAP.

https://www.mckinsey.com/~/media/mckinsey/industries/public%...

https://www.oecd.org/en/publications/the-risk-of-automation-...

https://news.mit.edu/2019/work-future-report-technology-jobs...

https://arxiv.org/pdf/2306.12001

https://institute.global/insights/economic-prosperity/the-im...

https://ipc.mit.edu/research/work-of-the-future/

kadushka

None of these links provide any guesses about the impact of AGI on unemployment rates. By AGI I mean AI that is as smart and as capable as an intelligent and highly educated human. What will happen when in a few years companies realize that AI systems do a better job than their human employees?

awb

Probably a similar transition to the Industrial Revolution, but much faster. Laborers had to learn skills that machines weren’t better at.

Any work that is information based is probably in for a dramatic shift.

But maybe some areas are less vulnerable:

- Experiences (food, travel, accommodations, events, sports)

- Manual labor (carpenters, plumbers, roofers)

- Human connection (caregivers, therapists, teachers, coaches)

- Public service (government, police, fire, healthcare)

- Executives (CEOs, entrepreneurs)

The Industrial Revolution completely changed the world, but there are still many tasks where a human is better/faster/cheaper than a physical machine, so it didn’t replace everyone. My guess is that there will be some niche domains where humans are preferred to AGI.

neom

ALL of these links mention it; maybe you didn't read all the material. The Blair Institute piece I linked has a whole in-depth section just on it - in fact, if you Ctrl-F "unemp" you'll find half the site turns yellow. The McKinsey report, starting at page 7, is about nothing but job-market shifts for 10+ pages. And if you look at the OECD report, you'll find lots of references to it, AND the ability to find further research (Mokyr et al., 2015).

throwaway2037

"AI will replace many workers" feels eerily similar to the message of the early 2000s in the US: "Tech/Knowledge workers from low cost locations will replace many workers from high cost locations." In truth, the high cost locations continued to grow, side-by-side, with low cost locations. To be clear: I am not speculating on the why, rather only sharing what I see.

s__s

I firmly believe AI will become heavily taxed and regulated.

It would be foolish to ban it, but equally foolish to allow it to replace a large percentage of your country’s workforce. It would completely destabilize the economy and society as a whole.

AI can do a lot of great things for humanity. Putting vast amounts of people out of a job is not one of those things.

There needs to be more smart people talking about and studying what the future realistically looks like.

tim333

Taxing and regulating the AI itself wouldn't work very well, as competitors abroad, or those who skirt the regulations, would still be able to use it. You might be able to require human supervision, so that rather than AI replacing a programmer who gets laid off, the human becomes the AI's supervisor.

bathtub365

The world’s richest man has been empowered by the US government to gleefully lay off thousands of federal workers with no published plan beyond “delete government agencies” and his team is using AI as part of the execution. I’m skeptical that there are people in charge who think about anything other than consolidating their own power and lining their own pockets. I think the only AI taxes that might come into play are those that steer people towards services run by those already in power and disincentivize setting up your own AI infrastructure.

raghavtoshniwal

The world's richest man got handed this mandate by someone who was elected by millions of US citizens, and he explicitly told them he would do this. Enough people in the US want to see this happen.

And I doubt enough people would object to heavily taxing the corporations if they themselves were out of work and needed the money to survive.

dottjt

Won't there be more jobs for people maintaining and servicing the data centers? Amongst all the other AI infrastructure?

bglazer

No, data centers require very minimal staffing. Like a few dozen people to staff a data center serving hundreds of millions of users

01100011

Plenty of jobs are decades away from automation. Will a robot climb onto your roof to fix your A/C, or repair your plumbing, anytime soon? Probably not. I'm sure some enterprising startup is trying, but seriously, it's not close.

> But muh singularity!

Sure, any day now Optimus will become sentient and replace home builders... or not.

So yeah, probably fewer lawyers (yay), software techs, accountants, etc. But those dollars will flow to people taking vacations, renovating their homes, eating fancy food, etc. Who knows what the net effect will be in the long term?

raghavtoshniwal

Forget the singularity: do you think the robotics problem is genuinely so hard that it would not get solved if we devoted a significant amount of intelligence to it?

>any day now Optimus will become sentient and replace home builders.

I think you're kidding about "sentient", but it feels like they just have to get somewhat good at a very few tasks and we would be able to automate some large swath of manual labor. We don't need that many fancy tricks to get there. A lot of people are already reporting significant speed-ups in bio research; why wouldn't we see that in robotics?

sealeck

Reading between the lines, I get the feeling that OpenAI may be getting desperate if they feel the need to drive the hype like this.

raghavtoshniwal

It feels like every major lab is saying the same thing:

https://darioamodei.com/machines-of-loving-grace https://www.wsj.com/video/events/the-race-for-true-ai-at-goo...

Even folks _leaving_ OpenAI, who have no incentive to drive hype, are saying that we're very close to AGI. https://x.com/sjgadler/status/1883928200029602236

Even folks like Yoshua Bengio and Hinton are saying we're close to it. The models keep getting better at an exponential rate.

How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?

sealeck

These are all people running AI labs! They want investment, and what better way to get investment than to tell people you're going to create Terminator? The people leaving OpenAI are joining other labs – their livelihoods depend on AI companies receiving investment: "it is difficult to get a man to understand something, when his salary depends on his not understanding it".

> The models keep getting better at an exponential [sic].

We don't know if this is true. A lot of growth that appears exponential is often quadratic (https://longform.asmartbear.com/exponential-growth/) or follows a logistic function (e.g. Moore's law).

Additionally there's a LOT of benchmark gaming going on, and a lot of the benchmark solving is not down to having a process that actually solves the problems; it just turns out that the problems already kind of lie in the span of text on the internet.

caseyy

> How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?

For AGI hype, we need exactly one piece of evidence that machine AGI exists, or that we know how to build it. Without that, claiming AGI is imminent is an exaggeration - otherwise known as hype. Or maybe it's hope, but sama suggests it should be an expectation.

iqandjoke

The article is written for journalists (and investors). See the footnote.

sealeck

Totally. "AGI = $100bn of profit" lol

abetusk

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

> 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

> 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

My own editorial:

Point 1 is pretty interesting: if P != NP, then this is essentially "we can't do better than random search". So, to find progressively better solutions - improving only linearly in quality - we need exponentially more resources to find the answer. While I believe P != NP, it's interesting to see this play out in the context of learning and AI.

Point 2 is semi-well-known. I'm having trouble finding it, but there was an article a while back showing that algorithmic efficiencies in the DFT (or DCT?) were outpacing the gains that could be attributed to Moore's law alone - meaning the DFT was improving a few orders of magnitude faster than Moore's law would imply. I assume this is essentially Wright's law but for attention, in some sense, where more attention to a problem leads to better optimizations that dovetail with Moore's law.
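A rough operation-count comparison gives a feel for the kind of algorithmic gain involved (the classic naive-DFT-vs-FFT case; my example, not from the article):

    import math

    # Naive DFT is O(n^2); the FFT is O(n log n). The speedup is pure
    # mathematics, compounding on top of any hardware improvement.
    for n in [1_024, 1_048_576]:
        naive = n * n
        fft = n * math.log2(n)
        print(f"n={n:>9,}: naive {naive:.1e} ops, FFT {fft:.1e} ops, "
              f"speedup ~{naive / fft:,.0f}x")

    # At n ~ 1M the algorithmic speedup alone is ~50,000x - many Moore's-law
    # generations' worth of improvement from math rather than transistors.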

Point 3 seems like it's almost a corollary, at least in the short term. If intelligence is capturing the exponential search and it can be re-used to find further efficiency, as in point 2, you get super-exponential growth. I think Kurzweil mentioned something about this as well.

I haven't read the whole article but this jumped out at me:

> Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.

A bald-faced lie. Their mission is to capture value from developing AGI; any benefit to humanity is incidental.

fsndz

The chief hype officer is back at it again. Altman is proclaiming exponential progress when everything points to incremental progress with signs of diminishing returns. Altman thinks benefits will naturally trickle down, when everything points to corporations replacing employees with AI to boost their profit margins. I now understand why some people say sama might be wrong: https://www.lycee.ai/blog/why-sam-altman-is-wrong

bwfan123

"assume the sale" is the marketing tactic. ie, in every pitch, project that agi (whatever that means) is here, and is a persistent threat.

unfortunately, ai is not search or social. there are no network-effects here. so, get-big-fast is not going to work. and slowly the masses start waking up and asking what the hell is this good for except as a fancy auto-complete.

EA-3167

Unfortunately there is essentially a loose federation of quasi-religious "cults" that have emerged around the topic of AI. For an otherwise mostly secular group of people, the lure of reinventing Abrahamic religion in their own image was inescapable. I feel like most sensible people took the off ramp by the time Roko's Basilisk popped up, but as we saw with the Zizians that's not always the case.

So there's this thread of taking two assumptions at face value I see a lot here and elsewhere:

1. What we call "AI" now is actually some kind of AI, and the rest is just scaling up.

2. It's inevitable that AGI would conform to sci-fi tropes.

Meanwhile, we've been watching BILLIONS spent on data centers, power for data centers, water for data centers... all of that going in one end, and LLMs coming out the other.

As long as the future AI overlord requires enough power and water to run a city, and the best it can manage amounts to a fun show, I'll keep my alarmism in check.

But Altman - man, he really knows his audience, and he's going to sell, sell, sell to an audience that's been primed by fiction and religion to believe in him like some kind of blank-faced prophet.

raghavtoshniwal

The article you linked is from Sept '24 and points to the ARC-AGI test as "evidence" that we're not getting close.

We're in Feb '25, and ARC-AGI (at least the version they're referencing) has already been solved by AI at an above-average-human level.

>everything points to incremental progress with signs of diminishing returns.

Seems like everything in just Dec '24/Jan '25 points the other way. These models are already helping PhDs with novel research, and they're already getting superhuman at coding (yes, yes, they're not perfect, and I'm sure someone on HN has some weird coding job that AI can't replace yet and is very excited to shit on AI), but they've already replaced a lot of real software dev jobs.

Also aren't you contradicting yourself?

> everything points to incremental progress with signs of diminishing returns

> corporation replacing employees with AI

If we have incremental progress, how are corporations going to replace employees with AI?

maleldil

> getting super human at coding

They're getting super-human at _competitive coding_, which is essentially identifying and writing algorithms. They _are not_ good at general coding, as demonstrated by their subpar scores at benchmarks like SWE-bench, and even those aren't particularly representative of what a real coding job is.

csomar

> Altman thinks benefits will naturally trickle down when everything points to corporation replacing employees with AI to boost their profit margins.

That's not what he is saying. He is saying that this investment in AI will yield incredible returns and power: the investors will dominate the next decade(s), and thus you should invest. Of course, he has to say it carefully so as not to alarm politically correct people. But in essence, he is trying to create investor FOMO to drive his next round.

jb_rad

He explicitly stated progress is logarithmic.

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

llamaimperative

> Over time, in fits and starts, the steady march of human innovation [alongside monumental efforts of risk mitigation of each new innovation] has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives.

Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why? Snuck in here:

> the price of... a few inherently limited resources like land may rise even more dramatically.

This is really the crux of it. The price of land will skyrocket, driving the "cost of living" (cost of land) wedge further between the haves and have-nots.

rand_r

This is a great point. Life is tough because we are all competing in a game. Tweaking the rules of the game so that each basket is worth more points doesn’t make the game easier for any player.

From Henry George:

> Now, to produce wealth, two things are required: labor and land. Therefore, the effect of labor-saving improvements will be to extend the demand for land. So the primary effect of labor-saving improvements is to increase the power of labor. But the secondary effect is to extend the margin of production. And the end result is to increase rent.

> This shows that effects attributed to population are really due to technological progress. It also explains the otherwise perplexing fact that laborsaving machinery fails to benefit workers

Isamu

>explains the otherwise perplexing fact that laborsaving machinery fails to benefit workers

I disagree; the reason workers don't benefit is that they are mostly paid to put hours in. Owners claim the gains from better machinery because they reason it's a capital investment at the business level.

Really, I don't see why this is perplexing. What is actually perplexing is that some economists thought productivity gains would somehow accrue to workers.

llamaimperative

You say this isn't perplexing while commenting on an article by one of the most important people in industry repeating exactly this fallacy?

HN is full of people who happily and earnestly propagate this "obvious" falsehood.

regularization

> Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened.

In Marshall Sahlins's Stone Age Economics, he studies the work time of hunter-gatherer tribes in Africa, Papua New Guinea, the Amazon, etc. They often work less than 40 hours a week. The hunter-gatherers who painted the caves at Chauvet seem to have had leisure time - fewer hours than some fresh college grad pounding out C++ or C# for Electronic Arts, anyhow.

The past 50 years have seen the hourly wage stay flat while the profit workers create is sucked up by the heirs.

Paying for four years of college to get a CS degree, then studying Leetcode, interning, and working cheap as an associate/junior used to be seen as a good path, but since late 2022 this has obviously stagnated for most.

kaashif

> The past 50 years has seen the hourly wage stay flat while the profit workers create is sucked up by the heirs.

Are you sure?

https://fred.stlouisfed.org/series/MEFAINUSA672N

The reality is that so much wealth has been created that the US has seen rising wages AND rising inequality, with an increasing proportion of growth ending up benefiting capital, not labour.

mitthrowaway2

Wait, that chart shows 30% growth in family income. For a per-worker income comparison you'd need to divide by the increase in workers per family - and the share of dual-income families has also increased by about 30% since 1975.

bko

> Paying for four years if college to get a CS degree, then studying Leetcode, interning, working cheap as an associate/junior used to be seen as a good path, but obviously since late 2022 this has stagnated for most.

What?

Go to a state school; the median cost is ~$11k a year. Then study Leetcode for a month (free, or $35). Buy Cracking the Coding Interview ($33 new). Get a job as a software engineer making a median wage of $140k.

Only on HN could someone see this opportunity set and think it's stagnated.

https://www.bankrate.com/loans/student-loans/average-cost-of...

https://builtin.com/salaries/us/software-engineer

derektank

Can you define what you mean by economic liberation? For most of economic history, I think the definition would have been freedom from famine and slavery, and on both counts we've been wildly successful. Both now basically occur only in failed states, where there's either no government at all (Somalia, Sudan, parts of Iraq and Syria) or an authoritarian dictatorship (North Korea, Venezuela).

I ask because I think you might be overestimating the ability of the current global system to produce "economic liberation", depending on what you mean. A rough estimate of global GDP per capita would put it at ~$20K if you adjust for purchasing power, and less than $15K if you don't. That's well below what most people in the developed world would consider free of burden or worry, and it assumes a completely equal distribution of all production, which is obviously unrealistic.
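Reproducing that rough estimate with round figures (the world GDP and population values are my approximations, circa 2023):

    # Approximate 2023 figures; treat these as ballpark inputs.
    WORLD_GDP_NOMINAL = 105e12   # ~$105 trillion
    WORLD_GDP_PPP = 165e12       # ~$165 trillion, purchasing-power adjusted
    POPULATION = 8.1e9

    print(f"nominal GDP/capita: ${WORLD_GDP_NOMINAL / POPULATION:,.0f}")
    print(f"PPP GDP/capita:     ${WORLD_GDP_PPP / POPULATION:,.0f}")

    # ~$13,000 nominal and ~$20,000 PPP: even a perfectly equal distribution
    # of all current output leaves everyone below a developed-world income.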

We need to continue to push the ball forward in growing wealth by continuing to improve productivity, which is going to require continued advancements in technology like mass adoption of AI.

llamaimperative

Oh sure, I'm not one of those "the world is so awful" people. We've made immense progress on a lot of very important dimensions.

Fair point on the current output not producing enough to really give people their time back.

And yes I agree, we should continue pushing productivity forward. My point is only that productivity growth by itself does not necessarily yield anything close to the optimal distribution of its benefits. In fact we have good reason to believe that higher tiers of technology which produce more technological leverage owned by fewer and fewer people are naturally antagonistic to optimal distribution of its benefits.

I'll take a fairly expansive definition of "optimal distribution" here to just say we should shoot at least for a distribution of wealth that is socially and politically stable for a free society in the long run.

pithanyChan

a) to be free from the enforcement of someone's desperate desire to hard-code envy into children via products, ads, and media

b) to not have schools and teachers filter and then reinforce pupils for jobs that serve class construction

c) to make sure that every kid can get their near-infinite and non-hazardous (before creative construction) LEGOs that they can play with in peaceful environments, where fathers and mothers have enough on their accounts to provide peaceful, unhealthy-stress-free environments, food, and water

d) so that anyone can at least try long enough, if they so wish, to become a polymath scientist, artist, and craftsman, and if it doesn't work out, never have to be angry at pre-emptively envious people who create and abuse crises to drive prices pointlessly higher and higher - prices NOT supported by any logic or otherwise reasonable justification

djeastm

>A rough estimate of GDP per capita would put it at ~$20K if you adjust for purchasing power, less than $15K if you don't.

Is that spread across only working adults or does it include children? A better metric to examine might be per family, if you have it.

alexashka

> Can you define what you mean by economic liberation?

Not having to work outside of providing basic necessities for yourself and the rest of humanity.

As for GDP - economists have been embarrassing and discrediting themselves for decades - please don't cite their propaganda in discussions about the real world.

You need to measure real world things. Can we grow enough food, can we build enough shelter, can we transport goods, can we make clothes, basic medicine? Without working 40 hours/week? The answer should be obvious.

This insane talk of productivity is more economist propaganda. We've been plenty productive for a long time. The problem is an incompetent elite and a populace that continues to not believe their lying eyes, citing 'experts'.

lukev

This is a key point. Given current overall levels of economic productivity, we should all be working 3-4 day weeks, at most.

To whom do the benefits of all this newfound efficiency accrue?

newAccount2025

To the 1%? Just look at the historic wealth distribution chart. It’s wild.

https://en.m.wikipedia.org/wiki/Wealth_inequality_in_the_Uni...

graycat

> The price of land will skyrocket

Is that really true?

Let's look at some data from Google, the World Bank, etc.:

     US land area: 3,532,316 square miles

     US population: 334.9 million

     640 acres per square mile

     ( 640 * 3,532,316 ) / 334,900,000 =
     6.750 acres per person

     Fertility rate: 1.66 births per woman
     (2022) World Bank

     GDP growth rate: 2.5% annual change
     (2023) World Bank
So, per family of four, that would be

     ( 4 * 640 * 3,532,316 ) / 334,900,000
     = 27 acres per family of four
"Skyrocket?" With the fertility rate of 1.66, the US population is falling so that the acres per person is increasing.

A guess is that people are crowded into tall buildings in dense cities to reduce costs of communications. So, yes, in dense cities, the cost of land per acre is comparatively high.

But the Internet is reducing the costs of communications meaning that people can move to land that is less densely populated and cheaper.

So, for the near future, tough to believe:

> The price of land will skyrocket

llamaimperative

If this logic worked, all land and therefore rent would currently be ~free since there's ample open space all over the continent.

The price of land is set by the productivity of it. Productivity goes up (by increased density, public infrastructure, private investment, or technological advancement) -> price of land goes up.

computerdork

Actually, the part that person was focusing on was that the fertility rate is below the replacement rate. Yeah, if we also decrease our immigration rate, our population could peak even within a decade or two: https://www.axios.com/2023/11/09/us-population-decline-down-...

graycat

"Price of land goes up": In simple terms, the Internet is providing huge areas of land suddenly now feasible for use; the larger supply will lower US average costs per acre.

mrshadowgoose

> We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why?

Because directing material resources toward that end requires a certain aggregate amount of willing/aligned general intelligence, which we simply don't have as a society. Failure after failure of socialist states demonstrates that human general intelligence is, on average, uninterested in working toward the greater good. AGI won't have such limitations.

The blog post even addresses this: "the cost of intelligence and the cost of energy constrain a lot of things".

However, I actually do agree with you, but for a different reason. AGI is highly unlikely to yield economic liberation for the masses: not because we won't have the intelligence capacity, but because the handful of people with their hands on the levers of power will in all likelihood be uninterested in tending to the masses of now economically useless people.

DrScientist

> we can now imagine a world where we cure all diseases

Sure, we can imagine it. However, to make it happen, it's not enough to imagine the end goal; you need to understand and execute every single step.

I suspect his lack of knowledge about what's actually involved allows him to imagine it.

I.e., I notice he hasn't declared a world free of software bugs and failures (a much easier task) before declaring a world free of bugs in human biology.

TheAceOfHearts

> The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

I'm still stuck thinking about this point; I don't know that it's obviously true. Maybe a more bounded claim would make more sense, something like: increasing intelligence has big compounding effects in the short term. But there's also a cap, as society and infrastructure have to adapt. And I don't know how this plays out within an adversarial system where people might be competing for scarce resources like human attention.

Taken to the extreme, one could imagine a fantasy/sci-fi scenario where each person is empowered like a god in their own universe, allowing them to experiment, learn, and create endlessly.

jp42

30 years ago, no one in my native place had seen a telephone. Now every single person, young and old, is connected to the internet, does banking and other transactions via phone, and many, many other things.

Many unimaginable things have happened in the last 30 years from the vantage point of my native place. Assuming the same level of transformation over the next 30 years, that would still be massive progress. Given current tech and AI, the rate of progress for the next 30 years will be far greater than the last 30. So I believe AI, and tech in general, will make massive progress in the next decade.

dimgl

I’m not sure anyone is convinced this will empower individuals. On the contrary: if we get this tech “right” enough, the inequality gap will become an inequality chasm… There is no financial incentive to pay humans when a machine is a fraction of the cost.

chad_oliver

If each human body needs 0.2 acres of land to grow the food necessary for subsistence, what happens when the price of intelligence keeps dropping and one person's intelligence (even when directed toward its highest-value use!) is not enough to afford the use of that land? In other words, what happens when humans are no longer economically viable?

Jevons' paradox means that the demand for intelligence will keep rising as its cost drops, so I can't help but expect a steady increase in the economic value of land _when used for AI_. It will take a long time before that exceeds the economic value of land used for human subsistence, but the growth curves are not pointing in encouraging directions.

TechDebtDevin

The next fifty years are going to be quite interesting.