AGI fantasy is a blocker to actual engineering

simonw

Tip for AI skeptics: skip the data center water usage argument. At this point I think it harms your credibility - numbers like "millions of liters of water annually" (from the linked article) sound scary when presented without context, but if you compare data centers to farmland or even golf courses they're minuscule.

Other energy usage figures, air pollution, gas turbines, CO2 emissions, etc. are fine - but if you complain about water usage I think it risks discrediting the rest of your argument.

(Aside from that I agree with most of this piece, the "AGI" thing is a huge distraction.)

the__alchemist

I will go meta into what you posted here: That people are classifying themselves as "AI skeptics". Many people are treating this in terms of tribal conflict and identity politics. On HN, we can do better! IMO the move is to drop the politics and discuss things on their technical merits. If we do treat it as a debate, we can do it with open minds and intellectual honesty.

I think much of this may be a reaction to the hype promoted by tech CEOs and media outlets. People are seeing through their lies and exaggerations, taking positions like "AI/LLMs have no value or uses", and then using every argument they hear as a reason why it is bad in a broad sense. For example: energy and water concerns. That's my best guess at the sentiment you're bracing against.

magicalist

> I will go meta into what you posted here: That people are classifying themselves as "AI skeptics"

The comment you're replying to is calling other people AI skeptics.

Your advice has some fine parts to it (and simonw's comment is innocuous in its use of the term), but if we're really going meta, you seem to be engaging in the tribal conflict you're decrying by lecturing an imaginary person rather than the actual context of what you're responding to.

Flavius

Expecting a purely technical discussion is unrealistic because many people have significant vested interests. This includes not only those with financial stakes in AI stocks but also a large number of professionals in roles that could be transformed or replaced by this technology. For these groups, the discussion is inherently political, not just technical.

tracerbulletx

I don't really mind if people advocate for their value judgements, but the total disregard for good faith arguments and facts is really out of control. The number of people who care at all about finding the best position through debate and are willing to adjust their position is really shockingly small across almost every issue.

lkey

> Drop the politics

Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.

Most municipalities literally do not have enough spare power to service this 1.4 trillion dollar capital rollout as planned on paper. Even if they did, the concurrent inflation of energy costs is about as political as a topic can get.

Economic uncertainty (firings, wage depression) brought on by the promises of AI is about as political as it gets. There's no 'pure world' of 'engineering only' concerns when the primary goal of many of these billionaires is to leverage this hype, real and imagined, into reshaping the global economy in their preferred form.

The only people that get to be 'apolitical' are those that have already benefitted the most from the status quo. It's a privilege.

techblueberry

I mean, it is intellectually honest to point out that the AI debate at this point is much more religious or political than strictly technical. Especially with the way tech CEOs hype this as the end of everything.

HardCodedBias

It seems like there is a very strong correlation between identity politics and "AI skepticism."

I have no idea why.

I don't think that the correlation is 1, but it seems weirdly high.

pimeys

Yep. Same for the other direction: there is a very strong correlation between identity politics and praising AI on Twitter.

Then there are those of us who are mildly disappointed in the agents and how they don't live up to their promise, and in the tech CEOs destroying the economy and our savings. We're still using the agents for the things they do well, but we're burned out from spending days of our time fixing the issues they created in our code.

paulryanrogers

Just because there are worse abuses elsewhere doesn't mean datacenters should get a pass.

Golf and datacenters should have to pay for their externalities. And if that means both are uneconomical in arid parts of the country then that's better than bankrupting the public and the environment.

simonw

From https://www.newyorker.com/magazine/2025/11/03/inside-the-dat...

> I asked the farmer if he had noticed any environmental effects from living next to the data centers. The impact on the water supply, he told me, was negligible. "Honestly, we probably use more water than they do," he said. (Training a state-of-the-art A.I. requires less water than is used on a square mile of farmland in a year.) Power is a different story: the farmer said that the local utility was set to hike rates for the third time in three years, with the most recent proposed hike being in the double digits.

The water issue really is a distraction which harms the credibility of people who lean on it. There are plenty of credible reasons to criticize data centers; use those instead!

belter

> The water issue really is a distraction which harms the credibility of people who lean on it

Is that really the case? - "Data Centers and Water Consumption" - https://www.eesi.org/articles/view/data-centers-and-water-co...

"...Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people..."

"I Was Wrong About Data Center Water Consumption" - https://www.construction-physics.com/p/i-was-wrong-about-dat...

"...So to wrap up, I misread the Berkeley Report and significantly underestimated US data center water consumption. If you simply take the Berkeley estimates directly, you get around 628 million gallons of water consumption per day for data centers, much higher than the 66-67 million gallons per day I originally stated..."

Etherlord87

A farmer's is a valuable perspective, but imagine asking a lumberjack about the ecological effects of deforestation: he might know more about it than the average Joe, but there are probably better people to ask for expertise.

> Honestly, we probably use more water than they do

This kind of proves my point. Regardless of the actual truth here, it's a terrible argument to make: water availability is becoming a huge problem in a growing number of places, and this statement implies that something which in principle doesn't need water at all uses an amount comparable to farming, which strictly depends on it.

jtr1

I think the point here is that objecting to AI data center water use and not to, say, alfalfa farming in Arizona, reads as reactive rather than principled. But more importantly, there are vast, imminent social harms from AI that get crowded out by water use discourse. IMO, the environmental attack on AI is more a hangover from crypto than a thoughtful attempt to evaluate the costs and benefits of this new technology.

danaris

But if I say "I object to AI because <list of harms> and its water use", why would you assume that I don't also object to alfalfa farming in Arizona?

Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?

This is nothing but whataboutism.

People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.

roywiggins

I don't think there's a world where a water use tax is levied such that 1) it's enough for datacenters to notice and 2) doesn't immediately bankrupt all golf courses and beef production, because the water use of datacenters is just so much smaller.

bee_rider

We definitely shouldn’t worry about bankrupting golf courses, they are not really useful in any way that wouldn’t be better served by just having a park or wilderness.

Beef, I guess, is a popular type of food. I’m under the impression that most of us would be better off eating less meat, maybe we could tax water until beef became a special occasion meal.

heymijo

You're not wrong.

My perspective is that of someone who wants to understand this new AI landscape in good faith. The water issue isn't the showstopper it's presented as. It's an externality, like you discuss.

And in comparison to other water usage, data centers don't match the doomsday narrative presented. Now when I see that narrative, I mentally discount it or stop reading.

Electricity, though, seems to be real, at least for the area I'm in. I spent some time with ChatGPT last weekend working to model an apples-to-apples comparison, and my area has seen a +48% increase in electric prices from 2023 to 2025. I modeled a typical 1,000 kWh/month usage to see what that looked like in dollar terms, and it's an extra $30-40/month.
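
A minimal sketch of that arithmetic, for anyone who wants to rerun it; the 2023 baseline rate here is a hypothetical assumption, and only the +48% increase and the 1,000 kWh/month usage come from the comparison above:

    # Back-of-envelope bill math. baseline_rate is an assumed figure;
    # the +48% increase and 1,000 kWh/month usage are from the comment.
    baseline_rate = 0.07   # $/kWh in 2023 (assumption)
    increase = 0.48        # +48% from 2023 to 2025
    usage_kwh = 1000       # typical monthly usage

    extra = baseline_rate * increase * usage_kwh
    print(f"extra cost: ${extra:.2f}/month")  # ~$33.60, within the $30-40 range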

Is it data centers? Partly yes, straight from the utility co's mouth: "sharply higher demand projections—driven largely by anticipated data center growth"

With FAANG money, that's immaterial. But for those who aren't on FAANG money, it's just one more thing that costs more today than it did yesterday.

Coming full circle, for me being concerned with AI's actual impact on the world, engaging with the facts and understanding them within competing narratives is helpful.

amarcheschi

Not only electricity; there's air pollution around some data centers too:

https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...

jstummbillig

In what way are they not paying for it?

lynndotpy

Farmland, AI data centers, and golf courses do not provide the same utility for water used. You are not making an argument against the water usage problem, you are only dismissing it.

Aransentin

Growing almonds uses 1.3 trillion gallons of water annually in California alone.

This is more than 4 times more than all data centers in the US combined, counting both cooling and the water used for generating their electricity.
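As a rough sanity check, here is a minimal sketch of that ratio; the data center figure is an assumption taken from the Berkeley-based estimate quoted elsewhere in this thread (~628 million gallons/day):

    # Ratio of California almond irrigation to US data center water use.
    # The data center figure is an assumed input, not a measured one.
    almonds_gal_per_year = 1.3e12           # from the comment above
    datacenters_gal_per_year = 628e6 * 365  # ~2.3e11 (Berkeley-based estimate)
    print(almonds_gal_per_year / datacenters_gal_per_year)  # ~5.7x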

What has more utility: Californian almonds, or all IT infrastructure in the US times 4?

fmbb

Depends on what the datacenters are used for.

AI has no utility.

Almonds make marzipan.

kajika91

I'll take the almonds any day.

LPisGood

What does it mean to “use” water? In agriculture and in data centers my understanding is that water will go back to the sky and then rain down again. It’s not gone, so at most we’re losing the energy cost to process that water.

simonw

Right, I think a data center produces a heck of a lot more economic and human value in a year - for a lot more people - than the same amount of water used for farming or golf.

notahacker

you can make a strong argument for the greater necessity of farming for survival, but not for golf...

idiotsecant

I mean... Food is pretty important ...

dist-epoch

That is correct, AI data centers deliver far more utility per unit of water than farm/golf land.

reedf1

Yes - and the water used is largely non-consumptive.

overgard

I suppose instead we can talk about people's 401k's being risked in a market propped up by the AI bubble.

simonw

Absolutely.

efsavage

The water argument rings a bit hollow for me, not due to whataboutism, but because there's an assumption that I know what "using" water means, which I'm not sure I do. I suspect many people have even less of an idea than I do, so we're all kind of guessing, and therefore going to guess in ways favorable to our initial position, whatever that is.

Perhaps this is the point: maybe the political math is that more people than not will assume that using water means it's not available for others, or somehow destroyed, or polluted, or whatever. AFAIK they use it for cooling, so it's basically thermal pollution, which TBH doesn't trigger me the way chemical pollution would. I don't want 80°C water sterilizing my local ecosystem, but I would guess that warmer, untreated water could still be used for farming and irrigation. Maybe I'm wrong, so if the water angle is a bigger deal than it seems, some education is in order.

leoedin

If water is just used for cooling, and the output is hotter water, then it's not really "used" at all. Maybe it needs to be cooled to ambient and filtered before someone can use it, but it's still there.

If it was being used for evaporative cooling then the argument would be stronger. But I don't think it is - not least because most data centres don't have massive evaporative cooling towers.

Even then, whether we consider it a bad thing or not depends on the location. If the data centre was located in an area with lots of water, it's not some great loss that it's being evaporated. If it's located in a desert then it obviously is.

HPsquared

If it was evaporative, the amounts would be much less.

HPsquared

Put that way, any electricity usage will have some "water usage" as power plants turn up their output (and the cooling pumps) slightly. And that's not even mentioning hydroelectric plants!

slightwinder

> but if you compare data centers to farmland or even golf courses they're minuscule.

People are critical of farmland and golf courses, too. But farmland at least has more benefit for society, so people are more vocal about how it's used.

randallsquared

The problem is more one of scale: a million liters of water is less than half of a single Olympic-sized swimming pool. A single acre of alfalfa typically requires 4.9 - 7.6 million liters a year for irrigation. Also, it's pretty easy to recycle the data center water, since it just has to cool and be sent back, but the irrigation water is lost to transpiration and the recycling-by-weather process.

So, even if there's no recycling, a data center that is said to consume "millions" rather than tens or hundreds of millions is probably using less than 5 acres' worth of alfalfa consumption, and in absolute terms this requires only a swimming pool or two of water per year. It's trivial.
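
A minimal sketch of that scale check, assuming the standard ~2.5 million liter volume for an Olympic pool (the other figures are from the paragraphs above):

    # Scale check using the figures above.
    pool_liters = 2.5e6                        # Olympic pool (~50m x 25m x 2m)
    alfalfa_low, alfalfa_high = 4.9e6, 7.6e6   # liters/acre/year, from above
    dc_liters_per_year = 5e6                   # "millions of liters", upper end

    print(dc_liters_per_year / pool_liters)    # 2.0 pools per year
    print(dc_liters_per_year / alfalfa_high,
          dc_liters_per_year / alfalfa_low)    # ~0.66 to ~1.02 acres of alfalfa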

slightwinder

> The problem is more one of scale:

I think the source is the bigger problem. If they take the water from sources which are already scarce, the impact will be harsh. There probably wouldn't be any complaints if they used sewage or saltwater from the ocean.

> Also, it's pretty easy to recycle the data center water, since it just has to cool

Cooling and returning the water is not always that simple. I don't know specifically about datacentres, but I know about wasting clean water in other areas (cooling in power plants, industry, etc.), and there it can have a significant impact on the water cycle. In the end it's a resource which is used at least temporarily, and that has an impact on the whole system.

IgorPartola

It is ultimately a hardware problem. To simplify it greatly, an LLM neuron is a single input single output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs, to the point that some inputs start being processed before they even get inside the cell by structures on the outside of it. An LLM neuron is an approximation of this. We cannot manufacture a human level neuron to be small and fast and energy efficient enough with our manufacturing capabilities today. A human brain has something like 80 or 90 billion of them and there are other types of cells that outnumber neurons by I think two orders of magnitude. The entire architecture is massively parallel and has a complex feedback network instead of the LLM’s rigid mostly forward processing. When I say massively parallel I don’t mean a billion tensor units. I mean a quintillion input superpositions.

And the final kicker: the human brain runs on something like two dozen watts. An LLM takes a year of running on a few MW to train, and several kW to run.

Given this I am not certain we will get to AGI by simulating it in a GPU or TPU. We would need a new hardware paradigm.

HarHarVeryFunny

Assuming you want to define the goal, "AGI", as something functionally equivalent to part (or all) of the human brain, there are two broad approaches to implement that.

1) Try to build a neuron-level brain simulator - something that is a far distant possibility, not because of compute, but because we don't have a clear enough idea of how the brain is wired, how neurons work, and what level of fidelity is needed to capture all the aspects of neuron dynamics that are functionally relevant rather than just part of a wetware realization

2) Analyze what the brain is doing, to extent possible given our current incomplete knowledge, and/or reduce the definition of "AGI" to a functional level, then design a functional architecture/implementation, rather than neuron level one, to implement it

The compute demands of these two approaches are massively different. It's like the difference between an electronic circuit simulator that works at gate level vs one that works at functional level.

For time being we have no choice other than following the functional approach, since we just don't know enough to build an accurate brain simulator even if that was for some reason to be seen as the preferred approach.

The power efficiency of a brain vs a gigawatt systolic array is certainly dramatic, and it would be great for the planet to close that gap, but it seems we first need to build a working "AGI" or artificial brain (however you want to define the goal) before we optimize it. Research and iteration require a flexible platform like GPUs. Maybe when we figure it out we can use more of a dataflow brain-like approach to reduce power usage.

OTOH, look at the difference between a single user MOE LLM, and one running in a datacenter simultaneously processing multiple inputs. In the single-user case we conceptualize the MOE as saving FLOPs/power by only having one "expert" active at a time, but in the multi-user case all experts are active all the time handling tokens from different users. The potential of a dataflow approach to save power may be similar, with all parts of the model active at the same time when handling a datacenter load, so a custom hardware realization may not be needed/relevant for power efficiency.

rekrsiv

On the other hand, a large part of the complexity of human hardware randomly evolved for survival and only recently started playing around in the higher-order intellect game. It could be that we don't need so many neurons just for playing intellectual games in an environment with no natural selection pressure.

Evolution is winning because it's operating at a much lower scale than we are and needs less energy to achieve anything. Coincidentally, our own progress has also been tied to the rate of shrinking of our toys.

zgk7iqea

It is an architecture problem, too. LLMs simply aren't capable of AGI.

friendzis

> We would need a new hardware paradigm.

It's not even that. The architectures behind LLMs are nowhere close to that of a brain. The brain has multiple entry points for different signals and uses different signaling across different parts. A rodent's brain is much more complex than LLMs are.

samuelknight

LLM 'neurons' are not single-input/single-output functions. Most 'neurons' are rows of mat-vec computations that combine the products of dozens or hundreds of prior activations and weights.
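
A minimal sketch of that point; the layer width here is made up for illustration, not taken from any real model:

    import numpy as np

    # One LLM "neuron" is one row of a matrix-vector product: it combines
    # hundreds of prior activations through learned weights, then applies
    # a nonlinearity and emits a single scalar.
    d_in = 512                        # hypothetical previous-layer width
    x = np.random.randn(d_in)         # activations from the previous layer
    w = np.random.randn(d_in)         # this neuron's incoming weights
    b = 0.0                           # bias

    out = max(0.0, float(w @ x + b))  # ReLU of a 512-input dot product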

In our lane, the only important question to ask is "Of what value are the tokens these models output?", not "How closely can we emulate an organic brain?"

Regarding the article, I disagree with the thesis that AGI research is a waste. AGI is the moonshot goal. It's what motivated the fairly expensive experiment that produced the GPT models, and we can point to all sorts of other harebrained goals that ended up producing revolutionary changes.

us-merul

This is a great summary! I've joked with a coworker that while our capabilities can sometimes pale in comparison (such as dealing with massively high-dimensional data), at least we can run on just a few sandwiches per day.

schnitzelstoat

I remember reading about memristors when I was at university, and the hope that they could help simulate neurons.

I don't remember hearing much about neuromorphic computing lately though, so I guess it hasn't made much progress.

naasking

> To simplify it greatly, an LLM neuron is a single input single output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs

This is simply a scaling problem; e.g., thousands of single-I/O functions can reproduce the behaviour of a function that takes thousands of inputs and produces thousands of outputs.

Edit: As for the rest of your argument, it's not so clear cut. An LLM can produce a complete essay in a fraction of the time it would take a human. So yes, a human brain only consumes about 20W but it might take a week to produce the same essay that the LLM can produce in a few seconds.

Also, LLMs can process multiple prompts in parallel and share resources across those prompts, so again, the energy use is not directly comparable in the way you've portrayed.

captain_coffee

Correct - the vast majority of people dramatically underestimate the complexity of the human brain and the emergent properties that develop from this inherent complexity.

travisgriggs

> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

Me too. But, I worry this “want” may not be realistic/scalable.

Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry Pi CM4. I had dabbled with this 9 months ago, and things were making progress just fine then. Suddenly, with a new trixie build and who knows what else changed, I just could not get my little client to open the HCI socket. In about 10 minutes of prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I've worked with Linux for 20+ years, and somehow had missed learning about rfkill in the mix.

I was happy and saddened. I would not have known where to turn otherwise. SO doesn't get near the traffic it used to, and is so bifurcated and policed that I don't even try anymore. I never know whether to look for a mailing list, a forum, a Discord, a channel; the newsgroups have all long since died away. There is no solidly written chapter in a canonically accepted manual, written by tech writers, on all things Bluetooth for the Linux kernel packaged with Raspbian. And to pile on, my attention span, driven by a constant diet of engagement, makes it harder to have the patience.

It’s as if we’ve made technology so complex, that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.

wilkommen

In the short term, it may be unrealistic (as you illustrate in your story) to try to successfully navigate the increasingly fragmented, fragile, and overly complex technological world we have created without genAI's assistance. But in the medium to long term, I have a hard time seeing how a world that's so complex that we can't navigate it without genAI can survive. Someday our cars will once again have to be simple enough that people of average intelligence can understand and fix them. I believe that a society that relies so much on expertise (even for everyday things) that even the experts can't manage without genAI is too fragile to last long. It can't withstand shocks.

lordleft

The language around AGI is proof, in my mind, that religious impulses don't die with the withering of religion. A desire for a totalizing solution to all woes still endures.

red75prime

Does language around fusion reactors ("bringing power of the sun to Earth" and the like) cause similar associations? Those situations are close in other aspects too: we have a physical system (the sun, the brain), whose functionality we try to replicate technologically.

IAmGraydon

People always create god, even if they claim not to believe in it. The rise of belief in conspiracy theories is a form of this (imagining an all powerful entity behind every random event), as is the belief in AGI. It's not a totalizing solution to all woes. It's just a way to convince oneself that the world is not random, and is therefore predictable, which makes us feel safer. That, after all, is what we are - prediction machines.

danielbln

The existential dread from uncertainty is so easily exploited too, and is the root cause of many of society's woes. I wonder what the antidote is, or if there is one.

casey2

It's just a scam, plain and simple. Some scams can go on for a very long time if you let the scammers run society.

Any technically superior solution needs to have a built-in scam, otherwise most followers will ignore it and the scammers won't have incentive to proselytize, e.g. Rust's safety scam.

geerlingguy

I like the conclusion. For me, Whisper has radically improved closed captions (CC) on my video content. I used to spend a few hours translating my scripts into CCs, and the tooling was poor.

Now I run it through Whisper in a couple of minutes, give it one quick pass to correct a few small hallucinations and misspellings, and I'm done.

There are big wins in AI. But those don't pump the bubble once they're solved.

And the thing that made Whisper more approachable for me was when someone spent the time to refine a great UI for it (MacWhisper).

sota_pop

Not only Whisper; so much of the computer vision area is no longer in vogue. I suspect that's because the truly monumental solutions it unlocked are not that accessible to the average person, i.e. industrial manufacturing and robotics at scale.

Etheryte

Many big names in the industry have long advocated for the idea that LLMs are a fundamental dead end. Many have also gone on to start companies to look for a new way forward. However, if you're hip deep in stock options, along with your reputation, you'll hardly want to break the mirage. So here we are.

red75prime

> Many big names in the industry have long advocated for the idea that LLMs are a fundamental dead end.

There should be papers on the fundamental limitations of LLMs then. Any pointers? "A single forward LLM pass has TC0 circuit complexity" isn't exactly it; modern LLMs use CoT. And anything that uses Gödel's incompleteness theorems proves too much (we don't know whether the brain is capable of hypercomputation, and most likely it isn't).

wild_egg

They're a dead end for whatever their definition of "AGI" is, but still incredibly useful in many areas and not a "dead end" economically.

hoherd

"It is difficult to get a man to understand something when his salary depends upon his not understanding it" and "never argue with a man whose job depends on not being convinced" in full effect.

fallingfrog

I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it. The people working on AI are very smart and they will solve the associated challenges soon enough. The problem of how to slow down the development of these technologies- a political problem- is much more pressing right now.

chriswarbo

> I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it.

Ever since "AI" was named at Dartmouth, there have been very smart people thinking that their idea will be the thing which makes it work this time. Usually, those ideas work really well in-the-small (ELIZA, SHRDLU, Automated Mathematician, etc.), but don't scale to useful problem sizes.

So, unless you've built a full-scale implementation of your ideas, I wouldn't put too much faith in them if I were you.

random3

Uncovering and tackling the deep problems of society starts making sense once you believe in, or can see, the possibility of unlocking things. The idea that anything can simply be slowed down or accelerated can be faulty, though. What are the more pressing political problems you consider a priority?

fallingfrog

By the way, downvoting me will not hurt my feelings, and I understand why you are doing it; I don't care if you believe me or not. In your position I would certainly think the same thing you are. It's fine. The future will come soon enough without my help.

mikemarsh

The idea of replicating a consciousness/intelligence in a computer seems to fall apart even under materialist/atheist assumptions: what we experience as consciousness is a product of a vast number of biological systems, not just neurons firing or words spoken/thought. Even considering something as basic as how fundamental bodily movement is to mental development, or how hormones influence mood and ultimately thought, how could anyone ever hope to replicate such things via software in a way that "clicks" to add up to consciousness?

danielbln

I don't see a strong argument here. Are you saying there is a level of complexity involved in biological systems that can not be simulated? And if so, who says sufficient approximations and abstractions aren't enough to simulate the emergent behavior of said systems?

We can simulate weather (poorly) without modeling every hydrogen atom interaction.

kalkin

Conflating consciousness and intelligence is going to hopelessly confuse any attempt to understand if or when a machine might achieve either.

(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)

teeray

Where is all the moral outrage that completely stonewalled technologies like human cloning? For what most businesses want out of AGI, it's tantamount to having digital slaves.

killerstorm

On the other hand we have DeepMind / Demis Hassabis, delivering:

* AlphaFold - SotA protein folding

* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864

* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software

So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?

hagbarth

I believe AlphaFold, AlphaEvolve, etc. are _not_ attempts to get to AGI. The whole article is a case against chasing AGI, not against ML or LLMs overall.

killerstorm

AlphaEvolve is a general system which works in many domains. How is that not a step towards general intelligence?

And it is effectively a loop around an LLM.

But my point is that we have evidence that Demis Hassabis knows his shit. Doubting him on general vibes is not smart.

HarHarVeryFunny

AlphaEvolve is a system for evolving symbolic computer programs.

Not everything that DeepMind works on (such as AlphaGo, AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research; for Hassabis personally this seems to be his primary goal, and it might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.

DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.

HarHarVeryFunny

Yeah, in reality it seems that DeepMind are more the good guys, at least in comparison to the others.

You can argue about whether the pursuit of "AGI" (however you care to define it) is a positive for society, or even whether LLMs are, but the AI companies are all pursuing this, so that doesn't set them apart.

What makes DeepMind different is that they are at least also trying to use AI/ML for things like AlphaFold that are a positive, and Hassabis appears genuinely passionate about the use of AI/ML to accelerate scientific research.

It seems that some of the other AI companies are now belatedly trying to at least appear to be interested in scientific research, but whether this is just PR posturing or something they will dedicate substantial resources to, and be successful at, remains to be seen. It's hard to see OpenAI, planning to release SexChatGPT, as being sincerely committed to anything other than making themselves a huge pile of money.

per1

Hao's is not just an "AI is bad" book... Those exist, but Hao is a highly credible journalist.

SalmoShalazar

I’m not sure you understand what AGI is given the citations you’ve provided.

killerstorm

> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."

Is that not general enough for you? or not intelligent?

Do you imagine AGI as a robot, and not as a datacenter solving all kinds of problems?

mofeien

> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.

It's a bit unsatisfying how the last paragraph only argues against the second and third points, but is missing an explanation on how LLMs fail at the first goal as was claimed. As far as I can tell, they are already quite effective and correct at what they do and will only get better with no skill ceiling in sight.

jillesvangurp

We should do things because they are hard, not because they are cheap and easy. AGI might be a fantasy but there are lots of interesting problems that block the path to AGI that might get solved anyway. The past three years we've seen enormous progress with AI. Including a lot of progress in making this stuff a lot less expensive, more efficient, etc. You can now run some of this stuff on a phone and it isn't terrible.

I think the climate impact of data centers is way overstated relative to the ginormous amounts of emissions from other sources. Yes it's not pretty but it's a fairly minor problem compared to people buying SUVs and burning their way through millions of tons of fuel per day to get their asses to work and back. Just a simple example. There are plenty.

Data centers running on cheap clean power is entirely possible; and probably a lot cheaper long term. Kind of an obvious cost optimization to do. I'd prefer that to be sooner rather than later but it's nowhere near the highest priority thing to focus on when it comes to doing stuff about emissions.

gizajob

Elon thinking Demis is the evil supervillain is hilariously backward and a mirror image of the reality.

Cthulhu_

That one struck me as... weird people on both ends. But this is Musk, who is deep into the Roko's Basilisk idea [0] (in fact, supposedly he and Grimes bonded over that) where AGI is inevitable, AGI will dominate like the Matrix and Skynet, and anyone that didn't work hard to make AGI a reality will be yote in the Torment Nexus.

That is, if you don't build the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus, someone else will and you'll be punished for not building it.

[0] https://en.wikipedia.org/wiki/Roko%27s_basilisk

danaris

...or, depending on your particular version of Roko's Basilisk (in particular, versions that assume AGI will not be achieved in "your" lifetime), it will punish not you, yourself, but a myriad of simulations of you.

Won't someone think of the poor simulations??

dist-epoch

Why not both.

captainbland

"From my point of view the Jedi are evil!" comes to mind.