Where do scientists think this is all going?

biophysboy

I did a biophysics PhD, and I do think the main value of AI in academia will be rapid bespoke scripting. Most of my code in grad school was little one-off scripts to perform a new experiment or show a new result. The code is not really the goal; in fact, it is frequently annoying and in the way of your actual goal. I would've killed for a tool that could style my figure for a talk, or perform a rolling average on a time trace.
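
To give a sense of scale, here is roughly what one of those one-off scripts looks like. A minimal sketch of a rolling average, assuming the time trace is just a 1-D numpy array:

    import numpy as np

    def rolling_average(trace, window):
        # Moving mean via convolution with a uniform kernel;
        # mode="valid" drops the edge points without full window overlap.
        kernel = np.ones(window) / window
        return np.convolve(trace, kernel, mode="valid")

    t = np.linspace(0, 10, 1000)
    trace = np.sin(t) + 0.3 * np.random.randn(t.size)  # noisy time trace
    smoothed = rolling_average(trace, window=50)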

HPsquared

This applies to a lot of professional jobs that involve programming.

biophysboy

Yes, exactly. AI will soon be treated like yet another technology, not a sentient alien, and will be used largely to accomplish mundane, well-trodden tasks.

It's analogous to how social media founders dreamily promised global democracy at first; in reality, we got an app to sell a used monitor, complain about a new thing, and look at some cat pictures.

Balgair

HN's bubble/biases are pretty obvious here and we should expect them. But, as someone that uses code but is not a coder, I'll confirm this.

Nearly none of my coworkers were hired as coders. Yet we all code in some small way or another. As such, 100% of our code is really bad. No, it's okay, we know it, it really is bad.

To echo the GP too: I had a friend in grad school who was trying to do some neuroscience experiments and analyze the data. He wanted my help with some MATLAB code, and I said sure, I'll sit down with you for a six-pack. After the 11th nested if-statement, I upped the price to a case.

Like, most of the people I work with do not care at all about the code itself. They care about the result. I know much of HN does care about the code, and I'm not calling you out on it. Your feelings are quite valid! But so are those of myself and my coworkers.

LLMs that can, and very much do, code for us? That is the thing I think HN is really missing out on, understandably so. The power of AIs is not going to be trying to figure out the code base. From what I hear on here, it's bad at this. The power I see in my life is that suddenly, things are possible that we never thought we'd ever be able to do. And most of those things are under 200 lines of code, probably under 15 lines really.

I tend to think of AI as a wheelchair or other mobility aid. For a lot of people I know, AI/LLMs let us get moving at all. The dark ages where we just sat there, knowing we weren't smart enough to write the code we wanted, to get the results we needed? Those days are over! It's so nice!

qoez

Not really seeing any answer to where they think it's going, other than "we don't know" or the typical worries about education. Was hoping for a more sci-fi or otherwise more interesting answer.

pwndByDeath

My money is on "meh" once we get over the hype. Deep learning is a mix of the useful English-major intern who writes well but doesn't always understand what they write, and the asshat in the meeting who says the ultra-obvious, but with a confidence that appeals to the other pretenders.

nosianu

I think its strengths already lie where complete accuracy does not matter.

Campaigns, political or otherwise; dialog for dialog's sake; keeping people busy and "engaged"; and then there is the huge field of generating pictures, video, and audio.

Then there will be the applications in business and administration where accuracy would matter, except that a certain percentage of failure is tolerated by the system: support, and decision-making, e.g. in insurance. The lucky may get a second chance to speak to a real human after a decision goes against them. Governments will allow it as long as it saves costs and not too many people are impacted; a certain amount of dissatisfaction is priced into such systems already anyway. It will be worse for those without means.

pwndByDeath

I want to amend my comment. I think the honest conclusion is that the product of our intelligence can be duplicated by a slime mold with the right preconditions.

HarHarVeryFunny

I see a lot of people here replying on the assumption that AI=LLMs, which I don't think will last for very long. LLMs have unlocked a primitive level of AI faster than many people expected, but it is only that. Where AI is going is surely toward more complex, structured, ANN-based architectures built for the job (i.e. cognitive architectures), not simplistic pass-thru transformers, which were never intended to be more than a seq-2-seq architecture.

I don't see any reason to suppose that we won't achieve human-level/human-like AGI, and do so fairly soon. Transformers may or may not be part of it, but I think we've now seen enough of what different ANN architectures can do, and of the "unexpected" power of prediction, and we have sufficient compute, that the old joke of AGI always being 50(?) years away no longer applies.

I think achieving real human-level AGI is now within grasp, more an engineering challenge (and not such a big one!) than an open-ended research problem. Of course (was it Chollet who said this?) LLMs have sucked all the oxygen/funding out of the room, so it may be a while until we see a radical "cognitive architecture" direction shift from any of the big players, although who knows what Sutskever or anyone else operating in stealth mode is working on?!

So, I think the interesting way to interpret the question of "where is this all going" is to assume that we do achieve this, and then ask what does that look like?

One consequence would seem to be that the vast majority of white-collar jobs (including lawyers, accountants, and managers, not just tech jobs) will be done by computers, at least in countries where salaries are/were high enough to justify this, probably leading to the need for some type of universal basic income and a big reduction in income for this segment of society. One could dream of an idyllic future where we all work far less and pursue hobbies, but it seems more likely that we're headed for a dystopian future where the masses live poorly and at the grace of a wealthy elite who profit from the AI labor, and who only vote for UBI to the extent needed to prevent the mass riots that would threaten their own existence.

While white-collar and intellectual jobs disappear, and likely become devalued as the realm of computers rather than what makes humans special, it seems that (until that falls to AI too) manual and human-touch jobs/skills may become more valued and regarded as the new "what makes us special".

Over time even emotions and empathy will likely fall to AI, since these are easy to understand at a mechanical level in terms of how they operate in the brain, although it'd take massive advances in robotics for AI to deliver the warm soft touch of a human.

TheOtherHobbes

It's 2075. The stock markets are doing better than ever. Resource wars are a thing of the past. Climate change is no longer a problem.

The colonies on the Moon, Mars, and the major asteroids are thriving. Research suggests subquantum physics may make FTL possible by the end of the century.

The last human died five years ago.

dirtyhippiefree

Philipov missed the last line, “The last human died five years ago.”

Correction: The Holocene Extinction has been happening for a couple of centuries and we can’t bear to look, even indirectly.

All hail Earth’s coming overlords…

philipov

[flagged]

peterlk

I am surprised at the negativity and cynicism in this thread. I suppose the pendulum of hype has high amplitude.

Just because AI isn’t going to be some kind of all-knowing sci-fi AGI doesn’t make it all a sham. The recent research models are miracles of technology: we can now get, in ~15 minutes, an undergraduate-level report (one that would take an undergrad days or weeks) on almost any topic. That’s incredible! The capability of an AI model is approximately junior level in the fields I’ve tested it in (programming, law, philosophy, neuroscience). If you don’t see any possible uses for the technology, keep thinking about it.

lsy

While the technology is indeed incredible, the question is not whether someone, somewhere, will find it useful for something, but whether the sorts of things it's useful for will economically justify the massive expenditure in both financial and human capital this trend is currently soaking up.

E.g. if "undergraduate-level reports" were something there was a mass market for, the economics of university education would be pretty different. And the same goes for idle searches, sycophantic therapizing, blog article generation, and toy code development: there is a solid user base while costs are free or low, but that says little about whether there is an appetite to pay for these tools, especially if the prices are commensurate with the cost of operation.

peterlk

I think you’re fixating on the specific example that I cited rather than imagining the possibilities of the technology. We now have: zero-shot classification capabilities for basically any task, performance that is consistent enough for independent LLMs to collaboratively produce robust long-form responses, the ability to produce almost any web UI element on command with functional hookups to an API. And that’s just LLMs. SAM2 has good performance for realtime object detection in video.
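
To make the zero-shot bit concrete: the candidate labels are supplied at inference time, with no task-specific training. A minimal sketch using the Hugging Face transformers pipeline (the model named here is just one common choice, not the only option):

    from transformers import pipeline

    # Zero-shot classification: labels are chosen at call time,
    # with no fine-tuning on the task beforehand.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "The board approved the merger despite shareholder objections.",
        candidate_labels=["finance", "sports", "politics", "science"],
    )
    print(result["labels"][0])  # highest-scoring label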

Perhaps a more fruitful line of inquiry could be: what would the internet look like if every web application implemented support for A2A?

Kamq

> That’s incredible! The capability of an AI model is approximately junior level in the fields I’ve tested it in (programming, law, philosophy, neuroscience). If you don’t see any possible uses for the technology, keep thinking about it.

It is absolutely incredible from a technical perspective, but your next statement does not follow.

In a lot of (most?) fields, juniors are negative ROI, and their main value is that they will eventually be seniors. If AI isn't on the road to that, then the majority of the hype has been lies, and it's negative value for a lot of fields. That changes AI from a transformative technology to an email summarizer for a lot of people.

hybrid_study

So true. It's equivalent to someone saying "so you can search billions of internet pages - big deal" when Google first appeared; or someone saying to Gutenberg, "sure, you can mass-produce books, but you still gotta read them".

It's as if the cynics are jealous, or so deeply troubled by the technology that their primary responses are mostly negative. Or they want a perfect solution (immediately, right this minute).

Totally irrational.

quantumHazer

How do you judge? Are you an expert in four different and complex fields? I'm not saying that LLMs are useless anyway, I'm just curious to know how you can judge a report as "undergraduate level".

horhay

My brain just bleeps out "x-level" as a description of LLM output. It doesn't really make sense, and it's mostly used by Silicon Valley folk who can't even begin to quantify such definitions with any kind of consistency.

api

I’ve been on HN for a very long time as you can probably tell from my user name.

I lived through the incredible tech optimism of the late 90s into the early-to-mid 2000s. I miss that era, but then again many of its optimistic predictions were wrong. The failure of so much of that optimism to pan out has inspired a strong counter-reaction of techno-pessimism.

Here are a few of the optimistic ideas we had versus the reality.

Optimism: the Internet will be a massive engine for decentralization and democracy. Reality: this one is exactly wrong. Networks drive centralization, winner take all markets, and empower authoritarians by making “command and control” much easier at all levels of society and in technology.

Optimism: the Internet will be a huge engine for education and will place the collective knowledge of humanity into everyone’s hands. Reality: it did this, but nobody cares. People would rather scroll and stare at unbelievably vacant crap. What happened was that concurrent with the Internet delivering on this promise, Internet companies figured out how to build machines to hack the dopamine system to create ad platforms. We did think about this back then but vastly underestimated how successful it would be. I am just floored by the awesome brain rotting power of engagement algorithms and how well they steer people toward absolute garbage.

Optimism: decentralized cryptocurrency. Reality: what was created to provide a decentralized democratized alternative to financial capitalism turned into an absurdist parody of financial capitalism, then into a pure casino, which is what it is today.

Optimism: people will have privacy and control over their own digital world. Reality: lol. This one is also possible, but only for the tech savvy. What we didn’t understand was that making computers easy to use is so difficult that only very well funded companies can do it. Computers are actually incredibly confusing to 95% of people and require years to master, and the UI/UX aspect of making a product is often orders of magnitude more time consuming and costly than the more technical parts.

I could write a lot more on the causes for these failed predictions, but a lot of it boils down to not thinking through the economic reality behind these things. The net turned into an addictive chum machine and a casino because it was not built from the ground up with a billing and payment system built in, and because making and delivering media is expensive. So that role was taken up by ads and other less savory things like surveillance.

Now people think about the downsides first. When AI comes around, the first thought is “how will this ruin the world and fuck us?” That’s an over correction. We were too blindly optimistic before and are being too blindly pessimistic now.

the_snooze

From the history you've summarized, I think we're at the right level of pessimism. All this tech is amazing, and the smart people who put in the work to make it happen should be proud of it.

But at the end of the day, economics and game theory will drive the values that get propagated through the tech. The past several decades of technological progress have shown that values like "resiliency," "reliability," and "user empowerment" aren't at the top of the list, so why should we believe otherwise with AI and give it the benefit of the doubt?

I like to put AI systems through their paces with low-stakes but easy-to-check sports trivia. They should absolutely ace it, given the plethora of accurate data and text out there. Yet they fail again and again. "Reliability" is not a design priority for this technology.

api

I’ve toyed around with an ultimate heresy: that the Internet, or at least the way we built it, was a mistake, and that something more like the telecoms and their OSI channel based network would have been socially superior.

The PC age was incredible. Jump in your time machine and go back and buy a decent PC in 1995, probably the height of the pre-Internet PC era. It’d be kind of a big clunky box, sure, but the striking thing you’d find was a machine brimming with features and software, almost all of it built primarily to serve the user of the machine. It was a product designed for you, the customer, full of attempts to deliver value to you.

It was a product of the good kind of capitalism, the kind where you try to create and trade value for value. (BTW think on this and you’ll understand why libertarian thought was popular then. Capitalism didn’t look so bad in this era. A 90s PC was an argument that Ayn Rand was right.)

You might use this PC to call BBSes, which were of course slow and very limited, but they too were either volunteer efforts aimed at building a community or services to serve, well, their users. Volunteer free or low cost BBSes were pubs, third spaces, while things like Compuserve were more like paid libraries, basically the pro version.

Ten years later in 2005 you can already see this world giving way to the dystopia of today where you are the product and the machine is there as a host for things to hack your dopamine system.

A telco OSI net would have been more expensive and limited. It would have been basically fast data calls. But it would have been point to point. Your PC would have called other PCs or PC services like with modems, just faster. No NAT and more importantly no unpermissioned access to your machine so no security armageddon driving the installation of firewalls that break end to end connectivity.

You probably would have gotten cloud eventually but its role might be different. You might not have gotten the www as we know it, and that might not be a bad thing. You might not have gotten Facebook or Instagram or TikTok, and that’s like saying we might never have had the AIDS epidemic. Social media has been a pretty strong net negative for humanity.

That network would probably have had a billing mechanism built in too. You’d be able to put up the equivalent of 1-900 numbers, services that automatically bill their callers. That would have allowed a profusion of small businesses serving and aggregating data with working business models that do not inevitably lead to enshittification.

I’m just speculating of course. You can’t rerun history. But I do wonder if a more limited and managed network would have counter intuitively led to a more free, open, and decentralized computing landscape with an economic model centered around the user as the customer.

Instead we’ve gone down this terrible road where the net and computer tech is primarily about delivering the user to the real customer: advertisers, political parties, and ultimately authoritarian political regimes. It’s becoming increasingly obvious to me that this ends with a command and control architecture where a small number of despotic god-kings drive humanity by mass dopamine system hacking with the assistance of AI. That is a dark, ugly future.

indigodaddy

I don't see any article really, just some blurbs, wtf

macawfish

The blurbs are from a series of articles linked at the top.

indigodaddy

Ah, I actually thought that was just an ad for another article on the website; should have known that, I guess.

shadowgovt

[flagged]

rolph

I expect to see more websites like this, as more variations of anti-scraping practices are adopted as a standard security measure.

jfengel

The machines can OCR image text better than you can read it (and image text can't have its color, size, or font set to your preferences).

Soon scrapers will be the only readers.

varispeed

[flagged]

Etheryte

It's hard to give this take serious thought when just today I built a bunch of utility software I'd been putting off due to lack of time and domain-specific knowledge, all by working together with an LLM. It takes some time and back and forth, but it works and does the things I need. It doesn't need to get everything right, nor to get it right the first time; humans don't tick either of those boxes either. The value add is already clearly there, and it's most likely going to improve over time.

grey-area

Another way to look at this amazing performance improvement for you in making small tools is you’re just using bits of other people’s work that were lying around on the internet, without licensing, permission, or attribution.

Are you ok with that?

SubmarineClub

In what way, functionally, is it any different than scouring through StackOverflow and blog posts until you've got whatever thing working?

beisner

Intellectual property is theft

deadbabe

Chances are the majority of whatever you built has already been built and published as open source by someone somewhere.

Etheryte

You could make the same quip about nearly any piece of software, yet most everyone on HN still has a business or a job. If it exists, but I can't find it or it's too cumbersome to set up or use, it's functionally as good to me as it not existing.

the__alchemist

This line of reasoning has itself been stated many times, notably as Saunt Lora's Proposition.

mock-possum

Sure but the person you’re replying to was able to build their thing with no knowledge of that.

Lamad1234

[dead]

chii

> this is just what it predicted to be most fitting from its training data

and your own prediction, from your own brain, is doing something quite similar i would imagine.

The mechanism behind how it works is irrelevant; the results can be judged on their own. People used to judge chess engines as though they were merely searching and trying every possibility, and so not "really playing chess" like a human's intuition does. And yet, by just searching, almost by brute force, they produce a better game of chess than any human can.

And LLMs are still in their early days. They've only been around for less than 3 years.

hatefulmoron

> Actually, LLMs function similarly to brains!

> .. but if they don't, it doesn't matter because they could be functionally better than humans!

> .. but if they're not, it's just because they aren't there yet.

Presumably we could add an extra clause, something like ".. but if they won't get there, they're super good as it stands!"

I would agree with that too, I just think that the progressive weakening comes across as kinda weak.

suddenlybananas

>and your own prediction, from your own brain, is doing something quite similar i would imagine.

well I'm glad you solved human cognition.

chasd00

You’re getting flamed (haven’t used that word in a while) a bit here, but you have a point. AI is useful, but not the civilization-disrupting tech we were promised or warned about. To me it’s like a conversational Stack Overflow.

maxdoop

Do you use AI tools at all?

I am slightly astonished that someone makes these sorts of comments in 2025. AI has been remarkably useful for many, many things across many industries; I’m curious what you think.

fumeux_fume

> AI has been remarkably useful for many, many things across many industries

This is really a statement of faith more than a statement of fact. I think you and many others believe this to be true without much concrete evidence. For work, I help companies adopt AI-driven solutions. Sometimes it makes things a little better, sometimes it makes things worse. I've yet to see a project use LLMs in the transformative way that many AI optimists put forward. Don't get me wrong, I find tools like Claude and ChatGPT to be fascinating and useful for looking up all kinds of information. I can't really say if we're just scratching the surface or if we've dug ourselves into a rut with the present state of LLMs. The firsthand evidence I've seen and verified points more to the latter, but things are changing fast in this area. I'm excited to see what's around the corner.

pwndByDeath

Simple Google searches did this too; it still takes a true intelligence to apply it. At best it's a ridiculously inefficient search engine that sits atop the million corpses of failed models.

parpfish

Not OP, but I use AI tools, and sometimes they’re great, sometimes they’ll distractingly lead you in circles, and other times they completely shit the bed.

Luckily, I’m using AI tools to do things that I am capable of doing without AI so I can tell which path I’m going down pretty quickly.

So, letting experts augment their skills with AI is something that can work depending on the specific task. The nice thing is that an expert is able to see errors pretty quickly and determine that this task is a mismatch for the AI.

The problem is that AI is being sold as a magic solution so people never need to develop expertise in the first place. Blind trust in a system that is often confidently incorrect will lead to problems

dismalaf

Dunno, on one hand, I think LLMs are somewhat of a dead end. We trained them on all human knowledge, and they're still not great. Useful enough, but a tad underwhelming versus the hype.

On the other hand, with all the money flooding into AI and all the hardware that's been produced and bought, it has reignited the entire AI industry (which mostly died in the 80s), and there is a chance for innovation beyond LLMs.

gmassman

Would we consider the calculator useful if it sometimes told us 2+2=22? Would this at all be considered a sign of creativity or abstract thought?

kelseyfrog

LLMs are a calculator for language, but not for reasoning.

When you realize that language and reasoning are in fact two separate skills, only one of which an LLM is good at, they make much more sense.

Until now, language skill and reasoning skill have been correlated: people with greater skill using language are usually better skilled at reasoning. Put another way, we typically discount poorly written material regardless of its actual content.

LLMs turn this on its head - great at language, poor at reasoning. So the crutch, the heuristic, we used before no longer applies. We MUST recognize that language ability and reasoning ability are now independent.

suddenlybananas

[flagged]
