
Geoffrey Hinton said machine learning would outperform radiologists by now

bonoboTP

Hinton doesn't take social/power aspects into account; he's a naive technical person. Healthcare is an extremely strongly regulated field, and its professionals are among those with the highest status in society. Radiology won't be among the first applications to be automated. I see many naive software/ML people trying to enter the medical domain, but they are usually eaten for breakfast by the incumbents. This is not an open market like taxis vs Uber or one entertainment app vs another. If you try to enter the medical domain, you will be deep, deep into politics and lobbying, and you're going to be defeated.

Better to deploy it in fields like agriculture and manufacturing, which are already automated in good part. Then, a generation later, one can try the medical domain.

jhbadger

>This is not an open market like taxis vs Uber

Interesting choice of example. In most places taxis aren't/weren't an "open market" either; various laws prevented newcomers from providing taxi rides. In many places what Uber did was illegal. Various locales either banned Uber, changed the laws to allow it, or simply didn't enforce the laws that made it illegal.

bonoboTP

Yes, even Uber was pushed out of many places through lawfare and on jobs grounds. Taxi drivers have a moderately strong lobby in many places. So just think about taking on doctors... The discussion was never really about which service is better. It was about whose toes they were stepping on.

In healthcare you can't take the "asking for forgiveness is easier than asking for permission" route. That's a quick way to jail. In the taxi business, it's just some fines that fit the budget.

My overall point isn't to argue whether these disruptions are socially beneficial. I'm trying to point at the more neutral observation that who you're taking on, and how much power they have, is crucial to a disruptor's success. It's not just about "product quality" on some objective, dispassionate, politics-free scientific benchmark.

bko

Medallion holders had a weaker, more fragmented system. If there had been a nationwide system, they would have been able to organize better and kill Uber. Considering that groups like the AMA literally dictate how many people are allowed to graduate from med school in a given year, doctors obviously have more control.

Furthermore, at the state level there are often "certificate of need" laws that require healthcare providers to obtain approval before establishing or expanding certain healthcare facilities or services. This is unironically designed to prevent unnecessary duplication of healthcare services, control healthcare costs, and ensure equitable distribution of healthcare resources. Imagine how expensive it would be if we had too many choices!

The cab cartel was more or less city level.

Add in the fact that "health care" is much more politically charged, and that it's easy to find useful idiots who want to "protect people's health," so enforcement is a lot easier. The two situations are entirely different.

eszed

In context, I think GP meant "open" from a sociopolitical point of view. The "taxi-industrial complex" had minimal political power, so Uber was repeatedly able to steamroll the incumbents - exactly as you describe in your final sentence. Medical interests have immense sociopolitical power, so the same playbook won't work. The differences in your positions are merely semantic.

0xpgm

I wish there were less focus on AI replacing humans and more focus on AI helping humans do better, i.e. intelligence augmentation (IA). IA drove a lot of the rapid progress in the earlier days of computing that made computers usable by non-experts.

I suspect much of the thinking about replacing humans is driven by the nerd sci-fi fascination with the Singularity, but if a lot of the hype fails to materialize after billions of dollars have been poured into AI, there could be an over-correction by the market that takes away funding from even useful AI research.

I hope we'll hear less of AI replacing X, and more of AI enhancing X, where X is radiologists, programmers, artists etc.

from-nibly

People becoming slightly more productive helps people. People being replaced helps quarterly earnings reports.

randomdata

Replacing is better than helping, though. It frees people up to develop capital – the very source of progress you speak of! More capital development, more progress.

If you still have to do the same job, just better, then you aren't freed up for a higher-level purpose. If we had taken your stance in earlier days of human history, we'd all still be out standing in the field. Doing it slightly better, perhaps, but still there, "wasting" time.

No, we are clearly better off that almost every job from the past was replaced. And we will be better off when our current jobs are replaced, so that our time is freed up to move on to the next big thing.

You don't think we have reached the pinnacle of human achievement already, surely?

croes

Capital is a tool for progress, not the source.

Newton, Einstein, etc. did their work neither for capital nor by means of it.

And we all know that those “freed” people are less likely to gain any capital.

mattlondon

Perhaps in the US, where medicine and healthcare are hugely expensive, with loads of vested interests, profiteering, and fingers in pies from every side of the table, sure.

But everywhere else in the developed world that has universal, free healthcare, I can imagine a lot of traction for AI from governments looking to make their health services more efficient and therefore cheaper (plus better for the patient too, in terms of shorter wait times etc.).

DeepMind has been doing a load of work with the NHS, the Royal Free Foundation, and Moorfields, for instance (although there have been some data-protection issues there, which I suspect are surmountable).

bonoboTP

I agree. These developments will be first deployed in countries like India, Mexico, China. If you're an entrepreneur aiming at making a difference through medical AI, it's probably best to focus on places where they honestly want and need efficiency, instead of just talking about it.

jdbc

No one needs to (or should) wait for the US government or the structurally exploitative private sector to reduce prices.

The US has concocted huge myths about why prices cannot fall no matter what tech or productivity gains happen. It's become like religious dogma.

People are already flying overseas by the tens of thousands for medical tourism. All Mexico has to do is set up hospitals along the border so people aren't flying farther afield. This is what will happen to the US. The US has no hope of changing by itself.

alephnerd

> Perhaps in the US

Medical systems in countries like the UK [0], Australia [1], and Germany [2] all leverage teleradiology outsourcing at scale, and have done so for decades.

[0] - https://www.england.nhs.uk/wp-content/uploads/2021/04/B0030-...

[1] - https://onlinelibrary.wiley.com/doi/abs/10.1111/1754-9485.13...

[2] - https://www.healthcarebusinessinternational.com/german-hospi...

croes

The point is that he is wrong.

AI isn't better than humans, but we now have fewer radiologists anyway.

Now imagine it were a free market:

To raise profits, people get replaced by AI, but the AI underperforms.

The situation would be much worse.

alephnerd

> Radiology won't be among the first applications to be automated

Radiology and a lot of other clinical labwork are heavily outsourced already, and have been for decades [0][1].

Much of the analysis and diagnosis is already done by doctors in India and Pakistan before being returned to the hospitals in the West.

In fact, this is how Apollo Group and Fortis Health (two of India and Asia's largest health groups) expanded rapidly in the 2000s.

It's the back-office teleradiology firms that are the target customers for Radiology Agents, and in some cases they are funding or acquiring startups in the space already.

This has been an ongoing saga for over a decade now.

[0] - https://www.jacr.org/article/S1546-1440(04)00466-1/fulltext

[1] - https://www.reuters.com/article/business/healthcare-pharmace...

Galanwe

Very counter-intuitive, yet after reading the sources definitely factual. Thanks for pointing that out.

bonoboTP

Outsourcing is indeed a force pushing against doctor power. This just means that radiologists are already under attack and may feel cornered, so they are on the defensive.

I'm not saying automation won't ever happen. But it will need to be slow, so as to allow the elite doctor dynasties to recalibrate which specialties to send their kids into, and so as not to disrupt those already practicing. So Hinton's timeline was overly optimistic because he thought only in terms of the tech. It will happen, but on a longer timescale, maybe a generation or so.

alephnerd

> But it will need to be slow, so as to allow the elite doctor dynasties to recalibrate which specialties to send their kids into, and so as not to disrupt those already practicing

That's not how much of medicine is run from a business standpoint anymore.

Managed Service Organizations (MSOs) and PE consolidation have become the norm for much of the medical industry, because running a medical practice AND practicing medicine at the same time is hard.

Managing insurance billing, patient records, regulatory paperwork, payroll, etc. is an additional 20-30 hours of work on top of practicing as a doctor (which is around 30-50 hours in itself).

Because of this, single-practitioner clinics and partnership models get sold off, and the doctors themselves get a payout and are then treated as employees.

> It will happen, but on a longer timescale, maybe a generation or so

I agree with you that a lot of AI/ML application timelines are overhyped, but in Radiology specifically the transition has already started happening.

The outsourced imaging model has been the norm for almost 30 years now, and most of the players began funding or acquiring startups in this space a decade ago.

Is 100% automation in the next 5 to 10 years realistic? Absolutely not!

Is 30-50% automation realistic? I'd say so.

moralestapia

Yeah, that's cool.

But it's not automation.

GP's point is that it will be really hard to take the specialist out of this process, mainly because of regulatory issues.

alephnerd

The point is, if you can substitute even 30-50% of the headcount used in initial diagnostics and analysis, your profit margins grow exponentially.

The customers for these kinds of Radiology Agents are the teleradiology and clinical labwork companies, as well as MSOs and Health Groups looking to cut outside spend by bringing some subset back in-house.

100% automation is unrealistic for a generation, but 30-50% of teleradiology and clinical labwork headcount being replaced by CV applications is absolutely realistic.

A lot of players that entered the telemedicine segment during the COVID pandemic and lockdowns have begun pivoting into this space, as have the new-gen MSOs and the legacy telemedicine organizations (e.g. Apollo, Fortis).

(Also, why are you getting downvoted? You brought up a valid point of contention)

antegamisou

Or maybe overconfident, arrogant CS majors aren't experts in every other unrelated subdomain, and it's not some absurd absence of meritocracy or "bad gatekeepers" that keeps them from ruining other fields by making everything rely on AI?

bonoboTP

Yes, but this mostly plays out in ignoring social factors, like who can take on the blame, etc. CS people also underestimate the extreme levels of ego and the demigod-like self-image of doctors. They passed a rite of passage that was extremely exhausting, and perhaps even humiliating, during residency, so afterwards they are not ready to give up their hard-earned self-image. Doctor mentality remains the same as in Semmelweis's time.

Good faith is difficult to assume. I do agree that the real world is much more complex than simply which tool works better.

An analogy could be jury-based courts in the US. The reasons for having juries, and many of the rules, are not really evidence-based. It's well known that juries can be biased in endless ways and are very easy to manipulate. Their main purpose, though, is not to make objectively correct decisions; it is to give legitimacy to a consensus. Similarly, giving diagnostic power to human doctors is not a question of accuracy. It's a question of acceptability/blame/status.

napoleoncomplex

There is a tremendous share of medical specialties facing shortages, and fear of AI is not a relevant trend causing them. Even the link explaining shortages in the above article is pretty clear on that.

I do agree with the article author's other premise: radiology is one of those fields that a lot of people (me included) have expected to be largely automated, or at least the easy parts, as the author mentions, and the timelines are moving slower than expected. After all, pigeons perform similarly well to radiologists: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651348/ (not really, but it is basically obligatory to post this article in any radiology-themed discussion if you have radiology friends).

Knowing medicine, even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.

bachmeier

The reason AI is hyped is because it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task. Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years. But the low-hanging fruit is easy and fast. It may never be feasible to complete the last few percent. Then it changes from "AI replacement" to "AI assisted". I don't know much about radiology, but I remember before the pandemic one of the big fears was what we'd do with all the unemployed truck drivers.
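To make the linear-extrapolation trap concrete, here is a toy sketch in Python; the numbers are invented for illustration, not taken from any real benchmark. The same one-year gain, fit as a straight line versus as a saturating (logistic) curve, gives very different forecasts for reaching 99%:

    import math

    def linear_forecast(p0, p1):
        # Months to reach 99% if last year's p0 -> p1 gain continued as a straight line.
        rate = (p1 - p0) / 12.0          # capability points per month
        return (0.99 - p1) / rate

    def logistic_forecast(p0, p1):
        # Same question, but assuming progress saturates: logit(p) grows linearly.
        logit = lambda p: math.log(p / (1.0 - p))
        rate = (logit(p1) - logit(p0)) / 12.0
        return (logit(0.99) - logit(p1)) / rate

    print(linear_forecast(0.60, 0.85))    # ~7 months: "replacement is imminent"
    print(logistic_forecast(0.60, 0.85))  # ~26 months, and 99.9% is ~47 months out

Under the saturating fit, each extra "nine" of reliability costs about as much time as all the progress before it, which is one way the last few percent can stay out of reach indefinitely.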

edanm

> The reason AI is hyped is because it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task.

No, it's because if the promise of certain technologies is reached, it'd be a huge deal. And of course, that promise has been reached for many technologies, and it has indeed been a huge deal. Sometimes less than people imagine, but often more than the naysayers, who think it won't have any impact at all, expect.

pavel_lishin

> Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years

Extrapolating in a linear fashion, in a few years my child will be ten feet tall, weigh six hundred pounds, and speak 17 languages.

The first 90% is the easy part. It's the other 90% that's hard. People forget that, especially people who don't work in software/technology.

candiddevmike

> Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years

That's a hefty assumption, especially if you're including accuracy.

hnthrowaway6543

> That's a hefty assumption, especially if you're including accuracy.

That's exactly what the comment is saying. People see AI do 80% of a task and assume development will follow a linear trend, so the last 20% will get done relatively quickly. In reality, the last 20% is hard to impossible. A prime example is self-driving vehicles, which have been 80% done and 5 years away for the past 15 years. (It actually looks further than 5 years away now that we know throwing more training data at the problem doesn't fix it.)

rsynnott

That's their point, I think; since the '50s or so, people have been making this mistake about AI and AI-adjacent things, and it never really plays out. That last "10%" often proves to be _impossible_, or at best very difficult. You could argue that OCR has finally managed it, at least for simple cases, but it took about 40 years.

bdndndndbve

OP is being facetious; the "last 20%" is a common saying implying that you've back-loaded the hard part of a task.

bonoboTP

> There is a tremendous share of medical specialties facing shortages

The supply of doctors is artificially constrained by the doctor cartel/mafia. There are plenty of people who want to enter but are kept out by artificial limits on training slots.

singleshot_

The supply of doctors would be much greater if incompetent people were allowed into the training pathway, for sure.

bonoboTP

Yep, this is precisely what they argue. They don't simply say they want to keep their salaries and status high through undersupply; they argue that it's all about standards, patient safety, etc. In the US, even doctors trained in Western Europe are kept out or strangled with extreme bureaucratic requirements. Of course, again, the purported argument is patient safety, as if doctors in Europe were less competent. Health-outcome data certainly doesn't indicate that, but smoke and mirrors remain effective.

DaveExeter

Bad news...they're already there!

mananaysiempre

Medical professionals are highly paid, thus an education in medicine is proportionally expensive. An education in medicine is expensive, thus younger medical professionals need to be highly paid in order to afford their debt. Until this vicious cycle is broken (e.g. by less accessible student loans? and "more easily defaultable" is one way to spell "less accessible"), things are not going to improve. And there's also the problem that you want your doctors to be highly paid, because it's a stressful, high-responsibility job requiring a stupidly difficult education.

bonoboTP

US doctors are ridiculously overpaid compared to the rest of the developed world, such as the UK or the western EU, and there's no evidence that this translates to better care at all. It's all due to their regulatory capture. One possible outcome is that healthcare costs continue to balloon until the bubble eventually pops, the mafia gets disbanded, and more immigrant doctors are allowed to practice, driving prices to saner levels.

infecto

I wouldn’t dismiss the premise so quickly. Other factors certainly play a role, but I imagine that after 2016, anyone considering a career in radiology would have automation as a prominent concern.

randomdata

Automation may be a concern. Not because of Hinton, though. There is only so much time in the day. You don't become a leading expert in AI like Hinton has without tuning out the rest of the world, which means a random Average Joe is apt to be in a better position to predict when automation is capable of radiology tasks than Hinton. If an expert in radiology was/is saying it, then perhaps it is worth a listen. But Hinton is just about the last person you are going to listen to on this matter.

nopinsight

I wonder what experts think about this specialized model:

"Harrison.rad.1 excels in the same radiology exams taken by human radiologists, as well as in benchmarks against other foundational models.

The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%).

Harrison.rad.1 scored 51.4 out of 60 (85.67%)."

https://harrison.ai/harrison-rad-1/

bee_rider

It looks like there are 3 possible outcomes: AI will do absolutely nothing, AI will enhance the productivity of radiologists, or AI will render their skills completely obsolete.

In case 1, learning radiology is a fine idea. In case 2, it becomes a little tricky, I guess if you are one of the most skilled radiologists you’ll do quite well for yourself (do the work of 10 and I bet you can take the pay of 2). For case 3, it becomes a bad career choice.

Although, I dunno, it seems like an odd thing to complain about. I mean, the nature of humans is that we make tools; we're going to automate if it is possible. Rather, I think the problem is that we've decided that if somebody makes the wrong bet about how their field will go, they should live in misery and deprivation.

gtirloni

Scenario 2 is the most likely by far. It's just the continuation of a trend in radiology. Clinics already employ many more "technicians" with basic training and have a "proper" radiologist double-checking and signing off on their work. If you're a technician (or a radiologist NOT supervising technicians), you're in hot water.

Kalanos

My cousin is an established radiologist. Since I am well-versed in AI, we talked about this at length.

He says that the automated predictions help free up radiologists to focus on the edge cases and more challenging disease types.

amelius

I predict most AI developers will be out of a job soon.

Say you are making a robot that puts nuts and bolts together using DL models. In a few years, Google/OpenAI will have solved this and many other physical tasks, and any client that needs nuts and bolts put together will just buy a GenericRobot that can do that.

Same for radiology-based diagnostics. Soon the mega companies will have bought enormous amounts of data and they will easily put your small radiology-AI company out of business.

Making a tool to do X on a computer? Soon, there will be LLM-based tools that can do __anything__ on a computer and they will just interface with your screen and keyboard directly.

Etc. etc.

Just about anyone working in AI right now is building a castle in someone else's kingdom.

krisoft

> I predict most AI developers will be out of a job soon.

Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job working on those jobs which haven't been automated yet.

> Just about anyone working in AI right now is building a castle in someone else's kingdom.

This doesn't ring true to me. When do you feel we should circle back to this prediction? Ten years? Fifteen?

Can you formulate your thesis in a form we can verify in the future? (long bets style perhaps)

amelius

I thought about it some more.

> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job working on those jobs which haven't been automated yet.

This may not be true if data is the bottleneck. AI developers may be out of a job long before the data collection needed to train the models has finished.

jvanderbot

> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job working on those jobs which haven't been automated yet.

Same for software devs: We'll all be working on AI that's working on our problems, but we'll also work on our problems to generate better training data, build out the rest of the automation and verify the AI's solutions, so really AI will be doing all the work and so will we.

amelius

> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job working on those jobs which haven't been automated yet.

Maybe so, but what if AI development accelerates, and, since the big companies have all the data and computational power, they put you out of business faster than you can spot the next thing the market wants?

candiddevmike

> Just about anyone working in AI right now is building a castle in someone else's kingdom.

Things are innovating so fast that even if you've convinced yourself that you own the kingdom, it seems like you're one research paper or Nvidia iteration away from having to start over.

cryptonym

This is not putting people out of a job; this is changing their jobs. As always, the workforce will adapt. Just like all previous tech, this is about improving efficiency. LLMs can't do "anything" on a computer (that's a misunderstanding of the tech), but they can already interface with screens and keyboards.

You overestimate the innovation abilities of big tech companies. They'll buy a lot of the companies designing robots that put nuts and bolts together, or do diagnostics; that's a great opportunity.

ben_w

> As always, workforce will adapt

I was never a fan of the term "Singularity" for the AI thing. When mathematical singularities pop up in physics, it's usually a sign the physics is missing something.

Instead, I like to think of the AI "event horizon", the point in the future — always ahead, yet getting ever closer — beyond which you can no longer predict what happens next.

Obviously that will depend on how much attention you pay to the developments in the field (and I've seen software developers surprised by Google Translate having an AR mode a decade after the tech was first demonstrated), but there is an upper limit even for people who obsessively consume all public information about a topic: if you're reading about it when you go to sleep, will you be surprised when you wake up by all the overnight developments?

When that happens, no workforce can possibly adapt.

How close is that? Dunno, but what I can say is that I cannot tell you what to expect 2 months from now, despite following developments in this field as best I can from the outside, and occasionally implementing one or other of the recent-ish AI models myself.

cryptonym

You don't ask people to know everything about the latest tech; they just need to do their jobs. If a new tool's benefit is high enough, everybody will know about it and use it. Based on what you said, one could infer that Translate's AR mode is nice but not something that'll meaningfully change the life of the average Joe.

Humanity has been through multiple "event horizons", or whatever you want to call them. With industrialisation, one man with some steam could suddenly do the work of thousands, and it expanded to each and every activity. Multiple "event horizons" later (transport, communication, IT), we are still here, with everyone working: differently than previous generations, but still working. We improved a bit by reducing child labor, but we doubled the workforce by including women, and we always need more people working for longer.

The only constant is that we keep people at work, and will until we figure out how to burn the last remaining atom. People are far too optimistic in believing a new tech will move people away from work.

moralestapia

I don't know why you're downvoted; you are correct. People just don't like their bubbles being popped.

I do software, have done it for about 20 years. My job has been perpetually "on the verge of becoming obsolete in the next few years", lol. Also, "yeah but as you get older people will not want to hire you" and all that.

Cool stories, *yawn*.

Meanwhile, in reality, my bank account disagrees, and more and more work keeps becoming available to me.

hobs

I predict that the current wave of AI will be useful but not nearly as useful as you hope, requiring ever more people to pour their time and energy into the overarching system it creates.

naveen99

He updated the timeline to 10-15 years in 2023. That is like an eternity. Even artists, lawyers, and programmers are scared of being replaced within that time frame. We might have humanoid robots in 15 years.

odyssey7

Why automate the analysis of an MRI when the bottleneck is that they only have two MRI machines in the whole city, for some reason?

righthand

I worked for a startup that was contracted by a German doctor. We built some software for automating the identification of lung lesions: a viewer app for each X-ray, scientific-trials software, and an LLM to process data from the trials software. The result was pretty cool and probably would have moved the field forward. We even held a few trials to gather data from many doctors in China. Then the doctor didn't pay his bills and the project was shut down.

This wouldn’t have put radiologists out of work but would have made the process to diagnose much quicker maybe more affordable.

stego-tech

Not a bad piece. I quite liked how it manages to find the likely middle ground between the doomers on both sides while still twisting the knife in the typical "workforce will adapt" chuds who think they're above the fracas.

It’s not about whether the present AI bubble is a giant sham that’ll implode like a black hole, or if it’s the real deal and AGI is days away. It’s always been about framing the consequences of either outcome in a society where 99.9% of humans must work to survive, and the implications of a giant economic collapse on those least-able to weather its impact in a time of record wealth and income inequality. It doesn’t matter which side is right, because millions of people will lose their jobs as a result and thousands will die. That is what some of us are trying to raise the alarm about, and in a post-2008 collapse world, young people are absolutely going to do what it takes to survive rather than what’s best for society - which is going to have knock-on effects like the author described (labor shortages in critical fields) for decades to come.

In essence, if one paper from one guy managed to depress enrollment in a critical specialty for medicine, and he was flat-out wrong, then imagine the effects the Boosters and Doomers are having on other fields in the present cycle. If I were a betting dinosaur, I suspect there will be a similar lack of highly-skilled programmers and knowledge workers due to the “copilot tools will replace programmers” hype of present tools, and then we’ll have these same divorced-from-society hypesters making the same predictions again, ignorant of the consequences of their prior actions.

Which is all a lot of flowery language just to say, “maybe we need to stop evaluating these things solely as ways to remove humans from labor or increase profit margins, and instead consider the broader effects of our decisions before acting on them.”

hggigg

That is because Mr Hinton is full of shit. He constantly overstates the progress to bolster his position on associated risks. And now someone gave him a bloody Nobel so he'll be even more intolerable.

What is behind the curtain is becoming obvious: while there are some gains in specific areas, the ability of this technology, as it stands today, to change society is mostly limited to pretending to be a solution for the usual shitty human behaviour of cost cutting and workforce reduction. For example, IBM laying off people under the guise of AI when it was actually a standard cost-cutting exercise with some marketing smeared over the top, while management told the remaining people to pick up the slack. A McKinsey special! And generating content to be consumed by people who can't tell the difference between humans and muck.

A fine example is from the mathematics side: we are constantly promised huge gains from LLMs, but we can't yet replace even a flunked undergrad with anything. And this is because it is the wrong tool for the fucking job. Which is the problem in one simple statement: it's mostly the wrong tool for most things.

Still I enjoyed the investment ride! I could model that one with my brain fine.

auggierose

I hear existential despair. Mathematics is indeed a great example. Automation (AI) has made great strides in automating theorem proving in the last 30 years, and LLMs are just a cherry on top of that. A cherry though that will accelerate progress even further, by bringing the attention of people like Terence Tao to the cause. It will not change how mathematics is done within 2 years. It will totally revolutionise how mathematics is done within 20 years (and that is a conservative guess).

meroes

ChatGPT and the others still can't reliably sum several integers, so when is an LLM going to replace anything meaningful in higher math?

This week I spent more time deciphering ChatGPT's mistakes than it would have taken to write something in Python to sum the integers myself.

hggigg

No, it's a terrible example. We have ITPs (interactive theorem provers) and ATPs (automated theorem provers), and progress on ATPs, the ones currently considered magical, is flattening into a fine asymptote. ITPs are useful, but they have nothing to do with ML at all.

Putting a confident timescale on this stuff is like putting a timescale on UFT (unified field theory) 50 years ago. A lie.

Oh, and let's not forget that we need to start with a conjecture first, and we can't get any machine to come up with anything even remotely new there.

There is no existential despair. At all.

jamal-kumar

I think my favorite thing is how people out there are reconsidering their programming careers and doing things like quitting, while a bunch of less-than-quality developers lazily deploy code they pulled out of the ass of an LLM: no bounds checking, no input validation, none of that basic-as-can-be security shit at all. They deploy all that code to production systems facing the internet, and nobody notices, because either the competent developers left for greener pastures or they got laid off. I keep telling people this is going to take 5 or 10 years to untangle, considering how hard it is in a lot of cases to fix code shipped in tons of stuff like embedded devices mounted up on a light pole or whatever, code that was considered good enough for a release. But I bet it'll be even more of a pain than that. These are, after all, security holes in devices that, in many cases, people's days or lives going well depend on.

What it has done so far for me is put copywriters out of a job. I still find it mostly useful for writing drivel that gets used as page filler or product descriptions for my ecommerce side jobs. Lately the image-recognition capabilities let me generate the kind of stuff I'd never write: Instagram posts with tons of emojis and other things I'd never have the gumption to do myself, but which increase engagement. I actually used a Markov chain generator for this going back to 2016, though, so the big difference here is that at least the LLM can form more coherent sentences.
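For anyone curious, that kind of word-level Markov generator fits in a few lines of Python. A minimal sketch, with a made-up toy corpus rather than my actual data:

    import random
    from collections import defaultdict

    def build_chain(text, order=1):
        # Map each word tuple of length `order` to the words observed after it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=20):
        # Random-walk the chain from a random starting key.
        key = random.choice(list(chain))
        out = list(key)
        for _ in range(length):
            followers = chain.get(tuple(out[-len(key):]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "this premium widget delivers premium quality and this widget delivers real value"
    print(generate(build_chain(corpus)))

The output is locally plausible but globally incoherent, which is exactly the gap an LLM closes for this kind of filler copy.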

hggigg

Oh, I'm right in the middle of your first point. I'm going to have on my gravestone: "Never underestimate the reputational and financial damage a competent but lazy software developer can do with an incompetent LLM."

So what you're saying is that it's a good bullshit generator for marketing. That is fair :)

jamal-kumar

It's really excellent for that. I can give it an image of a product I'm trying to sell and then say "describe this thing but use a ton of marketing fluff to increase engagement and conversions", and it does exactly that.

There are tools out there to automatically do this with A/B testing, and I think even stuff like Shopify plugins, but I still do it relatively manually.

I'll use it for code sparingly, for stuff like data transformations ("take this ridiculously flat, two-table database schema and normalize it to third normal form"), but for straight-up generating code to be executed I'm way more careful to make sure it isn't giving me, or the people who get to use that code, garbage. It actually isn't that bad for basic security audits, and I suggest any devs reading this re-run their code through it with security-related prompting to see if they missed anything obvious. The huge problem is that at least half of the devs with deadlines to meet are apparently not doing this, and I've seen it in the drop in quality of pull requests over the past 5 years.
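For concreteness, here is roughly what that flat-to-3NF transformation looks like as a sqlite3 sketch; the schema and column names are hypothetical, just to show the shape of the request:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # The flat starting point: customer fields repeat on every order row.
    con.execute("""CREATE TABLE orders_flat (
        order_id INTEGER, order_date TEXT,
        customer_name TEXT, customer_email TEXT, customer_city TEXT)""")

    # 3NF target: every non-key column depends on the key, the whole key,
    # and nothing but the key, so customer attributes get their own table.
    con.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name TEXT, email TEXT UNIQUE, city TEXT);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY, order_date TEXT,
        customer_id INTEGER REFERENCES customers(customer_id));
    """)

    # Migrate: deduplicate customers, then re-link each order by email.
    con.execute("""INSERT INTO customers (name, email, city)
        SELECT DISTINCT customer_name, customer_email, customer_city
        FROM orders_flat""")
    con.execute("""INSERT INTO orders (order_id, order_date, customer_id)
        SELECT f.order_id, f.order_date, c.customer_id
        FROM orders_flat f JOIN customers c ON c.email = f.customer_email""")

This is the kind of mechanical restructuring an LLM tends to get right, while still deserving the review pass described above.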
