An Overwhelmingly Negative and Demoralizing Force

justonceokay

I’ve always been the kind of developer that aims to have more red lines than green ones in my diffs. I like writing libraries so we can create hundreds of integration tests declaratively. I’m the kind of developer that disappears for two days and comes back with a 10x speedup because I found two loop variables that should be switched.

There is no place for me in this environment. It's not that I couldn't use the tools to make so much code, it's that AI use makes the metric for success speed-to-production. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…

ajjenkins

AI can definitely produce a deletion. In fact, I commonly use AI to do this. Copy some code and prompt the AI to make the code simpler or more concise. The output will usually be fewer lines of code.

Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.

Freedom2

I'm no big fan of LLM generated code, but the fact that GP bluntly states "AI will never produce a deletion" despite this being categorically false makes it hard to take the rest of their spiel in good faith.

As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So someone achieving a 10x speedup in a vacuum isn't necessarily always a good thing.

fifilura

I'd also be wary of the risk of being an architecture-astronaut.

A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.

Which of course feels good for the maintainer, to feel needed.

ryandrake

I messed around with Copilot for a while and this is one of the things that actually really impressed me. It was very good at taking a messy block of code, and simplifying it by removing unnecessary stuff, sometimes reducing it to a one line lambda. Very helpful!

buggy6257

> sometimes reducing it to a one line lambda.

Please don't do this :) Readable code is better than clever code!
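
For illustration, here's a made-up Python example (hypothetical names, not from the thread) of the kind of one-liner being debated versus a version that's easier to review:

    # "Clever" version: one expression crammed into a lambda; hard to scan in review.
    top_names_clever = lambda users: sorted(
        {u["name"].strip().title() for u in users if u.get("active")}
    )[:10]

    # Readable version: same behavior, but each step is named.
    def top_names_readable(users):
        """Return up to 10 alphabetically sorted display names of active users."""
        active_users = [u for u in users if u.get("active")]
        display_names = {u["name"].strip().title() for u in active_users}
        return sorted(display_names)[:10]

Both produce the same result; the second just survives code review and future edits better.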

KurSix

AI can refactor or trim code. But in practice, the way it's being used and measured in most orgs is all about speed and output

Lutger

So it's rather that AI amplifies the already existing short-term incentives, increasing the long-term costs that are harder to attribute and easier to ignore.

The one actual major downside to AI is that PMs and higher-ups are now looking for problems to solve with it. I haven't really seen this much with technology before, except when cloud first became a thing and maybe sometimes with Microsoft products.

specialist

This is probably just me projecting...

u/justonceokay wrote:

> The solution to bad code is more code.

This has always been true, in all domains.

Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).

u/justonceokay wrote:

> AI will never produce a deletion.

I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".

But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html

Can Gen-AI do the (traditional, pre 2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?

Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?

You wrote:

> only focus on adding new features

Yup.

Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.

The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.

Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.

gopher_space

> Can Gen-AI moot the need for code?

During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.

Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.

It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.

bitwize

> Can Gen-AI moot the need for code?

No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Related is an idea I often quote against "low/no code tooling": that by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. Relatedly, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.

Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.

futuraperdita

> But what of novelty, craft, innovation?

I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.

pja

> Unseen were all the sleepless nights we experienced from untested sql queries and regexes and misconfigurations he had pushed in his effort to look good. It always came back to a lack of testing edge cases and an eagerness to ship.

If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

lovich

>If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?

Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.

762236

AI writes my unit tests. I clean them up a bit to ensure I've gone over every line of code. But it is nice to speed through the boring parts, and without bringing declarative constructs into play (imperative coding is how most of us think).

AnimalMuppet

If the company values that 10x speedup, there is absolutely still a place for you in this environment. Only now it's going to take five days instead of two, because it's going to be harder to track that down in the less-well-structured stuff that AI produces.

Leynos

Why are you letting the AI construct poorly structured code? You should be discussing an architectural plan with it first and only signing off on the code design when you are comfortable with it.

gitpusher

> I wonder what will make the AI developers feel old…

When they look at the calendar and it says May 2025 instead of April

bitwize

If you've ever had to work alongside someone who has, or whose job it is to obtain, all the money... you will find that time to market is very often the ONLY criterion that matters. Turning the crank to churn out some AI slop is well worth it if it means having something to go live with tomorrow as opposed to a month from now.

LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high that, as a business type, you have to be brain-dead to leave money on the table by refusing to try replicating his success.

bookman117

It feels like LLMs are doing to coding what the internet/attention economy did to journalism.

bitwize

Yeah, future math professors explaining the Prisoners' Dilemma are going to use clickbait journalism and AI slop as examples instead of today's canonical ones, like steroid use among athletes.

DeathArrow

>AI use makes the metric for success speed-to-production

Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

AdieuToLogic

>>AI use makes the metric for success speed-to-production

> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

This reminds me of an old software engineering adage.

  When delivering a system, there are three choices
  stakeholders have:

  You can have it fast,
  You can have it cheap,
  You can have it correct.

  Pick any two.

cies

> I wonder what will make the AI developers feel old…

They will not feel old because they will enter into bliss of Singularity(TM).

https://en.wikipedia.org/wiki/Technological_singularity

wedn3sday

Had a funny conversation with a friend of mine recently who told me about how he's in the middle of his yearly review cycle, and management is strongly encouraging him and his team to make greater use of AI tools. He works in biomedical lab research and has absolutely no use for LLMs, but everyone on his team had a great time using the corporate language model to help write amusing resignation letters as various personalities: pirate resignation, dinosaur resignation, etc. I don't think anyone actually quit, but what a great way to absolutely nuke team morale!

davesque

I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different than the usual hype; so prescribed, as though coordinated by some unseen party. Almost like every out of touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.

Loughla

It's because unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.

Prior hypes, like blockchain, were more abstract, therefore less useful to people who understand managing but not the actual work.

ethbr1

> this one is super easy for an MBA to point at and sort of see a way to integrate it

Because a core feature of LLMs is to minimize the distance between {quality answers} and {gibberish that looks correct}.

As a consequence, this maximizes {skill required to distinguish the two}.

Are we then surprised that non-subject matter experts overestimate the output's median usefulness?

AdieuToLogic

>> I've been getting the same thing at my company. Honestly no idea what is driving it other than hype.

> It's because unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.

This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.

Since the majority of management training centers on early 20th century manufacturing concepts, people taught that way believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."

0 - https://en.wikipedia.org/wiki/Assembly_line

johnnyanmac

Shame that management is deciding that listening to marketing is more important than the craftsmen they push it on.

ahaucnx

Can we stop with MBA bashing?

I feel it degrades a whole group of people to a specific stereotype that might or might not be true.

How about lawyers, PhDs, political science majors, etc.

Let’s look at the humans and their character, not titles.

By the way, I have an MBA too and feel completely misjudged with statements like that.

bondarchuk

Automation of knowledge work. Simply by using AI you are training your own replacement and integrating it into company processes.

rep_lodsb

Rather than some conspiracy, my suspicion is that AI companies accidentally succeeded in building a machine capable of hacking (some) people's brains. Not because it's superhumanly intelligent, or even has any agenda at all, but simply because LLMs are specifically tuned to generate the kind of language that is convincing to the "average person".

Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.

zdragnar

> strongly encouraging him and his team to make greater use of AI tools

I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.

AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.

chairhairair

Has your friend talked with current bio research students? It’s very common to hear that people are having success writing Python/R/Matlab/bash scripts using these tools when they otherwise wouldn’t have been able to.

Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.

fumeux_fume

Recommending that someone in the industry take pointers from how students do their work is always solid advice.

theoreticalmal

Unironically, yes. The industry clearly has more experience, but it’s silly to assume students don’t have novel and useful ideas that can (and will) be integrated

amarcheschi

I'm taking a computational health laboratory course. I do have to say Gemini is helping me a lot, but someone who knows what's happening is going to be much better than us. Our professor told us it is of course allowed to make things with LLMs, since in the field we will be able to do that. However, I found they're much less precise with bioinformatics libraries than others...

I do have to say that we're just approaching the tip of the iceberg and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed even with LLMs

antifa

I generally have one-shot success asking ChatGPT to make bash/Python scripts and one-liners that would otherwise take me an hour to a day to figure out on my own (probably using one of my main languages), or that I might not even bother attempting. That's great for productivity, but over 90% of my job doesn't need throw-away scripts and one-liners.

KurSix

That is both hilarious and depressingly on-brand for how AI is being handled in a lot of orgs right now. Management pushes it because they need to tick the "we're innovating" box, regardless of whether it makes any sense for the actual work being done

whizzter

Our org seems to be getting some benefit from being sped up by AI tools for code generation (much of it is CRUD or layout stuff). However, at times I'm asked for help by colleagues, and the first thing I've done is Googled and found the answer, getting an "Oh right, you can google also" since they've been trying to figure out the issue with ChatGPT or similar.

throwaway173738

Gemini loves to leave poetry on our reviews, right below the three bullet points about how we definitely needed to do this refactor but also we did it completely wrong and need to redo it. So we mainly just ignore it. I heard it gives good advice to web devs though.

dullcrisp

I really hope that if someone does quit over this, they do it with a fun AI-generated resignation letter. What a great idea!

Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.

im3w1l

If you are not building AI into your workflows right now you are falling behind those that do. It's real, it's here to stay and it's only getting better.

bwoj

That’s such outdated thinking. I’m using AI to build AI into my workflows.

im3w1l

I unironically agree with that idea.

recursivedoubts

I teach compilers, systems, etc. at a university. Innumerable times I have seen AI lead a poor student down a completely incorrect but plausible path that will still compile.

I'm adding `.noai` files to all the projects going forward:

https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...

AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.

"That's OK, we only hire experienced developers."

Yes, and where do you suppose experienced developers come from?

Again and again in this AI arc I'm reminded of the sorcerer's apprentice scene from Fantasia.

ffsm8

> Yes, and where do you suppose experienced developers come from?

Strictly speaking, you don't even need university courses to get experienced devs.

There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.

Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.

bluefirebrand

> People like that will always be more effective at their job once employed

This is honestly not my experience with self taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff

In a past job, I had to untangle a massive nested loop structure written by a self taught dev, which did work but ran extremely slowly

He was very confused and asked me to explain why my code ran fast, his ran slow, because "it was the same number of loops"

I tried to explain Big O, linear versus quadratic complexity, etc, but he really didn't get it
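
For illustration, a minimal made-up Python sketch of the kind of difference involved (hypothetical data, not the actual code from that job):

    # Quadratic version: "the same number of loops" on paper, but the inner loop
    # rescans the whole customer list for every order, so work grows as n * m.
    def orders_with_known_customer_slow(orders, customers):
        matched = []
        for order in orders:
            for customer in customers:
                if order["customer_id"] == customer["id"]:
                    matched.append(order)
                    break
        return matched

    # Linear version: build a set once; each membership check is O(1) on average,
    # so total work grows roughly as n + m.
    def orders_with_known_customer_fast(orders, customers):
        known_ids = {customer["id"] for customer in customers}
        return [order for order in orders if order["customer_id"] in known_ids]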

But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly

taosx

I was self taught before I studied; most of the "foundational" knowledge is very easy to acquire. I've mentored some self-taught juniors and they surprised me at how fast they picked up concepts like big O just by looking at a few examples.

ehnto

It doesn't seem to matter if someone went to university. I have had to unpick crap code from uni grads and self taught alike. Experience may be the only truly reliable tell, and I don't mean jobs held, I mean real-world experience on projects.

There are also different types of self taught, and different types of uni grad. You have people who love code, have a passion for learning, and that's driven them to gain a lot of experience. Then you have those who needed to make a living, and haven't really stretched beyond their wheelhouse so lack a lot of diverse experience. Both are totally fine and capable of some work, but you would have better luck with novel work from an experienced passionate coder. Uni trained or not.

triyambakam

s/self taught/degreed/g and it's still true. It's a skill issue no matter the pedigree.

ffsm8

I literally said as much?

> Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.

erikerikson

GP didn't mention university degrees.

You get experienced devs from inexperienced devs that get experience.

[edit: added "degrees" as intended. University was mentioned as the context of their observation]

ffsm8

The first sentence contextualized the comment to university degrees as far as I'm concerned. I'm not sure how you could interpret it any other way, but maybe you can enlighten me.

philistine

> There will always be individuals that enjoy coding and do so without any formal teaching.

We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.

dingnuts

I'm so glad I learned to program so I could either be called a basement dweller or a tech bro

65839747

> There will always be individuals that enjoy coding and do so without any formal teaching.

That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.

robinhoode

> Yes, and where do you suppose experienced developers come from?

Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

Don't get me wrong, it will take huge social upheaval to replace the current economic system.

But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.

lcnPylGDnU4H9OF

> criticizing the humans that are using AI to replace workers, instead of criticizing AI itself

I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.

It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.

bayindirh

Actually, there are two main problems with AI:

    1. How it's gonna be used and how it'll be a detriment to quality and knowledge.
    2. How AI models are trained with a great disregard to consent, ethics, and licenses.

The technology itself, the idea, what it can do, is not the problem, but how it's made and how it's gonna be used will be a great problem going forward, and none of the suppliers say that it should be used in moderation or that it will be harmful in the long run. Plus the same producers are ready to crush/distort anything to get their way.

... smells very similar to the tobacco/soda industry. Both created faux-research institutes to further their causes.

EFreethought

I would say the huge environmental cost is a third problem.

clown_strike

> How AI models are trained with a great disregard to consent, ethics, and licenses.

You must be joking. Consumer models' primary source of training data seems to be the legal preambles from BDSM manuals.

ToucanLoucan

> Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

This was pretty consistently my and many others' viewpoint since 2023. We were assured many times over that this time it would be different. I found this unconvincing.

onemoresoop

Who assured you?

recursivedoubts

i don't think it's an either/or situation

rchaud

> I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.

Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.

Couple that with the unavailability of community and social supports and you have a recipe for disaster.

nathan_compton

When LLMs came out I suppressed my inner curmudgeon and dove in, since the technology was interesting to me and seemed much more likely than crypto to be useful beyond crime. Thus, I have used LLMs extensively for many years now and I have found that despite the hype and amazing progress, they still basically only excel at first drafts and simple refactorings (where they are, I have to say, incredibly useful for eliminating busy work). But I have yet to use a model, reasoning or otherwise, that could solve a problem that required genuine thought, usually in the form of constructing the right abstraction, bottom-up style. LLMs write code like super-human dummies, with a tendency to put too much code in a given function and with very little ability to invent a domain in which the solution is simple and clearly expressed, probably because they don't care about that kind of readability and it's not much in their data set.

I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom-up code is the cultural standard, and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.

For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI-generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI is still limited to text as its whiteboard and local memory, text which it generates linearly from top to bottom, even when reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far it's not there.

fhd2

My solution is to _only_ chat. No auto completion, nothing agentic, just chats. If it goes off the rails, restart the conversation. I have the chat window in my "IDE" (well, Emacs) and though it can add entire files as context and stuff like that, I curate the context in a fairly fine-grained way through either copy and pasting, quickly writing out pseudo code, and stuff like that.

Any generated snippets I treat like StackOverflow answers: Copy, paste, test, rewrite, or for small snippets, I just type the relevant change myself.

Whenever I'm sceptical I will prompt stuff like "are you sure X exists?", or do a web search. Once I get my problem solved, I spend a bit of time to really understand the code, figure out what could be simplified, even silly stuff like parameters the model just set to the default value.

It's the only way of using LLMs for development I've found that works for me. I'd definitely say it speeds me up, though certainly not 10x. Compared to just being armed with Google, maybe 1.1x.

esafak

Companies need to be aware of the long-term effects of relying on AI. It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.

I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...

My strong advice is to digest every line of generated code; don't let it run ahead of you.

dkobia

It is absolutely terrifying to watch tools like Cursor generate so much code. Maybe not a great analogy, but it feels like driving with Tesla FSD in New Delhi in the middle of rush hour. If you let it run ahead of you, the amount of code to review will be overwhelming. I've also encountered situations where it is unable to pass tests for code it wrote.

tmpz22

Like TikTok, AI coding breaks human psychology. It is ingrained in us that if we have a tool that looks right enough and seems highly productive, we will over-apply it to our work. Even diligent programmers will be lured into accepting giant commits without diligent review, and they will pay for it.

Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.

Analemma_

When have companies ever cared about the long-term effects of anything, and why would they suddenly start now?

Aeolun

> It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.

I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.

It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.

At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.

KurSix

AI can get you to "something that runs" frighteningly fast, but understanding why it works (or doesn't) is where the real time cost creeps in

chilldsgn

100% agree with you, my sentiment is the same. Some time ago I considered making the LLM create tests for me, but decided against it. If I don't understand what needs to be tested, how can I write the code that satisfies this test?

We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.

terminalbraid

This story just makes me sad for the developers. I think especially for games you need a level of creativity that AI won't give you, especially once you get past the "basic engine boilerplate". That's not to say it can't help you, but this "all in" method just looks forced and painful. Some of the best games I've played were far more "this is the game I wanted to play" with a lot of vision, execution, polish, and careful craftspersonship.

I can only hope endeavors (experiments?) this extreme fail fast and we learn from them.

tjpnz

Asset flips (half arsed rubbish made with store bought assets) were a big problem in the games industry not so long ago. They're less prevalent now because gamers instinctively avoid such titles. I'm sure they'll wise up to generative slop too, I've personally seen enough examples to get a general feel for it. Not fun, derivative, soulless, buggy as hell.

hnthrow90348765

But make some shallow games with generic, cell-shaded anime waifus accessed by gambling and they eat that shit up

ang_cire

If someone bothered to make deep, innovative games with cell-shaded anime waifus without gambling, they'd likely switch. This is more likely a market problem of US game companies not supplying sufficient CSAWs (acronym feels unfortunate but somehow appropriate).

Analemma_

Your dismissive characterization is not really accurate. Even in the cell-shaded anime waifu genre, there is a spectrum of gameplay quality and gamers do gravitate toward and reward the better games. The big reason MiHoYo games (Genshin Impact, Star Rail) have such a big presence and staying power is that even though they are waifu games at the core, the gameplay is surprisingly good (they're a night-and-day difference compared to slop like Blue Archive), and they're still fun even if you resolve to never pay any microtransactions.

caseyy

AI is the latest "overwhelmingly negative" games industry fad, affecting game developers. It's one of many. Most are because nine out of ten companies make games for the wrong reason. They don't make them as interactive art, as something the developers would like to play, or to perfect the craft. They make them to make publishers and businessmen rich.

That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry will significantly decline. Or that's what I think, anyway.

[0] https://www.matthewball.co/all/stateofvideogaming2025

[1] https://www.youtube.com/watch?v=5tJdLsQzfWg

jordwest

I don't share your optimism. I think as long as there are truly great games being made and the developers earning well from them, the business people are going to be looking at them and saying "we could do that". What those studios lack in creativity or passion they more than make up for in marketing, sales, and sometimes manipulative money-extraction game mechanics.

caseyy

It's not so much optimism as facts. Large AAA game companies have driven away investors[0] and talent[1]. The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.

AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.

Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).

Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic; others (especially the business types without much creative talent) consider it dire. But it does seem to be the objective situation.

[0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.

[1] The games industry has seen more than 10% sustained attrition over the last 5 years, and about 50% of employees hope to leave their employer within a year: https://www.skillsearch.com/news/item/games---interactive-sa...

s_trumpet

> The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.

I just don't think that's true in a world where Marvel Rivals was the biggest launch of 2024. Live service games like Path of Exile, Counter-Strike, Genshin Impact, etc. make boatloads of money and have ever rising player counts.

The problem is that it's a very sink-or-swim market - if you manage to survive 2-3 years you will probably make it, but otherwise you are a very expensive flop. Not unlike VC-funded startups - just because some big names failed doesn't make investing into a unicorn any less attractive.

milesrout

Very selective data in that presentation. The worst figures are always selected for comparisons: in one it's since 2019, then since 2020, then since 2022, then since 2020, then 2019, and on and on.

There is nothing wrong with making entertainment products to make money. That's the reason all products are made: to make money. Games have gone bad because the audience has bad taste. People like Fortnite. They like microtransactions. They like themepark rubbish that you can sell branded skins for. It is the same reason Magic: the Gathering has been ruined with constant IP tie-ins: the audience likes it. People pay for it. People like tat.

internet_points

In Norway, there was a recent minor scandal where a county released a report on how they should shut down some schools to save money, and it turned out half the citations were fake. Quite in line with the times. So our Minister of Digitizing Everything says "It's serious. But I want to praise Tromsø Municipality for using artificial intelligence." She's previously said she wants 80% of public sector to be using AI this year and 100% by 5 years. What does that even mean? And why and for what and what should they solve with it? It's so stupid and frustrating I don't even

snitty

My favorite part about this (and all GenAI) comments section is where one person says, "This is my personal experience using AI" and then a chorus of people chime in "Well, you're using it wrong!"

probably_wrong

I personally prefer the one where everyone tells you that your error is because you used the outdated and almost unusable version from yesterday instead of the revolutionary release from today that will change everything we know. Rinse and repeat tomorrow.

namaria

Not to mention the variations of "you need to prompt better" including now "rules files" which begs the question: wouldn't just writing code be a much better way to exercise control over the machine?

cglace

In my tests of using AI to write most of my code, just writing the code yourself (with Copilot) and doing manual rounds with Claude is much faster and easier to maintain.

ilrwbwrkhv

One very irritating problem I am seeing in a bunch of companies I have invested in, where my money is at stake, is that they have taken larger investments from normal VCs, who are usually dumb as rocks but hold a larger share, and those VCs are pushing heavily for AI in the day-to-day processes of the company.

For example, some companies are using AI to create tickets or to collate feedback from users.

I can clearly see that this is making them think through the problem far less. A lot of that sixth-sense understanding of the problem space comes from working through these ticket-creation or product-creation documents, which are now being done by AI.

That is causing the quality of the work to become this weird, drone-like, NPC-like state where they aren't really solving real issues, yet they're getting a lot of stuff done.

It's still very early so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc has huge negative implications when AI is used even a little bit.

I have also started to think that a great angel-investment question is to ask companies if they are a non-AI zone; investing in them may bring better returns in the future.

Aeolun

It’s because it’s never “this is my personal experience”, it’s always of the “this whole AI thing is nonsense because it doesn’t work for me” variety.

maeln

The same can be said about the other side. It is rarely phrased as "LLMs are a useful tool with some important limitations" but rather "Look, the LLM managed to create a junior-level feature, therefore we won't need developers two years from now".

It tends to be the same with anything hyped or divisive. Humans tend to exaggerate in both directions in communication, especially in low-stakes environments such as an internet forum, or when they stand to gain something from the hype.

milesrout

You seem to have confused "The whole AI thing is nonsense. [Anecdote]." with "The whole AI thing is nonsense because [anecdote]." I see a lot of "LLMs are not useful. e.g. the other day I asked it to do X and it was terrible." That is not somebody saying that that one experience definitively proves that LLMs are useless, or saying that you should believe that LLMs are useful based only on that one anecdote. It is people making their posts more interesting than just giving their opinions along with arguments for those opinions.

Obviously their views are based on the sum of all their experience with LLMs. We don't have to say so every time.

johnfn

It's because everyone's "personal experience" is "I used it once and it didn't work".

kstrauser

There are many, many reasons to be skeptical of AI. There are also excellent tasks it can efficiently help with.

I wrote a project where I'd initially hardcoded a menu hierarchy into its Rust. I wanted to pull that out into a config file so it could be altered, localized, etc. without users having to edit and recompile the source. I opened a "menu.yaml" file, typed the name of the top-level menu, paused for a moment to sip coffee, and Zed popped up a suggested completion of the file which was syntactically correct and perfect for use as-is.

I honestly expected I’d spend an hour mechanically translating Rust to YAML and debugging the mistakes. It actually took about 10 seconds.

It’s also been freaking brilliant for writing docstrings explaining what the code I just manually wrote does.

I don't want to use AI to write my code, any more than I'd want it to solve my crossword. I sure like having it help with the repetitive gruntwork and boilerplate.

ilrwbwrkhv

This sort of extremely narrow use case is what I think AI is good for, but the problem is that once you have it for this one, you will use it for other things and slowly atrophy.

jongjong

> In terms of software quality, I would say the code created by the AI was worse than code written by a human–though not drastically so–and was difficult to work with since most of it hadn’t been written by the people whose job it was to oversee it.

This is a key insight. The other insight is that devs spend most of their time reading and debugging code, not writing it. AI speeds up the writing of code but slows down debugging... AI was trained with buggy code because most code out there is buggy.

Also, when the codebase is complex and the AI cannot see all the dependencies, it performs a LOT worse because it just hallucinates the API calls... It has no idea what version of the API it is using.

TBH, I don't think there exists enough non-buggy code out there to train an AI to write good code which doesn't need to be debugged so much.

When AI is trained on normal language, averaging out all the patterns produces good results. This is because most humans are good at writing with that level of precision. Code is much more precise and the average human is not good at it. So AI was trained on low-quality data there.

The good news for skilled developers is that there probably isn't enough high quality code in the public domain to solve that problem... And there is no incentive for skilled developers to open source their code.

mattgreenrocks

Management: "devs aren't paid to play with shiny new tech, they should be shipping features!"

Also management: "I need you to play with AI and try to find a use for it"

undebuggable

Then maybe, under the pretext of playing with AI, finally refactor and clean up that codebase?