
Some things to expect in 2025

89 comments · January 16, 2025

kirubakaran

> A major project will discover that it has merged a lot of AI-generated code

My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

alisonatwork

I have heard the same response from junior devs and external contractors for years, either because they copied something from Stack Overflow, or because they copied something from a former client/employer (a popular one in China), or even because they just uncritically copied something from another piece of code in the same project.

From the point of view of these sorts of developers they are being paid to make the tests go green or to make some button appear on a page that kindasorta does something in the vague direction of what was in the spec, and that's the end of their responsibility. Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
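
To make that concrete, here is a hypothetical snippet in the spirit of that list (not from any real codebase):

    # Hypothetical example of "the tests go green, ship it" code.
    def apply_discount(price, discount):
        temp = []  # unused variable, never read
        result = price * (1 - discount)
        return result
        print("applied")  # unreachable: sits after the return

    # The name and comment promise email validation; the body checks length.
    def validate_email(username):
        # validate the user's email address
        return len(username) > 3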

I have spent a lot of time trying to mentor these sorts of devs and help them understand why just doing the bare minimum isn't really a good investment in their own career, not to mention that it's disrespectful of their colleagues, who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... It seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them. Then you open LinkedIn years later and it turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

Sorry, got a little carried away. Anywho, the point is LLMs are just another tool for these folks. It's not new, it's just worse now because of the mixed messaging where executives are hyping the tech as a magical solution that will allow them to ship more features for less cost.

bryanrasmussen

>Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.

Maybe I am just supremely lucky, but while I have encountered people like that (in the coding part), it is somewhat rare in my experience. These comments on HN always make it seem like it's at least 30% of the people out there.

alisonatwork

I think even though these types of developers are fairly rare, they have a disproportionate negative impact on the quality of the code and the morale of their colleagues, which is perhaps why people remember them and talk about it more often. The p95 developers who are more-or-less okay aren't really notable enough to be worth complaining about on HN, since they are us.

ojbyrne

I have been told (at a FAANG) not to fix those kind of code smells in existing code. “Don’t waste time on refactoring.”

devsda

> then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant

That's because they come across as result-oriented, go-getter kinds of people, while the others will be seen as uptight individuals. Unfortunately, management, for better or worse, self-selects for the first kind.

LLMs are only going to make it worse. If you can write clean code in half a day and an LLM can generate a "working" spaghetti mess in a few minutes, management will prefer the mess. This will be the case for many organizations where software is just an additional supporting expense and not a critical part of the main business.

Taylor_OD

This is more of an early career engineer thing than a ChatGPT thing. 'I don't know, I found it on stackoverflow' could have easily been the answer for the last ten years.

devsda

The main problem is not the source of the solution, but not making an effort to understand the code they have put in.

The "I don't know" might as well be "I don't care".

DowsingSpoon

I am fairly certain that if someone did that where I work, security would be escorting them off the property within the hour. This is NOT okay.

dyauspitr

Why? I encourage all my devs to use AI but they need to be able to explain what it does.

bigstrat2003

To be fair I don't think someone should get fired for that (unless it's a repeat offense). Kids are going to do stupid things, and it's up to the more experienced to coach them and help them to understand it's not acceptable. You're right that it's not ok at all, but the first resort should be a reprimand and being told they are expected to understand code they submit.

LastTrain

Kids, sure. A university-trained professional who is paid like one? No.

DowsingSpoon

I understand the point you’re trying to get across. For many kinds of mistakes, I agree it makes good sense to warn and correct the junior. Maybe that’s the case here. I’m willing to concede there’s room for debate.

Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company now must assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.

How could you trust the dev again after the dust is settled?

Also, it raises further concerns for me that this junior seems genuinely, honestly unaware that using ChatGPT to write code would at least be frowned upon. That's a frankly dangerous level of professional incompetence. (At least they didn't try to hide it.)

Well now I’m wondering what the correct way would be to handle a junior doing this with ChatGPT, and what the correct way would be to handle similar kinds of mistakes such as copy-pasting GPL code into the proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.

bitmasher9

Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than company average.

dehrmann

It's not necessarily the use of AI tools (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.

phinnaeus

Are you hiring?

userbinator

In such an environment, it would be more common for access to ChatGPT (or even most of the Internet) to be blocked.

ghxst

Was this a case of something along the lines of an isolated function with a bunch of bit-shifting magic for some hyper-optimization that was required, or was it just regular code?

Not saying it's acceptable, but the first example is maybe worth a thoughtful discussion while the latter would make me lose hope.

userbinator

At least he's honest.

gunian

The saddest part is, if I wrote the code myself it would be worse lol. GPT is coding at an intern level, and as a dumb human being I feel sad that I have been replaced, but it's not as catastrophic as they made it seem.

It's interesting to see the underlying anxiety among devs, though. I think there is a place in the back of their minds that knows the models will get better and better, and someday could reach staff-engineer level.

nozzlegear

I don't think that's the concern at all. The concern (imo) is that you should at least understand what the code is doing before you accept it verbatim and add it to your company's codebase. The potential it has to introduce bugs or security flaws is too great to just accept it without understanding it.

dataviz1000

I've been busy with a personal coding project. Working through problems with an LLM, which I haven't used professionally yet, has been great. Countless times in the past I've spent hours poring over Stack Overflow and GitHub repository code looking for solutions. Quite often I would have to solve the problem myself and would post the answer a day or two later below my own question on Stack Overflow. A big milestone for a software engineer is reaching the point where a difficult problem can't be solved with internet search, asking colleagues, or asking on Stack Overflow, no matter how well written and detailed the question, because the problems are esoteric: the edge of innovation is solitude. Today I give the input to the LLM, tell it what the output should be, and magically a minute later it is solved. I was thinking today about how long it has been since I was stuck and stressed on a problem.

With this personal project I'm prototyping and doing a lot of experimentation, so having an LLM saves a ton of time and keeps the momentum at a fast pace. The iteration process is a little different, with frequent stops to refactor, clean up, make the code consistent, and log the input and output to the console to verify.

Perhaps take the intern's LLM code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it correct.
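
As a rough sketch of that idea, assuming the OpenAI Python client (the model name and prompt here are placeholders, not a recommendation):

    # Untested sketch: feed a diff to an LLM and ask for a review.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def llm_code_review(diff: str) -> str:
        # Ask the model to act as the strict reviewer described above.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a strict code reviewer. Flag unused "
                            "variables, unreachable code, and misleading "
                            "names, and ask what each chunk of code does."},
                {"role": "user", "content": diff},
            ],
        )
        return response.choices[0].message.content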

gunian

Exactly why devs are getting the big bucks.

That is true right now, but what happens if at some point someone figures out a way to make it deterministic and able to write code without bugs?

chrisweekly

"AI is the payday loan* of tech debt".

jahewson

ChatGPT needs two years of "exceeds expectations" before that can happen.

gunian

I've been writing at troll level since I first got my computer at 19, so it looks like "exceeds expectations" to me lol

deadbabe

I hope that junior engineer was reprimanded or even put on a PIP instead of just having the reviewer say lgtm and approve the request.

WaxProlix

Probably depends a lot on the team culture. Depending on what part of the product lifecycle you're on (proving a concept, rushing to market, scaling for the next million TPS, moving into new verticals,...) and where the team currently is, it makes a lot of sense to generate more of the codebase by AI. Write some decent tests, commit, move on.

I wish my reports would use more AI tools for parts of our codebase that don't need a high bar of scrutiny; boilerplate at enterprise scale is a major source of friction and, tbh, burnout.

not2b

Unless the plan is to quickly produce a prototype that will be mostly thrown away, any code that gets into the product is going to generate far more work maintaining it over the lifetime of a product than the cost to code it in the first place.

As a reviewer I'd push back, and say that I'll only be able to approve the review when the junior programmer can explain what it does and why it's correct. I wouldn't reject it solely because chatgpt made it, but if the checkin causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue we have a problem.

bradly

Yes, and the team could be missing structures to support junior engineers. Why they didn't ask for help or pairing is really important to dig into, and I would expect a senior manager to understand this and be introspective about the environment they have created in which this person made this choice.

XorNot

I mean, if I got that answer from a junior during a code review, the next email I'd be sending would be to my team lead about it.

christina97

> A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.

Not to detract from this point, but I don’t think I understand what half the code I have written does if it’s been more than a month since I wrote it…

WaitWaitWha

I am confident that you do understand it at time of writing.

> We depend on our developers to contribute their own work and to stand behind it; large language models cannot do that. A project that discovers such code in its repository may face the unpleasant prospect of reverting significant changes.

At time of writing and commit, I am certain you "stand behind" your code. I think the author is referring to the new script kiddies of the AI era: many do not understand what the AI spits out at the time of copy/paste.

ozim

Sounds a lot like bashing copy-pasting from Stack Overflow, and also like the old "kids these days" argument.

No reasonable company pipes stuff directly to prod; you still have some code review and QA. So it doesn't matter whether you copy from SO without understanding it or an LLM generates code that you don't understand.

Both are bad, but both still happen, and the world didn't crash.

bigstrat2003

> Sounds a lot like bashing copy-pasting from Stack Overflow.

Which is also very clearly unacceptable. If you just paste code from SO without even understanding what it does, you have fucked up just as hard as if you paste code from an LLM without understanding it.

BenjiWiebe

An LLM can generate a larger chunk of code than you'll find on SO, so I think it's a larger issue to have LLM code than copy-pasted SO code.

bitmasher9

> No reasonable company pipes stuff directly to prod

I’ve definitely worked at places where the time gap between code merge and prod deployment is less than an hour, and no human QA process occurs before code is servicing customers. This approach has risks and rewards, and is one of many reasonable approaches.

kstenerud

I can always understand code I wrote even decades ago, but only because I use descriptive names and strategic comments to describe why I'm using a particular approach, or to describe an API. If I fail to do that, it takes a lot of effort to remember what's going on.
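
A small sketch of the difference (hypothetical names, not from the thread):

    # Hard to revisit months later: terse names, no "why".
    def f(d, t):
        return [x for x in d if x[1] > t]

    # Easier to revisit: names carry the "what", the comment carries the "why".
    def filter_recent_orders(orders, cutoff_timestamp):
        # Filter in memory rather than in SQL because the orders were already
        # fetched for the summary report; this avoids a second query.
        return [order for order in orders if order[1] > cutoff_timestamp]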

anonzzzies

I have heard that before and never understood it; I understand code I wrote 40 years ago just fine. I have issues understanding code by others, but my own I understand no matter when it was written. Of course, others don't understand my code until they dive in, and, like me with theirs, they forget how it works weeks after.

I do find all my old code, even from yesterday, total shite that should be rewritten, but it probably never will be.

elcritch

Well, LLM-generated code often doesn't work for non-trivial code, or for cases that aren't rehashed a million times like fizzbuzz.

So I find it almost always requires going through the code to understand it, in order to find the "oh, the LLM's statistical pattern matching made up this bit here" moments.

I've been using Claude lately and it's pretty great for translating code from other languages. But in a few bits it just randomly swapped two variables or plain forgot to do something, etc.
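
A toy illustration of that failure mode, using a hypothetical linear-interpolation routine being ported:

    # Intended behavior: interpolate between lo and hi by fraction t.
    def lerp(lo, hi, t):
        return lo + (hi - lo) * t

    # The kind of near-miss an LLM port can produce: same shape, but lo and
    # hi got swapped, so it's wrong for every t except 0.5.
    def lerp_swapped(lo, hi, t):
        return hi + (lo - hi) * t

    assert lerp(0.0, 10.0, 0.25) == 2.5
    assert lerp_swapped(0.0, 10.0, 0.25) == 7.5  # silently wrong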

dehrmann

Ah, yes. The good old "what idiot wrote this?" experience.

isaiahwp

> A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.

"Oh Machine Spirit, I call to thee, let the God-Machine breathe half-life unto thy data flow and help me comprehend thy secrets."

bodge5000

And they told me laptop-safe sacred oils and a massive surplus of red robes were a "bad investment", look who's laughing now

merksoftworks

That's how ye' get yerself Tzeench'd

aithrowawaycomm

> Meanwhile, we will see more focused efforts to create truly free generative AI systems, perhaps including the creation of one or more foundations to support the creation of the models

I understand this will be free-as-in-beer and free-as-in-freedom... but if it's also free-as-in-"we downloaded a bunch of copyrighted material without paying for it" then I have no interest in using it myself. I am not sure there even is enough free-as-in-ethical stuff to build a useful LLM. (I am aware people are trying, maybe they've had success and I missed it.)

reaperducer

free-as-in-"we downloaded a bunch of copyrighted material without paying for it"

That's "free-as-in-load."

dgfitz

Ignoring all the points made, this was a very pleasant reading experience.

Not ignoring the points made, I cannot put my finger on where LLMs land in 2025. I do not think any sort of AGI type of phenomenon will happen.

tkgally

Yes, it was a good read. As someone with no direct connection to Linux or open-source development, I was surprised to find myself reading to the end. And near the end I found this comment particularly wise:

> The world as a whole does not appear to be headed in a peaceful direction; even if new conflicts do not spring up, the existing ones will be enough to affect the development community. Developers from out-of-favor parts of the world may, again, find themselves excluded, regardless of any personal culpability they may have for the evil actions of their governments or employers.

throwaway2037

    > the launch of one or more foundations aimed specifically at providing support for maintainers
Doesn't Red Hat (and other similar companies) already fulfill this role?

BirAdam

Didn't the leader of the kernel Rust team resign in September?

SoftTalker

sched-ext sounds interesting. Anyone doing any work with it? Wondering if it's one of those things that sounds cool but probably is only suitable in some very specific use cases.

divbzero

> we will see more focused efforts to create truly free generative AI systems, perhaps including the creation of one or more foundations to support the creation of the models

What are the biggest barriers to making this a reality? The training data or the processing power?

Which open-source projects, if any, are the farthest along in this effort?

vivzkestrel

Hey OP, what were your predictions for 2024? Mind sharing them here?

ranger207

Their predictions for 2024 were reviewed for accuracy here: https://lwn.net/Articles/1002368/

AtlasBarfed

Linux will continue to fail, politically, to extract needed monetary support from first-world countries and megacorps principally dependent on it.

In particular, my libraries and national security concerns.

The US government has its underwear in a bunch over various Chinese-sourced hardware, but continues to let a bunch of hobbyists maintain the software.

I almost think it is time to hold these massive orgs accountable by merging targeted vulnerabilities and performance bombs unless they start paying up. Microsoft and other monopolistic software companies have no issue using whatever tactics are necessary to shake revenue out of software-dependent (addicted) orgs.

not2b

Most Linux kernel contributors are professionals who are paid for their work. They aren't hobbyists.

However, there are quite a few critically important tools and libraries that are essentially maintained by a volunteer as a hobby, and yes, that's a risk.

SoftTalker

Hence the observation that "single-maintainer projects (or subsystems, or packages) will be seen as risky".

jahewson

Per Wikipedia:

“An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work”

https://en.m.wikipedia.org/wiki/Linux

The_Colonel

I would bet the percentage has increased since then.

spencerflem

If you don't want corporations using your software, don't put it out under a license that invites them to do so. (Illegal scraping by AI notwithstanding.)

nindalf

Yeah bashing big tech is an evergreen source of upvotes. Especially since it’s not always clear how something was funded. Take io_uring for example, an async I/O subsystem for Linux. Could you say offhand if this was funded by some big tech company or not? I’ll bet most people couldn’t.

Another example: everyone knows about the xz attack. How many people can name offhand the company where Andres Freund worked? He was a full-time employee of a tech company working on Postgres when he found this attack.

It’s always worth discussing how we can improve financial situation for maintainers in important open source projects. Hyperbole like your comment is useless at best and counterproductive at worst.