Some things to expect in 2025
187 comments
· January 16, 2025 · kirubakaran
alisonatwork
I have heard the same response from junior devs and external contractors for years, either because they copied something from StackOverflow, or because they copied something from a former client/employer (a popular one in China), or even because they just uncritically copied something from another piece of code in the same project.
From the point of view of these sorts of developers they are being paid to make the tests go green or to make some button appear on a page that kindasorta does something in the vague direction of what was in the spec, and that's the end of their responsibility. Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career, not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them; then you open LinkedIn years later and it turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
Sorry, got a little carried away. Anywho, the point is LLMs are just another tool for these folks. It's not new, it's just worse now because of the mixed messaging where executives are hyping the tech as a magical solution that will allow them to ship more features for less cost.
KronisLV
> I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career, not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them; then you open LinkedIn years later and it turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
For them, this clearly sounds like personal success.
There's also a lot of folks who view programming just as a stepping stone in the path to becoming well paid managers and couldn't care any less about all of the stuff the nerds speak about.
Kind of unfortunate, but oh well. I also remember helping someone out with their code back in my university days: none of it was indented, things that probably shouldn't have been on the same line were, and their answer was that they didn't care in the slightest about how it worked, they just wanted it to work. Same reasoning.
anal_reactor
I used to be fascinated about computers, but then I understood that being a professional meeting attender pays more for less effort.
oytis
> Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them; then you open LinkedIn years later and it turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
Wow. I am probably very lucky, but most of the managers, and especially the architects, I know are actually also exceptional engineers. A kind of exception was a really nice, helpful and proactive guy who happened to just not be a great engineer. He was still very useful for being nice, helpful and proactive, and was promoted for that. "Failing up" to management would actually have made a lot of sense for him; unfortunately he really wanted to code, though.
arkh
What you describe is the state of most devops.
Copy / download some random piece of code, monkey around to change some values for your architecture and up we go. It works! We don't know how, we won't be able to debug it when the app goes down but that's not our problem.
And that's how you end up with bad examples or a lack of exhaustive options in documentation, most tutorials being a rehash of some quickstart, and people telling you to "just use this helm chart or ansible recipe from some github repo to do what you want". What do those things really install? Not documented. What can you configure? Check the code.
Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.
whatevertrevor
I'm ashamed to say this is me trying to get Linux to behave, tbh.
I like fully understanding my code and immediate toolchain, but my dev machine is kinda held together with duct tape it feels.
sofixa
I disagree. A lot of DevOps is using abstractions, yes. But using a Terraform module to deploy your managed database without reading the code and checking all options is the same as using a random library without reading the code and checking all parameters in your application. People skimping on important things exist in all roles.
> people telling you to "just use this helm chart or ansible recipe from some github repo to do what you want". What do those things really install? Not documented. What can you configure? Check the code.
I mean, this is just wrong. Both Ansible roles and Helm charts have normalised documentation. Official Ansible modules include docs with all possible parameters, and concrete examples of how they work together. Helm charts also come with a file which literally lists all possible options (values.yaml). And yes, checking the code is always a good idea when using third-party code you don't trust. Which is it you're complaining about: that DevOps people don't understand the code they're running, or that you have to read the code? It can't be both, surely.
> Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.
Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?
quietbritishjim
It's definitely worse with LLMs than with StackOverflow. You don't need to fully understand a StackOverflow answer, but you at least need to recognise whether the question could be applicable. An LLM makes the decisions completely for you, and if the code doesn't work you can even get it to figure out why for you.
I think young people today are at severe risk of building up what I call learning debt. This is like technical debt (or indeed real financial debt). They're getting further and further, through university assignments and junior dev roles, without doing the learning that we previously needed to. That's certainly what I've seen. But, at some point, even LLMs won't cut it for the problem they're faced with and suddenly they'll need to do those years of learning all at once (i.e. the debt becomes due). Of course, that's not possible and they'll be screwed.
ben_w
> An LLM makes the decisions completely for you, and if the code doesn't work you can even get it to figure out why for you.
To an extent. The failure modes are still weird; I've tried this kind of automation loop manually to see how good it is, and while it can, as you say, produce functional mediocre code*… it can also get stuck in stupid loops.
* I ran this until I got bored; it is mediocre code, but ChatGPT did keep improving the code as I wanted it to, right up to the point of boredom: https://github.com/BenWheatley/JSPaint
bryanrasmussen
>Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
Maybe I am just supremely lucky, but while I have encountered people like this (in the coding part), it is somewhat rare in my experience. These comments on HN always make it seem like it's at least 30% of the people out there.
alisonatwork
I think even though these types of developers are fairly rare, they have a disproportionate negative impact on the quality of the code and the morale of their colleagues, which is perhaps why people remember them and talk about it more often. The p95 developers who are more-or-less okay aren't really notable enough to be worth complaining about on HN, since they are us.
beAbU
Do other companies not have static analysis integrated into the CI/CD pipeline?
We by default block any and all PRs that contain funky code: high cyclomatic complexity, unused variables, bad practice, overt bugs, known vulnerabilities, inconsistent style, insufficient test coverage, etc.
If that code is not pristine, it's not going in. A human dev will not even begin the review process until at least the static analysis light is green. Time is then spent mentoring the greens as to why we do this, why it's important, and how you can get your code to pass.
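As a minimal sketch of the kind of check such a gate runs (standalone Python here for illustration; a real pipeline would use an off-the-shelf analyzer, not this toy):

    import ast
    import sys

    class UnusedLocalChecker(ast.NodeVisitor):
        """Flag function-local names that are assigned but never read.
        (A real tool tracks scopes properly; this sketch does not.)"""

        def __init__(self):
            self.findings = []

        def visit_FunctionDef(self, node):
            assigned, read = {}, set()
            for child in ast.walk(node):
                if isinstance(child, ast.Name):
                    if isinstance(child.ctx, ast.Store):
                        assigned.setdefault(child.id, child.lineno)
                    elif isinstance(child.ctx, ast.Load):
                        read.add(child.id)
            for name, lineno in assigned.items():
                if name not in read and not name.startswith("_"):
                    self.findings.append((lineno, name))
            self.generic_visit(node)

    def main(path):
        checker = UnusedLocalChecker()
        with open(path) as f:
            checker.visit(ast.parse(f.read(), filename=path))
        for lineno, name in checker.findings:
            print(f"{path}:{lineno}: '{name}' is assigned but never read")
        return 1 if checker.findings else 0  # non-zero exit blocks the PR

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))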
I do think some devs still use AI tools to write code, but I believe that the static analysis step will at least ensure some level of forced ownership over the code.
liontwist
I think it’s a good thing to use such tools. But no amount of tooling can create quality.
It gives you an illusion of control. Rules are a cheap substitute for thinking.
lrem
Just wait till AI learns how to pass your automated checks, without getting any better in the semantics. Unused variables bad? Let’s just increment/append whatever every iteration, etc.
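Something like this toy pair (hypothetical, not aimed at any specific linter): the second version is no more meaningful than the first, but the unused-variable check goes green.

    def process_v1(items):
        count = 0            # flagged: assigned but never read
        for item in items:
            print(item)

    def process_v2(items):
        count = 0
        for item in items:
            count += 1       # incremented purely to silence the check
            print(item)
        return count         # "used" now, though no caller ever wanted it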
ericmcer
That is a softball question for an AI: this block of code is throwing these errors, can you tell me why?
ojbyrne
I have been told (at a FAANG) not to fix those kind of code smells in existing code. “Don’t waste time on refactoring.”
dawnerd
To be fair, sometimes it just isn't worth the company's time.
devsda
> then you open LinkedIn years later and it turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant
That's because they come across as results-oriented, go-getter kinds of people, while the others will be seen as uptight individuals. Unfortunately, management, for better or worse, self-selects for the first kind.
LLMs are only going to make it worse. If you can write clean code in half a day and an LLM can generate a "working" spaghetti mess in a few minutes, management will prefer the mess. This will be the case for many organizations where software is just an additional supporting expense and not a critical part of the main business.
Taylor_OD
This is more of an early career engineer thing than a ChatGPT thing. 'I don't know, I found it on stackoverflow' could have easily been the answer for the last ten years.
devsda
The main problem is not the source of the solution but the lack of effort to understand the code they have put in.
The "I don't know" might as well be "I don't care".
arkh
That's where you'd like your solution engine to be able to tell you how to get the solution it is giving you. Something good answers on Stack Overflow will do: links to the relevant documentation, steps you can go through to get a better diagnostic of your problem etc.
Get the fire lit along with an explanation of where to get wood and how to light it in your conditions, so next time you don't need to consult your solution engine.
Vampiero
No, a real engineer goes on SO to understand. A junior goes on SO to copy and paste. If your answer is "I don't know I just copied" you're not doing any engineering and it's awful to pretend you are. Our job is literally about asking "why" and "how" until we don't need to anymore because our pattern matching skills allow us to generalize.
At this point in my career I rarely ever go to SO, and when I do it's because of some obscure thing that 7 other people came across and decided to post a question about. Or to look up "how to do the most basic shit in a language I am not familiar with", but that role has been taken over by LLMs.
mrweasel
There's nothing inherently wrong with getting help from either an LLM or StackOverflow; it's the "I don't know" part that bothers me.
One of the funnier reactions to "I got it from StackOverflow" is the followup question: "From the question or the answers?"
If you just add code without understanding how it works, regardless of where it came from and the potential licensing issues, then I question your view on programming. If I have a painter come in and paint my house, and they get paint all over the place (floors, windows, electrical sockets) but still get the walls the color I want, then I wouldn't consider that person a professional painter.
sebazzz
The LLM also tends to do a good bit of the integration of the code into your codebase. With SO you need to do that yourself, so you at least need to understand the outer boundary of the code. And on StackOverflow the code has often undergone some form of peer review. The LLM just outputs code without any such review or footnotes.
DowsingSpoon
I am fairly certain that if someone did that where I work then security would be escorting them off the property within the hour. This is NOT Okay.
bitmasher9
Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than company average.
dehrmann
It's not necessarily the use of AI tools (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.
bigstrat2003
To be fair I don't think someone should get fired for that (unless it's a repeat offense). Kids are going to do stupid things, and it's up to the more experienced to coach them and help them to understand it's not acceptable. You're right that it's not ok at all, but the first resort should be a reprimand and being told they are expected to understand code they submit.
LastTrain
Kids, sure. A university-trained professional who is paid like one? No.
DowsingSpoon
I understand the point you’re trying to get across. For many kinds of mistakes, I agree it makes good sense to warn and correct the junior. Maybe that’s the case here. I’m willing to concede there’s room for debate.
Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company now must assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.
How could you trust the dev again after the dust is settled?
Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code would at least be frowned upon. That's a frankly dangerous level of professional incompetence. (At least they didn't try to hide it.)
Well now I’m wondering what the correct way would be to handle a junior doing this with ChatGPT, and what the correct way would be to handle similar kinds of mistakes such as copy-pasting GPL code into the proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.
phinnaeus
Are you hiring?
userbinator
In such an environment, it would be more common for access to ChatGPT (or even most of the Internet) to be blocked.
dyauspitr
Why? I encourage all my devs to use AI but they need to be able to explain what it does.
ben_w
> He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"
I remember being a junior nearly 20 years back. A co-worker asked me how I'd implemented an invulnerability status, and I said something equally stupid, despite knowing perfectly well how I'd implemented it and there not being any consumer-grade AI more impressive than spam filters and Office's spelling and grammar checking.
Which may or may not be relevant to the example of your friend's coworker, but I do still wonder how much of my answers as a human are on auto-complete. It's certainly more than none, and not just from that anecdote… https://duckduckgo.com/?t=h_&q=enjoy+your+meal+thanks+you+to...
ErrantX
Feels like a controls failure as much as anything else. Any decently sized company that allows unrestricted access to LLMs, well, that's going to be the tip of the iceberg.
Also, the culture of "don't care" comes from somewhere, not ChatGPT.
gunian
The saddest part is that if I wrote the code myself it would be worse lol. GPT is coding at an intern level, and as a dumb human being I feel sad I have been replaced, but it's not as catastrophic as they made it seem.
It's interesting to see the underlying anxiety among devs though. I think there is a place in the back of their minds that knows the models will get better and better and someday could get to staff engineer level.
nozzlegear
I don't think that's the concern at all. The concern (imo) is that you should at least understand what the code is doing before you accept it verbatim and add it to your company's codebase. The potential it has to introduce bugs or security flaws is too great to just accept it without understanding it.
dataviz1000
I've been busy with a personal coding project. Working through problems with an LLM, which I haven't used professionally yet, has been great. Countless times in the past I've spent hours poring over Stack Overflow and GitHub repository code looking for solutions. Quite often I would have to solve them myself, and would always post the answer a day or two later below my question on Stack Overflow. A big milestone for a software engineer is getting to the point where any difficult problem can't be solved with internet search, asking colleagues, or asking a question on Stack Overflow, no matter how well written and detailed, because the problems are esoteric: the edge of innovation is solitude.
Today I give the input to the LLM, tell it what the output should be, and magically a minute later it is solved. I was thinking today about how long it has been since I was stuck and stressed on a problem. With this personal project I'm prototyping and doing a lot of experimentation, so having an LLM saves a ton of time, keeping the momentum at a fast pace. The iteration process is a little different, with frequent stops to refactor, clean up, make the code consistent, and log the input and output to the console to verify.
Perhaps take intern's LLM code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it correct.
gunian
Exactly why devs are getting the big bucks.
That is right now, though. At some point, what if someone figures out a way to make it deterministic and able to write code without bugs?
chrisweekly
"AI is the payday loan* of tech debt".
jahewson
ChatGPT needs two years of "exceeds expectations" before that can happen.
gunian
I've been writing at troll level since I first got my computer at 19, so it looks like "exceeds expectations" to me lol.
dyauspitr
It's coding way, way above intern level. Honestly, it's probably at mid level.
deadbabe
I hope that junior engineer was reprimanded or even put on a PIP instead of just having the reviewer say lgtm and approve the request.
WaxProlix
Probably depends a lot on the team culture. Depending on what part of the product lifecycle you're on (proving a concept, rushing to market, scaling for the next million TPS, moving into new verticals,...) and where the team currently is, it makes a lot of sense to generate more of the codebase by AI. Write some decent tests, commit, move on.
I wish my reports would use more AI tools for parts of our codebase that don't need a high bar of scrutiny, boilerplate at enterprise scale is a major source of friction and - tbh - burnout.
not2b
Unless the plan is to quickly produce a prototype that will be mostly thrown away, any code that gets into the product is going to generate far more work maintaining it over the lifetime of the product than the cost to code it in the first place.
As a reviewer I'd push back, and say that I'll only be able to approve the review when the junior programmer can explain what it does and why it's correct. I wouldn't reject it solely because chatgpt made it, but if the checkin causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue we have a problem.
bradly
Yes, and the team could be missing structures to support junior engineers. Why they didn't ask for help or pairing is really important to dig into, and I would expect a senior manager to understand this and be introspective about what environment they have created where this human made this choice.
GeoAtreides
> Write some decent tests, commit, move on.
Move on to what?! Where does a junior programmer who doesn't understand what the code does move on to?
XorNot
I mean if that was an answer I got given by a junior during a code review the next email I'd be sending would be to my team lead about it.
sofixa
I have a better one, a senior architect who wrote a proposal for a new piece of documentation, and when asked about his 3 main topics in the doc and why them, said "LLM said those are the main ones". The rest of the doc was obviously incoherent LLM soup as well.
aithrowawaycomm
> Meanwhile, we will see more focused efforts to create truly free generative AI systems, perhaps including the creation of one or more foundations to support the creation of the models
I understand this will be free-as-in-beer and free-as-in-freedom... but if it's also free-as-in-"we downloaded a bunch of copyrighted material without paying for it" then I have no interest in using it myself. I am not sure there even is enough free-as-in-ethical stuff to build a useful LLM. (I am aware people are trying, maybe they've had success and I missed it.)
reaperducer
> free-as-in-"we downloaded a bunch of copyrighted material without paying for it"
That's "free-as-in-load."
ASalazarMX
I don't think blindly abiding by copyright is the higher moral stance here, even if it's the law. Knowledge wants to be free, and the way AIs need to be trained now is a sign that copyright laws have become unreasonably restrictive and commercialized.
Not only should AIs be allowed to train on pirated content; humans should too. Copyright laws need to be scaled back so that creators are protected for a reasonable period but humanity is not gated out of its culture for decades. The cheaper culture distribution has become, the harsher copyright laws have evolved.
dgfitz
Ignoring all the points made, this was a very pleasant reading experience.
Not ignoring the points made, I cannot put my finger on where LLMs land in 2025. I do not think any sort of AGI type of phenomenon will happen.
tkgally
Yes, it was a good read. As someone with no direct connection to Linux or open-source development, I was surprised to find myself reading to the end. And near the end I found this comment particularly wise:
> The world as a whole does not appear to be headed in a peaceful direction; even if new conflicts do not spring up, the existing ones will be enough to affect the development community. Developers from out-of-favor parts of the world may, again, find themselves excluded, regardless of any personal culpability they may have for the evil actions of their governments or employers.
openrisk
An overwhelming fraction of the comments focus on the "AI contributed code" point, while back in reality:
> Global belligerence will make itself felt in our community. The world as a whole does not appear to be headed in a peaceful direction
If the geopolitical landscape continues deteriorating the tech universe as we knew it will cease to exist. Fragmentation is already a reality in egregious cases but the dynamic could become much more prevalent.
The_Colonel
Kinda depends on what you mean exactly. For example, the open source world will likely not be affected aside from a few cases like the Russian Linux developers. Neither China nor Russia is likely to completely block access to the internet, and developers won't have any incentive to isolate themselves.
openrisk
That sounds quite optimistic. It doesn't take complete blocking before there are significant implications. There are many aspects to consider, from more friction in getting access to distribution channels to the more fundamental "forking" of initiatives and visions. This might be already happening to some degree but is hard to quantify.
The_Colonel
> It doesn't take complete blocking before there are significant implications.
Mostly for consumers. Advanced users in e.g. China (likely in Russia as well) use VPNs routinely already.
> from more friction in getting access to distribution channels to the more fundamental "forking" of initiatives and visions
What's in it for the devs/companies to fork just because of the geopolitical situation? A fork means more work, more costs. In some cases, like the Linux kernel, Russian companies (Baikal) are forced to fork, but I don't see them doing this on a massive scale for projects where they don't have to.
I think there is some parallel development going on in China, but that's more because of the language/cultural barrier and has always been so, so I don't expect a major change.
christina97
> A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.
Not to detract from this point, but I don’t think I understand what half the code I have written does if it’s been more than a month since I wrote it…
WaitWaitWha
I am confident that you do understand it at time of writing.
> We depend on our developers to contribute their own work and to stand behind it; large language models cannot do that. A project that discovers such code in its repository may face the unpleasant prospect of reverting significant changes.
At time of writing and commit, I am certain you "stand behind" your code. I think the author refers to the new script kiddies of the AI era. Many do not understand what the AI spits out at the time of copy/paste.
ozim
Sounds a lot like bashing copy-pasting from StackOverflow. So it's also like the old "kids these days" argument.
No reasonable company pipes stuff directly to prod; you still have some code review and QA. So it doesn't matter whether you copy from SO without understanding it or an LLM generates code that you don't understand.
Both are bad, but both still happen, and the world didn't crash.
bigstrat2003
> Sounds a lot like bashing copy-pasting from StackOverflow.
Which is also very clearly unacceptable. If you just paste code from SO without even understanding what it does, you have fucked up just as hard as if you paste code from an LLM without understanding it.
BenjiWiebe
An LLM can generate a larger chunk of code than you'll find on SO, so I think LLM code is a larger issue than copy-pasted SO code.
thayne
It's not very common for people to make drive-by pull requests that just copy code from Stack Overflow on open source projects. I've already started seeing that with LLM-generated code. And yeah, hopefully the problems with it are caught, but it wastes the maintainers' time and drives maintainer burnout.
bitmasher9
> No reasonable company pipes stuff directly to prod
I’ve definitely worked at places where the time gap between code merge and prod deployment is less than an hour, and no human QA process occurs before code is servicing customers. This approach has risks and rewards, and is one of many reasonable approaches.
elcritch
Well, LLM-generated code often doesn't work for non-trivial code, or for cases that aren't rehashed a million times like fizzbuzz.
So I find it almost always requires going through the code to understand it, in order to find the "oh, the LLM's statistical pattern matching made up this bit here" moments.
I've been using Claude lately and it's pretty great for translating code from other languages. But in a few bits it just randomly swapped two variables, or plain forgot to do something, etc.
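The swaps are the sneakiest part because the result still looks plausible at a glance. A hypothetical illustration of the failure mode:

    # Hand-written original: debit the source, credit the destination.
    def transfer(balances, src, dst, amount):
        balances[src] -= amount
        balances[dst] += amount

    # The kind of silent swap a model can introduce mid-translation.
    # The grand total still balances, so a coarse test passes anyway.
    def transfer_translated(balances, src, dst, amount):
        balances[dst] -= amount   # src and dst swapped
        balances[src] += amount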
dehrmann
Ah, yes. The good old "what idiot wrote this?" experience.
Ntrails
Don't forget the revelation 2 weeks later, when you realise immediate past you should've trusted deep past you instead of assuming he'd somehow got wiser in the intervening months.
Instead, intermediate past you broke things properly, because they forgot about the edge case deep past you was cautiously avoiding.
kstenerud
I can always understand code I wrote even decades ago, but only because I use descriptive names, and strategic comments to describe why I'm using a particular approach, or to describe an API. If I fail to do that, it takes a lot of effort to remember what's going on.
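For example (a made-up sketch; the function, the names and the rationale are all invented for illustration):

    def prune_stale_sessions(sessions, now):
        # Why 48h rather than the documented 24h: some mobile clients sync
        # at most once a day and would be logged out mid-sync otherwise.
        cutoff = now - 48 * 3600
        return [s for s in sessions if s.last_seen >= cutoff]

The code says what it does; the comment preserves the why that would otherwise be lost.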
anonzzzies
I have heard that before and never understood that; I understand code I wrote 40 years ago fine. I have issues understanding code by others, but my own I understand no matter when it was written. Of course others don't understand my code until they dive in and, like me theirs, forget how it works weeks after.
I do find all my old code, even from yesterday, total shite and it should be rewritten, but probably never will be.
anshulbhide
> A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does. We depend on our developers to contribute their own work and to stand behind it; large language models cannot do that. A project that discovers such code in its repository may face the unpleasant prospect of reverting significant changes.
A lot of companies are going to discover this in 2025. Also, a major product company is going to find LLM-generated code that might have been trained on OSS code, and their compliance team is going to throw a fit.
isaiahwp
> A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.
"Oh Machine Spirit, I call to thee, let the God-Machine breathe half-life unto thy data flow and help me comprehend thy secrets."
bodge5000
And they told me laptop-safe sacred oils and a massive surplus of red robes were a "bad investment", look who's laughing now
merksoftworks
That's how ye' get yerself Tzeench'd
throwaway2037
> the launch of one or more foundations aimed specifically at providing support for maintainers
Doesn't Red Hat (and other similar companies) already fulfill this role?
usr1106
There are many widely used open source components without a maintainer who is allowed to work on them (enough) during paid working time.
1vuio0pswjnm7
"The OpenWrt One, which hit the market in 2024, quickly sold out its initial production run."
But have its distributors sold out their inventory from this initial production run?
For example,
https://www.aliexpress.us/item/3256807609464530.html?spm=526...
NB. The 2.5GbE and Wi-Fi firmware are not open source
1vuio0pswjnm7
https://www.aliexpress.com/item/1005007795779282.html
https://www.aliexpress.com/item/1005007870205805.html
https://www.aliexpress.com/item/1005008112786213.html
https://www.aliexpress.com/item/1005007826746106.html
https://www.aliexpress.com/item/1005007827097740.html
https://www.aliexpress.com/item/1005007795557607.html
https://www.aliexpress.com/item/1005008352147850.html
https://www.aliexpress.com/item/1005008394714162.html
https://www.aliexpress.com/item/1005008292548739.html
https://www.aliexpress.com/item/1005008344848967.html
https://www.aliexpress.com/item/1005008301213347.html
https://www.aliexpress.com/item/1005008193932681.html
https://www.aliexpress.com/item/1005008295761196.html
https://www.aliexpress.com/item/1005008295538495.html
https://www.aliexpress.com/item/1005007803789952.html
https://www.aliexpress.com/item/1005008339442242.html
https://www.aliexpress.com/item/1005007803843791.html
SoftTalker
sched-ext sounds interesting. Anyone doing any work with it? Wondering if it's one of those things that sounds cool but probably is only suitable in some very specific use cases.
yjftsjthsd-h
https://www.phoronix.com/news/LAVD-Scheduler-Linux-Gaming seems like a real use
steeleduncan
> global belligerence will make itself felt in our community
Sadly this has already happened. The Israel/Palestine situation was frequently referenced during the bitterest arguments in the NixOS community governance issues last year
lionkor
> A major project will discover that it has merged a lot of AI-generated code
After a code review, at least the reviewer should know the feature well enough to maintain it. This is, at least in my experience, the main part of the job of the reviewer at the time of review: Understand what the code does, why it does it, how it does it, such that you agree with it as if it's code you've written.
If major projects merge code because "lgtm" is taken literally, then they have been merging bogus code before LLMs.
> A major project will discover that it has merged a lot of AI-generated code
My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"