I am a programmer, not a rubber-stamp that approves Copilot generated code

krackers

I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten. Fine for throwaway or exploratory work, but heaven help you if you're working in a project where people use LLMs to "fix" bugs generated by previous LLM generated code.

So yes, it does increase "velocity" for person A, who can get away with using it. But the decrease in velocity for person B, trying to build on top of that code, is never properly tracked. It's like a game of hot potato: if you want to game the metrics, you'd better be the one working on greenfield code (although I suppose maintenance work has never been looked at favorably in performance reviews; now the cycle of code rot is just accelerated).

dm270

I'm working on some website and created some custom menu. Nothing fancy. AI got it done after some tries, and I was happy, as web development is not my area of expertise. After some time I realized the menu resorts to scrolling when it shouldn't, and I wanted to make the parent container expand instead. This was impossible, as the AI had produced a rather unusual implementation even for such a limited use case. Best part: my task was now impossible to solve with AI, as it doesn't really get its own code. I resorted to actually just looking into CSS and the docs and realized there is a MUCH simpler way to solve all of my issues.

Turns out sometimes the next guy who has to do maintenance is oneself.

Terr_

> Turns out sometimes the next guy who has to do maintenance is oneself.

Over the years I've been well-served by putting lots of comments into tickets like "here's the SQL query I used to check for X" or "an easy local repro of this bug is to disable Y", etc.

It may not always be useful to others... but Future Me tends to be glad of it when a similar issue pops up months later.

piva00

In the same boat: I learnt to leave breadcrumbs for the future quite a long time ago, and it's paid off many, many times.

After it becomes second nature, it's really relaxing to know I have left all the context I could muster: comments in tickets, comments in the code referencing a decision, well-written commit messages for anything a little non-trivial. I learnt that peppering all the "whys" around is just being a good citizen in the codebase, even if only for Future Me.

thefz

> it doesn’t really get its own code

It doesn’t really get its own anything, as it is unable to "get". It's just a probabilistic machine spitting out the next token

Kudos

Hey, I think everyone understands how they work by now and the pedantry isn't helpful.

mellosouls

This is pretty much how permanent staff often have to work with consultants/contractors or job-hoppers in some sectors.

Shiny new stuff quickly produced, manager smiles and pays, contractor disappears, heaven help the poor staffers who have to maintain it.

It's not new, just in a new form.

izacus

Don't ignore the difference in scale though. Something happening some of the time isn't the same as happening most of the time.

samrus

This misalignment of incentives is why we have shitty software in everyday life.

karmakurtisaani

What's new though is that now you can do it to your future self!

eloisant

In my experience, AI generated code is much higher quality than code written by external service companies. For example it will look at your code base and follow the style and conventions.

cjfd

Style and conventions are very superficial properties of code. The more relevant property is how many bugs are lurking below the surface.

sussmannbaka

this just means the bugs it creates are better camouflaged

Gigachad

This has been described a lot as “workslop”, work that superficially looks great but pushes the real burden on the receiver of the work rather than the producer.

Fomite

One of the things about AI generally is it doesn't "save" work - it pushes work from the one who generates the work to the person who has to evaluate it.

loveparade

That sounds more like an organizational problem. If you are someone who doesn't care about the maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible. Previously that took the form of copying cheap templates, pasting code from StackOverflow as-is without adjustments, not caring about style, using tools to autogenerate bindings, and so on. I remember a long time ago I took over a web project that a freelancer had worked on, and when I opened it I saw one large file of mixed Python and HTML. He had literally just copied and pasted whole HTML pages into the render statements in the server code.

The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.

AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.

tschumacher

Yes, this is it. The idea that LLMs somehow write deceptive code that magically looks right but isn't is just silly. Why would that be the case? If someone is good at writing code (hard to define, of course, but take a "measure" like long-term maintainability) yet fails to catch bad code in review, that is just a gap in their skill. Reviewing code can be trained just as writing code can be. A good first step might be to ask oneself: "how would I have approached this?"

trklausss

I'd say it's a change of paradigm, and it might be even faster if you practice test-driven development... Imagine writing your tests manually, getting LLM code, checking that it passes the tests, done.

Of course, the golden rules are: 1. write the tests yourself, don't let the LLM write them for you, and 2. don't paste those tests directly into the LLM prompt when you let it generate code for you.

In the end it boils down to specification: the prompt captures the loosely-defined specification of what you want, the LLM spouts something already very similar to it, you tweak it, test it, off you go.

With test driven development this process can be made simpler, and other changes in other parts of the code are also checked.
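A minimal sketch of that loop in Python, assuming a hypothetical slugify function as the thing being specified (the module name and test cases are made up for illustration): the tests are written by hand, only the failing output goes into the prompt, and generated code is accepted only once the suite passes.

    # test_slugify.py -- written by hand, never by the LLM.
    # The tests are the spec; the LLM-generated slugify module must
    # pass them unchanged before it gets merged.
    from slugify import slugify  # hypothetical module the LLM is asked to write

    def test_lowercases_and_dashes():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_separators():
        assert slugify("a  b --- c") == "a-b-c"

    def test_strips_edge_separators():
        assert slugify("  --hi--  ") == "hi"

Run pytest after each generation; if a test fails, feed the failure back and regenerate, rather than editing the tests to fit.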

Degorath

I've decided to fight it the same way I fight tactical tornadoes - by leaving those people negative reviews at mid-year review.

(I also find the people who simply paste LLM output to you in chat are the much bigger evil)

m463

I'm sort of reminded of the south park movie.

They kept getting an NC-17 from the MPAA and kept resubmitting it (six times) until just before release, when the MPAA relented, gave it an R, and it was released as-is.

https://en.wikipedia.org/wiki/South_Park:_Bigger,_Longer_%26...

overgard

The worst part of AI is the way it's aggressively pushed. Sometimes I have to turn off AI completions in the IDE just because it becomes extremely aggressive in showing me very wrong snippets of code in an incredibly distracting way. I hope when the hype dies down the way these tools are pushed on us in a UX sense is also dialed down a bit.

jasonkester

Indeed. That’s my only interaction with AI coding.

Every time Visual Studio updates, it’ll turn back on the thing that shoves a ludicrously wrong, won’t even compile, not what I was in the middle of doing line of garbage code in front of my cursor, ready to autocomplete in and waste my time deleting if I touch the wrong key.

This is the thing that Microsoft thinks is important enough to be worth burning goodwill by re-enabling every few weeks, so I’m left to conclude that this is the state of the art.

Thus far I haven’t been impressed enough to make it five lines of typing before having to stop what I’m doing and google how to turn it off again.

ptsneves

I feel you. I totally disabled AI completions, as they were often sidelining me from my own reasoning.

It is like having an obnoxious co-worker who shoves me aside every time I type a new line, completes a whole block of code, and asks me if it is good, without regard for how many times I have rejected those changes.

I still use AI, but favor a copy-paste flow where I at least need to look at what I am copying and locate the place I am pasting the code into. That way I am at least aware of the method and function names and the general code organization.

I also ask for small copy-paste changes so that I keep it digestible. A bonus is that when the context gets too big, ChatGPT in Firefox basically slows down and locks up the browser, which works as a form of extra sense that the context window is too big and the LLM is about to start spouting nonsense.

That said, AI is an amazing tool for prototyping and for help when I'm out of my domain of expertise.

XenophileJKO

So one really big thing that can make AI autocomplete super useful is to follow the old method from "Code Complete": the Pseudocode Programming Process (PPP).

Write a comment first on what you intend to do; the AI generally does a good job auto-completing below it. You don't have to "sketch everything out": the AI is already using the page as context, and the comment just helps disambiguate what you want to do, so it can autocomplete significant portions when you give it that nudge.
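For instance, a toy sketch in Python (the log format and function are made up for illustration) where the two comment lines are what you type, and the body is roughly what the completion tends to fill in:

    # Group log lines by request id (the "req=..." token), keeping arrival
    # order, and drop any line that has no request id at all.
    def group_by_request_id(lines):
        groups = {}
        for line in lines:
            token = next((t for t in line.split() if t.startswith("req=")), None)
            if token is None:
                continue  # no request id: drop the line
            groups.setdefault(token[4:], []).append(line)
        return groups

The comment costs you one sentence of typing and turns a vague completion into a targeted one.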

I've almost fully converted to agentic coding, but when I was using earlier tools, this was an extremely simple method to get completions to speed you up instead of slow you down.

eloisant

The worst is when writing comments. I'll be writing a comment such as "Doing X because..." and it never gets it right.

I'm making a comment precisely because it's not obvious when reading the code, and the AI will make up some generic and completely wrong reason.

Gigachad

I disabled the inline auto suggestions. It’s like the tech version of that annoying person who interrupts every sentence with the wrong ending.

matt3210

I really get irritated when AI is opt-out. Opt-out is not consent.

LeoPanthera

Does big tech understand consent?

[ ] Yes

[ ] Maybe later

klabb3

[ ] Use recommended settings

jstummbillig

Agents are great (insofar as the models are able to complete the task). Autocomplete Copilot just feels like bad UX: it's both not very effective and disruptive to my thinking.

1dom

I think it depends on the context. If I've been writing the same language and frameworks and code solidly for a few months, then autocomplete gets in the way. But that rarely happens, I like to keep trying and learning new things.

If I'm familiar with something (or have been) but haven't done it in a while, 1-2 line autocomplete saves so much time on little syntax and reference lookups. Same if I'm at that stage of learning a language or framework where I get the high-level concepts, principles, use cases and such, but I just haven't learned all the keywords and syntax structures fluently yet. In those situations, speedy 1-2 line AI autocomplete probably doubles the amount of code I output.

Agents are how you get the problems discussed in this thread: code that looks okay on the surface but falls apart on deeper review, whereas 1-2 line autocomplete forces every other line or two to be intentional.

pjmlp

On VS you can change that so it only comes up when you press a key shortcut.

If you're using 17.14 or later, this is how to hide it:

https://learn.microsoft.com/en-us/visualstudio/ide/copilot-n...

Zardoz84

My little experience with AI coding, using Copilot on Eclipse, was mixed... Context: I work with an old Java codebase that uses Servlets and implements its own web framework. There is a lot of code without tests or comments.

The autocomplete I find useful, especially for menial, very mechanical stuff like moving things around when I refactor long methods. Even the comment suggestions look useful. However, the frequency with which it jumps in is annoying; it needs to be dialed down somehow (I can only disable it). Plus, it eats the allowed autocomplete quota very quickly.

The "agent" chat. It's like tossing a coin. I find very useful when I need to write a tests for a class that don't have. At least, allows me to avoid writing the boiler player. But usually, I need to fix the mocking setup. Another case when it worked fine, it's when helped me to fix a warning that I had on a few VUE2 components. However, in other instances, I saw miserable falling to write useful code or messing very bad with the code. Our source code is in ISO8859-1 (I asked many times to migrate it to UTF8), and for some reason, sometimes Copilot agent messes the encoding and I need to manually fix all the mess.

So... the agent/chat mode, I think, could be useful if you know in which cases it will do OK. The autocomplete is very useful, but needs to be dialed down.

bloppe

If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping competing code that works. In that sense, the author is painting an incredibly bright picture of the future of the software industry: one where founders don't have to be particularly talented to hit the jackpot.

ehnto

Saving misguided AI codebases is going to be quite lucrative for contract work I suspect.

A lot of non-technical people are going to get surprisingly far into their product without realising they are on a bad path.

It already happens now when a non-technical founder doesn't get a good technical hire.

The surprising thing for developers though, is how often a shit codebase makes millions of dollars before becoming an issue. As much as I love producing rock solid software, I too would take millions of dollars and a shit codebase over a salary and good code.

numpy-thagoras

"...one where founders don't have to be particularly talented to hit the jackpot."

That's where we're at right now anyways.

"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"

And that's how we got here.

The code rot issue will blow up a lot more over the next few years; then we can finally complete the sentence and start "shipping competing code that works".

I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon without the deep knowledge of what exactly to do, rather than "to do in general, over there, behind that hill".

fancyfredbot

Yes this is only bad news if you are working for morons.

Unfortunately a lot of people are in that situation. You can basically forget about disruption. Meritocracy is dead, long live the Peter principle.

lovecg

Steelmanning the "we must force tool usage" position: it's possible that a tool does increase productivity, but there's either a steep learning curve (productivity only improves after sustained usage) or network effects (most people must use it for anyone to benefit).

No opinion on whether or not this applies to the current moment. But maybe someone should try forcing Dvorak layout on everyone or something like that for a competitive edge!

resonious

I once had a boss who saw me use Vim and was really impressed with how quickly I could jump around files and make precision edits. He tried getting the other devs (not many, < 5) to use Vim too but it didn't quite pan out.

I would guess that interest, passion, and motivation all play a role here. It's kind of like programming itself. If you sit people down and make them program for a while, some will get good at it and some won't.

eCa

> I would guess that interest, passion, and motivation all play a role here.

And, to use less pointed language, people’s brains are wired differently. What works for one doesn’t necessarily work for another, even with similar interest, passion, and motivation.

rkomorn

I agree with this.

I was using emacs for a while, but when I switched to vim, something about the different modes just really meshed with how I thought about what I was doing, and I enjoyed it way more and stuck to it for a couple of decades.

I see people that I'd say are more proficient with their emacs, VS Code, etc setups than I am with my vim setup, so I don't think there's anything special about vim other than "it works for me".

mabster

I worked with a developer that copied and pasted A LOT and would keep his fingers on the old copy and paste buttons (Ctrl-Ins, etc.). I've even seen him copy and paste single letters. He's one of the most productive developers I've ever worked with.

Xenoamorphous

I've had plenty of interest, passion and motivation during my career. But never, ever, directed at learning something like vim, even if it's going to make me more productive.

I'd rather learn almost any of the myriad other topics related to software development than the quirks of an opinionated editor. I especially hate memorising shortcuts and commands.

lelandfe

Your old boss probably would have been a bit chastened if he knew said devs would then be spending their hours learning how to exit Vim instead of programming

vidarh

There was a time when I'd switch to a different terminal and do sudo killall -9 to get out of vim.

And that time when I changed vim to a symlink to emacs on a shared login server and sat back and enjoyed the carnage. (I did change it back relatively quickly)

lawn

If learning how to exit Vim takes hours then they aren't worth keeping as employees anyway.

raverbashing

Vim's learning curve is much steeper to be honest

procaryote

Coding agents seem to be in the fun paradox of "it's so easy to use, anyone can code!" and "using it productively is a deep skill, and we have to force people to use it so they learn"

ozgrakkurt

Programming isn't a government desk job. The interface between programmer and company should be the output only; they can't force a programmer to use w/e bs they think is good at the time.

monster_truck

I feel bad for my friends who are married with kids and working at places like Microsoft, telling me how their Copilot usage is tracked and how they fear that if they don't hit some arbitrary weekly metric they will fall victim to the next wave of layoffs.

oezi

And that's why performance tracking is prohibited in countries where unions still have a bit of power.

rapsey

And why those countries tend to have barely any growth in their economies (i.e. europe).

pjmlp

It is ok: I earn enough to pay my bills and my family's, with a bit of travelling around, and healthcare.

Usually over here we don't dream of making it big, with big villas and a Ferrari in the garage; we work to live, not live to work.

lm28469

The economy is supposed to serve us, not the other way around; there is no pride in being a slave, it's not the flex you think it is lol.

Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy

deaux

Korea has strong worker protections and unions. Not on tracking, but in general.

ehnto

Which seems to be a great thing for liveability and happiness metrics across the board.

bloppe

Yeesh. Prohibited? Then how do you decide who gets a promotion? At random?

lnsru

There are no real promotions; it's about employment duration. In Bavaria you have something like 12 salary groups. For white-collar workers, 9 is entry level, 10 is for some experience, 11 for experienced, and 12 is the carrot to work harder for. Some companies downgrade roles to pay less; job ads putting experienced folks in group 8 started appearing recently. The bonus is up to 28% depending on performance, so basically you can slack all day and get a +5% bonus on the base salary while someone doing overnighters gets +15%. The higher bonuses are reserved for old-timers. This system is absolutely cringe. Btw, most of these unionized companies offer 35-hour contracts; 40 hours must be negotiated as a bonus... Anyway, the union will take care of regular base salary increases, which is really nice. +6% for doing nothing is amazing!

OlivOnTech

You have human managers discussing with their teams (instead of human-devised metrics that cannot see the full picture).

ehnto

Not that hard, but also why would you want to promote based on metrics? That will get you people gaming the system, and I can't imagine a single software dev metric that actually captures the full gamut of value a dev can provide. You will surely miss very valuable devs in your metrics.

teiferer

Even married people with kids can switch companies. Sometimes that implies a pay cut, but not always.

And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.

Root_Denied

>And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.

I'd say that there's some room for nuance there. Tech hiring has slowed significantly, such that even people in senior roles who get laid off may be looking for a long time.

If you work for Microsoft you're not getting top tier comp already (at least as compared with many other tech companies), and then on top of that you're required to work out of a V/HCOL city. Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck who weren't having that issue a couple of years ago.

Check the prices in the Seattle, SF, LA, DC, and NYC metro areas for 2-4 bedroom rentals and how they've jumped the last few years. They're looking at 35-45% of take-home pay just on rent, even before utilities. I'm not sure the math works out all that well for people trying to support a family, even with both parents working.

teiferer

> Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck

If you maxed out your lifestyle relative to your income then yes, that is the case. It will always be, no matter how much you make.

It's also the case for the guy stocking the shelves at your local Walmart if he maxes out his lifestyle. But if you compare both in absolute terms, there are huge differences.

Which lifestyle you have is your choice. How big of a house, what car, where to eat, hobbies, clothes, how many kids, etc. If you max that out, fine, enjoy it. But own that it was your choice and comes with consequences, i.e., if expenses rise more than income, then suddenly your personal economy is stretched. And that's on you.

pjmlp

Depends on the job market in their area.

tho23i4909234u

For the H1Bs, I've heard that it's a nightmare.

zwnow

Absolutely, programmers are paid exceptionally well compared to a lot of other jobs. If they live paycheck to paycheck they are doing things wrong, especially when they have a family.

ViscountPenguin

The hedonic treadmill really gets away from some people. I've had coworkers on 7 figures talk about how they couldn't possibly retire because the costs of living in (HCOL city) are far too high for that.

When you dig down into it, there's usually some insane luxury that they're completely unwilling to give up on.

If you're a software engineer in the United States, or in London, you can almost certainly FIRE.


adammarples

An easy way to game that would be to spam a couple of pages of unread documentation for every page of code you write. That's two-thirds Copilot usage, it's not critical, and documenting existing code is a use case much more likely to work for an LLM.

p_v_doom

I mean, nobody reads documentation anyway

moomoo11

Why feel bad? They signed up for that. There is no reason to feel bad for people who enter into voluntary contracts willingly.

Personally I want my MSFT position to increase, so I’m cool with whatever the company does to increase the share price.

piva00

Feel bad because you have empathy?

Or perhaps that's the problem, lacking it.

rdtsc

> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people? The results will be there in the outcome of the shipped product for all to see.

It’s a bit like returning to the office. If it’s such an obvious no-brainer performance booster with improved communication and collaboration, they wouldn’t have to force people to do it. Teams would chomp at the bit to do it to boost their own performance.

vineyardmike

I don't want to wade into the actual effectiveness of RTO nor LLMs at boosting productivity, but if you buy into the claims made by advocates, it seems pretty obvious that the "in office boosts communication" claim is only true if your coworker (the other side of the conversation) is in office. Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.

Similarly, many people don't like learning new tools and don't like changing their behavior, especially if it's something they enjoy versus something good for the business. It's 2025 and some people have adamantly used vim for 25 years; they aren't likely to change what they're comfortable with. Regardless of what is good for productivity (which vim may or may not be), developers are picky about their tools, and it's hard to convince people to try new things.

I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment, and if "the business" must wait for them to explore and discover it on their own time, they risk forgoing profits associated with that employee's work.

gherkinnn

I don't see how using vim is in any way bad for business, what a terrible example. And I don't even use it myself.

Your argument also hinges on "business" knowing what is good for productivity, which they generally don't. Admittedly, neither do many programmers, else we'd have a lot less k8s.

vidarh

Indeed. I detest vim, but I think mentioning it detracted from the argument, by showing why developers tend not to trust it when others try to dictate what is "good for the business" based on their own views rather than objective metrics.

danielrothmann

You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.

With LLMs, I'm not so sure. Seems more like an individual activity to me. Are some people resistant to new tools, sure. But a good tool does tend to diffuse naturally. I think LLMs are diffusing naturally too, but maybe not as fast as the AI-boosters would like.

The mistake these managers are making is assuming it's a good tool for work that they're not qualified to assess.

Gigachad

I’m lazy. I’d rather work from home even if the office is more productive because it’s easier for me to not have to go to the office.

If the AI tools actually worked how they are marketed I’d use them because that’s less work for me to have to do. But they don’t.

forgotusername6

There are psychological barriers to using a tool that diminishes the work you previously thought was complex.

BiteCode_dev

That's assuming this is most people's objective when they are at work.

And even if it was, that's also assuming this benefit would be superior to the benefit of remote work for the individual.

visarga

The forcing argument has merit: it should not be forced; in fact, they should say very little about how we do our work.

But the "rubber-stamp" framing is wrong, if it were true then you would not be needed at all. It's actually harder to use gen AI than to code manually. Gen AI has a rapid pace and overwhelming quantity of code you need to ensure is not broken in non-obvious ways. You need to layer constraints, tests, feedback systems for self repair and handle memories across contexts.

I recently vibe coded 100K LOC across dozens of apps; I feel the rush of power in coding agents, but also the danger. At any moment they could hallucinate, misunderstand, or work from a different premise than you did. Going past 1000 LOC requires sustained focus; otherwise it will quickly unravel into a mess.

therein

It is not harder if you don't care about, or even understand, what could go wrong. It is harder if you care and want to be as confident in this code as you would be in your own hand-written code.

Feels like you are assuming everyone has your diligence, and that the diligence that exists in the industry isn't already rapidly decaying due to what's happening.

Ekaros

I myself am among the people I would trust least to approve any code. In general I am way too trusting that others either know better or have properly thought through their work.

In scenarios where especially the latter might not be true, it seems like an inevitable failure. And I am not even sure any fixes will be thought through either... which makes me rather sceptical of the whole thing.


serf

interesting to see the phrase 'programmer' coming back en masse - especially as someone who never really stopped using it.

I thought we were all "full stack engineers" now, otherwise the resume got thrown into the circular file?

Great. I wait with anticipation for the slide back to 'Calculator'.

xkbarkar

As a response to the AI negativity in the thread: remember that this thing is in its infancy.

Current models are the embryos of what is to come.

The code quality of current models is not going to replace skilled software, network, or ops engineers.

Tomorrow's models may well do that, though.

Venting these frustrations is all very well, but I sincerely hope those who wish to stay in the industry learn to get ahead of AI and to utilize and control it.

Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.

We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, e.g., is not that far away.

All while today's tech talent spends its energy bickering on HN about the loss of being the code review king.

Everyone hated the code review royalty anyway. No one mourns them. Move on.

sirwhinesalot

Current LLMs are already trained on the entirety of the interwebs, very likely including stuff they really should not have had access to (private GitHub repos and such).

GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics).

Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.

Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.

[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.

awesan

If managers are pushing a clearly not-working tool, it makes perfect sense for workers to complain about this and share their experiences. This has nothing to do with the future. No one knows for sure if the models will improve or not. But they are not as advertised today and this is what people are reacting to.

throw-10-13

In its infancy, but still forced on people like it's a mature product.

The marketing around AI as a feature complete tool ready for production is disingenuous at best, and outright fraud in many cases.

p0w3n3d

I totally agree. The employer requires me to take ownership of the code I push to the repository; I should not be forced to use some tool if I think that the tool does the wrong thing.

In a larger scope, I tend to break many "rules" when I code, because my experience argues against them, and this is what makes me unique. Of course, nowadays I need to convince my team to approve it, but sometimes code that is written differently is free from certain flaws that, in this very case, I want to avoid.

-- EDIT --

I think this management trend comes from bad management principles. There's a joke that a bad manager is a person who, knowing that one woman delivers a baby in nine months, assumes that nine women can deliver a baby in one month. A similar principle applies here: they were sold by the marketing on how AI makes things faster, they put the numbers into their spreadsheet, and now they expect the results they pay for to match the numbers on the sheet. And when the numbers don't fit, they start pushing.