
Firing programmers for AI will destroy everything

dham

There's such a huge disconnect between people reading headlines and developers who are actually trying to use AI day to day in good faith. We know what it is good at and what it's not.

It's incredibly far away from making any significant change in a mature codebase. In fact, I've become so bearish on the technology after trying to use it for this that I'm thinking there's going to have to be some other breakthrough, or something other than LLMs. It just doesn't feel right around the corner. Now, completing small chunks of mundane code, explaining code, doing very small mundane changes - that it's very good at.

csmpltn

I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.

The gap between people with deep, hands-on experience who understand how a computer works and prompt engineers will become insanely wide.

Somebody needs to write that operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission-critical systems powering the airplane you're flying on next week... To pretend this will all be handled by LLMs is insanely out of touch with reality.

hombre_fatal

I think we who are already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents.

But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates. And then they backfill their knowledge over time.

My first serious foray into software wasn't even Ruby. It was Ruby on Rails. I built some popular services without knowing how anything worked. There was always a gem (lib) for it. And Rails especially insulated the workings of anything.

An S3 avatar upload system was `gem install carrierwave` and then `mount_uploader :avatar, AvatarUploader`. It added an avatar <input type="file"> control to the User form.

But it's not satisfying to stay at that level of ignorance for very long, especially once you've built a few things. You keep learning new things, and you keep wanting to build different things.

Why wouldn't this be the case for people using LLMs like it was for everyone else?

It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that. You get better, you learn more, and you become the question-answerer. And one day you sheepishly look at your question history in amazement at how far you've come.

askonomm

Difference here being that you actually learned the information about Ruby on Rails, whereas the modern programmer doesn't learn anything. They are but a clipboard-like vessel that passes information from an LLM onto a text editor, rarely ever actually reading and understanding the code. And if something doesn't work, they don't debug the code, they debug the LLM for not getting it right. The actual knowledge here never gets stored in the brain, making any future learning or evolving impossible.

lolinder

> Why wouldn't this be the case for people using LLMs like it was for everyone else?

I feel like it's a bit different this time because LLMs aren't just an abstraction.

To make an analogy: Ruby on Rails serves a similar role as highways—it's a quick path to get where you're going, but once you learn the major highways in a metro area you can very easily break out and explore and learn the surface streets.

LLMs are a GPS, not a highway. They tell you what to do and where to go, and if you follow them blindly you will not learn the layout of the city, you'll just learn how to use the GPS. I find myself unable to navigate a city by myself until I consciously force myself off of Google Maps, and I don't find that having used GPS directions gives me a leg up in understanding the city—I'm starting from scratch no matter how many GPS-assisted trips I've taken.

I think the analogy helps both in that the weaknesses in LLM coding are similar and also that it's not the end of the world. I don't need to know how to navigate most cities by memory, so most of the time Google Maps is exactly what I need. But I need to recognize that leaning on it too much for cities that I really do benefit from knowing by heart is a problem, and intentionally force myself to do it the old-fashioned way in those cases.

thechao

I think you're right; I can see it in the accelerating growth curve of my good Junior devs; I see grandOP's vision in my bad Junior devs. Optimistically, I think this gives more jr devs more runway to advance deeper into more sophisticated tech stacks. I think we're gonna need more SW devs, not fewer, as these tools get better: things that were previously impossible will be possible.

csmpltn

> "But it's not satisfying to stay at that level of ignorance very long"

It's not about satisfaction: it's literally dangerous and can bankrupt your employer, cause immense harm to your customers and people at home, and make you unhirable as an engineer.

Let's take your example of "an S3 avatar upload system", which you consider finished after writing 2 lines of code and installing a couple of packages. What makes sure this can't be abused by an attacker to DDoS your system, leading to massive bills from AWS? What happens after an attacker abuses this system and takes control of your machines? What makes sure those avatars are "safe-for-work" and legal to host in your S3 bucket?

People using LLMs and feeling all confident about it are the equivalent of hobby carpenters after watching a DIY video on YouTube and building a garden shed over the weekend. You're telling me they're now qualified to go build buildings and bridges?

> "It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that."

I meet people like this during job interviews all the time, if I'm hiring for a position. I can't tell you how many people with 10+ years of industry experience I've met recently who can't explain how to read data from a local file on the machine's file system.

geodel

Great points. I look back on my journey from offshore application support contractor to full-time engineer and how much I learned along the way. Along that journey I've seen folks who held good/senior engineering roles just stagnate or move into management roles.

The industry is now large enough to have all sorts of people: growing, stagnating, moving out, moving in, laid off, retiring early, or just plain retiring.

unyttigfjelltol

> But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates.

The article is right in a zoomed-in view (fundamental skills will be rare and essential), but in the big picture the critique in the comment is better (folks rarely start on NAND gates). Programmers of the future will have less need to know code syntax, in the same way that current programmers don't have to fuss with hardware-specific machine code.

Are the people who still write hardware-specific code currently in demand? That marketplace is smaller, so results will vary and, as the article suggests, will probably be less satisfactory for the participant with a time-critical need or demand.

amanda99

I agree, and I also share your experience (guess I was a bit earlier with PHP).

I think what's left out though is that this is the experience of those who are really interested and for whom "it's not satisfying" to stay there.

As tech has turned into a money-maker, people aren't doing it for the satisfaction, they are doing it for the money. That appears to cause more corner cutting and less learning what's underneath instead of just doing the quickest fix that SO/LLM/whatever gives you.

tgv

> And then they backfill their knowledge over time.

If only. There are too many devs who've learnt to write JS or Python and simply won't change. I've seen one case where someone ported an existing 20k-line C++ app to a browser app in the most unsuitable way with emscripten, when 1,100 lines of TypeScript did a much better job.

whynotminot

Isn’t this kind of thing the story of tech though?

Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work, because they’re not managing memory.

Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.

I actually sort of agree with the old C hands to some extent. I think people don’t understand how a lot of things actually work. And it also doesn’t really seem to matter 95% of the time.

HarHarVeryFunny

I don't think the value of senior developers is so much in knowing how more things work, but rather that they've learnt (over many projects of increasing complexity) how to design and build larger, more complex systems, and this knowledge mostly isn't documented for LLMs to learn from. An LLM can do the LLM thing and copy designs it has seen, but this is cargo-cult behavior - copying the surface form of something without understanding why it was built that way, or when a different design would have been better for a myriad of reasons.

This is really an issue for all jobs, not just software development, where there is a large planning and reasoning component. Most of the artifacts available to train an LLM on are the end result of reasoning, not the reasoning process themselves (the day by day, hour by hour, diary of the thought process of someone exercising their journeyman skills). As far as software is concerned, even the end result of reasoning is going to have very limited availability when it comes to large projects since there are relatively few large projects that are open source (things like Linux, gcc, etc). Most large software projects are commercial and proprietary.

This is really one of the major weaknesses of LLM-as-AGI, or LLM-as-human-worker-replacement - their lack of ability to learn on the job and pick up a skill for themselves as opposed to needing to have been pre-trained on it (with the corresponding need for training data). In-context learning is ephemeral and anyways no substitute for weight updates where new knowledge and capabilities have been integrated with existing knowledge into a consistent whole.

shafyy

Just because there are these abstractions layers that happened in the past does not mean that it will continue to happen that way. For example, many no-code tools promised just that, but they never caught on.

I believe that there's a "optimal" level of abstraction, which, for the web, seems to be something like the modern web stack of HTML, JavaScript and some server-side language like Python, Ruby, Java, JavaScript.

Now, there might be tools that make a developer's life easier, like a nice IDE, debugging tools, linters, autocomplete and also LLMs to a certain degree (which, for me, still is a fancy autocomplete), but they are not abstraction layers in that sense.

commandlinefan

My son is a CS major right now, and since I've been programming my whole adult life, I've been keeping an eye on his curriculum. They do still teach CS majors from the "ground up" - he took system architecture, assembly language and operating systems classes. While I kind of get the sense that most of them memorize enough to pass the tests and get their degree, I have to believe that they end up retaining some of it.

AnthonyMouse

> Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.

The real issue here is that a lot of the modern tech stacks are crap, but won for other reasons, e.g. JavaScript is a terrible language but became popular because it was the only one available in browsers. Then you got a lot of people who knew JavaScript so they started putting it in places outside the browser because they didn't want to learn another language.

You get a similar story with Python. It's essentially a scripting language and poorly suited to large projects, but sometimes large projects start out as small ones, or people (especially e.g. mathematicians in machine learning) choose a language for their initial small projects and then lean on it again because it's what they know even when the project size exceeds what the language is suitable for.

To slay these beasts we need to get languages that are actually good in general but also good at the things that cause languages to become popular, e.g. to get something better than JavaScript to be able to run in browsers, and to make languages with good support for large projects to be easier to use for novices and small ones, so people don't keep starting out in a place they don't want to end up.

fuy

And also these old C hands don't seem to get paid (significantly) more than a regular web-dev who doesn't care about hardware, memory, performance etc. Go figure.

bdhcuidbebe

Yeah, every programmer should write at least a CPU emulator in their language of choice; it's such an undervalued exercise that will teach you so much about how stuff really works.
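
A minimal sketch of what such a toy emulator can look like, in Python, with a three-instruction machine invented purely for illustration (real ISAs obviously differ):

  # Toy CPU: a fetch/decode/execute loop over a made-up 3-instruction ISA.
  # Instructions are (op, a, b) tuples:
  #   ("LOAD", reg, value) -> reg = value
  #   ("ADD",  dst, src)   -> dst = dst + src
  #   ("PRINT", reg, None) -> print the register
  def run(program):
      registers = {"r0": 0, "r1": 0}
      pc = 0  # program counter
      while pc < len(program):
          op, a, b = program[pc]
          if op == "LOAD":
              registers[a] = b
          elif op == "ADD":
              registers[a] += registers[b]
          elif op == "PRINT":
              print(registers[a])
          else:
              raise ValueError(f"unknown opcode {op!r}")
          pc += 1  # this toy machine has no jumps or branches

  run([
      ("LOAD", "r0", 2),
      ("LOAD", "r1", 3),
      ("ADD", "r0", "r1"),
      ("PRINT", "r0", None),  # prints 5
  ])

Even a toy like this makes the fetch/decode/execute cycle, registers, and the program counter concrete; adding memory, jumps, and flags from there is where the real learning happens.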

bee_rider

The real hardcore experts should be writing libraries anyway, to fully take advantage of their expertise in a tiny niche and to amortize the cost of studying their subproblem across many projects. It has never been easier to get people to call your C library, right? As long as somebody can write the Python interface…

Numpy has delivered so many FLOPs for BLAS libraries to work on.
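
For instance, a single NumPy call hands the heavy lifting to whichever optimized BLAS build (OpenBLAS, MKL, ...) NumPy was compiled against; a small Python sketch:

  import numpy as np

  # The @ operator / np.dot delegate matrix multiplication to the native
  # BLAS library NumPy is linked against; the caller never touches C.
  a = np.random.rand(1000, 1000)
  b = np.random.rand(1000, 1000)
  c = a @ b

  print(c.shape)    # (1000, 1000)
  np.show_config()  # reports which BLAS/LAPACK implementation is in use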

Does anyone really care if you call their optimized library from C or Python? It seems like a sophomoric concern.

ragle

I wonder about this too - and also wonder what the order-of-magnitude difference is between the historical shifts you mention and the one we're seeing now (or will see soon).

Is it 10 times the "abstracting away complexity and understanding"? 100, 1000, [...]?

This seems important.

There must be some threshold beyond which (assuming most new developers are learning using these tools) the fundamental ability to understand how the machine works, and thus the ability to "dive in and figure things out" when something goes wrong, is pretty much completely lost.

foota

I think I've seen the comparison with respect to training data, but it's interesting to think of the presence of LLMs as a sort of barrier to developing skills, akin to pre-WW2 low-background-radiation steel (which, fun fact, isn't actually that relevant anymore, since background radiation levels have dropped significantly since the partial end of nuclear testing).

devoutsalsa

I hired a junior developer for a couple of months and was incredibly impressed with what he was able to accomplish with a paid ChatGPT subscription on a greenfield project for me. He’d definitely struggle with a mature code base, but you have to start somewhere!

weatherlite

There's no need for tens of millions of OS kernel devs; most of us are writing business-logic CRUD apps.

Also, it's not entirely clear to me why LLMs should get extremely good in web app development but not OS development, as far as I can see it's the amount and quality of training data that counts.

wesselbindt

> as far as I can see it's the amount and quality of training data that counts

Well there's your reason. OS code is not as in demand or prevalent as crud web app code, so there's less relevant data to train your models on.

hintymad

Would technical depth change the fundamental supply and demand, though? If we view AI as a powerful automation tool, it's possible that overall demand will be lowered so much that the demand for deep technical expertise will go down as well. Take the EE industry, for instance: the technical expertise required to get things done is vast and deep, yet demand has not been so good compared to the software industry.

efitz

On a recent AllIn podcast[1], there was a fascinating discussion between Aaron Levie and Chamath Palihapitiya about how LLMs will (or will not) supplant software developers, which industries will be affected, total addressable markets (TAMs), and the current obstacles preventing tech CEOs from firing all the developers right now. It seemed pretty obvious to me that Chamath was looking forward to breaking his dependence on software developers, and he predicts AI will lead to a 90% reduction in the market for software-as-a-service (and the related jobs).

Regardless of point of view, it was an eye opening discussion to hear a business leader discussing this so frankly, but I guess not so surprising since most of his income these days is from VC investments.

[1] https://youtu.be/hY_glSDyGUU?t=4333

SoftTalker

We've been in this world for decades.

Most developers couldn't write an operating system to save their life. Most could not write more than a simple SQL query. They sling code in some opinionated dev stack that abstracts the database and don't think too hard about the low-level details.

lolinder

Part of the problem is that many working developers are still in companies that don't allow experimentation with the bleeding edge of AI on their code base, so their experiences come from headlines and from playing around on personal projects.

And on the first 10,000 lines of code, the best in class tools are actually pretty good. Since they can help define the structure of the code, it ends up shaped in a way that works well for the models, and it still basically all fits in the useful context window.

What developers who can't use it on large warty codebases don't see is how poorly even the best tools do on the kinds of projects that software engineers typically work on for pay. So they're faced with headlines that oversell AI capabilities and positive experiences with their own small projects and they buy the hype.

Jcampuzano2

My company allowed us to use it but most developers around me didn't reach out to the correct people to be able to use it.

Yes I find it incredibly helpful and try to tell them.

But it's only helpful in small contexts, auto completing things, small snippets, generating small functions.

Any large-scale changes, like the ones most of these AI companies push as being within its capabilities, just fall straight on their face. I've tried many times, and with every new model. It can't do it well enough to trust in any codebase that's bigger than a few tens of thousands of lines of code.

nyarlathotep_

I've found it very easy to end up "generating" yourself into a corner: a total mess with no clear control flow that ends up more convoluted than it needs to be, by a mile.

If you're in mostly (or totally) unfamiliar territory, you can end up in a mess, fast.

I was playing around with writing a dead-simple websocket server in go the other evening and it generated some monstrosity with multiple channels (some unused?) and a tangle of goroutines etc.

Quite literally copying the example from Gorilla's source tree and making small changes would have gotten me 90% of the way there; instead I ended up with a mostly opaque pile of code that *looks good* from a distance, but is barely functional.

(This wasn't a serious exercise, I just wanted to see how "far" I could get with Copilot and minimal intervention)

ragle

In a similar situation at my workplace.

What models are you using that you feel comfortable trusting them to understand and operate on 10-20k LOC?

Using the latest and greatest from OpenAI, I've seen output become unreliable with as little as ~300 LOC on a pretty simple personal project. It will drop features as new ones are added, make obvious mistakes, refuse to follow instructions no matter how many different ways I try to tell it to fix a bug, etc.

Tried taking those 300 LOC (generated by o3-mini-high) to cursor and didn't fare much better with the variety of models it offers.

I haven't tried OpenAI's APIs yet - I think I read that they accommodate quite a bit more context than the web interface.

I do find OpenAI's web-based offerings extremely useful for generating short 50-200 LOC support scripts, generating boilerplate, creating short single-purpose functions, etc.

Anything beyond this just hasn't worked all that well for me. Maybe I just need better or different tools though?

menaerus

Did you have to do any preparation steps before you asked a model to do the large-scale change, or were there no steps involved? For example, did you simply ask for the change or did you give the model a chance to learn about the codebase? I am genuinely asking; I'm curious because I haven't had a chance to use those models at work.

throwaway0123_5

Some codebases grown with AI assistance must be getting pretty large now, I think an interesting metric to track would be percent of code that is AI generated over time. Still isn't a perfect proxy for how much work the AI is replacing though, because of course it isn't the case that all lines of code would take the same amount of time to write by hand.
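
As a rough illustration, here is a minimal sketch of how that metric could be computed, assuming (purely hypothetically) a team convention of marking AI-assisted commits with an "AI-Assisted: yes" trailer in the commit message; it counts commits rather than lines, so it is an even cruder proxy:

  import subprocess

  # Hypothetical convention: commits made with AI assistance carry the
  # trailer "AI-Assisted: yes" in the commit message body.
  def ai_assisted_share(repo_path="."):
      log = subprocess.run(
          ["git", "-C", repo_path, "log", "--pretty=%H%x00%B%x01"],
          capture_output=True, text=True, check=True,
      ).stdout
      commits = [c for c in log.split("\x01") if c.strip()]
      tagged = sum("AI-Assisted: yes" in c for c in commits)
      return tagged / len(commits) if commits else 0.0

  print(f"{ai_assisted_share() * 100:.1f}% of commits marked AI-assisted")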

lolinder

Yeah, that would be very helpful to track. Anecdotally, I have found in my own projects that the larger they get the less I can lean on agent/chat models to generate new code that works (without needing enough tweaks that I may as well have just written it myself). Having been written with models does seem to help, but it doesn't get over the problem that eventually you run out of useful context window.

What I have seen is that autocomplete scales fine (and Cursor's autocomplete is amazing), but autocomplete supplements a software engineer, it doesn't replace them. So right now I can see a world where one engineer can do a lot more than before, but it's not clear that that will actually reduce engineering jobs in the long term as opposed to just creating a teller effect.

WillPostForFood

> the kinds of projects that software engineers typically work on for pay

This assumes a typical project is fairly big and complex. Maybe I'm biased the other way, but I'd guess 90% of software engineers are writing boilerplate code today that could be greatly assisted by LLM tools. E.g., PHP is still one of the top languages, which means a lot of basic WordPress stuff that LLMs are great at.

lolinder

The question isn't whether the code is complex algorithmically, the question is whether the code is:

* Too large to fit in the useful context window of the model,

* Filled with a bunch of warts and landmines, and

* Connected to external systems that are not self-documenting in the code.

Most stuff that most of us are working on meets all three of these criteria. Even microservices don't help, if anything they make things worse by pulling the necessary context outside of the code altogether.

And note that I'm not saying that the tools aren't useful, I'm saying that they're nowhere near good enough to be threatening to anyone's job.

rs186

I use AI coding assistants daily, and whenever there is a task that those tools cannot do correctly/quickly enough so that I need to fallback to editing things by myself, I spend a bit of time thinking what is so special about the tasks.

My observation is that LLMs do repetitive, boring tasks really well, like boilerplate code and common logic/basic UI that thousands of people have already done. Well, in some sense, jobs where developers spend a lot of time writing generic code are already at risk of being outsourced.

The tasks that need a ton of tweaking, or aren't worth asking AI about at all, are those that are very specific to a particular product and need to meet specific requirements that often come from discussions or meetings. Well, I guess in theory, if we had transcripts of everything, AI could write code the way you want, but I doubt that's happening any time soon.

I have since become less worried about the pace at which AI will replace human programmers -- there is still a lot that these tools cannot do. But for sure people need to watch out and be aware of what's happening.

RivieraKid

I'm surprised to see a huge disconnect between how I perceive things and the vast majority of comments here.

AI is obviously not good enough to replace programmers today. But I'm worried that it will get much better at real-world programming tasks within years or months. If you follow AI closely, how can you be dismissive of this threat? OpenAI will probably release a reasoning-based software engineering agent this year.

We have a system that is similar to top humans at competitive programming. This wasn't true 1 year ago. Who knows what will happen in 1 year.

Imanari

https://tinyurl.com/mrymfwwp

We will see; maybe models do get good enough, but I think we are underestimating how hard those last few percent of improvement will be.

layer8

When I see stuff like https://news.ycombinator.com/item?id=42994610 (continued in https://news.ycombinator.com/item?id=42996895), I think the field still has fundamental hurdles to overcome.

lordswork

This kind of error doesn't really matter in programming where the output can be verified with a feedback loop.
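
To make the point concrete, a minimal sketch of such a feedback loop in Python, where generate_candidate is a hypothetical stand-in for whatever model call is used (not a real API) and the test suite acts as the verifier:

  import subprocess

  def generate_candidate(prompt: str, feedback: str) -> str:
      """Hypothetical stand-in for a model call that returns source code."""
      raise NotImplementedError

  def verified_generation(prompt: str, attempts: int = 5):
      feedback = ""
      for _ in range(attempts):
          code = generate_candidate(prompt, feedback)
          with open("candidate.py", "w") as f:
              f.write(code)
          # Run the test suite; its pass/fail result is the ground truth.
          result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
          if result.returncode == 0:
              return code  # verified: the tests pass
          feedback = result.stdout + result.stderr  # feed errors into the next attempt
      return None  # give up after N failed attempts

The loop is only as good as the verifier, which is the usual caveat: weak tests will happily "verify" wrong code.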

__MatrixMan__

I think it's more likely that we'll see a rise in workflows that AI is good at, rather than AI rising to meet the challenges of our more complex workflows.

Let the user pair with an AI to edit and hot-reload some subset of the code which needs to be very adapted to the problem domain, and have the AI fine-tuned for the task at hand. If that doesn't cut it, have the user submit issues if they need an engineer to alter the interface that they and the AI are using.

I guess this would resemble how myspace used to do it, where you'd get a text box where you could provide custom edits, but you couldn't change the interface.

bwfan123

A "causal model" is needed to fix bugs ie, to "root-cause" a bug.

LLMs don't yet have a built-in causal model of how something works. What they do have is pattern matching from a large index and generation of plausible answers from that index. (Aside: the plausible snippets are of questionable licensing lineage, as the indexes could contain public code with restrictive licensing.)

Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.

Hence, while they are a great productivity boost as a semantic search engine, and a plausible snippet generator, they are not capable of building (or fixing bugs in) a machine which requires causal modeling.

fiso64

>Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.

Prove that the human brain does symbolic computation.

DanielHB

The only thing I use it for is small self-contained snippets of code for problems that require use of APIs I don't quite remember off the top of my head. The LLM spits out the calls I need to make or the attributes/config I need to set, and I go check the docs to confirm.

Like "How to truncate text with CSS alone" or "How to set an AWS EC2 instance RAM to 2GB using terraform"

jillesvangurp

I think of it as an enabler that reduces my dependency on junior developers. Instead of delegating simple stuff to them, I now do it myself with about the same amount of overhead (have to explain what I want, have to triple check the results) on my side but less time wasted on their end.

A lot of micromanaging is involved either way. And most LLMs suffer from a severe case of Groundhog Day. You can't assume they remember anything over time. Every conversation starts from scratch. If it's not in your recent context, specify it again. Etc. Quite tedious, but it still beats me doing it manually. For some things.

For at least the next few years, it's going to be an expectation from customers that you will not waste their time with stuff they could have just asked an LLM to do for them. I've had two instances of non-technical CPO and CEO types recently figuring out how to get a few simple projects done with LLMs. One is actually tackling Rust programs now. The point here is not that that's good code, but that neither of them would have dreamed of doing anything themselves a few years ago. The scope of the stuff you can get done quickly is increasing.

LLMs are worse at modifying existing code than they are at creating new code. Every conversation is a new conversation. Groundhog Day, every day. Modifying something with a lot of history and context requires larger context windows and tools to fill them. The tools are increasingly becoming the bottleneck. Because without context the whole thing derails, and micromanaging a lot of context is a chore.

And a big factor here is that huge context windows are costly, so there's an incentive for service providers to cut some corners there. Most of the value for me these days comes from LLM tool improvements that result in me having to type less. "Fix this" now means "fix the thing under my cursor in my open editor, with the full context of that file". I've been doing this a lot for the past few weeks.

nerder92

This article is entirely built on 2 big and wrong assumptions:

1. AI coding ability will stay the same as it is today

2. Companies will replace people with AI en masse at a given moment in time

Of course both of these assumptions are wrong; the quality of code produced by AI will improve dramatically as models evolve. And it's not even just the model itself. The tooling, the agentic capabilities, and the workflows will entirely change to adapt to this. (This is already happening.)

The second assumption is also wrong: intelligent companies will not lay people off en masse to use AI only; they will most likely slow the hiring of devs because their existing devs, enhanced with AI, will suffice for their coding-related needs. At the end of the day, product is just one area of company development; building the complete e2e ultimate solution with zero distribution or marketing will not help.

This article, in my opinion, is just doomerist storytelling for nostalgic programmers who see programming only as some kind of magical artistic craft and AI as the villain that has arrived to remove all the fun from it. You can still switch off Cursor and write donut.c if you enjoy doing it.

y-c-o-m-b

> The second assumption is also wrong: intelligent companies will not lay people off en masse to use AI only; they will most likely slow the hiring of devs because their existing devs, enhanced with AI, will suffice for their coding-related needs

After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.

EDIT: you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.

ssimpson

I tend to agree with you. The general pattern behind "x tool came along that made work easier" isn't to fire a bunch of folks, it's to make the people who are there produce more by whatever increment the tool eases the work; i.e., if the tool cuts work in half, you'd be expected to do 2x more work. Automation and tools almost never "make our lives easier", they just remove some of the lower-value-added work. It would be nice to live better and work less, but our overlords won't let that happen. The same output with less work by the individual isn't as good as the same or more output with the same or fewer people.

fragmede

> but our overlords won't let that happen

If you have a job, working for a boss, you're trading your time for money. If you're a contractor and negotiate being paid by the project, you're being paid for results. Trading your time for money is the underlying contract. That's the fundamental nature of a job working for somebody else. You can escape that rat race if you want to.

Someone I know builds websites for clients on a contract basis, and did so without LLMs. Within his market, he knows what a $X,000 website build entails. His clients were paying that rate for a website build out prior to AI-augmented programming, and it would take a week to do that job. With help from LLMs, that same job now takes half as much time. So now he can choose to take on more clients and take home more pay, or not, and be able to take it easy.

So that option is out there, if you can make that leap. (I haven't)

high_na_euv

>Of course both these assumptions are wrong, the quality of code produced by AI will improve dramatically as model evolves.

How are you so sure?

anothermathbozo

No one has certainty here. It’s an emergent technology and no one knows for certain how far it can be pushed.

It’s reasonable that people explore contingencies where the technology does improve to a point of driving changes in the labor market.

RohMin

I do feel that, with the rise of the "reasoning" class of models, it's not hard to believe that code quality will improve over time.

high_na_euv

The thing is: how much

0.2x, 2x, 5x, 50x?

croes

Doesn't sound like it's improving dramatically.

dev1ycan

he's not, he's just another delusional venture capitalist who hasn't bothered to look up the counterarguments to his point of view made by mathematicians

anothermathbozo

It’s an emergent technology and no one knows for certain how far it can be pushed, not even mathematicians.

randmeerkat

> he's not, he's just another delusional venture capitalist who hasn't bothered to look up the counterarguments to his point of view made by mathematicians

Don’t hate on it, just spin up some startup with “ai” and LLM hype. Juice that lemon.

nerder92

I'm not sure; it's an observation based on how AI improvement is related to Moore's law.

[1](https://techcrunch.com/2025/01/07/nvidia-ceo-says-his-ai-chi...)

somenameforme

That's an assumption. Most/all neural network based tech faces a similar problem of exponentially diminishing returns. You get from 0 to 80 in no time. A bit of effort and you eventually ramp it up to 85, and it really seems the goal is imminent. Yet suddenly each percent, and then each fraction of a percent starts requiring exponentially more work. And then you can even get really fun things like you double your training time and suddenly the resultant software starts scoring worse on your metrics, usually due to overfitting.

And it seems, more or less, clear that the rate of change in the state of the art has already sharply decreased. So it's likely LLMs have already entered into this window.

kykeonaut

However, an increase in compute doesn't necessarily mean an increase in output quality, as you need compute power + data to train these models.

Just increasing compute power will increase the performance/training speed of these models, but you also need to increase the quality of the data that you are training these models on.

Maybe... the reason why these models show a high school level of understanding is because most of the data on the internet that these models have been trained on is of high school graduate quality.

high_na_euv

But some say that Moore's Law is dead :)

Anyway, the number of TikTok users correlates with advancements in AI too!

Before TikTok the progress was slower, then when TikTok appeared it progressed like hell!

aiono

> the quality of code produced by AI will improve dramatically as model evolves.

That's a very bold claim. We are already seeing a plateau in LLM capabilities in general. And there has been little improvement in the places where they fall short (like making holistic changes to a large codebase) since their birth. They only improve where they are already good, such as writing small glue programs. Expecting significant breakthroughs from scaling alone, without any fundamental changes to the architecture, seems too optimistic to me.

causal

Not to mention I haven't really seen AI replace anyone, except perhaps as a scapegoat for execs who were planning on layoffs anyway.

That said, I do think there is real risk of letting AI hinder the growth of Junior dev talent.

sanderjd

I think I've seen us be able to do more with fewer people than in the past. But that isn't the limiting factor for our hiring. All else equal, we'd like to just do more, when we can afford to hire the people. There isn't a fixed amount of work to be done. We have lots of ideas for products and services to make if we have the capacity.

causal

Agreed, I often see AI discussed as if most companies wouldn't take 10x more developers if they could have them for free

leptons

> their existing enhanced devs using AI will suffice enough to their coding related needs.

Not my experience. I spend as much time reading through and replacing wrong AI generated code as I do writing my own code, so it's really wasting my time more often than helping. It's really hit or miss, and about the only thing the AI gets right most often is writing console.log statements based on the variable I've just assigned, and that isn't really "coding". And even then it gets it right only about 75% of the time. Sure, that saves me some time, but I'm not seeing the supposed acceleration AI is hyped as giving.

croes

That the second assumption is wrong is based on

> intelligent companies will not lay people off en masse to use AI

How many companies are intelligent given how many dumb decisions we see?

If we assume there are enough not-so-intelligent companies, then better AI code will lead to mass firings.

jayd16

These two predictions seem contradictory. If the AI massively improves why would they slow roll adoption?

throwaway290

People want their AI stocks to go up. So they say things like sky is the limit and jobs are not going away (aka please don't regulate) in one sentence. I think only one of this is true.

dragonwriter

My opinion: tech isn't firing programmers for AI. It is firing programmers because of the financial environment, and waving around AI as a fig leaf to pretend that it is not really cutting back on output.

When the financial environment loosens again, there’ll be a new wave of tech hiring (which is about equally likely to publicly be portrayed as either reversing the AI firing or exploiting new opportunities due to AI, neither of which will be the real fundamental driving force.)

smitelli

I've come to believe it really is this.

Everybody got used to the way things worked when interest rates were near zero. Money was basically free, hiring was on a rampage, and everybody was willing to try reckless moonshots with slim chances for success. This went on for like fifteen years -- a good chunk of the workforce has only ever known that environment.

coolKid721

Most narratives for everything are just an excuse for macro stuff. We had ZIRP for basically the entire period of 2008-2022, and when that stopped there were huge layoffs and less hiring. I see lots of newer/younger devs being really pessimistic about the future of the industry; being mindful of the macro factors is important so people don't buy into the AI narratives (which are just there to bump up their stocks).

If people can get a safer return buying bonds they aren't going to invest in expansion and hiring. If there is basically no risk free rate of return you throw your money at hiring/new projects because you need to make a return. Lots of that goes into tech jobs.

pyrale

We have fired all our programmers.

However, the AI is hard to work with, it expects specific wording in order to program our code as expected.

We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

phren0logy

I think people aren't getting your joke.

smitelli

The AI that replaced the people, however, is in stitches.

eimrine

Now we are!

GuB-42

> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

Also known as programmers.

The "AI" part is irrelevant. Someone with expertise in transmitting specifications to a computer is a programmer, no matter the language.

EDIT: Yep, I realized that it could be the joke, but reading the other comments, it wasn't obvious.

philipov

whoosh! (that's the joke)

HqatsR

Yes, the best way is to type the real program completely into the AI, so that ClosedAI gets new material to train on. The AI can make some dumb comments, but the code works.

And the manager is happy that filthy programmers are "using" AI.

aleph_minus_one

> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

These people are, however, not experts in pretending to be obedient lackeys.

SketchySeaBeast

Hey! I haven't spent a decade of smiling through the pain to be considered an amateur lackey.

markus_zhang

Actually I think that's the near future, or close to it.

1. Humans also need specific wording in order to produce the code that stakeholders expect. A lot of people are laughing at AI because they think gathering requirements is a human privilege.

2. On the contrary, I don't think people need to hire AI interfacers. Instead, business stakeholders are way more interested in interfacing with AI themselves, simply because they just want to get things done instead of filing a ticket for us. Some of them are going to be good interfacers with proper integration -- and yes, we programmers are helping them do so.

Side note: I don't think you are going to hear someone shouting that they are going to replace humans with AI. It starts with this: people integrate AI into their workflow, lay off 10%, and see if AI helps fill the gap so they can freeze hiring. Then they lay off 10% more.

And yes, we programmers are helping the business do that, with a proud and smiling face.

Good luck.

ryanjshaw

What job title are you thinking of using?

pyrale

Speaker With Expertise

amarcheschi

Soft Waste Enjoyer

yoyohello13

Tech Priest

beepboopboop

AI Whisperer

kayge

Full Prompt Developer

silveraxe93

oftwaresay engineeryay

kamaal

>>However, the AI is hard to work with, it expects specific wording in order to program our code as expected.

Speaking English to make something is one thing, but speaking English to modify something complicated is absolutely something else. And I'm pretty sure it involves more or less the same effort as writing the code itself. Of course, regression for something like this is not for the faint-hearted.

thelittleone

I sure empathize.... our AI is fussy and rigid... pedantic even.

worthless-trash

  Error on line 5: specification can be interpreted too many 
  ways, can't define type from 'thing':

  Remember to underline the thing that shows the error
                            ~~~~~
                            | This 'thing' matches too many objects in the knowledge scope.

zoogeny

I'm not sure why people are so sure one way or the other. I mean, we're going to find out. Why pretend you have a crystal ball and can see the future?

A lot of articles like this just want to believe something is true and so they create an elaborate argument as to why that thing is true.

You can wrap yourself up in rationalizations all you want. There is a chance firing all the programmers will work. Evidence beats argument. In 5 years we'll look back and know.

It is actually probably a good idea to hedge your bets either way. Use this moment to trim some fat, force your existing programmers to work in a slightly leaner environment. It doesn't feel nice to be a programmer cut in such an environment but I can see why companies might be using this opportunity.

jvanderbot

What evidence do we have that AI is actually replacing programmers already? The article treats messaging on this as a foregone conclusion, but I strongly suspect it's all hype-cycle BS to cover layoffs, or a misreading of "Meta pivots to AI" headlines.

chubot

We'll probably never have evidence either way ... Did Google and Stack Overflow "replace" programmers?

Yes, in the sense that I suspect that with the strict counterfactual -- taking them AWAY -- you would have to hire 21 people instead of 20, or 25 instead of 20, to do the same job.

So strictly speaking, you could fire a bunch of people with the new tools.

---

But in the same period, the industry expanded rapidly, and programmer salaries INCREASED

So we didn't really notice or lament the change

I expect that pretty much the same thing will happen. (There will also be some thresholds crossed, producing qualitative changes. e.g. Programmer CEOs became much more common in the 2010's than in the 1990's.)

---

I think you can argue that some portion of the industry "got dumber" with Google/Stack Overflow too. Higher level languages and tech enabled that.

Sometimes we never learn the underlying concepts, and spin our wheels on the surface

Bad JavaScript ate our CPUs, and made the fans spin. Previous generations would never write code like that, because they didn't have the tools to, and the hardware wouldn't tolerate it. (They also wrote a lot of memory safety bugs we're still cleaning up, e.g. in the Expat XML parser)

If I reflect deeply, I don't know a bunch of things that earlier generations did, though hopefully I know some new things :-P

jvanderbot

This is an insightful comment. It smells of Jevons paradox, right? More productivity leads to increased demand.

I just don't remember anyone saying that SO would replace programmers, because you could just copy-paste code from a website and run it. Yet here we are: GPTs will replace programmers, because you can just copy-paste code from a website and run it.

sanderjd

People definitely said this about SO!

TheOtherHobbes

Google Coding is definitely a real problem. And I can't believe how wrong some of the answers on Stack Overflow are.

But the real problems are managerial. Stonks must go up, and if that means chasing a ridiculous fantasy of replacing your workforce with LLMs then let's do that!!!!111!!

It's all fun and games until you realise you can't run a consumer economy without consumers.

Maybe the CEOs have decided they don't need workers or consumers any more. They're too busy marching into a bold future of AI and robot factories.

Good luck with that.

If there's anyone around a century from now trying to make sense of what's happening today, it's going to look like a collective psychotic episode to them.

supergarfield

> It's all fun and games until you realise you can't run a consumer economy without consumers.

If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.

But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?

robertlagrant

I don't think this is anyone's plan. That's the biggest argument for why it won't be the plan: who'll pay for all of it? Unless we can Factorio the world, it seems more likely we just won't do that.

insane_dreamer

It'll happen gradually over time, with more pressure on programmers to "get more done".

I think it's useful to look at what has already happened at another, much smaller profession -- translators -- as a precursor to what will happen with programmers.

1. translation software does a mediocre job, barely useful as a tool; all jobs are safe

2. translation software does a decent job, now expected to be used as time-saving aid, expectations for translators increase, fewer translators needed/employed

3. translation software does a good job, translators now hired to proofread/check the software output rather than translate themselves, allowing them to work 3x to 4x as fast as before, requiring proportionally fewer translators

4. translation software, now driven by LLMs, does an excellent job, only cursory checks required; very few translators required mostly in specialized cases

weatherlite

> It'll happen gradually over time

How much time? I totally agree with you, but being early is the same as being wrong, as someone clever once said. There's a huge difference between it happening in less than 5 years, like Zuckerberg and Sam Altman are saying, and it taking 20 more years. If the second scenario is what happens, me and many people on this thread can probably retire rather comfortably, and humanity possibly has enough time to come up with a working system to handle this mass change. If the first scenario happens it's gonna be very very painful for many people.

aksosnckckd

The hard part of development isn’t converting an idea in human speak to an idea in machine speak. It’s formulating that idea in the first place. This spans all the way from high level “tinder for dogs” concepts to low level technical concepts.

Once AI is doing that, most jobs are at risk. It’ll create robots to do manual labor better than humans as well.

insane_dreamer

Right. But it only takes 1 person, or maybe a handful, to formulate an idea that might take 100 people to implement. You will still need that one person but not the 100.

Workaccount2

I actually know a professional translator and while a year ago he was full of worry, he now is much more relaxed about it.

It turns out that like art, many people just want a human doing the translation. There is a strong romantic element to it, and it seems humans just have a strong natural inclination to only want other humans facilitating communication.

insane_dreamer

I’ve done freelance translating (not my day job) for 20 years. What you describe is true for certain types of specialized translations, particularly anything that is literary in nature. But that is a very small segment. The vast majority of translation work is commercial in nature and for that companies don’t care whether a human or machine did it.

arrowsmith

How do they know that a human is doing the translation? What's to stop someone from just c&ping the text into an LLM, giving it a quick proofread, then sending it back to the client and saying "I translated this"?

Sounds like easy money, maybe I should get into the translation business.

daveguy

Yes, but in all 4 of these steps you are literally describing the job transformer LLMs were designed to do. We are at 1 (mediocre job) for LLMs in coding right now. Maybe 2 in a few limited cases (eg boilerplate). There's no reason to assume LLMs will ever perform at 3 for coding. For the same reason natural language programming languages like COBOL are no longer used -- natural language is not precise.

insane_dreamer

It seems the consensus is that we will reach level 3 pretty quickly given the pace of development in the past 2 years. Not sure about 4 but I’d say in 10 years we’ll be there.

SirFatty

swiftcoder

In the ~8 years since I worked there, Zuckerberg announced that we'd all be spending our 8 hour workdays in the Metaverse, and when that didn't work out, he pivoted to crypto currency.

He's just trend-chasing, like all the other executives who are afraid of being left behind as their flagship product bleeds users...

cma

Have they bled users?

65

We gotta put AI Crypto in the Blockchain Metaverse!

icepat

Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in. The Metaverse is going great.

scarface_74

On the other hand, Instagram has been called one of the greatest acquisitions of all time, second only to the Apple/NeXT acquisition.

falcor84

Really, that's what you're going with, arguing against the business acumen of the world's second richest person, and the only one at that scale with individual majority control over their company?

As for the Metaverse, it was always intended as a very long-term play which is very early to be judged, but as an owner of a Quest headset, it's already going great for me.

burkaman

Obviously the people developing AI and spending all of their money on it (https://www.reuters.com/technology/meta-invest-up-65-bln-cap...) are going to say this. It's not a useful signal unless people with no direct stake in AI are making this change (and not just "planning" it). The only such person I've seen is the Gumroad CEO (https://news.ycombinator.com/item?id=42962345), and that was a pretty questionable claim from a tiny company with no full-time employees.

causal

Planning to and succeeding at are very different things

SirFatty

I'd be willing to bet that "planning to" means the plan is being executed.

https://www.msn.com/en-us/money/other/meta-starts-eliminatin...

makerofthings

Part of my work is rapid prototyping of new products and technology to test out new ideas. I have a small team of really great generalists. Two people have left over the last year and I didn't replace them because the existing team + ChatGPT can easily take up the slack. So that's 2 people who didn't get hired who would have been without ChatGPT.

ActionHank

There is little evidence that AI is replacing engineers, but there is a whole lot of evidence that shareholders and execs really love the idea and are trying every angle to achieve it.

chubot

The funny thing is that "replacing engineers" is framed as cutting costs

But that doesn't really lead to any market advantage, at least for tech companies.

AI will also enable your competitors to cut costs. Who thinks they are going to have a monopoly on AI, which would be required for a durable advantage?

---

What you want to do is get more of the rare, best programmers -- that's what shareholders and execs should be wondering about

Instead, those programmers will be starting their own companies and competing with you

insane_dreamer

> AI will also enable your competitors to cut costs.

which is why it puts pressure on your own company to cut costs

it's the same reason why nearly all US companies moved their manufacturing offshore; once some companies did it, everyone had to follow suit or be left behind due to higher costs than their competitors

TheOtherHobbes

If this works at all, they'll be telling AIs to start multiple companies and keeping the ones that work best.

But if that works, it won't take long for "starting companies" and "being a CEO" to look like comically dated anachronisms. Instead of visual and content slop we'll have a corporate stonk slop.

If ASI becomes a thing, it will be able to understand and manipulate the entirety of human culture - including economics and business - to create ends we can't imagine.

t-writescode

> Instead, those programmers will be starting their own companies and competing with you

If so, then why am I not seeing a lot of new companies starting while we're in this huge down-turn in the development world?

Or, is everyone like me and trying to start a business with only their savings, so not enough to hire people?

ryandrake

What's the far future end-state that these shareholders and execs envision? Companies with no staff? Just self-maintaining robots in the factory and AI doing the office jobs and paperwork? And a single CEO sitting in a chair prompting them all? Is that what shareholders see as the future of business? Who has money to buy the company's products? Other CEOs?

reverius42

Just a paperclip maximizer, with all humans reduced to shareholders in the paperclip maximizer, and also possibly future paperclips.

chasd00

> execs really love the idea and are trying every angle to achieve it.

Reminds me of the offshoring hype in the early 2000s. Where it worked, it worked well, but it wasn't the final solution for all of software development that many CEOs wanted it to be.

only-one1701

If the latter is the case, then it's only a matter of time. Enshitification, etc.

3s

For a lot of tasks like frontend development I’ve found that a tool like Cursor can get you pretty far without much prior knowledge. IMO (and in my experience) many tasks that previously required hiring a programmer or designer with knowledge of the latest frameworks can now be handled by one motivated “prompt engineer” and some patience.

goosejuice

I love Cursor, but yeah, no way in hell. This is where it chokes the most, and I've been leaning on it for non-trivial CSS for a year or more. If I didn't have experience with frontend it would be a shit show. If you replaced an FE dev/designer with a "prompt engineer" at this stage it would be incredibly irresponsible.

Responsiveness, cohesive design, browser security, accessibility and cross browser compatibility are not easy problems for LLMs right now.

daveguy

The deeper it gets you into code without prior knowledge the deeper it gets you into debug hell.

I assume the "motivated prompt engineer" would have to already be an experienced programmer at this point. Do you think someone who has only had an intro to programming / MBA / etc could do this right now with tools like cursor?

iainctduncan

I can tell you from personal experience that investors are feeling pressure to magically reduce head count with AI to keep up with the joneses. It's pretty horrifying how little understanding or information some of the folks making these decisions have. (I work in tech diligence on software M&A and talk to investment committees as part of the job)

Workaccount2

I can say my company stopped contracting for test system design, and we use a mix of models now to achieve the same results. Some of these have been running without issue for over a year now.

aksosnckckd

As in writing test cases? I’ve seen devs write (heavily mocked) unit tests using only AI, but these are worse than no tests for a variety of reasons. Our company also used to contract for these tests…but only because they wanted to make the test coverage metric go up. They didn’t add any value, but the contractor was offshore and cheap.

If you’re able to have AI generate integration-level tests (i.e. call an API, then ensure the database or external system is updated correctly - "correctly" is doing a lot of heavy lifting here) that would be amazing! You’re sitting on a goldmine, and I’d happily pay for those kinds of tests.
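To make that concrete, here is a minimal sketch of the kind of integration test meant here, assuming a hypothetical Python service exposing a REST API backed by a SQLite database (the endpoint, table, and field names are made up for illustration):

```python
# Integration-test sketch: exercise the real HTTP endpoint, then check the
# database directly to confirm the side effect actually landed.
# Assumes a hypothetical /api/orders endpoint writing to an "orders" table.
import sqlite3
import requests

BASE_URL = "http://localhost:8000"  # hypothetical service under test
DB_PATH = "app.db"                  # hypothetical SQLite file it writes to

def test_create_order_persists_to_db():
    # Call the API like a real client would - no mocks.
    resp = requests.post(f"{BASE_URL}/api/orders", json={"sku": "ABC-123", "qty": 2})
    assert resp.status_code == 201
    order_id = resp.json()["id"]

    # Verify the external system (here, the database) was updated correctly.
    conn = sqlite3.connect(DB_PATH)
    try:
        row = conn.execute(
            "SELECT sku, qty FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
    finally:
        conn.close()

    assert row == ("ABC-123", 2)
```

The scaffolding is the easy part; the value is in the assertions, and deciding what "updated correctly" means still takes someone who knows the domain.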

Workaccount2

Amazingly, there is industry outside tech that uses software. We are an old school tangible goods manufacturing company. We use stacks of old grumbling equipment to do product verification tests, and LLMs to write the software that synchronizes them and interprets what they spit back out.

sunami-ai

I agree with the statement in the title.

Using AI to write code does two things:

1. Everything seems to go faster at first, until you have to debug it and the AI can't seem to fix the issue... It's hard enough to debug code you wrote yourself. However, if you work with code written by others (a team environment) then maybe you're used to this, but not being able to quickly debug code you're responsible for will shoot you in the foot.

2. Your brain neurons in charge of code production will naturally be re-assigned to other cognitive tasks. It's not like riding a bicycle or swimming, which once learned is never forgotten. It's more like advanced math, which you can forget if you don't practice.

Short term gain; long term pain.

dkjaudyeqooe

Essentially: people are vastly overestimating AI ability and vastly underestimating HI ability.

Humans are supremely adaptable. That's what our defining attribute as a species is. As a group we can adapt to more or less any reality we find ourselves in.

People with good minds will use whatever tools they have to enhance their natural abilities.

People with less good minds will use whatever tools they have to cover up their inability until they're found out.

guccihat

When the AI dust settles, I wonder who will be left standing among the groups of developers, testers, scrum masters, project leaders, department managers, compliance officers, and all the other roles in IT.

It seems the general sentiment is that developers are in danger of being replaced entirely. I may be biased, but it seems not to be the most likely outcome in the long term. I can't imagine how such companies will be competitive against developers who replace their boss with an AI.

cromulent

LLMs are good at producing plausible statements and responses that radiate awareness, consideration, balance, and at least superficial knowledge of the technology in question. Even if they are non-committal, indecisive, or even inaccurate.

In other words, they are very economical replacements for middle managers. Have at it.

swiftcoder

Every generation sees a new technology that old timers loudly worry "will destroy programming as a profession".

I'm old enough to remember when that new and destructive technology was Java, and the greybeards were all heavily invested in inline assembly as an essential skill of the serious programmer.

The exact same 3 steps in the article happened about a decade ago during the "javascript bootcamp" craze, and while the web stack does grow ever more deeply abstracted, things do seem to keep on trucking along...

hedora

I'm not old enough to remember these, but they were certainly more disruptive than AI has been so far (reverse chronological order):

- The word processor

- The assembly line

- Trains

- Internal combustion engines

I do remember some false starts from the 90's:

- Computer animation will put all the animation studios out of business

- Software agents will replace all middlemen with computers

just-another-se

Though I disagree with most of what's said here, I do agree that the new fleet of software engineers won't be as technically adept. But what if they don't need to be? Much like how most programmers today don't need to know machine-level instructions to build something useful.

I feel there will be a paradigm shift in what programming is altogether. I think programmers will be more like artists or painters, who conceptualize an idea and communicate it to AI to implement (not end to end, though; in bits and pieces, we'd still need engineers to fit these bits and pieces together - think of a new programming language, but instead of syntax there will be natural language prompting).

I've tried to pen down these exact thoughts here: https://suyogdahal.com.np/posts/engineering-hacking-and-ai/

bryukh

"Let AI replace programmers" is the new "Let’s outsource everything to <some country>." Short-term cost savings, long-term disaster.

dogibog

[flagged]

isakkeyten

[flagged]

physicsguy

There's nothing discriminatory about it, it's the same if you outsource things within your own country except the price is higher. Contractors have a totally different way of working because they're not really interested in the long term of a project beyond being retained. If they code something in such a way that causes an issue that takes time to fix later then great - more hours we can charge the client for.

Outsourcing abroad is more difficult because of cultural differences though. Having worked with outsourced devs in India, I found that we got a lot of nodding in meetings when asked if they understood - no one wanted to say no - and then it became clear when the PRs came in that they hadn't actually understood or done what they had been asked to do.

philipov

More important than cultural differences are timezone differences. Communication and collaboration are harder when you only have a couple of hours of overlap between your working day and theirs, and much harder if you have no overlap at all. This isn't even specific to outsourcing - it's a challenge for any globally distributed team.

jmcgough

Certainly there are competent engineers in every country, but I think what they are referencing is that back in the 90s and 2000s there were a lot of fears from US engineers that they would be replaced by less expensive engineers in other countries, which was attempted by some companies. Ultimately a number of these efforts failed to work well for the company, due to communication barriers and time zone differences.

Retric

It’s not about the ability to write code, it’s about the ability to communicate ideas back and forth. Even just a few time zones is a real issue let alone any linguistic or cultural issues.

heyoni

It’s not discriminatory at all! Or even the point OP is trying to make. Taking a significant number of jobs and outsourcing them overnight will quickly exhaust the talent pool in said country. It’s shortsighted and stupid because it assumes there is an army of developers just sitting around waiting for the next western tech company to give them high-paying remote jobs. A large portion of that talent pool is already reserved by the biggest corporations.

Build up to it and foster growth in your overseas teams and you’ll do well. Thinking you can transform your department overnight _is_ a great way to boost your share price, cash out on a fat payday and walk away before your product quality tanks.

toolz

Every job in the world is discriminatory if you take the less potent definition of the word. That's why we have job interviews, to explicitly discriminate. I presume you mean "discriminate in a bad way" but given the context I have no idea what that "bad way" is. Outsourcing has costs outside of just the up front payments, that isn't a secret and it has very little to do with technical expertise. Most software driven companies don't fall apart because of poorly implemented algorithms, they are more likely to do so because the humans have a difficult time interfacing in efficient ways and understanding and working towards the same goal together.

You can't just expect people from other countries to communicate as effectively as people who grew up right down the street from each other. Yes, it's objectively discriminatory, but not for hostile reasons.

bryukh

> So no other country in the world can write code as good as wherever you are from?

I didn't say this -- I think that's your take. What's more, I'm one of those "outsourced" software developers working for US and EU companies. My point is that by overusing outsourcing, in the long term you can lose local education, because "we can just hire from ..., so why do we need to teach our own?" -- I've already seen this happen, even at an in-country scale.

c03

Modern development is not as much about writing "good code", but just as much about good communication. There is a very real risk of losing good communication when outsourcing.

MonkeyClub

GP sounds shortsighted on first take, but consider how outsourcing is good and cheap for the companies while in the long run it creates huge unemployment pools in the original country.

The negative consequences can also be social; no one is saying it's only, say, a lowering of product quality.