I don't care how well your "AI" works

341 comments · November 26, 2025

easterncalculus

> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI', and I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, or whatever else could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than those they wage their campaigns against.

mattgreenrocks

> This is the culture that replaced hacker culture.

Somewhere along the line of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.

This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which (like management) we have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.

We sold out the culture, which paved the way for it to be hollowed out by LLMs.

There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock them ruthlessly: despite hundreds of billions of dollars, the technology hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.

In short, it's time to wake up.

amarant

It's Turing's Law:

Any person who posts a sufficiently long text online will be mistaken for an AI.

paganel

That's because AI-generated memes are lame. Not that memes are smart, generally speaking, but the AI-generated ones are even lamer. And there's nothing wrong with being a luddite; on the contrary, in this day and age, still thinking that technology is the way forward no matter what is nothing short of criminal.

pksebben

Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.

I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to fascists, etc. It's when we get to the AI issue that I do a bit of a double take.

Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.

I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.

We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.

Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.

0 - https://phrack.org/issues/7/3

bgwalter

Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.

Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.

It is relatively new that some corporate-owned "open" source developers use things like VSCode and have no issue with all their actions being tracked and surveilled by their corporate masters.

Please do not co-opt the term "hacker".

forgetfulness

Hackers never had a very cohesive and consistent ideology or moral framework. We heard non-stop of the exploits of people funded as part of Cold War military pork projects that eventually got the plug pulled, but some antipathy and mistrust of the powerful, and a belief in the power of knowledge, were recurrent themes nonetheless.

So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?

It’s not Luddism that people with an ethos of empowering the individual with knowledge are resisting these forces

amarant

The problem here isn't resisting those forces, that's all well and good.

The problem is the vast masses falling under Turing's Law:

"Any person who posts a sufficiently long text online will be mistaken for an AI."

Not usually in good faith, however.

FuriouslyAdrift

VSCodium is the open source "clean" build of VS Code without all the Microsoft telemetry and under MIT license.

https://vscodium.com/

Aurornis

> Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.

A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.

Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.

Yet when I tell people they preferred Windows and not Linux, you can tell who's more focused on ideological purity than actual achievements, because eww, Windows.

> Please do not co-opt the term "hacker".

Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.

In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.

mattgreenrocks

Ideological purity is a crutch for those that can't hack it. :)

I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.

Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.

phil21

> To me, hacker is about results

Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine-fueled nights that would be ridiculed by professional devs who had been working on the problem as a six-man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it was left up to the normies to figure out.

It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.

Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.

lxgr

> hackers have always rejected bloated tools [...] Hackers have historically derided any website generators

Ah yes, true hackers would never, say, build a Debian package...

Managing complexity has always been part of the game. To a very large extent it is the game.

Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.

> Please do no co-opt the term "hacker".

Indeed, please don't. And leave my true Scotsman alone while we're at it!

bgwalter

Local alternatives don't work, and you know that.

poszlem

> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.

codeflo

You were proven right three minutes after you posted this. Something happened, I'm not sure what or how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.

Blackthorn

> and technology stopped being the object of interest in those spaces.

That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.

The "something" that happened was ads. They poisoned all the fun and interest out of technology.

Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.

dude250711

The folks who love the command line and terminals hadn't been luddites all this time?

otabdeveloper4

> is absolutely filled with busy work that no one really wants to do

Well, LLMs don't fix that problem.

(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)

embedding-shape

> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.

I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.

What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).

And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

If anything, it seems like Balmers plea of "Developers, developers, developers" has came true, and if there will be one profession left in 100 year when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.

If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there will be one profession left in 100 years when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.

What exactly is being devalued in a profession that seems to be continuously growing, and has been for at least 20 years?

swatcoder

The "devaluation" they mention is just the correction against the absurd ZIRP bump, that lured would-be doctors and lawyers into tech jobs at FAANG and FAANG-alike firms with the promise of upper middle class lifestyles for trivially weaving together API calls and jockeying JIRA tickets. You didn't have to spend years more in grad school, you didn't have to be a diligent engineer. You just had to had to have a knack for standardized tests (Leetcode) and the time to grid some prep.

The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable, but held up for almost a decade because money was cheap. Now money is held much more tightly, and we stumbled into a tech that can cheaply regurgitate a lot of the trivial, inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.

You and I may not be affected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper-middle-class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they had legitimately earned.

j4coh

I don't believe companies can reliably tell expert and non-expert developers apart, or sort them efficiently enough for it to play out like you say.

kelseyfrog

The companies that can will remain and the companies that can't will perish. Not necessarily quickly or gracefully, but the market stops all bucks.

bavell

Nailed it. It's a pendulum and we're swinging back to baseline. We just finished our last big swing (ZIRP, post-COVID dev glut) and are now in full free fall.

acedTrex

I consider the devaluation of the craft to be completely independent of the professional occupation of software.

Programming has been devalued because more people can do it at a basic level with LLM tooling. People that I do not consider smart enough, or to have put in enough work, are now outputting things that they have no real understanding of themselves.

It is of course the new reality, and now we all have to go find new markers/things to judge people's output by. That's the devaluation of the craft itself.

For what it's worth, this devaluation has happened many times in this field. ASM, compilers, managed GC languages, the cloud: abstractions have continually opened up the field to people the old-timers consider unworthy.

LLMs are a unique twist on that standard pattern.

abraxas

You sound exactly like that turkey from Nassim Taleb's books that came to the conclusion that the purpose of human beings is to make turkeys very happy with lots of food and breeding opportunities. And the turkey's thesis gets validated perfectly every day he wakes up to a delicious fatty meal.

Until Thanksgiving.

ragazzina

The turkey story predates Nassim Taleb's books by decades.

lxgr

Unlike the turkeys, they seem rather self-aware about it.

thunky

> What exactly is being devalued in a profession

You're probably fine as a more senior dev...for now.

But if I was a junior I'd be very worried about the longevity I can expect as a dev. In many or most cases it's already easier to assign work to an LLM than to handhold a human through it.

Plus, as an industry, we've been exploiting our employers' lack of information to extract large salaries while producing largely poor-quality outputs, imo. And as that ignorance moat gets smaller, this becomes harder to pull off.

spicyusername

> assign work to an LLM

This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.

Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.

LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.

neom

I can tell you where I am seeing it change things for sure: at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years ago, because there is a slightly different calculus at the early stage. Often your go-to-market and discovery processes at seed/pre-seed are either not working well yet, nonexistent, or decoupled from product and engineering; the goal, obviously, is to bring it all together over time into a complete system (a business). As long as I've been around early-stage startups there has always been a tension between engineering and growth over budget division, and the dance of how you place resources across them such that they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. So where before they would look at hiring a junior, now they will just buy some AI tools, or invest more time in AI scaffolding, etc., allowing them to go a little bit faster, with the understanding that it's not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and I've been watching to see if the teams who did this then "graduate" out of it to hiring a jr; so far only one team has hired, and it seems they skipped jr and went straight to a more sr dev.

cjbgkagh

Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80% of the work, but now I use LLMs instead. It's far cheaper, better quality, and zero hassle. That's 3 junior/mid-level jobs that are gone now. Since the hard stuff is combinatorially complex, I think by the time LLMs are good enough to do it, they'll probably be good enough to do just about everything, and we'll be living in an entirely different world.

vladimirralev

Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very weak (e.g. verifying web page layout and component reactivity). Once the visual models are good enough to look at the screen pixels and understand them, they will instantly replace junior devs. Currently, if you have only text output, all new LLMs can iterate flawlessly and solve problems on it. A new backend from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.

grumbel

> This is just not happening anywhere around me.

Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand-new bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.

And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.

chud37

Completely agree. I use LLMs like I used Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.

I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.

raw_anon_1111

Well, your anecdote is clearly at odds with absolutely all of the macroeconomic data.

queenkjuul

You're mostly right, but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now).

carrychains

It's me. I'm the "LLM" having work assigned to me that the junior dev used to get. I'm actually just a highly proficient BA who has almost always read code, followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments of decision where I actually considered shifting to coding as my profession.

But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits, and learning important lessons from talking to senior devs along the way, guess what: I can magically do it myself now. The LLM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring jr dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.

HarHarVeryFunny

> But if I was a junior I'd be very worried about the longevity I can expect as a dev. In many or most cases it's already easier to assign work to an LLM than to handhold a human through it.

This sounds kind of logical, but really isn't.

In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and to learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and a mentor, and this is part of the learning process - something DESIRABLE since it leads to the developer getting better.

In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.

Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a Groundhog Day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future become an independent developer that tasks can indeed be assigned to.

enraged_camel

>> e.g. coming back to you and asking for help

Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.

lupire

Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human is that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interrupt questions for you about easy things.

luxuryballs

It just makes you more powerful, not less. When we got rid of rooms full of typewriters, it was because we became more productive, not less.

walt_grata

LLMs vs humans:

Handholding the human pays off in the long run more than handholding the LLM, which requires more handholding anyway.

Claude doesn't get better as I explain concepts to it the same way a jr engineer does.

cjbgkagh

I had hired 3 junior/mid-level devs and paid them to do nothing but study to improve their skills; it was my investment in their future, and I had a big project on the horizon that I needed help with. After 6 months I let them go; the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then, LLMs have completely surpassed them. I think it's reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to do the best I can while I can.

sebasvisser

Maybe see it less as a junior and a replacement for humans, and more as a tool for you! A tool so you can now do yourself the stuff you used to delegate/dump to a junior.

lupire

Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.

xtiansimon

> “…exploiting our employers' lack of information…”

I agree in the sense that those of us who work in for-profit businesses have benefited from employers' willingness to spend on dev budgets (salaries included) without having to spend their own _time_ becoming increasingly involved in the work. As "AI" develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (i.e. out of the class of educated programmers to, I dunno, philosophy majors), then you're in trouble.

singpolyma3

If one is a junior, though, the goal is to become a senior. Not to remain a junior.

solids

Yes, but the barrier to becoming a senior is what's currently in dispute.

vrighter

Provided the senior dev takes time out to review that slop.

JeremyNT

> And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?

It's not to help make developers more efficient with code assistants.

Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.

If you can't see this coming you lack both imagination and historical perspective.

Five years ago Claude Code would have been essentially unimaginable. Consider this.

So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.

allturtles

I agree with all this, except there is no plan B. What could plan B possibly be when white-collar work collapses? You can go into a trade, but who will be hiring the tradespeople?

Gagarin1917

The companies who now have piles of cash because they eliminated a huge chunk of labor will spend far more on new projects, many of which will require tradesmen.

Economic waves never hit one sector and stop. The waves continue across the entire economy. You can't think "companies will get rid of huge amounts of labor" and then stop asking questions. You need to then ask "what will companies do with decreased labor costs?" and "what could that investment look like, and who will they need to hire to fulfill it?" and then "what will those workers do after their demand increases?" And so on.

aishsh

I think it’s much more likely they’ll be used for mass surveillance purposes. The tech is already there, they just need the compute (and a lot of it).

Most of the economy is making things that aren't really needed. Why bother keeping that afloat when it's 90% trinkets for the proles? Once they've got the infra to ensure compliance, why bother with all the fake work, which is the real opium of the masses?

csmantle

> programmers should be some of the most worry-free individuals on this planet: the job is easy, well-paid, has few health drawbacks if you have a proper setup, and it's relatively easy to find a new job when you need one

Not where I live, though. Competition is fierce, both in industry and academia: most posts are saturated, and most employees face "HR optimization" in their late 30s. Not to mention working overtime, and its physical consequences.

embedding-shape

Again, compare this to other professions instead of looking at it in isolation, and you'll see why you're still having (or will have; it seems you're still a student) a much more pleasant life than others.

tdeck

This is completely irrelevant. The point is that the profession is being devalued, i.e. losing value relative to where it was. If, for example, the US dollar loses value, it's not a "counterargument" to point out that it's still much more valuable than the Zimbabwe dollar.

ramon156

Do other professions expect you to work during personal time? At least blue-collar people are done when they get told they're done.

I get your viewpoint though; physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now it's the default.

MattRix

This “compare it to other professions” thing doesn’t really work when those other professions are not the one you actually do. The idea that someone should never be miserable in their job because other more miserable jobs exist is not realistic.

Mashimo

> What exactly is being devalued in a profession that seems to be continuously growing

A lot of newly skilled job applicants can't find anything in the job market right now.

DebtDeflation

Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.

There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.

embedding-shape

Based on conversations with peers over the last ~3 years, some of whom retrained to become programmers, this doesn't seem to be as absolute as you paint it.

But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe, and Asia in these last years don't seem to have had more trouble than I had when I got started. Yes, it requires work, just like it did before.

DJBunnies

Nah it's pretty bad, but congrats on being an outlier.

raincole

Because tech corps overhired[0] when the interest rate was low.

Even after the layoffs, most big tech corps still have more employees today than they did in 2020.

The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than by "lowering interest rates to near zero and increasing government spending." That just kicks the can and snowballs the problem into the next four years.

[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...

IAmBroom

I think it was more sandbagging than snowballing. The pain was spread out, and mostly delayed, which kept the economy moving despite everything.

Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half of business is privately-owned small businesses, and at the local level, forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.

Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.

But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.

Hendrikto

It seems like most companies are just using AI as a convenient cover for layoffs. If you say: “We enormously over-hired and have to do layoffs.”, your stock tanks. If you instead say that you are laying off the same 20k employees ‘because AI’, your stock pumps for no reason. It’s just framing.

phkahler

>> A lot of newly skilled job applicants can't find anything in the job market right now.

That is not unique to programming or tech generally. The overall US job market is kind of shit right now.

spicyusername

100% my experience as well.

Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self-reinforcing negative comment sections and blog posts rather than in the real world, which gives them a distorted view.

My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.

TrackerFF

I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.

But one emerging reality for everyone should be that businesses are swallowing the AI hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps at record pace, and juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow.

Aurornis

> Non-coders are churning out small apps at record pace, and juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

Honestly I think you’re swallowing some of the hype here.

I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.

The non-coders-producing-apps meme is all over social media, but the real-world results aren't there. All over Twitter there were "build in public" indie non-tech developers using LLMs to write their apps, and the hype didn't match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into trouble not breaking everything on update, or managing the software lifecycle.

The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.

I think LLMs are helpful, and anyone senior who isn't learning how to use them to their advantage (which doesn't mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.

jvanderbot

It's not just "juniors". It's people who should know better turning out LLM junk outside their actual experience areas because "They are experienced enough to use LLMs".

There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems anywhere near real-world production size.

davidmurdoch

This just happened to me this week.

I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.

AI just can't cope with this yet. So my team has been told that we are too slow.

Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allowlisting the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they recently removed us as code owners for many files).

BarryMilo

As was foretold since the beginning, AI use is breaking security wantonly.

rho4

Ouch, so painful to read.

syllogism

I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.

It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't, you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.

The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.

LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team, just join a different team; it's really better for everyone.

arscan

> It's really easy to use LLMs to shift work onto other people.

This is my biggest gripe with LLM use in practice.

fileeditview

The era of software mass production has begun, with many "devs" just being workers on a production line, pushing buttons, repeating the same task over and over.

The resulting products, however, do not compare in quality to the output of other industries' mass-production lines. I wonder how long it will take until this all comes crashing down. Software mostly already is not a high-quality product... with Claude & co it just gets worse.

edit: sentence fixed.

afro88

I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went offshore and mass production went into overdrive. I remember my parents complaining about how low-quality a lot of mass-produced things were. Yet for decades most of what we buy has been mass-produced, comparatively low-quality goods. We got used to it; the benefits outweighed the negatives. What we thought mattered didn't, in the face of a lot of previously unaffordable goods now broadly available and affordable.

You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass-produced with AI, and even produced in realtime on the fly (in 5 years, maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter, because of the benefits of rapidly produced software.

We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.

fileeditview

While a general "crashing down" probably will not happen, I can imagine some differences from other mass-produced goods.

Most of our private data lives in clouds now, and there are already regular security nightmares of stolen passwords, photos, etc. I fear that these incidents will accumulate with more and more AI-generated code that is most likely not reviewed, or reviewed by another AI.

Also, regardless of AI, I am more and more skipping cheap products in general and instead buying higher-quality things. This way I buy less, but what I buy (hopefully) doesn't break after a few years (or months) of use.

I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.

I am not convinced by the rosy future of instant AI-generated software, but the future will reveal what is to come.

kiba

Poor quality is not synonymous with mass production. It's just cheap crap made with little care.

lxgr

> The era of software mass production has begun.

We've been in that era for at least two decades now. We just only now invented the steam engine.

> I wonder how long it takes until this comes all crashing down.

At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.

knollimar

There's a huge difference between possible and likely.

Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that only lets them through when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.

pacifika

Yeah it’s interesting to see if blaming LLMs becomes as acceptable as “caused by a technical fault” to deflect responsibility from what is a programmer’s output.

Perhaps that’s what lead to a decline in accountability and quality.

goldeneas

> Bad engineering is possible with and without LLMs

That's obvious. It's a matter of which makes it more likely

carlosjobim

Why didn't programmers think of stepping down from their ivory towers and starting to make small apps which solve small problems? Apps that people and businesses are very happy to pay for?

But no! Programmers seem to only like working on giant-scale projects, which are only of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.

There's exactly one good invoicing app I've found which is good for freelancers and small businesses, while the number of potential customers is in the tens of millions. Why aren't there at least 10 good competitors?

My impression is that programmers consider it below their dignity to work on simple software that solves real problems and is great for its niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.

Instead of earning a very good living by making boutique software for paying users.

fileeditview

I don't think programmers are the issue here. What you describe sounds to me more like typical product management in a company: stuff features into the thing until it bursts with bugs and is barely maintainable.

I would love to do something like what you describe: build a simple but solid and very specialized solution. However, I am not sure there is demand, or that I have the right ideas for what to do.

You mention invoicing and I think: there must be hundreds of apps for what you describe, but maybe I am wrong. What is the one good app you mention? I am curious now :)

atleastoptimal

Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.

SpicyLemonZest

> If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow.

It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.

Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.

AndrewKemendo

> If your org is blindly data/metric driven

Are there for-profit companies (not non-profits, research institutes, etc.) that are not metric driven?

intothemild

Most early-stage startups I've been in weren't metric driven. It's impossible, when everyone is just working as hard as they can to get it built, to suddenly slow down and start measuring everyone's output.

It's not until later, when it's grown to a larger size, that you have the resources to be metric driven.

layer8

“Blindly” is the operative word here.

zwnow

> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.

I am actually less productive when using LLMs, because now I have to read another entity's code and judge whether it fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors, or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest and eat cold canned ravioli for the rest of my life, because I sure don't want to work in a world where I am forced to use dystopian big tech machines I can't look into.

Aurornis

> I am actually less productive when using LLMs, because now I have to read another entity's code and judge whether it fits my current business problem or not.

You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.

I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.

There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.

> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest and eat cold canned ravioli for the rest of my life, because I sure don't want to work in a world where I am forced to use dystopian big tech machines I can't look into.

Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.

bgwalter

Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite.

zwnow

> But you have to actually learn how to use them.

This probably is the issue for me: I am simply not willing to do so. To me the whole AI thing is extremely dystopian, so even on a professional level I feel repulsed by it.

We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.

shlip

> AI systems exist to reinforce and strengthen existing structures of power and violence.

Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim, now that bosses/stakeholders have seen the light.

[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...

Glemkloksdjf

If you are a software engineer, you can leverage AI to write code a lot better than anyone else can.

The complexity of good code is still complicated.

Which means: 1. if software development is really solved, everyone else also gets a huge problem (CEOs, CTOs, accountants, designers, etc.), so we are at the back of the AI doomsday line.

And 2. it allows YOU to leverage AI a lot better, which can enable you to create your own product.

In my startup, we leverage AI, and we are not worried that another company will just do the same thing, because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.

countWSS

Sounds like the Manna control system: https://marshallbrain.com/manna

flir

Now apply that thinking to computers. Or levers.

More than once, I've seen the argument that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they'd remained manual. I'm not sure I buy it, but computation undoubtedly shapes society.

The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?

(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).

prmph

Tools are not value neutral in any way.

In my third-world country, motorbikes, scooters, etc. have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for everyone, but particularly for themselves. They keep dying by the hundreds per month, not just due to the fact that they choose to ride them at all, but due to how they ride them: on busy high-speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; a motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.

One would think, given the clear evidence for how dangerous these things are: why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in the courier business, for being able to work much further from home, etc.

But here is the thing about (2): a motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison with what is (in their mind) a minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car requires far less discipline than doing so on a motorbike.

So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.

flir

> A motorbike wants to be ridden that way

That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?

I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").

idle_zealot

> The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.

ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.

And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and create true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.

thwarted

"Automating agency" it's such a good way to describe what's happening. In the context of your last paragraph, if they succeed in creating AGI, they won't be able to exercise control over a robot army, because the robot army will have as much agency as humans do. So they will have created the very situation they currently find themselves in. Sans an economy.

amrocha

It’s a good thing that there’s centuries of philosophy on that subject and the general consensus is that no, tools are not “neutral” and do shape the systems they interact with, sometimes against the will of those wielding these tools.

See the nuclear bomb for an example.

flir

I'm actually thinking of Marshall McLuhan. Maybe you're right, and tools aren't neutral. Does this mean that computation necessitates inequality? That's an uncomfortable conclusion for people who identify as hackers.

Aeolun

How is that different from making manual computation obsolete with the help of excel?

fennecfoxy

Lmao Cory Doctorow. Desperately trying to coin another catchphrase again.

lynx97

I am surprised (and also kind of not) to see this kind of tech hate on HN of all places.

Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

Progress is progress, and has always changed things. It's funny that, apparently, "progressive" left-leaning people are actually so conservative at the core.

So far, in my book, the advancements of the last 100 or more years have almost always brought us things I wouldn't want to miss these days. But maybe some people would be happier going back to the dark ages...

seu

> Progress is progress, and has always changed things. It's funny that, apparently, "progressive" left-leaning people are actually so conservative at the core.

I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.

Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.

> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...

An amazing depiction of extremes as the only possible outcomes: either take everything that is thrown at us, or go back to a supposed "dark age" (which, BTW, is nowadays understood not to have been that "dark" at all). This, again, doesn't help us have a proper discussion about the effects of technology and how it comes to be the way it is.

Glemkloksdjf

The dark ages were dark: no human rights, no women's rights, hunger, thirst, hardly any progress, hard lives.

So are you able, realistically, to stop progress across a whole planet? Tbh, getting alignment across the planet to slow down or stop AI would be the equivalent of stopping capitalism and actually building a holistic planet for us.

I think AI will force the hand of capitalism, but I don't think we will be able to create a Star Trek universe without being forced into it.

lm28469

> Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good, hence technology = good". Vaccines and running water are tech; megacorp-owned "AI" being weaponised by surveillance-obsessed governments is also tech.

If you don't push back on "tech" you're just blindly accepting whatever someone else decided for you. Keep in mind that the benefits of tech since the 80s have mostly been pocketed by the top 10%; the plebs still work as much, retire as old, &c., despite what politicians and technophiles have been saying.

andrepd

"You don't like $instance_of_X? You must want to get rid of all $X" has got to be one of the most intellectually lazy things you could say.

You don't like leaded gasoline? You must want us to walk everywhere. Come on...

lynx97

A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer and now advocate against using hammers altogether. Yes, tech has always had two sides. Our "job" as humans is to pick the good parts and avoid the bad. Nothing new, nothing exceptional.

pneumic

For me LLMs have been an incredible relief when it comes to software planning—quickly navigating the paralyzing quantity of choices when it comes to infrastructure, deployment, architecture and so on. Of course, this only highlights how crushingly complex it all is now, and I get a sinking feeling that instead of people solving technical complexity where it needs solving, these tools will be an abstraction layer over ever-rolling balls of mud that no one bothers to clean up anymore.

lxgr

> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?

Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us

What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?

Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.

Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.

If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...

ceejayoz

> Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

Who says we haven't tried it out?

lxgr

Seems like various hackers came to various different conclusions from trying them out, then. Why feign surprise about this?

ceejayoz

Why not?

I was surprised how hard many here fell for the NFT thing, too.

Wowfunhappy

> What followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.

...there just has to be some kind of line here, right? Since iOS 17 the iPhone has used a language model for text prediction. Is it okay to leave that turned on?

A few weeks ago, a colleague and I were working on some written documents. The colleague mentioned "by the way, this may be a non-issue, but I just want to name that we shouldn't use AI for this." I felt embarrassed because—while I would generally never use AI for writing prose—I realized I'd technically done so already in this case. I had a sentence of the form "this work ___ a strong conceptual understanding" and I couldn't quite decide what word fit in the blank, so I'd asked Claude for a list of options. I told the colleague about this just to confirm it was okay—and of course she said that was fine, we just shouldn't use AI for writing whole passages.

markbnj

I think the dangers that LLMs pose to the ability of engineers to earn a living are overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the 80's I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turnaround time on that answer was a week or two. The other day I asked Claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.

I'm sure GitHub has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able, daily, to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to reach the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business, worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
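For the curious, the consuming side of that answer looks roughly like this. A minimal sketch of the IAM role trust policy that matches on the token's claims (the account ID, org, and repo names here are placeholders, and real setups often match more claims than these):

  import json

  # Minimal sketch of an AWS IAM role trust policy for GitHub's OIDC provider.
  # The account ID, org, and repo are placeholders. GitHub encodes the workflow
  # context (branch, tag, environment, pull_request, ...) in the "sub" claim.
  trust_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {
              "Federated": "arn:aws:iam::123456789012:oidc-provider/"
                           "token.actions.githubusercontent.com"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
              "StringEquals": {
                  "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                  # Only workflows on main in this repo may assume the role:
                  "token.actions.githubusercontent.com:sub":
                      "repo:my-org/my-repo:ref:refs/heads/main"
              }
          }
      }]
  }

  print(json.dumps(trust_policy, indent=2))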

skydhash

I’m glad you listed the manual example. Usually when people are solving problems, they’re not asking the kind of super-targeted question in your second example. Instead it’s an exploration: you read and target the next concept you need to understand. And if you do have a specific question, you want the surrounding context, because you’ll likely have more questions after the first.

So what people do is collect documentation. Give it a glance (or at least the TOC), then start the process of understanding the concepts. Sure, you can ask for the escape code that sets a terminal title, but will the answer say that not all terminals support that code? Or that piping does not strip out escape codes? That’s the kind of gotcha you can learn from proper manuals.
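To make that example concrete, here is a minimal sketch (mine, in Python; OSC 0 is the standard xterm-style sequence, though, as the manuals warn, support varies by terminal):

  import sys

  TITLE = "build: running tests"

  # OSC 0 (ESC ] 0 ; title BEL) sets the title in xterm-compatible terminals.
  # The manual-grade gotchas: not every terminal honors this sequence, and if
  # stdout is a pipe the raw escape bytes pass straight through to the next
  # program, so only emit it when actually talking to a terminal.
  if sys.stdout.isatty():
      sys.stdout.write(f"\033]0;{TITLE}\007")
      sys.stdout.flush()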

hvb2

> So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder.

There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.

All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I can easily imagine that, at this point, good ideas go without funding simply because they're not AI.

abbadadda

I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media et al.

I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.

At any rate, thank you for the thoughtful & eloquent words of caution.

scld

Doesn't Python weaken your thinking about how computers actually work?

rho4

And then there is the moderate position: don't be the person refusing to use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if the improvements are useful for specific tasks. And carry on with your life.

rsynnott

Don't be the person refusing to use the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if the improvements are useful for specific tasks.

Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.

(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)

aurareturn

  Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and a description of what her small business does.

I fed all of it into Vercel v0 and out came a professional-looking website based on the logo design and the business segment. It was mobile-friendly too. I took the website, fed it to ChatGPT, and asked it to improve the marketing copy. I fed the suggestions back to v0 to make the changes.

My relative was extremely happy with the result.

It took me about 10 minutes to do all of this.

In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.

I even showed my non-tech relative how to use v0. Since all changes requested of v0 were in English, she had no trouble learning how to use it in one minute.

rsynnott

Okay, I mean if that’s the sort of thing you regularly have to do, cool, it’s useful for that, maybe, I suppose? To be clear I’m not saying LLMs are totally useless.

cons0le

I detest LLMs, but I want to point out that Segway tech became the basis for EUCs, which are based: https://youtu.be/Ze6HRKt3bCA?t=1117

These things are wicked, and unlike some new garbage JavaScript framework, they're revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.

https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...

catapart

lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.

But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.

Spivak

I mean, sure, but none of these even claimed to help you do things you were already doing. If your job is writing code, none of these helped you do that.

That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.

catapart

Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.

In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on. 3DTVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.

skydhash

If a calculator gives me 5 when I do 2+2, I throw it away.

If a PC crashes when I use more than 20% of its soldered memory, I throw it away.

If a mobile phone refuses to connect to a cellular tower, I get another one.

What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

tokioyoyo

You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and use it to improve your workflow is up to you. But exaggerating the problem to this degree is kinda funny.

crazygringo

Honestly, LLMs are about as reliable as the rest of my tools are.

Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.

My regular software breaks constantly. All the time. It's a rare day where everything works as it should.

LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.

candiddevmike

Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.

fennecfoxy

Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.

There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.

Imagine refusing to drive a car in the 60s because they hadn't reached 1,000 bhp yet. Ahaha.

skydhash

> Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.

That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s, but still) and the car was already an established mode of transportation. 60s cars are much closer to today’s cars than 2000s computers are to current ones.

embedding-shape

> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.

But what you're really talking about, I think, is "correctness" of the actual text in the response. And if you're expecting that to be 100% accurate every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.

Where LLMs are useful is where there is no 100% "right or wrong" answer: think summarization, categorization, tagging, and so on.
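For instance, a categorization call might look like this (a minimal sketch assuming the openai 1.x Python SDK; the model name and label set are illustrative):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def categorize(ticket: str) -> str:
      # Constrain the output to a fixed label set; there is no single
      # provably "correct" answer here, which is why an LLM is a fit.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative; any chat model works
          messages=[{
              "role": "user",
              "content": "Label this support ticket as one of: billing, bug, "
                         f"feature-request, other. Reply with the label only.\n\n{ticket}",
          }],
      )
      return resp.choices[0].message.content.strip()

  print(categorize("I was charged twice for my subscription this month."))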

skydhash

I’m not a native English speaker, so I checked the definition of reliability:

  the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter is reliable when it catches the bad patterns I want it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do.

So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is a formal language, which means correctness is the end goal. A program may be valid and incorrect at the same time.

It’s very easy to write valid code; you only need the grammar of the language. Writing correct code is another matter, and the only one that is relevant. No one hires people for knowing a language’s grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people who can write code with a minimal number of bugs and who are able to eliminate those bugs when they surface).
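A toy illustration of that gap, in Python (my own example, not anyone's production code):

  def average(xs):
      # Perfectly valid Python: it parses, runs, and returns a number.
      # It is also incorrect: the denominator is off by one.
      return sum(xs) / (len(xs) + 1)

  print(average([2, 2]))  # prints 1.333..., not the correct 2.0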

empath75

The biggest change in my career was when I got promoted to be a Linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and Python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running Perl scripts, or I could learn CloudFormation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it unless absolutely forced to. I wrote a ton of terrible CloudFormation and Chef cookbooks, got promoted twice, and my salary went from $50,000 a year to $150,000 a year in 3 years after I took a job elsewhere. AFAIK, most of the people on that team got laid off when that whole team was eliminated a few years after I left.

RicoElectrico

You're preaching to the wrong crowd I guess. Many people here think in extremes.

0xEF

I was once in your camp, thinking there was some sort of middle ground to be had with the emergence of generative AI and its potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day thought the same.

The problem is that, historically speaking, you have two choices:

1. Resist as long as you can, risking being labeled a Luddite or whatever.

2. Acquiesce.

Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.

My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.

Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc. They're all quite fascinating, especially when viewed as I view them: a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness, because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.

To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.

At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.

rbongers

I view current LLMs as new kinds of search engines. Ones where you have to re-verify their responses, but on the other hand can answer long and vague queries.

I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.

I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.

zajio1am

> We programmers are currently living through the devaluation of our craft.

Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making something plentiful.

When circumstances change to make something less scarce, one cannot expect to keep getting the same value for it on the strength of past valuations. That is just rent-seeking.