The coming knowledge-work supply-chain crisis

roughly

TFA is right to point out the bottleneck problem for reviewing content - there are a couple of things that compound to make this worse than it should be -

The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces, and you need to review everything with the kind of care with which you review intern contributions.

The second is that the LLMs don’t learn once they’re done training, which means I could spend the rest of my life tutoring Claude and it’ll still make the exact same mistakes - so I’ll never get a return on that time and hypervigilance like I would with an actual junior engineer.

That problem leads to the final problem, which is that you need a senior engineer to vet the LLM’s code, but you don’t get to be a senior engineer without being the kind of junior engineer that the LLMs are replacing - there’s no way up that ladder except to climb it yourself.

All of this may change in the next few years or the next iteration, but the systems as they are today are a tantalizing glimpse at an interesting future, not the actual present you can build on.

ryandrake

> The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces

This, to me, is the critical and fatal flaw that prevents me from using or even being excited about LLMs: That they can be randomly, nondeterministically and confidently wrong, and there is no way to know without manually reviewing every output.

Traditional computer systems whose outputs relied on probability solved this by including a confidence value next to any output. Do any LLMs do this? If not, why can't they? If they could, then the user would just need to pick a threshold that suits their peace of mind and review any outputs that came back below that threshold.

exe34

> Do any LLMs do this? If not, why can't they? If they could, then the user would just need to pick a threshold that suits their peace of mind and review any outputs that came back below that threshold.

That's not how they work - they don't have internal models where they are sort of confident that this is a good answer. They have internal models where they are sort of confident that these tokens look like they were human-generated in that order. So they can be very confident and still wrong. Knowing that confidence level (log p) would not help you assess whether the answer is correct.

There are probabilistic models that try to model a posterior distribution for the output - but that has to be trained in, with labelled samples. It's not clear how to do that affordably for LLMs at the kind of scale they require.

You could consider letting the model run code or try things out in simulations and using those as samples for further tuning, but at the moment this might still lead it to forget something else or make some other arbitrary, dumb mistake that it didn't make before the fine-tuning.
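
To make the distinction concrete, here's a minimal sketch (assuming the Hugging Face transformers and torch packages and the small gpt2 checkpoint) of what the per-token "confidence" actually is: the probability of the next token given the preceding text - a fluency score, not a truth score. A model can put very high probability on a continuation that is fluent and wrong.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of Australia is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

    # Distribution over the *next token*: "what text looks likely here",
    # not "which claim is true".
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")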

bee_rider

What would those probabilities mean in the context of these modern LLMs? They are basically “try to continue the phrase like a human would” bots. I imagine the question of “how good of an approximation is this to something a human might write” could possibly be answerable. But humans often write things which are false.

The entire universe of information consists of human writing, as far as the training process is concerned. Fictional stories and historical documents are equally “true” in that sense, right?

Hmm, maybe somehow one could score outputs based on whether another contradictory output could be written? But it will have to be a little clever. Maybe somehow rank them by how specific they are? Like, a pair of reasonable contradictory sentences that can be written about the history-book setting indicates some controversy. A pair of contradictory sentences, one about the history book and one about Narnia, are each equally real to the training set, but the fact that they contradict one another is not so interesting.

benterix

> But humans often write things which are false.

LLMs do it much more often. One of the many reasons in the coding area is the fact that they're trained on both broken and working code. They can propose as a solution a piece of code taken verbatim from a "why is this code not working" SO question.

Google decided to approach this major problem by trying to run the code before giving the answer. Gemini doesn't always succeed, as it might not have all the needed packages installed, for example, but at least it tries, and when it detects bullshit, it tries to correct that.

sepositus

> But humans often write things which are false.

Not to mention, humans say things that make sense for humans to say and not a machine. For example, one recent case I saw was where the LLM hallucinated having a Macbook available that it was using to answer a question. In the context of a human, it was a totally viable response, but was total nonsense coming from an LLM.

yojo

LLMs already have a confidence score when printing the next token. When confidence drops, that can indicate that your session has strayed outside the training data.

Re: contradictory things: as LLMs digest increasingly large corpora, they presumably distill some kind of consensus truth out of the word soup. A few falsehoods aren’t going to lead them astray, unless they happen to pertain to a subject that is otherwise poorly represented in the training data.

nine_k

> What would those probabilities mean in the context of these modern LLMs?

They would mean understanding the sources of the information they use for inference, and the certainty of steps they make. Consider:

- "This conclusion is supported by 7 widely cited peer-reviewed papers [list follows]" vs "I don't have a good answer, but consider this idea of mine".

- "This crucial conclusion follows strongly from the principle of the excluded middle; its only logical alternative has been just proved false" vs "This conclusion seems a bit more probable in the light of [...], even though its alternatives remain a possibility".

I suspect that following a steep gradient in some key layers or dimensions may mean more certainty, while following an almost-flat gradient may mean the opposite. This likely can be monitored by the inference process, and integrated into a confidence rating somehow.

earnestinger

Interesting point.

You got me thinking (less about LLMs, more about humans) that adults do hold many contradictory truths - some require nuance, some require a completely different mental compartment.

Now I feel more flexible about what truth is; as a teen and a child I was more stubborn, more rigid.

giantrobot

> That they can be randomly, nondeterministically and confidently wrong, and there is no way to know without manually reviewing every output.

This is my exact same issue with LLMs, and it's routinely ignored by LLM evangelists/hypesters. It's not necessarily about being wrong; it's the non-deterministic nature of the errors. They're not only non-deterministic but unevenly distributed. So you can't predict errors, and you need expertise to review all the generated content looking for errors.

There's also not necessarily an obvious mapping between input tokens and an output since the output depends on the whole context window. An LLM might never tell you to put glue on pizza because your context window has some set of tokens that will exclude that output while it will tell me to do so because my context window doesn't. So there's not even necessarily determinism or consistency between sessions/users.

I understand the existence of Gell-Mann amnesia so when I see an LLM give confident but subtly wrong answers about a Python library I don't then assume I won't also get confident yet subtly wrong answers about the Parisian Metro or elephants.

furyofantares

This is a nitpick because I think your complaints are all totally valid, except that I think blaming non-determinism isn't quite right. The models are in fact deterministic. But that's just a technicality: in a practical sense they are non-deterministic, in that a human can't determine what they'll produce without running them, and even then the output can be sensitive to changes in the context window like you said, so even after running it once you don't know you'll get a similar output from similar inputs.

I only post this because I find it kind of interesting; I balked at blaming non-determinism because it technically isn't, but came to conclude that practically speaking that's the right thing to blame, although maybe there's a better word that I don't know.
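
A toy illustration of that distinction (everything below is a made-up stand-in, not a real model): the distribution over next tokens is a deterministic function of the context, but the usual decoding step samples from it, and a one-token change to the context shifts the whole distribution.

    import hashlib
    import random

    def next_token_distribution(context):
        # Stand-in for a forward pass: same context in, same probabilities out.
        seed = int(hashlib.sha256(" ".join(context).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        weights = [rng.random() for _ in range(5)]
        total = sum(weights)
        return {f"tok{i}": w / total for i, w in enumerate(weights)}

    def decode_greedy(context):
        dist = next_token_distribution(context)
        return max(dist, key=dist.get)                          # deterministic

    def decode_sample(context, rng):
        dist = next_token_distribution(context)
        return rng.choices(list(dist), list(dist.values()))[0]  # stochastic

    ctx = ["please", "review", "this", "merge", "request"]
    print(decode_greedy(ctx))                   # identical on every run
    print(decode_greedy(ctx + ["today"]))       # may differ: the context changed
    print(decode_sample(ctx, random.Random()))  # varies run to run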

gopher_space

The prompts we’re using seem like they’d generate the same forced confidence from a junior. If everything’s a top-down order, and your personal identity is on the line if I’m not “happy” with the results, then you’re going to tell me what I want to hear.

Aurornis

> This, to me, is the critical and fatal flaw that prevents me from using or even being excited about LLMs: That they can be randomly, nondeterministically and confidently wrong, and there is no way to know without manually reviewing every output.

Sounds a lot like most engineers I’ve ever worked with.

There are a lot of people utilizing LLMs wisely because they know and embrace this. Reviewing and understanding their output has always been the game. The whole “vibe coding” trend where you send the LLM off to do something and hope for the best will teach anyone this lesson very quickly if they try it.

agentultra

Most engineers you worked with probably cared about getting it right and improving their skills.

ako

Instead of relying only on reviews, rely on tests. You can have an LLM generate tests first (yes, they need reviewing) and then have the LLM generate code until all the tests pass. This also helps with the non-determinism problem, since the result either works or it doesn't.
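
A minimal sketch of that loop, with a hypothetical slugify function as the target (the name, module, and behaviors are made-up examples): the human-reviewed tests are the spec, and the LLM regenerates the implementation until pytest is green.

    # test_slugify.py -- reviewed by a human; this file is the contract.
    from slugify_impl import slugify   # hypothetical module the LLM must produce

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace():
        assert slugify("  too   many  spaces ") == "too-many-spaces"

    # Loop: run pytest, feed failures back to the LLM, regenerate slugify_impl,
    # repeat until the suite passes. The tests, not the generated code, carry
    # most of the review burden.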

djoldman

This. Tests are important and they're about to become overwhelmingly important.

The ability to formalize and specify the desired functionality and output will become the essential job of the programmer.

ToucanLoucan

> Do any LLMs do this? If not, why can't they?

Because they aren't knowledgeable. The marketing and at-first-blush impressions that LLMs leave as some kind of actual being, no matter how limited, mask this fact and it's the most frustrating thing about trying to evaluate this tech as useful or not.

To make an incredibly complex topic somewhat simple: LLMs train on a series of materials - in this case we'll talk about words. The model learns that "it turns out," "in the case of", "however, there is" are all words that naturally follow one another in writing, but it has no clue why one would choose one over the other beyond the other words which form the contexts in which those word sequences appear. This process is repeated billions of times as it analyzes the structure of billions of written words, until it arrives at a massive statistical model of how likely it is that every word will be followed by every other word or punctuation mark.

Having all that data available does mean an LLM can generate... words. Words that are pretty consistently spelled and arranged correctly in a way that reflects the language they belong to. And, thanks to the documents it trained on, it gains what you could, if you're feeling generous, call a "base of knowledge" on a variety of subjects, in that by the same statistical model it has "learned" that "measure twice, cut once" is said often enough that it's likely good advice. But again, it doesn't know why that is, which would be: when you're building something, measuring it, marking it, then measuring it a second or even third time before you cut optimizes your cuts and avoids wasting materials, because the cut is an operation that cannot be reversed.

However, that knowledge has a HARD limit in terms of what was understood within its training data. For example, way back, a GPT model recommended using Elmer's glue to keep pizza toppings attached when making a pizza. No sane person would suggest this, because glue... isn't food. But the LLM doesn't understand that; it takes the question "how do I keep toppings on pizza?", and it says, well, a ton of things I read said you should use glue to stick things together, and ships that answer out.

This is why I firmly believe LLMs and true AI are just... not the same thing, at all, and I'm annoyed that we now call LLMs AI and AI AGI, because in my mind, LLMs do not demonstrate any intelligence at all.
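
A toy bigram model makes the "which word follows which" idea above concrete. This is a deliberately crude stand-in (real LLMs are neural networks over subword tokens with attention, not count tables), but it shows the same property: locally fluent output with no notion of why the words go together.

    import random
    from collections import Counter, defaultdict

    corpus = ("measure twice cut once . measure the board twice . "
              "cut the board once").split()

    # Count which word follows which -- this table is the whole "model".
    follows = defaultdict(Counter)
    for current_word, following_word in zip(corpus, corpus[1:]):
        follows[current_word][following_word] += 1

    def next_word(word):
        counts = follows[word]
        return random.choices(list(counts), list(counts.values()))[0]

    word, output = "measure", ["measure"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))   # locally plausible, with no idea what a saw is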

skydhash

LLMs are great machine learning tech. But what exactly are they learning? No one knows, because we're just feeding them the internet (or a good part of it) and hoping something good comes out the other end. But so far, it just shows that they only learn the closeness of one unit (token, pixel block, ...) to another, with no idea why they are close in the first place.

ryoshu

The glue on pizza thing was a bit more pernicious because of how the model came to that conclusion: SERPs. Google's LLM pulled the top result for that query from Reddit and didn't understand that the Reddit post was a joke. It took it as the most relevant thing and hilarity ensued.

In that case the error was obvious, but these things become "dangerous" for that sort of use case when end users trust the "AI result" as the "truth".

Terr_

> The marketing and at-first-blush impressions that LLMs leave as some kind of actual being, no matter how limited, mask this fact

I like to highlight the fundamental difference between fictional qualities of a fictional character versus actual qualities of an author. I might make a program that generates a story about Santa Claus, but that doesn't mean Santa Claus is real or that I myself have a boundless capacity to care for all the children in the world.

Many consumers are misled into thinking they are conversing with an "actual being", rather than contributing "then the user said" lines to a hidden theater script that has a helpful-computer character in it.

foobarian

This sounds an awful lot like the old Markov chains we used to write for fun in school. Is the difference really just scale? There has got to be more to it.

smokel

This explanation is only superficially correct, and there is more to it than simply predicting the next word.

It is the way in which the prediction works that leads to some form of intelligence.


wjholden

The confidence value is a good idea. I just saw a tech demo from F5 that estimated the probability that a prompt might be malicious. The administrator parameterized the tool with a probability threshold, and the logs capture that probability. It could be useful for future generative AI products to include metadata about uncertainty in their outputs.

palmotea

> The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces, and you need to review everything with the kind of care with which you review intern contributions.

Also, people aren't meant to be hyper-vigilant in this way.

Which is a big contradiction in the way contemporary AI is sold (LLMs, self-driving cars): they replace a relatively fun active task for humans (coding, driving) with a mind-numbing passive monitoring one that humans are actually terrible at. Is that making our lives better?

pjc50

Yup. This is exactly the same problem as the self-driving car. The tech is not 100% reliable. There are going to be incidents. When an incident happens, who takes the blame and what recourse is available? Does the corp using the AI simply eat the cost?

See also https://www.londonreviewbookshop.co.uk/stock/the-unaccountab...

devnull3

> hypervigilant

If a tech works 80% of the time, then I know that I need to be vigilant and I will review the output. The entire team structure is aware of this. There will be processes to offset the remaining 20%.

The problem is that when the AI becomes > 95% accurate (if at all) then humans will become complacent and the checks and balances will be ineffective.

hnthrow90348765

80% is good enough for like the bottom 1/4th-1/3rd of software projects. That is way better than an offshore parasite company throwing stuff at the wall because they don't care about consistency or quality at all. These projects will bore your average HNer to death rather quickly (if not technically, then politically).

Maybe people here are used to good code bases, so it doesn't make sense to them that 80% is good enough there, but I've seen some bad code bases (that still made money) that would be much easier to work on by not reinventing the wheel and not following patterns that are decades old and that no one uses any more.

roguecoder

I think defining the places where vibe-coded software is safe to use is going to be important.

My list so far is:

  * Runs locally on local data and does not connect to the internet in any way (to avoid most security issues)
  * Generated by users for their own personal use (so it isn't some outside force inflicting bad, broken software on them)
  * Produces output in standard, human-readable formats that can be spot-checked by users (to avoid the cases where the AI fakes the entire program & just produces random answers)

Ferret7446

We are already there. The threshold is much closer to 80% for average people. For average folks, LLMs have rapidly gone from "this is wrong and silly" to "this seems right most of the time so I just trust it when I search for info" in a few years.

philipwhiuk

It is frankly scary seeing novices adopt AI for stuff that you're good at and then hearing about the garbage it's come up with and then realising this problem is everywhere.

roguecoder

Except that we see people in this very thread claiming they shouldn't review code anymore, just the prompts. So however good it is now is enough to be dangerous to users.

overfeed

> That problem leads to the final problem, which is that you need a senior engineer to vet the LLM’s code, but you don’t get to be a senior engineer without being the kind of junior engineer that the LLMs are replacing - there’s no way up that ladder except to climb it yourself

I suspect software will stumble into the strategy deployed by the Big 4 accounting firms and large law firms - have juniors take the first pass and have the changes filter upwards in seniority, with each layer adding comments and suggestions and sending the work down to be corrected, until they are ready to sign off on it.

This will be inefficient and wildly incompatible with agile practice, but that's one possible way for juniors to become mid-level, and eventually seniors, after paying their dues. It absolutely is inefficient in many ways, and is mostly incompatible with the current way of working, as merge-sets have to be considered in a broader context all the time.

cookiengineer

I wanted to add:

The demographic shift over time will eventually lead to degradation of LLM performance, because more content will be of worse quality, and transformers are a concept that loses symbolic inference.

So the assumption that LLMs will increase in performance will only hold for the current generations of software engineers, whereas the next generations will automatically lead to worse LLM performance once they've replaced the demographic of the current seniors.

Additionally, every knowledge resource that led to the current generation's advancements is dying out due to proprietarization.

Courses, wikis, forums, tutorials... they all are now part of the enshittification cycle, which means that in the future they will contain less factual content per actual amount of content - which in return will also contribute to making LLM performance worse.

Add to that the problems that come with such platforms, like the stackoverflow mod strikes or the ongoing reddit moderation crisis, and you got a recipe for Idiocracy.

I decided to archive a copy of all books, courses, wikis and websites that led to my advancements in my career, so I have a backup of it. I encourage everyone to do the same. They might be worth a lot in the future, given how the trend is progressing.

HighGoldstein

> The first is that the LLM outputs are not consistently good or bad - the LLM can put out 9 good MRs before the 10th one has some critical bug or architecture mistake. This means you need to be hypervigilant of everything the LLM produces, and you need to review everything with the kind of care with which you review intern contributions.

This is not a counter-argument, but this is true of any software engineer as well. Maybe for really good engineers it can be 1/100 or 1/1000 instead, but critical mistakes are inevitable.

HelloMcFly

Agreed, and on this forum we tend to focus on the tech/coding aspects. But as a knowledge worker in a different domain, I can also tell you that the same issue is happening for other knowledge areas that are not as auditable without expertise.

While we do see this problem when relying on junior knowledge workers, there seems to be a more implicit trust of LLM outputs vs. junior knowledge workers. Also: senior knowledge workers are also subject to errors, but knowledge work isn't always deterministic.

lubujackson

I used to think this about AI, that it will cause a dearth of junior engineers. But I think it is really going to end up as a new level of abstraction. Aside from very specific bits of code, there is nothing AI does to remove any of the thinking work for me. So now I will sit down, reason through a problem, make a plan and... instead of punching code I write a prompt that punches the code.

At the end of the day, AI can't tell us what to build or why to build it. So we will always need to know what we want to make or what ancillary things we need. LLMs can definitely support that, but knowing ALL the elements and gotchas is crucial.

I don't think that removes the need for juniors, I think it simplifies what they need to know. Don't bother learning the intricacies of the language or optimization tricks or ORM details - the LLM will handle all that. But you certainly will need to know about catching errors and structuring projects and what needs testing, etc. So juniors will not be able to "look under the hood" very well but will come in learning to be a senior dev FIRST and a junior dev optionally.

Not so different from the shift from everyone programming in C++ during the advent of PHP with "that's not really programming" complaints from the neckbeards. Doing this for 20 years and still haven't had to deal with malloc or pointers.

Joker_vD

The C++ compilers at least don't usually miscompile your source code. And when they do, it happens very rarely, mostly in obscure corners of the language, and it's kind of a big deal, and the compiler developers fix it.

Compare to the large langle mangles, which somewhat routinely generate weird and wrong stuff, it's entirely unpredictable what inputs may trip it, it's not even reproducible, and nobody is expected to actually fix that. It just happens, use a second LLM to review the output of the first one or something.

I'd rather have my lower-level abstractions be deterministic in a humanly-legible way. Otherwise in a generation or two we may very well end up being actual sorcerers who look for the right magical incantations to make the machine spirits obey their will.

xg15

The intro sentence to this is quite funny.

> Remember the first time an autocomplete suggestion nailed exactly what you meant to type?

I actually don't, because so far this only happened with trivial phrases or text I had already typed in the past. I do remember however dozens of times where autocorrect wrongly "corrected" the last word I typed, changing an easy to spot typo into a much more subtle semantic error.

hyperbolablabla

Sometimes autocorrect will "correct" perfectly valid words if it deems the correction more appropriate. Ironically while I was typing this message, it changed the word "deems" to "seems" repeatedly. I'm not sure what's changed with their algorithm, but this appears to be far more heavy handed than it used to be.

stefanfisk

If I remember correctly, iOS 18 introduced a "new and improved" ML-based autocorrector.

I have also noticed a SHARP decline in autocorrecting quality.

Izkata

SwiftKey has this really frustrating one where you'll remove the incorrect word and try again, and it reinserts the wrong word plus something additional.

anonzzzies

Not from traditional autocomplete, but I have some LLM 'autocomplete'; because the LLM 'saw' so much code during training, there is that magic where you just have a blinking prompt and suddenly it comes up with exactly what you intended, out of 'thin air'. Then again, it also very often comes up with stuff I will never want. But I mostly remember the former cases.

thechao

I see these sorts of statements from coders who, you know, aren't good programmers in the first place. Here's the secret that I think LLMs are uncovering: I think there's a lot of really shoddy coders out there; coders who could/would never become good programmers, and they are absolutely going to be replaced with LLMs.

I don't know how I feel about that. I suspect it's not going to be great for society. Replacing blue collar workers with robots hasn't been super duper great.

rowanajmarshall

> Replacing blue collar workers with robots hasn't been super duper great.

That's just not true. Tractors, combine harvesters, dishwashers, washing machines, excavators - we've repeatedly revolutionised blue-collar work, made it vastly, extraordinarily more efficient.

vineyardmike

> made it vastly, extraordinarily more efficient.

I'd suspect that this equipment also made the work more dangerous. It also made it more industrial in scale and capital costs, driving "homestead" and individual farmers out of business, replaced by larger and more capitalized corporations.

We went from individual artisans crafting fabrics by hand, to the Industrial Revolution where children lost fingers tending to "extraordinary more efficient" machines that vastly out-produced artisans. This trend has only accelerated, where humans consume and throw out an order of magnitude more clothing than a generation ago.

You can see this trend play out across industrialized jobs - people are less satisfied, there are social implications, and the entire nature of the job (and usually the human's independence) is changed.

The transitions through industrialization have had dramatic societal upheavals. Focusing on the "efficiency" of the changes, ironically, misses the human component of these transitions.

pjmorris

Excerpted from Tony Hoare's 1980 Turing Award speech, 'The Emperor's Old Clothes'...

  "At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."

My interpretation is that whether shifting from delegation to programmers, or to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer the consequences.

timewizard

> Remember the first time an autocomplete suggestion nailed exactly what you meant to type?

No.

> Multiply that by a thousand and aim it at every task you once called “work.”

If you mean "menial labor" then sure. The "work" I do is not at all aided by LLMs.

> but our decision-making tools and rituals remain stuck in the past.

That's because LLMs haven't eliminated or even significantly reduced risk. In fact they've created an entirely new category of risk in "hallucinations."

> we need to rethink the entire production-to-judgment pipeline.

Attempting to do this without accounting for risk or how capital is allocated into processes will lead you into folly.

> We must reimagine knowledge work as a high-velocity decision-making operation rather than a creative production process.

Then you will invent nothing new or novel and will be relegated to scraping by on the overpriced annotated databases of your direct competitors. The walled garden just raised the stakes. I can't believe people see a future in it.

ozim

My observation over the years as a software dev was that velocity is overrated.

Mostly because all kinds of systems are made for humans - even if we as a dev team were able to pump out features, we got pushed back, exactly because users had to be trained, users would have to be migrated, and all kinds of things would have to be documented and accounted for that were tangential to the main goals.

So the bottleneck is a feature, not a bug. I can see how we could optimize away documentation and tangential stuff so it happens automatically, but not the main job, which needs more thought anyway.

charlie0

This is my observation as well, especially in startups. So much spaghetti thrown at walls, and that pressure falls on devs to have higher velocity when it should fall on product, sales, and execs to actually make better decisions.

ebiester

I'm really working hard to figure out how to optimize away documentation, but it's seemingly harder than writing the code. It's easier to generate the code from the documentation than the documentation from the code, reference documentation (like OpenAPI docs) aside.

jaimebuelta

Good iteration process is really important. Just throwing things at the wall faster doesn't help if you don't pause to check which ones stick, or, even worse, you are not even able to know which ones stick.

That's a very human reflective process that requires time.

causal

A few articles like this have hit the front page, and something about them feels really superficial to me, and I'm trying to put my finger on why. Perhaps it's just that it's so myopically focused on day 2 and not on day n. They extrapolate from ways AI can replace humans right now, but lack any calculus which might integrate second or third order effects that such economic changes will incur, and so give the illusion that next year will be business as usual but with AI doing X and humans doing Y.

creesch

Maybe it is the fact that they blatantly paint a picture of AI doing flawless production work where the only "bottleneck" is us puny humans needing to review stuff. It exemplifies this race to the bottom where everything needs to be hyperefficient and time to market needs to be even lower.

Which, once you stop to think about it, is insane. There is a complete lack of asking why. In fact, when you boil it down to its core argument, it isn't even about AI at all. It is effectively the same grumblings from management layers heard for decades now, where they feel (emphasis) that their product development is slowed down by those pesky engineers and other specialists making things too complex, etc. But now it's framed around AI, with unrealistic expectations dialed up.

canadaduane

I appreciate this, but also wonder if we are in the middle of a transformation where some forms of creativity (note: not necessarily engineering) are being "flattened". Everyone can output beautiful pixels, beautiful audio, beautiful token sequences.

Maybe it's like the transformation of local-to-global that traveling musicians felt in the early 1900s: now what they do can be experienced for free, over the radio waves, by anyone with a radio.

YouTube showed us that video needn't be produced only by those with $10M+ budgets. But we still appreciate Hollywood.

There are new possibilities in this transformation, where we need to adapt. But there are also existing constraints that don't just disappear.

To me, the "Why" is that people want positive experiences. If the only way to get them is to pay experts, then they will. But if they have alternatives, that's fine too.

lotsofpulp

> There is a complete lack of asking why

The answer to this seems obvious to me. Buyers seek the lowest price, so sellers are incentivized to cut their cost of goods sold.

Investors seek the highest return on investment (people prefer more purchasing power than less purchasing power), so again, businesses are incentivized to cut their cost of goods sold.

The opposing force to this is buyers prefer higher quality to lower quality.

The tradeoff between these parameters is in constant flux.

danielmarkbruce

Why: they assume that humans have some secret sauce. Like... judgement...we don't. Once you extrapolate, yes, many things will be very very different.

bendigedig

Validating the outputs of a stochastic parrot sounds like a very alienating job.

darth_avocado

As a staff engineer, it upsets me if my Review to Code ratio goes above 1. Days when I am not able to focus and code, because I was reviewing other people’s work all day, I usually am pretty drained but also unsatisfied. If the only job available to engineers becomes “review 50 PRs a day, everyday” I’ll probably quit software engineering altogether.

moosedev

Feeling this too. And AI is making it "worse".

Reviewing human code and writing thoughtful, justified, constructive feedback to help the author grow is one thing - too much of this activity gets draining, for sure, but at least I get the satisfaction of teaching/mentoring through it.

Reviewing AI-generated code, though, I'm increasingly unsure there's any real point to writing constructive feedback, and I can feel I'll burn out if I keep pushing myself to do it. AI also allows less experienced engineers to churn out code faster, so I have more and more code to review.

But right now I'm still "responsible" for "code quality" and "mentoring", even if we are going to have to figure out what those things even mean when everyone is a 10x vibecoder...

Hoping the stock market calms down and I can just decide I'm done with my tech career if/when this change becomes too painful for dinosaurs like me :)

acedTrex

I could not agree more.

> AI also allows less experienced engineers to churn out code faster, so I have more and more code to review

This to me has been the absolute hardest part of dealing with the post-LLM fallout in this industry. It's been so frustrating for me personally that I took to writing my thoughts down in a small blog humorously titled

"Yes, I will judge you for using AI...",

in fact I say nearly this exact sentiment in it.

https://jaysthoughts.com/aithoughts1

thrwyep

I see this too: more and more code looks like it was made by the same person, even though it comes from different people.

I hate these kinds of comments; I'm tired of flagging them for removal, so they pollute the code base more and more, as if people did not realise how stupid a comment like this is:

    # print result
    print(result)

I'm yet to see a coding agent do what I asked for; so many times the solution I came up with was a shorter, cleaner, and better approach than what my IDE decided to produce... I think it works well as a rubber duck where I can explore ideas, but in my case that's about it.

CharlieDigital

I am mixed.

I sometimes use it to write utility classes/functions in totality when I know the exact behavior, inputs, and outputs.

It's quite good at this. The more standalone the code is, the better it is at this task. It is interesting to review the approaches it takes with some tasks and I find myself sometimes learning new things I would otherwise have not.

I have also noticed a difference in the different models and their approaches.

In one such case, OpenAI dutifully followed my functional outline while Gemini converted it to a class based approach!

In any case, I find that reviewing the output code in these cases is a learning opportunity to see some variety in "thinking".

rjbwork

>review 50 PRs a day, everyday

Basically my job as a staff these days, though not quite that number. I try to pair with those junior to me on some dicey parts of their code at least once a week to get some solid coding time in, and I try to do grunt work that others are not going to get to that can apply leverage to the overall productivity of the organization as a whole.

Implementing complicated sub-systems or features entirely from scratch by myself though? Feels like those days are long gone for me. I might get a prototype or sketch out and have someone else implement it, but that's about it.

kmijyiyxfbklao

> As a staff engineer, it upsets me if my Review to Code ratio goes above 1.

How does this work? Do you allow merging without reviews? Or are other engineers reviewing code way more than you?

darth_avocado

Sorry I wrote that in haste. I meant it in terms of time spent. In absolute number of PRs, you’d probably be reviewing more PRs than you create.

namaria

A lifetime ago I quit translation as a job because everyone was just throwing stuff on google translate and wanted me to review it. It was horrible.

PaulRobinson

Most knowledge work - perhaps all of it - is already validating the output of stochastic parrots; we just call those stochastic parrots "management".

FeepingCreature

It's actually very fun, ime.

bendigedig

I have plenty of experience doing code reviews and to do a good job is pretty hard and thankless work. If I had to do that all day every day I'd be very unhappy.

chamomeal

It is definitely thankless work, at least at my company.

It’d be even more thankless if, instead of writing good feedback that somebody can learn from (or that can spark interesting conversations that I can learn from), you just said “nope GPT, it’s not secure enough” and regenerated the whole PR, then read all the way through it again. Absolute tedium nightmare.

bccdee

> AI is scaling the creation side of knowledge work at an exponential rate

Why do people keep saying things like this? "Exponential rate"? That's just not true. So far the benefits are marginal at best and limited to relatively simple tasks. It's a truism at this point, even among fans of AI, that the benefits of AI are much more pronounced at junior-level tasks. For complex work, I'm not convinced that AI has "scaled the creation side of knowledge work" at all. I don't think it's particularly useful for the kind of non-trivial tasks that actually take up our time.

Amdahl's Law comes into play. If using AI gives you 200% efficiency on trivial tasks, but trivial tasks only take 10% of your time, then you've realized a whopping 5.3% productivity boost. I do not actually spend much time on boilerplate. I spend time debugging half-baked code, i.e. the stuff that LLMs spit out.
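
The arithmetic behind that figure, as a quick sanity check (the 10% and 200% numbers are just the ones from the paragraph above):

    # Amdahl's Law: overall speedup when only a fraction of the work gets faster.
    def overall_speedup(fraction_sped_up, local_speedup):
        return 1 / ((1 - fraction_sped_up) + fraction_sped_up / local_speedup)

    print(overall_speedup(0.10, 2.0))    # ~1.053 -> about a 5.3% overall boost
    print(overall_speedup(0.10, 100.0))  # ~1.109 -> caps near 11% even if the
                                         # trivial work became nearly free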

I realize I'm complaining about the third sentence of the article, but I refuse to keep letting people make claims like this as if they're obviously true. The whole article is based on false premises.

eezurr

And once the Orient and Decide part is augmented, then we'll be limited by social networks (IRL ones). Every solo founder/small biz will have to compete more and more for marketing eyeballs, and the ones who have access to bigger engines (companies) will get the juice they need, and we come back to humans being the bottleneck again.

That is, until we mutually decide on removing our agency from the loop entirely. And then what?

thrwyep

I think fewer people will decide to open source their work, so AI solutions will diverge from 'dark codebases' not available for models to be trained on. And people who love vibe coding will keep feeding models with code produced by models. Maybe we already reached the point where enough knowledge is locked in the models that this does not matter? I think not, based on the code AI has generated for me. I probably ask the wrong questions.

joshdavham

> What I see happening is us not being prepared for how AI transforms the nature of knowledge work and us having a very painful and slow transition into this new era.

I would've liked for the author to be a bit specific here. What exactly could this "very painful and slow transition" look like? Any commenters have any idea? I'm genuinely curious.

0xWTF

The article may not be consistent with what I'm hearing from doctors using ambient dictation, which admittedly fits a slightly different niche than the author's use case, but points to their final prediction that the paths to adoption will be complicated.

A number of the docs I'm working with describe ambient dictation as a game changer. Using the OODA loop analogy of the author: they are tightening the full OODA loop by deferring documentation to the end of the day. Historically this was a disaster because they'd forget the first patient by the end of the day. Now, the first patient's automatically dictated note is perhaps wrong but rich with details that spark sufficient remembrance.

Of course MBAs will use this to further crush physicians with additional workload, but for a time, it may help.

Animats

> This pile of tasks is how I understand what Vaughn Tan refers to as Meaningmaking: the uniquely human ability to make subjective decisions about the relative value of things.

Why is that a "uniquely human ability"? Machine learning systems are good at scoring things against some criterion. That's mostly how they work.

atomicnumber3

How are the criteria chosen, though?

Something I learned from working alongside data scientists and financial analysts doing algo trading is that you can almost always find great fits for your criteria; nobody ever worries about that. It's coming up with the criteria that everyone frets over, and even more than that, you need to beat other people at doing so - just being good or even great isn't enough. Your profit is the delta between where you are compared to all the other sharks in your pool. So LLMs are useless there: getting token-predicted answers is just going to get you the same as everyone else, which means zero alpha.

So - I dunno about uniquely human? But there's definitely something here where, short of AGI, there's always going to need to be someone sitting down and actually beating the market (whatever that metaphor means for your industry or use case).

fwip

Finance is sort of a unique beast in that the field is inherently negative-sum. The profits you take home are always going to be profits somebody else isn't getting.

If you're doing like, real work, solving problems in your domain actually adds value, and so the profits you get are from the value you provide.

kaashif

If you're algo trading then yes, which is what the person you're replying to is talking about.

But "finance" is very broad and covers very real and valuable work like making loans and insurance - be careful not to be too broad in your condemnation.

atomicnumber3

This is an overly simplistic view of algo trading. It ignores things like market services, the very real value of liquidity, and so on.

Also ignores capital gains - and small market moves are the very mechanism by which capital formation happens.

rukuu001

I think this is challenging because there’s a lot of tacit knowledge involved, and feedback loops are long and measurement of success ambiguous.

It’s a very rubbery, human oriented activity.

I’m sure this will be solved, but it won’t be solved by noodling with prompts and automation tools - the humans will have to organise themselves to externalise expert knowledge and develop an objective framework for making ‘subjective decisions about the relative value of things’.