
AI and the ironies of automation – Part 2

z_

This is a thought-provoking piece.

“But at what cost?”

We’ve all accepted calculators into our lives as faster and correct when used correctly (minus Intel tomfoolery), but in educational settings we still emphasize the need to know how to do the math.

Any adult out of formal education will confirm that, when confronted with an irregular math problem (or any rusty skill), there is a wait time to revive the ability.

Programming automation carrying the potential for skill decay AND sitting on the critical path is … worth thinking about.

xorcist

Comparisons with deterministic tools such as calculators will always lead us astray. There is no comparable situation where, faced with a new problem, the AI will simply give up. If there is ever a need for an expert, the need is always there, because there is no indication external to the process that the process will fail.

nuancebydefault

The article basically discusses two new problems with using agentic AI:

- When one of the agents does something wrong, a human operator needs to be able to intervene quickly and provide the agent with expert instructions. However, since experts no longer execute the bare tasks themselves, they quickly forget parts of their expertise. This means the experts need constant training, and hence will have little time left to oversee the agents' work.

- Experts must become managers of agentic systems, a role they are not familiar with, and hence they do not feel at home in their job. This problem is harder for people managers (of the experts) to recognize as a problem, since they rarely experience it first hand.

Indeed the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they outpace the necessary human in the loop.

I think this all means that automation is not taking away everyone's job, as it makes things more complicated and hence humans can still compete.

asielen

The way you put that makes me think of the challenge younger generations are currently having with technology in general: kids who were raised on touch-screen interfaces versus kids of older generations who were raised on computers that required more technical skill to figure out.

In the same way, when everything just works, there will be no difference, but when something goes wrong, the person who learned the skills before will have a distinct advantage.

The question is whether AI gets good enough that slowing down occasionally to find a specialist is tenable. It doesn't need to be perfect; it just needs to be predictably not perfect.

Experts will always be needed, but they may be more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.

delaminator

I used to be a maintenance data analyst in a welding plant that welded about 1 million units per month.

I was the only person in the factory who was a qualified welder.

DiscourseFan

That's how it tends to go, automation removes some parts of the work but creates more complexity. Sooner or later that will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.

sublimefire

Good discussion of the paper and its observations and ironies. One thing to note is that we already have software factories, with a bunch of automation in place and folks trained to deal with incidents. Pools of agents just elevate what we currently have, but the tools are still severely lacking. IMO the tools need to improve for us to move forward, as it is difficult to observe the decisions of agents when they fall apart.

Also, by and large the current AI tools are not in the critical path yet (well, except those drones that lock onto targets to eliminate them in case of interference, and even then it is ML). Agents cannot be in that path yet due to predictability challenges.

everdrive

I can feel the skill atrophy creeping in. My very first instinct is to go use the LLM. I think that, much like forcing yourself to exercise, eat right, and avoid social media and distractions, this will be a new modern skillset: do you have the discipline to avoid becoming useless without an LLM? A small few will be great at this, the middle of the bell curve will do "well enough," and you know the story for the rest.

andy99

I’ve been using LLMs to code for some time and I look at it differently.

I ask myself if I need to understand the code, and if the answer is yes, I don't use an LLM. It's not a matter of discipline; it's a sober view of what the minimal amount of work is for me.

wesammikhail

Out of curiosity, does anyone know of a good writeup / blog post by someone in the industry about reducing orchestration error rates? I'd love to read more about the topic and am looking for a few good resources.

jennyholzer2

"Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase “productivity”, i.e., efficiency, to a superhuman level. If a human is meant to monitor the output of the AI and intervene if needed, this requires that the human needs to comprehend what the AI solution produced at superhuman speed – otherwise we are down to human speed. This presents a quandary that can only be solved if we enable the human to comprehend the AI output at superhuman speed (compared to producing the same output by traditional means)."

everdrive

> "Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase “productivity”

So this is true on paper, but I can tell you that companies broadly don't do a very good job of being efficient. What they do a good job of is doing the bare minimum in a number of situations, generating fragile, messy, annoying, or tech-debt-ridden systems / processes / etc.

Companies regularly claim to make objective and efficient decisions, but often those decisions amount to little more than doing a half-assed job because it will save money and will probably be good enough. The "probably" does a lot of work here, and when "probably" is not good enough there's a lot of blame shifting / politics / bullshitting.

The idea that companies are efficient is generally not very realistic except when it comes to things with real, measurable costs, such as manufacturing.

conception

I think it’s more that companies may want to be efficient, but most people prefer the status quo to change on just about any work task if the change requires any relearning or training effort.

SecretDreams

> What they do a good job of is doing the bare minimum in a number of situations, generating fragile, messy, annoying, or tech-debt-ridden systems / processes / etc.

Is that not efficiency? ~ some managers I know

TheOtherHobbes

Not necessarily. It depends on whether the process is deterministic and repeatable.

If an AI generates a process more quickly than a human, and the process can be run deterministically, and the outputs are testable, then the process can run without direct human supervision after initial testing - which is how most automated processes work.

The testing should happen anyway, so any speed increase in process generation is a productivity gain.
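As a minimal Python sketch of that pattern (all names hypothetical): the AI-generated step is validated once by tests, then reused as an ordinary deterministic function, with the model no longer in the loop.

    # Hypothetical illustration: a transform whose body was produced once
    # by a code-generation model, then frozen into the codebase.
    def ai_generated_transform(record: dict) -> dict:
        return {**record, "total": record["price"] * record["qty"]}

    def test_transform() -> None:
        # Deterministic, testable output: same input, same result, every run.
        out = ai_generated_transform({"price": 2.0, "qty": 3})
        assert out["total"] == 6.0
        assert ai_generated_transform({"price": 2.0, "qty": 3}) == out

    if __name__ == "__main__":
        test_transform()  # the initial, human-supervised validation
        # From here on the process runs unsupervised, like any other
        # automated process.
        print(ai_generated_transform({"price": 10.0, "qty": 4}))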

Human monitoring only matters if the AI is continually improvising new solutions to dynamic problems and the solutions are significantly wrong/unreliable.

Which is a management/analysis problem, and no different in principle to managing a team.

The key difference in practice is that you can hire and fire people on a team, you can intervene to change goals and culture, and you can rearrange roles.

With an agentic workflow you can change the prompts, use different models, and redesign the flow. But your choices are more constrained.

lkjdsklf

The issue is that LLMs are, by design, non-deterministic.

That means that, with the current technology, there can never be a deterministic agent.

Now obviously, humans aren't deterministic either, but the error bars are a lot closer together than they are with LLMs these days.

An easy example to point at is the story that was circulating around of a coding agent that removed someone's home directory. I'm not saying a human has never done that, but it's far less likely because it's so far out of the realm of normal operations.
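To make that concrete, a toy Python sketch (probabilities invented for illustration): sampled decoding draws the next action from a distribution, so the same prompt can occasionally yield the rare destructive command.

    import random

    # Toy decoder: identical input, but the output varies from run to run.
    def next_action(weights: dict) -> str:
        actions = list(weights)
        return random.choices(actions, weights=[weights[a] for a in actions])[0]

    # Hypothetical probabilities a model might assign to candidate commands.
    dist = {"rm -rf build/": 0.95, "rm -rf ~/": 0.05}
    print([next_action(dist) for _ in range(10)])  # mostly safe, sometimes not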

So as of today, we need humans in the loop. And this is understood by the people making these products. That's why they have all these permission prompts for you to accept before commands run, and all of that.
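That gate can be as simple as the following sketch (not any particular product's code; the command strings and the heuristic are just examples): the agent proposes a shell command, and a human must approve it before anything executes.

    import subprocess

    DESTRUCTIVE = ("rm ", "sudo ", "> /dev/")  # crude heuristic, for illustration

    def run_with_approval(command: str) -> None:
        note = " [flagged as potentially destructive]" if any(
            t in command for t in DESTRUCTIVE) else ""
        if input(f"Agent wants to run: {command!r}{note}  Allow? [y/N] "
                 ).strip().lower() != "y":
            print("Denied; command not executed.")
            return
        subprocess.run(command, shell=True, check=False)

    run_with_approval("echo hello")  # routine, likely approved
    run_with_approval("rm -rf ~/")   # the failure mode described above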

1718627440

> An easy example to point at is the story that was circulating around of a coding agent that removed someone's home directory. I'm not saying a human has never done that, but it's far less likely because it's so far out of the realm of normal operations.

And it would be far less likely that the human deleted someone else's home directory, and even if they did, there would be someone to be angry at.

loa_in_

There's lots of _marketing_ promising unsupervised agents. It's important to remember not to drink the Kool-Aid.