
With AI you need to think bigger

181 comments · March 9, 2025

haswell

I recently discovered that some of the Raspberry Pi models support the Linux kernel's "Gadget Mode". This allows you to configure the Pi to appear as some type of device when plugged into a USB port, e.g. a mass storage device/USB stick, a network card, etc. Very nifty for turning a Pi Zero into various kinds of utilities.

When I realized this was possible, I wanted to set up a project that would allow me to use the Pi as a bridge from my document scanner (has the ability to scan to a USB port) to a SMB share on my network that acts as the ingest point to a Paperless-NGX instance.

Scanner -> USB "drive" -> some of my code running on the Pi -> the SMB share -> Paperless.
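The gadget side is mostly writing files under configfs; a rough sketch of the idea in Python (placeholder paths and IDs; it assumes the dwc2 overlay is enabled, root access, a FAT-formatted backing image, and the SMB consume share already mounted locally):

  #!/usr/bin/env python3
  """Sketch of a scanner-to-Paperless bridge. Hypothetical paths; run as root."""
  import shutil
  import subprocess
  import time
  from pathlib import Path

  GADGET = Path("/sys/kernel/config/usb_gadget/scanbridge")
  IMAGE = Path("/home/pi/scans.img")           # backing file the scanner writes to
  MOUNT = Path("/mnt/scans")                   # local loop mount of IMAGE
  SMB_INGEST = Path("/mnt/paperless-consume")  # Paperless consume dir over SMB

  def setup_gadget():
      """Expose IMAGE over USB as a mass-storage device via configfs."""
      (GADGET / "strings/0x409").mkdir(parents=True, exist_ok=True)
      (GADGET / "configs/c.1/strings/0x409").mkdir(parents=True, exist_ok=True)
      (GADGET / "functions/mass_storage.usb0").mkdir(parents=True, exist_ok=True)
      (GADGET / "idVendor").write_text("0x1d6b")
      (GADGET / "idProduct").write_text("0x0104")
      (GADGET / "strings/0x409/product").write_text("Scan Bridge")
      (GADGET / "functions/mass_storage.usb0/lun.0/file").write_text(str(IMAGE))
      link = GADGET / "configs/c.1/mass_storage.usb0"
      if not link.exists():
          link.symlink_to(GADGET / "functions/mass_storage.usb0")
      udc = next(Path("/sys/class/udc").iterdir()).name
      (GADGET / "UDC").write_text(udc)  # bind: the Pi now enumerates as a USB stick

  def sync_new_scans():
      """Loop-mount the image read-only and copy anything new to the share."""
      MOUNT.mkdir(exist_ok=True)
      subprocess.run(["mount", "-o", "loop,ro", str(IMAGE), str(MOUNT)], check=True)
      try:
          for scan in MOUNT.rglob("*.pdf"):
              target = SMB_INGEST / scan.name
              if not target.exists():
                  shutil.copy2(scan, target)
      finally:
          subprocess.run(["umount", str(MOUNT)], check=True)

  if __name__ == "__main__":
      setup_gadget()
      while True:
          sync_new_scans()
          time.sleep(60)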

I described my scenario in a reasonable degree of detail to Claude and asked it to write the code to glue all of this together. What it produced didn't work, but was close enough that I only needed to tweak a few things.

While none of this was particularly complex, it's a bit obscure, and would have easily taken a few days of the kind of tinkering I've done for most of my life. Instead it took a few hours, and I finished a project.

I, too, have started to think differently about the projects I take on. Projects that were previously relegated to "I should do that some day when I actually have time to dive deeper" now feel a lot more realistic.

What will truly change the game for me is when it's reasonable to run GPT-4o level models locally.

tux3

Fun fact: Gadget mode also works on Android phones, if you want a USB device that you can easily program and carry around.

I made a PoC of a 2FA authenticator (think Yubikey) that automatically signs you in. I use it for testing scenarios where I have to log out and back in many times; it flies through what would otherwise be a manual 2FA screen with PIN entry, or navigating 2FA popups to select a passkey and touch the fingerprint reader.

Obviously not very secure, but very useful!

haswell

This answers a question I didn't realize I had. I had already been thinking about some kind of utility gadget made up of a Pi Zero with a tiny screen and battery, but an Android phone solves a lot of problems in one go.

albert_e

Interesting!

I have a bunch of older android phones that could be repurposed for some tinkering.

The touchscreen display and input would open a lot more interactive possibilities.

Is there a community or gallery of ideas and projects that leverage this?

tux3

There is this repository: https://github.com/tejado/android-usb-gadget

They have some examples that emulate a USB keyboard and mouse, and the app shows how to configure the Gadget API to turn the phone into whatever USB device you want.

The repo is unfortunately inactive, but the underlying feature is exposed through a stable Linux kernel API (via ConfigFS), so everything will continue working as long as Android leaves the feature enabled.

You do need to be root, however, since you are essentially implementing a USB device yourself. Then all you have to do is open `/dev/hidg0`: when you read from this file you are reading USB HID packets, and when you write your response it is sent over the cable.
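A minimal sketch of that loop in Python (the 64-byte report size is an assumption and depends on the HID descriptor you configured):

  #!/usr/bin/env python3
  """Talk to a configured HID gadget endpoint; requires root."""

  with open("/dev/hidg0", "r+b", buffering=0) as hid:
      while True:
          report = hid.read(64)        # one HID output report from the host
          if not report:
              continue
          print("host sent:", report.hex())
          # Echo a fixed input report back; a real authenticator would parse
          # the request (e.g. CTAP framing) and build a proper response.
          hid.write(b"\x00" * 64)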

magic_hamster

> Instead it took a few hours, and I finished a project.

Did you?

If you wanted to expand on it, or debug it when it fails, do you really understand the solution completely? (Perhaps you do.)

Don't get me wrong, I've done the same in the last few years and I've completed several fun projects this way.

But I only use AI on things I know I don't care about personally. If I use too much AI on things I actually want to know, I feel my abilities deteriorating.

haswell

> Did you?

Yes.

> do you really understand the solution completely?

Yes; fully. I'd describe what I delegated to the AI as "busy work". I still spent time thinking through the overall design before asking the AI for output.

> But I only use AI on things I know I don't care about personally.

Roughly speaking, I'd put my personal projects in two different categories:

1. Things that try to solve some problem in my life

2. Projects for the sake of intellectual stimulation and learning

The primary goal of this scanner project was/is to de-clutter my apartment and get rid of paper. For something like this, I prioritize getting it done over intellectual pursuits. Another option I considered was just buying a newer scanner with built-in scan-to-SMB functionality.

Using AI allowed me to split the difference. I got it done quickly, but I still learned about some things along the way that are already forming into future unrelated project ideas.

> If I use too much AI on things I actually want to know, I feel my abilities deteriorating.

I think this likely comes down to how it's used. For this particular project, I came away knowing quite a bit more about everything involved, and the AI assistance was a learning multiplier.

But to be clear, I also fully took over the code after the initial few iterations of LLM output. My goal wasn't to make the LLM build everything for me, but to bootstrap things to a point I could easily build from.

I could see using AI for category #2 projects in a more limited fashion, but likely more as a tutor/advisor.

thrwthsnw

For category #2 it's very useful as well, and it ties in with the theme of the article in that it reduces the activation energy required to almost zero. I've been using AI relentlessly to pursue all kinds of ideas that I would otherwise simply write down and file for later. When they involve some technology or theory I know little about, I can get to a working demo in less than an hour, and once I have that in hand I begin exploring the concepts I'm unfamiliar with by simply asking about them: what is this part of the code doing? Why is this needed? What other options are there? What are some existing projects that do something similar? What is the theory behind this? And then also making modifications or asking for changes or features.

It allows for much wider and faster exploration, and lets you build things from scratch instead of reaching for another library, so you end up learning how things work at a lower level. The code does get messy, but AI is also a great tool for refactoring and debugging; you just have to adjust to the faster pace of development and remember to take more frequent pauses to clean up, or rebuild from a better starting point and understanding of the problem.

dangus

I think this effect of losing your abilities is somewhat overblown.

Especially since AI has saved me by explaining specific lines of code that would have been difficult to look up with a search engine or reference documentation without already knowing what I was looking for.

At some point understanding is understanding, and there is no intellectual "reward" for banging your head against the wall.

Regex is the perfect example. Yes, I understand it, but it takes me a long time to parse through it manually and I use it infrequently enough that it turns into a big timewaster. It's very helpful for me to just ask AI to come up with the string and for me to verify it.
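For what it's worth, the verification step is easy to script too. A tiny sketch with a made-up AI-suggested pattern and test strings:

  import re

  # Hypothetical AI-suggested pattern: ISO-style dates like 2025-03-09.
  pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

  should_match = ["2025-03-09", "1999-12-31"]
  should_reject = ["2025-13-01", "2025-3-09", "not a date"]

  assert all(pattern.match(s) for s in should_match)
  assert not any(pattern.match(s) for s in should_reject)
  print("pattern behaves as expected on the test cases")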

And if I were the type of person who didn't understand the result of what I was looking at, I could literally ask that very same AI to break it down and explain it.

haswell

This summarizes my feelings pretty well. I've been writing code in a dozen languages for 25+ years at this point. Not only do I not gain anything from writing certain boilerplate for the nth time, I'm also less likely to actually do the work unless it reaches some threshold of importance because it's just not interesting.

With all of this said, I can see how this could be problematic with less experience. For this scanner project, it was like having the ability to hand off some tasks to a junior engineer. But having juniors around doesn't mean senior devs will atrophy.

It will ultimately come down to how people use these tools, and the mindset they bring to their work.

liotier

Please, I would be delighted if you published that code... Just yesterday I was thinking that a two-faced Samba share/USB Mass Storage dongle Pi would save me a lot of shuttling audio samples between my desktop and my Akai MPC.

haswell

I've been thinking about writing up a blog post about it. Might have to do a Show HN when time allows.

This guide was a huge help: https://github.com/thagrol/Guides/blob/main/mass-storage-gad...

thierrydamiba

Please do - I think this is a great example of how AI can be helpful.

We see so many stories about how terrible AI coding is. We need more practical stories of how it can help.

teeray

I was also writing a SANE-to-Paperless bridge to run on an RPi recently, but ran into issues getting it to detect my ix500. Would love to see the code!

genewitch

Well, R1 is runnable locally for under $2500, so I guess you could pool money and share the cost with other people who think they need that much power, rather than running a quantized model with fewer parameters (or a distill).

m463

That gives me ideas.

I think a lot of the reason Linux isn't used for many things is that you basically have to be a sysadmin to set some of them up.

For example, setting up a local LLM.

I wonder what you would call getting a "remote" AI to help set up a local AI?

Something like but not exactly emancipation or emigration or ...

taneq

I set up ollama on our work ‘AI server’ (well, grunty headless workstation running Ubuntu) and then got Dolphin-Mixtral to help me figure out why it wasn’t using the GPUs. :)

I ended up having to figure it out myself (a previous install attempt meant the running instance wasn’t the one I’d compiled with GPU support) but it was an interesting exercise.

m463

Uncertain if AI doesn't understand it will get more resources, or it DOES understand and a recompile is an existential nightmare. :)

downboots

Would you have paid someone to do it, rather than solving the challenge yourself?

CaffeineLD50

I had a minor desire to build a feature where the effort slightly outweighed the reward, so although I knew I could struggle through it, I didn't bother.

After years of this I decided to give an AI a shot at the code. It produced something plausible looking and I was excited. Was it that easy?

The code didn't work. But the approach made me more motivated to look into it and I found a solution.

So although the AI gave me crap code it still inspired the answer, so I'm calling that a win.

Simply making things feel approachable can be enough.

nineplay

One of my more effective uses of AI is for rubber duck debugging. I tell it what I want the code to do, iterate over what it comes back with, and adjust the code ('now rewrite foo() so 'bar' is passed in'). What comes back isn't necessarily perfect and I don't blindly copy and paste, but that isn't the point. At the end I've worked out what I want to do, and some of the tedious boiler-plate code is taken care of.

epiccoleman

I had some results last week that I felt were really good - on a very tricky problem, using AI (Claude 3.7) helped me churn through 4 or 5 approaches that didn't work, and eventually, working in tandem, "we" found an approach that would. Then the AI helped write a basic implementation of the good approach which I was able to use as a reference for my own "real" implementation.

No, the AI would not have solved the problem without me in the loop, but it sped up the cycle of iteration and made something that might have taken me 2 weeks take just a few days.

It would be pretty tough to convince me that's not spectacularly useful.

soperj

I've tried it, and ended up with the completely wrong approach, which didn't take that long to figure out, but still wasted a good half hour. Would have been horrible if I didn't know what I was doing though.

CaffeineLD50

Yes, that's one of the bigger traps. In my case I knew what needed to be done and could've done it on my own if I really needed to.

A novice with no idea could blunder through but get lost quickly.

gopalv

> some of the tedious boiler-plate code is taken care of.

For me that is the bit which stands out. I'm switching languages to TypeScript and JSX right now.

Getting copilot (+ claude) to do things is much easier when I know exactly what I want, but not here and not in this framework (PHP is more my speed). There's a bunch of stuff you're supposed to know as boilerplate and there's no time to learn it all.

I am not learning a thing though, other than how to steer the AI. I don't even know what SCSS is, but I can get by.

The UI hires are in the pipeline and they should throw away everything I build, but right now it feels like I'm making something they should imitate in functionality and style - better than a document would - though not in cleanliness.

dartos

The idea of untangling AI generated typescript spaghetti fills me with dread.

It’s as bad as untangling the last guy’s typescript spaghetti. He quit, so I can’t ask him about it either.

thinkingtoilet

My experience with ChatGPT is underwhelming. It handles really basic language questions faster and more easily than Google now - questions about a function signature, or questions like "how do I get the first n characters of a string". Things like that. Once I start asking it more complex questions, not only does it often get them wrong, but if you tell it the answer is wrong and ask it to try again, it will often give you the same answer. I have no doubt it will get there, but I continue to be surprised at all the positives I hear about it.

kansface

What language are you writing? I mostly write Go these days, and have often wondered if it is uniquely good in that language given its constraints.

headcanon

Agreed, it's always nicer for me to have something to work with, even if by the end of it it's entirely rewritten.

It helps to have it generate code sometimes just to explore ideas and refine the prompt. If it's obviously wrong, that's OK; sometimes I needed to see the wrong answer to get to the right one faster. If it's not obviously wrong, then it's a good enough starting point that we can iterate to the answer.

__xor_eax_eax

I love throwing questions at it where previously it would have been daunting because you don't even know the right questions to ask, and the amount of research you'd need to do to even ask the proper question is super high.

It's great for ideating in that way. It does produce some legendary BS though.

d0mine

It looks like a variation of the Stone Soup story: https://en.wikipedia.org/wiki/Stone_Soup

infogulch

> although the AI gave me crap code it still inspired the answer

This is exactly my experience using AI for code and prose. The details are like 80% slop, but it has the right overall skeleton/structure. And rewriting the details of something with a decent starting structure is way easier than generating the whole thing from scratch by hand.

golergka

> I decided to give an AI

What model? What wrapper? There's just a huge amount of options on the market right now, and they drastically differ in quality.

Personally, I've been using Claude Code for about a week (since it's been released) and I've been floored with how good it is. I even developed an experimental self-developing system with it.

CaffeineLD50

I prefer open source models.

johnmaguire

I had a similar experience but found that with a little prodding, I was even able to get it to finish the job.

Then it was a little messy, so I asked it to refactor it.

Of course, not everything lends itself to this: often I already know exactly the code I want and it's easier to just type it than corral the AI.

klabb3

As a mostly LLM-skeptic I reluctantly agree this is something AI actually does well. When approaching unfamiliar territory, LLMs (1) use simple language (an improvement over academia, and also over much intentionally obfuscated professional literature), (2) use the right abstraction (they seem good at "zooming out" to the big picture of things), and (3) let you move both laterally between topics and "zoom in" quickly. Another way of putting it is "picking the brain" of an expert in order to build a rough mental model.

Its downsides, such as hallucinations and lack of reasoning (yeah), aren't very problematic here. Once you're familiar enough you can switch to better tools and know what to look for.

mdp2021

My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known (e.g. a standard task in some technology used by many), and terrible where the problem has not been tackled much by the public.

About language (point (1)), I get a lot of "hypnotism for salesmen to non technical managers and roundabout comments" (e.g. "which wire should I cut, I have a red one and a blue one" // "It is mission critical to cut the right wire; in order to decide which wire to cut, we must first get acquainted with the idea that cutting the wrong wire will make the device explode..." // "Yes, which one?" // "Cutting the wrong one can have critical consequences...")

klabb3

> and terrible where the problem has not been tackled much by the public

Very much so (I should have added this as a downside in the original comment). Before I even ask a question I ask myself "does it have training data on this?". Also, having a bad answer is only one failure mode. More commonly, I find that it drifts towards the "center of gravity", i.e. the mainstream or most popular school of thought, which is like talking to someone with a strong status-quo bias. However, before you've familiarized yourself with a new domain, the "current state of things" is a pretty good bargain to learn fast, at least for my brain.

marcosdumay

> My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known

Yes, that's a necessary condition. If there isn't some well known solution, LLMs won't give you anything useful.

The point though, is that the solution was not well known to the GP. That's where LLMs shine, they "understand" what you are trying to say, and give you the answer you need, even when you don't know the applicable jargon.

khaledh

Agreed. LLMs pull you towards the average knowledge, and they suck when you're trying to find a creative solution that challenges the status quo.

keeptrying

Yes. LLMs are the perfect learning assistant.

You can now do literally anything. Literally.

Going to take a while for everyone to figure this out but they will given time.

Cheer2171

I'm old enough to remember when they first said that about the Internet. We were going to enter a new enlightened age of information, giving everyone access to the sum total of human knowledge, no need to get a fancy degree, universities will be obsolete, expertise will be democratized.... See how that turned out.

elliotbnvl

The motivated will excel even further, for the less motivated nothing will change. The gap is just going to increase between high-agency individuals and everyone else.

CamperBob2

> I'm old enough to remember when they first said that about the Internet.

(Shrug) It was pretty much true. But it's like what Linus says in an old Peanuts cartoon: https://www.gocomics.com/peanuts/1969/07/20

svnt

I’d suggest we are much closer to that reality now than we were in the 90s, in large part thanks to the internet.

whartung

> You can now do literally anything. Literally.

In theory.

In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.

And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly in my case) to get something accomplished, and after hours over days of work, the projects don't work. I shelve them and treat them like cryogenic heads: "Sometime in the future I'll try again."

It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.

But higher than that, no, I’ve not had success with it.

It’s also nice as a general purpose wizard code generator. But that’s just rote work.

YMMV

keeptrying

You just aren’t delving deep enough.

For every problem that stops you, ask the LLM. With enough context it’ll give you at least a mediocre way to get around your problem.

It's still a lot of hard work. But the only person that can stop you is you. (Which it looks like you've done.)

List the reasons you’ve stopped below and I’ll give you prompts to get around them.

tqwhite

First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.

Maybe you are running into the problem I did early on. I told it what I wanted; now I tell it what I want done. I use Claude Code and have it do its things one at a time, and for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since becoming more procedural with it, I get pretty good results.

I hope that helps.

ch4s3

They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.

Verdex

I don't know. I wouldn't trust a brain surgeon who has up til now only been messing around on LLMs.

Edit: and for that matter I also would not trust a brain surgeon who had only read about brain surgery in medical texts.

keeptrying

Practical knowledge is the most important kind.

Weirdly you’ll get a lot of useful experience as you analyze yourself through 80 years.

redman25

I spent a couple weekends trying to reimplement Microsoft's inferencing for Phi-4 multimodal in Rust. I had zero experience messing with ONNX before. Claude produced a believably good first pass but it ended up being too much work in the end and I've put it down for the moment.

I spent a lot of time fixing Claude's misunderstanding of the `ort` library, mainly because of Claude's knowledge cutoff. In the end, the draft just wasn't complete enough to get working without diving in really deep. I also kind of learned that ONNX probably isn't the best way to approach these things anymore; most of the mindshare is around the Python code and Torch APIs.

keeptrying

This is interesting.

AI leads to more useless dives down into the internets.

sunami-ai

LLMs don't reason the way we do, but there are similarities at the cognitive pre-conscious level.

I issued a challenge to various lawyers and the Stanford Codex (no one has taken the bait yet) to find critical mistakes in the "reasoning" of our Legal AI. One former attorney general told us that he likes how it balances the intent of the law. Sample output (scroll and click on the stats and the donuts on the second slide):

Samples: https://labs.sunami.ai/feed

I built the AI using an inference-time-scaling approach that I evolved over a year's time. It is based on Llama for now, but could be replaced with any major foundation model.

Presentation: https://prezi.com/view/g2CZCqnn56NAKKbyO3P5/

8-minute video: https://www.youtube.com/watch?v=3rib4gU1HW8&t=233s

info sunami ai

staticman2

"One former attorney general told us that he likes how it balances the intent of the law."

In a common law system you generally want actionable legal advice based on predictions on how a judge would rule in a case not "balances the intent of the law" whatever the heck that means.

elicksaur

.

sunami-ai

The sensitivity can be turned up or down. It's why we are asking for input. If you're talking about the Disney EULA, it has the context that it is a browsewrap agreement. The setting for material omission is very greedy right now, and we could find a happy middle.

sunami-ai

A former attorney general is taking it for a spin, and has said great things about it so far. One of the top 100 lawyers in the US. HN has turned into a pit of hate. WTF is all this hate for? People just seem really angry at AI. JFC, grow up.

wewewedxfgdf

[flagged]

shitloadofbooks

I know you’re being disparaging by using language like “bake into their identity” but everyone is “something” about “something”.

I’m “indifferent” about “roller coasters” and “passionate” about “board games”.

To answer the question (but I’m not OP), I’m skeptical about LLMs. “These words are often near each other” vastly exceeds my expectation at being fairly convincing that the machine “knows” something, but it’s dangerously confident when it’s hilariously incorrect.

Whatever we call the next technological leap where there's actual knowledge (not just "word statistics"), I'll be less skeptical about it.

fasbiner

Your framing is extrapolative, mendacious and is adding what could charitably be called your interpersonal problems to a statement which is perfectly neutral, intended as an admission against general inclination to lend credibility to the observation that follows.

Someone uncharitable would say things about your cognitive abilities and character that are likely true but not useful.

layer8

They didn’t say that they were invested in it.

mdp2021

> invested

Very probably not somebody who blindly picked a position, easily somebody who is quite wary of the downsides of the current state of the technology, as expressed already explicitly in the post:

> It’s downsides, such as hallucinations and lack of reasoning

the13

Probably all the hype and bs.

simonw

I wrote something similar about this effect almost two years ago: https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen... "AI-enhanced development makes me more ambitious with my projects"

With an extra 23 months of experience under my belt since then I'm comfortable to say that the effect has stayed steady for me over time, and even increased a bit.

shmoogy

100% agree with this. Sometimes I feel I'm becoming too reliant on it, but then I step back and see how much more ambitious the projects I take on are, and how quickly I still finish them, thanks to it.

hombre_fatal

Claude 3.7 basically one-shot a multiplayer game's server-authority rollback netcode with client-side prediction and interpolation.

I spent months of my life in my 20s trying to build a non-janky implementation of that and failed, which was really demoralizing.

Over the last couple weekends I got farther than I was able to get in weeks or months. And when I get stumped, I have the confidence of being able to rubber-duck my way through it with an LLM if it can't outright fix it itself.
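For anyone unfamiliar with the jargon, the prediction/reconciliation part of that boils down to something like this toy sketch (not the generated code; interpolation and actual networking are left out):

  from dataclasses import dataclass

  DT = 1 / 60  # fixed simulation tick

  def simulate(pos: float, move: float) -> float:
      """Deterministic step shared by client and authoritative server."""
      return pos + move * DT

  @dataclass
  class Input:
      seq: int
      move: float

  class PredictedClient:
      def __init__(self):
          self.pos = 0.0
          self.pending = []  # inputs the server hasn't acknowledged yet

      def local_tick(self, seq: int, move: float):
          """Apply input immediately so the player never waits on the round trip."""
          cmd = Input(seq, move)
          self.pending.append(cmd)
          self.pos = simulate(self.pos, cmd.move)

      def on_server_state(self, server_pos: float, last_acked_seq: int):
          """Rewind to the authoritative state, then replay unacknowledged inputs."""
          self.pos = server_pos
          self.pending = [c for c in self.pending if c.seq > last_acked_seq]
          for cmd in self.pending:
              self.pos = simulate(self.pos, cmd.move)

  # Usage: the client predicts three ticks; the server has only seen the first.
  client = PredictedClient()
  for seq in (1, 2, 3):
      client.local_tick(seq, move=1.0)
  client.on_server_state(server_pos=simulate(0.0, 1.0), last_acked_seq=1)
  print(round(client.pos, 4))  # matches the prediction, since the server agreed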

Though I also often wonder how much time I have left professionally in software. I try not to think about that. :D

xandrius

You know the 80/20 "rule"? Well, that last 20% is what I believe will keep us around.

AI is going to be a great multiplier, but if the base is 0, you can multiply it by whatever you want - you still get 0.

I feel ChatGPT-like products are like outsourcing to cheaper countries: it might work for some, but everyone else then has to hire more expensive people to fix or redo the work done by the cheaper labor. This seems to be exactly the same, but with AI.

fallinditch

Around that time you highlighted the threat of prompt injection attacks on AI assistants. Have you also been able to make progress in this area?

simonw

Frustratingly I feel we've made very little progress towards a fix for that problem in nearly 2.5 years!

CosmicShadow

The exciting thing about AI is that it lets you go back to any project or idea you've ever had, and they are now possibly doable, even if they seemed impossible or too much work back then. Some of the key pieces missing have become trivial, and even if you don't know how to do something, AI will help you figure it out, or just let you come up with a solution that may seem dirty but actually works, whereas before it was impossible without expert systems and grinding out so much code. It's opened so many doors. It's hard to remember ideas that you wrote off before; there are so many blind spots that are now opportunities.

wruza

It doesn’t do that for things rarely done before though. And it’s poisoned with opinions from the internet. E.g. you can convince it that we have to remove bullshit layers from programming and make it straightforward. It will even print a few pages of vague bullet points about it, if not yet. But when you ask it to code, it will dump a react form.

I’m not trying to invalidate experiences itt, cause I have a similar one. But it feels futile as we are stuck with our pre-AI bloated and convoluted ways of doing things^W^W making lots of money and securing jobs by writing crap nobody understands why, and there’s no way to undo this or to teach AI to generalize.

I think this novelty is just blindness to how bad things are in the areas you know little about. For example, you may think it solves the job when you ask it to create a button and a route. And it does. But the job wasn’t to create a route, load and validate data and render it on screen in a few pages and files. The job was to take a query and to have it on screen in a couple of lines. Yes it helps writing pages of our nonsense, but it’s still nonsense. It works, but feels like we have fooled ourselves twice now. It also feels like people will soon create AI playbooks for structuring and layering their output, cause ability to code review it will deteriorate in just a few years with less seniors and much more barely-coders who get into it now.

vacuity

I want to expand on your sentiment about our pre-AI mindset. Programming has made it easy to do things of essentially no value, while getting lots of money for it. Programming is additive and creative; we can always go further in modelling the world and creating chunks of it to use. But I don't see the value in the newest CRUD fullstack application or website. I don't see the intellectual stimulation or even a reasonable amount of user benefit. Programming allows us to produce a lot, but we should be scrutinizing what that lot is. "AI" that enhances what we've been doing will just continue this dull industry. Greed and a nebulous sense of progress are the primary drivers, but they're empty behind it all. Isn't progress supposed to be about good change? We should be focusing on passion projects and/or genuinely helping, or better yet elevating, users (that is to say, everyone).

jboggan

> And it’s poisoned with opinions from the internet.

This is the scary part. What current AIs are very effectively doing is surfacing the best solution (from a pre-existing blog/SO answer) that I might have been able to Google 10 years ago when search was "better" and there was less SEO slop on the internet - and pre-extracting the relevant code for me (which is no minor thing).

But I repeatedly have been in situations where I ask for a feature and it brings in a new library and a bunch of extra code and only 2 weeks later as I get more familiar with that library do I realize that the "extra" code I didn't understand at first is part of a Hello World blog post on that framework and I suddenly understand that I have enabled interfaces and features on my business app that were meant for a toy example.

xrd

Where are the LLM leaderboards for software estimation accuracy?

I have been using Claude Code and Aider and I do think they provide incredibly exciting potential. I can spin up new projects with mind boggling results. And, I can start projects in domains where I previously had almost no experience. It is truly exciting.

AND...

The thing I worry most about is that now non-technical managers can go into Claude and say "build me XYZ!" and the AI will build a passable first version. But, any experienced software person knows that the true cost of software is in the maintenance. That's 90% of the cost. Reducing that starting point to zero cost only reduces the total cost of software by 10%, but people are making it seem like you no longer need a software engineer that understands complex systems. Or, maybe that is just my fears vocalized.

I have seen LLMs dig into old and complex codebases and fix things that I was not expecting them to handle. I tend to write a lot of tests in my code, so I can see that the tests pass and the code compiles. The barbarians have come over the walls, truly.

But, IMHO, there is still a space where we cannot ask any of the AI coding tools to review a spec and say "Will you get this done in 2 months?" I don't think we are there, yet. I don't see that the context window is big enough to jam entire codebases inside it, and I don't yet see that these tools can anticipate a project growing into hundreds of files and managing the interdependencies between them. They are good at writing tests, and I am impressed by that, so there is a pathway. I'm excited to understand more about how aider creates a map of the repository and effectively compresses the information in ways similar to how I keep track of high level ideas.

But, it still feels very early and there are gaps. And, I think we are in for a rude awakening when people start thinking this pushes the cost and complexity of software projects to zero. I love the potential for AI coding, but it feels like it is dangerously marketing and sales driven right now and full of hype.

mentalgear

The "coding benchmarks" like "SWE-verified" are actually of very low quality and the answer riddled with problems.

Good Explainer: "The Disturbing Reality of AI Coding" https://www.youtube.com/watch?v=QnOc_kKKuac

inerte

I've been programming for 20 years, just so y'all know I have at least some level of competence.

I tried Vibe Coding for the first time on Friday and was blown away. It was awesome. I met some people (all programmers) at a party on Friday and excitedly told them about it. One of them tried it; he loved it.

Then yesterday I read a LinkedIn (Lunatic?) post about "Vibe Design", where PMs will just tell the computer what to do using some sort of visual language, where you create UI elements and drop them on a canvas, and AI makes your vision come true, etc, etc...

And my first thought was: "Wait a minute, I've seen this movie before. Back in the late 90s / early 2000s it was 4th-generation programming languages, and Visual Basic would allow anyone to code any system"...

And while it's true Visual Basic did allow a BUNCH of people to make money building and selling systems to video rental shops and hair salons, programmers never went away.

I welcome anyone building more software. The tools will only get better. And programmers will adapt them, and it will make us better too, and we will still be needed.

rurp

This largely fits with a pattern I've been seeing with LLM coding. The models are often helpful, sometimes extremely so, when it comes to creating prototypes or other small greenfield projects. They can also be great at producing a snippet of code in an unfamiliar framework or language. But when it comes to modifying a large, messy, complicated code base they are much less helpful. Some people find them useful as a beefed-up autocomplete, while others don't see enough gains to offset the time and attention it takes to use them.

I think a lot of arguments about LLM coding ability stem from people using them for the former or the latter and having very different experiences.

someothherguyy

> I can spin up new projects with mind boggling results

Boggle a skeptical mind

xrd

Meaning, give you an example?

This morning I created a new project. I provided a postgres database URL for a remote service (a non-standard connection string that includes the parameter "?sslmode=require"). Then, I said:

  * "Write me a fastapi project to connect to a postgres database using a database url."
  * "Retrieve the schema from the remote database." It used psql to connect, retrieves the schema. That was unexpected, it figured out not only a coding task, but an external tool to connect to a database and did it without anything more than me providing the DATABASE_URL. Actually, I should say, I told it to look inside the .env file, and it did that. I had that URL wrong initially, so I told it to reload once I corrected it. It never got confused by my disorganization. 
  * It automatically added sqlalchemy models and uses pydantic once it figured out the schema.
  * "Create a webpage that lets me review one table."
  * "Rewrite to use tailwindcss." It adds the correct tailwindcss CDN imports.
  * It automatically adds a modal dialog when I click on one of the records.
  * It categorized fields in the database into groupings inside the modal, groupings that do indeed make sense.
I know the devil is in the details, or in the future. I'm sure there are gaping security holes.

But, this saved me a lot of time and it works.
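For a sense of scale, the skeleton of that kind of app is only a few dozen lines; a hedged sketch with a made-up table name (the real session generated far more, including the models, the modal, and the field groupings):

  import os

  from dotenv import load_dotenv
  from fastapi import FastAPI
  from fastapi.responses import HTMLResponse
  from sqlalchemy import MetaData, create_engine, select

  load_dotenv()  # DATABASE_URL lives in .env, possibly with "?sslmode=require"
  engine = create_engine(os.environ["DATABASE_URL"])

  metadata = MetaData()
  metadata.reflect(bind=engine)             # pull the schema from the remote DB
  documents = metadata.tables["documents"]  # hypothetical table name

  app = FastAPI()

  @app.get("/review", response_class=HTMLResponse)
  def review(limit: int = 50):
      with engine.connect() as conn:
          rows = conn.execute(select(documents).limit(limit)).mappings().all()
      body = "".join(
          "<tr>" + "".join(f"<td class='border px-2 py-1'>{row[col.name]}</td>"
                           for col in documents.columns) + "</tr>"
          for row in rows
      )
      return (
          "<html><head><script src='https://cdn.tailwindcss.com'></script></head>"
          f"<body class='p-8'><table class='border-collapse'>{body}</table></body></html>"
      )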

xrd

And, the update is that in the last hour Claude somehow removed the rendering of the actual items. It is clever in troubleshooting: it created mocks when the data could not be loaded, and it added error messages with the number of items retrieved. But it does not have access to my browser console nor the DOM, and therein lies the problem. It is slow to copy and paste back and forth from the browser into the terminal. But this feels like a great opportunity for a browser extension.

But, my takeaway is that I could have fixed this exact problem in five minutes if I had written the JS code. But, I have not looked at anything other than glancing at the diffs that fly by in the session.

Workaccount2

I used Claude to create a piece of software that can render Gerber files (essentially vector files used in electronics manufacturing), overlay another Gerber file on top of it with exact alignment, and then provide a GUI for manually highlighting components. The program then calculates the centroid and the rotation, and prompts for a component designator. This all then gets stored in a properly formatted Place file, which is used for automating assembly.

The day before this I was quoted $1000/yr/user for software that could do this for us.

xur17

I used Claude + aider to create a webhook forwarding service (receives webhooks, forwards them to multiple destinations, and handles retries, errors, etc). Included a web interface for creating new endpoints and monitoring errors, and made it fully dockerized and ready to be deployed into a kubernetes cluster.

Took me like 3 hours and a few dollars in api credits to build something that would have taken me multiple days on my own. Since it's just an internal tool for our dev environments that already does what we need, I don't care that much about maintainability (especially if it takes 3 hours to build from scratch). That said, the code is reasonably usable (there was one small thing it got stuck on at the end that I assisted with).
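The core fan-out-with-retries part is small; a bare-bones sketch with placeholder destination URLs (the real tool also had the web UI, persistence, and monitoring):

  import asyncio

  import httpx
  from fastapi import FastAPI, Request

  app = FastAPI()
  DESTINATIONS = ["https://dev1.internal/hook", "https://dev2.internal/hook"]  # placeholders
  MAX_ATTEMPTS = 3

  async def forward(client: httpx.AsyncClient, url: str, body: bytes, content_type: str):
      """POST the payload onward, retrying with exponential backoff on failure."""
      for attempt in range(1, MAX_ATTEMPTS + 1):
          try:
              resp = await client.post(url, content=body,
                                       headers={"content-type": content_type}, timeout=10)
              resp.raise_for_status()
              return
          except httpx.HTTPError:
              if attempt == MAX_ATTEMPTS:
                  print(f"giving up on {url}")  # the real tool records the failure
              else:
                  await asyncio.sleep(2 ** attempt)

  @app.post("/hooks/incoming")
  async def incoming(request: Request):
      body = await request.body()
      content_type = request.headers.get("content-type", "application/json")
      async with httpx.AsyncClient() as client:
          await asyncio.gather(*(forward(client, url, body, content_type)
                                 for url in DESTINATIONS))
      return {"forwarded_to": len(DESTINATIONS)}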

xrd

Claude Code and Aider, or Aider using Claude Sonnet as the model? If you are using both Claude Code and Aider, I would love to know why.

brulard

I wanted a whiteboard app - a simple, infinite, zoomable, scrollable SVG-based area with custom shapes and elements. It just needed prompts like "add zoom", "add a rect shape", "add control points for resize", "add text shape", "make position attributes able to reference another object's properties, + basic arithmetic" (for example to make a line connect 2 objects, or have the size of one mirror the size of another, etc.). It made all of these with so little effort from me. I would never undertake such a project without an LLM.

axkdev

> I'm excited to understand more about how aider creates a map of the repository and effectively compresses the information in ways similar to how I keep track of high level ideas.

Run /map. It's nothing fancy, really.

xrd

Ok, I meant to say, I know about /map, but there is so much to explore here with an open source coding tool and LSP. It feels like you could build some very powerful ways to represent connections using a lot of cool tools that are in use inside our editors. More structured and more relevant than the poor representations I can keep inside my crappy head.

aaalll

There is a cool tool called Lovable that is basically targeted at that exact thing: having designers and product managers get something that kinda works.

tomjuggler

I have been using AI coding tools since the first GitHub Copilot. One thing has not changed: garbage in = garbage out.

If you know what you are doing the tools can improve output a lot - but while you might get on for a little bit without that experience guiding you, eventually AI will code you into a corner if it's not guided right.

I saw mention of kids learning to code with AI and I have to say, that's great, but only if they are just doing it for fun.

Anyone who is thinking of a career in generating code for a living should first and foremost focus on understanding the principles. The best way to do that is still writing your own programs by hand.

_diyar

The AUC of learning to program remains the same before and after AI, both for hobbyist and professionals. What changes is the slope of the on-ramp – AI makes it easier to get started, achieve first wins and learn the table-stakes for your field.

SamPatt

For me, it isn't just about complexity, but about customization.

I can have the LLMs build me custom bash scripts or make me my own Obsidian plugins.

They're all little cogs in my own workflow. None of these individual components are complex, but putting all of them together would have taken me ages previously.

Now I can just drop all of them into the conversation and ask it for a new script that works with them to do X.

Here's an example where I built a custom screenshot hosting tool for my blog:

https://sampatt.com/blog/2025-02-11-jsDelivr

hn_throwaway_99

I feel like I'm straddling the fence a bit on this topic.

When it comes to some personal projects, I've written a slew of them since AI coding tools got quite good. I'm primarily a backend developer, and while I've done quite a bit of frontend dev, I'm relatively slow at it, and I'm especially slow at CSS. AI has completely removed this bottleneck for me. Oftentimes if I'm coding up a frontend, I'll literally just say "Ok, now make it pretty with a modern-looking UI", and it does a decent job, and anything I need to fix is an understandable change that I can do quickly. So now I'll whip up nice little "personal tool" apps in literally like 30 mins, where in the past I would have spent 30 mins just trying to figure out some CSS voodoo about why it's so hard to get a button centered.

But in my job, where I also use AI relatively frequently, it is great for helping to learn new things, but when I've tried to use it for things like large-scale, repo-wide refactorings it's usually been a bit of a PITA - reviewing all of the code and fixing its somewhat subtle mistakes often feels like it's slower than doing it myself.

In that sense, I think it's reasonable to consider AI like a competent junior developer (albeit at "junior developer" level across every technology ever invented). I can give a junior developer a "targeted" task, and they usually do a good job, even if I have to fix something here or there. But when I give them larger tasks that cross many components or areas of concern, that's often where they struggle mightily as well.

mindwok

This article is strangely timed for me. About a year ago a company reached out to me about doing an ERP migration. I turned it away because I thought it’d just be way, way too much work.

This weekend, I called my colleague and asked him to call them back and see if they’re still trying to migrate. AI definitely has changed my calculus around what I can take on.

saltcod

Found the same thing. I was toying with a Discord bot a few weeks ago that involved setting up and running a node server, deployed to Fly via docker. A bunch of stuff a bit out of my wheelhouse. All of it turned out to be totally straightforward with LLM assistance.

Thinking bigger is a practice to hone.

autocole

Can you describe how you used LLMs for deployment? I'm actually doing this exact thing but I'm feeling annoyed by DevOps and codebase setup work. I wonder if I'm just being too particular about which tools I'm using rather than just going with the flow

lordnacho

This is much like other advances in computing.

Being able to write code that compiled into assembly, instead of directly writing assembly, meant you could do more. Which soon meant you had to do more, because now everyone was expecting it.

The internet meant you could take advantage of open source to build more complex software. Now, you have to.

Cloud meant you could orchestrate complicated apps. Now you can't not know how it works.

LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"

And they won't be wrong: if you can get the lower-level components of a system done easily by an LLM, you need to be looking at a higher level.
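That kind of REST consumer is exactly the rote code in question; a toy example with made-up endpoint and field names:

  from dataclasses import dataclass

  import requests

  @dataclass
  class Widget:
      id: int
      name: str

  class WidgetClient:
      """Thin typed wrapper around a hypothetical /widgets endpoint."""

      def __init__(self, base_url: str, token: str):
          self.base_url = base_url.rstrip("/")
          self.session = requests.Session()
          self.session.headers["Authorization"] = f"Bearer {token}"

      def list_widgets(self) -> list[Widget]:
          resp = self.session.get(f"{self.base_url}/widgets", timeout=10)
          resp.raise_for_status()
          return [Widget(**item) for item in resp.json()]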

8373746439

> LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"

Not everyone wants to be a "prompt engineer", or let their skills rust and be replaced with a dependency on a proprietary service. Not to mention the potentially detrimental cognitive effects of relegating all your thinking to LLMs in the long term.

JackMorgan

I recall hearing a lot of assembly engineers not wanting to let their skills rust either. They didn't want to be a "4th gen engineer" and have their skills replaced by proprietary compilers.

Same with folks who were used to ftp directly into prod and used folders instead of source control.

Look, I get it, it's frustrating to be really good at current tech and feel like the rug is getting pulled. I've been through a few cycles of all new shiny tools. It's always been better for me to embrace the new with a cheerful attitude. Being grumpy just makes people sour and leave the industry in a few years.

anon373839

This is a different proposition, really. It’s one thing to move up the layers of abstraction in code. It’s quite another thing to delegate authoring code altogether to a fallible statistical model.

The former puts you in command of more machinery, but the tools are dependable. The latter requires you to stay sharp at your current level, else you won’t be able to spot the problems.

Although… I would argue that in the former case you should learn assembly at least once, so that your computer doesn’t seem like a magic box.

crent

I agree that not everyone wants to be. I think OP's point, though, is that the market will make "not being a prompt engineer" a niche, like being a COBOL programmer in 2025.

I'm not sure I entirely agree, but I do think the paradigm is shifting enough that I feel bad for my coworkers who intentionally don't use AI. I can see a new skill developing in myself that augments my ability to perform, while they are still taking ages doing the same old thing. Frankly, now is the sweet spot, because expectations haven't risen enough to meet the output, so you can either squeeze out time to tackle that tech debt or find time to kick up your feet until the industry catches up.

aithrowawaycomm

Even the example in the post seemed closely related to other advances in consumer-level computing:

  I re-created this system using an RPi5 compute module and a $20 camera sensor plugged into it. Within two hours I wrote my first machine learning [application], using the AI to assist me and got the camera on a RPi board to read levels of wine in wine bottles on my test rig. The original project took me six weeks solid!
Undoubtedly this would have taken longer without AI. But I imagine the Raspberry Pi + camera was easier to set up out-of-the-box than whatever they used 14 years ago, and it's definitely easier to set up a paint-by-numbers ML system in Python.
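To illustrate the paint-by-numbers point: even a crude non-ML stand-in for the bottle-level reader is only a couple dozen lines of OpenCV (camera index, crop region, and threshold are all guesses here, not what the article's author built):

  import cv2
  import numpy as np

  cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0
  ok, frame = cap.read()
  cap.release()
  if not ok:
      raise SystemExit("no frame from camera")

  # Crop a vertical strip down the middle of the bottle (made-up coordinates).
  x0, x1, y0, y1 = 300, 340, 100, 500
  strip = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)

  # Dark rows = wine, bright rows = empty glass; find the first dark row.
  row_brightness = strip.mean(axis=1)
  dark_rows = np.where(row_brightness < 80)[0]
  if dark_rows.size == 0:
      print("bottle looks empty")
  else:
      fill = 1 - dark_rows[0] / strip.shape[0]
      print(f"estimated fill level: {fill:.0%}")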