
With AI you need to think bigger

77 comments · March 9, 2025

haswell

I recently discovered that some of the Raspberry Pi models support the Linux kernel's "Gadget Mode". This allows you to configure the Pi to appear as some type of device when plugged into a USB port, e.g. a mass storage device/USB stick, a network card, etc. Very nifty for turning a Pi Zero into various kinds of utilities.

When I realized this was possible, I wanted to set up a project that would allow me to use the Pi as a bridge from my document scanner (which can scan to a USB port) to an SMB share on my network that acts as the ingest point for a Paperless-NGX instance.

Scanner -> USB "drive" -> some of my code running on the Pi -> the SMB share -> Paperless.

I described my scenario in a reasonable degree of detail to Claude and asked it to write the code to glue all of this together. What it produced didn't work, but was close enough that I only needed to tweak a few things.

While none of this was particularly complex, it's a bit obscure, and it would easily have taken a few days of tinkering the way I've done things for most of my life. Instead it took a few hours, and I finished a project.
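A minimal sketch of what such a glue layer could look like (hypothetical, not the actual project code): it assumes the Pi exports a FAT image via the g_mass_storage module and that the Paperless consume share is already mounted locally; all paths, filenames, and the poll interval are made up for illustration.

  #!/usr/bin/env python3
  # Hypothetical sketch only, not the project's actual code. It assumes the
  # Pi was configured as a USB mass storage gadget, e.g. with:
  #   modprobe g_mass_storage file=/piusb.bin stall=0 removable=1
  # and that the Paperless-NGX consume SMB share is already mounted at
  # /mnt/paperless-consume. All paths and timings are illustrative.
  import shutil
  import subprocess
  import time
  from pathlib import Path

  BACKING_IMAGE = Path("/piusb.bin")            # FAT image the scanner writes to over USB
  LOCAL_MOUNT = Path("/mnt/usb_share")          # where the Pi loop-mounts that image
  CONSUME_DIR = Path("/mnt/paperless-consume")  # SMB share Paperless ingests from
  POLL_SECONDS = 10

  def sync_new_scans() -> None:
      # Mount the image read-only on each pass: the host's view of the FAT
      # filesystem does not refresh while the gadget side keeps writing.
      subprocess.run(
          ["mount", "-o", "loop,ro", str(BACKING_IMAGE), str(LOCAL_MOUNT)],
          check=True,
      )
      try:
          for scan in LOCAL_MOUNT.glob("*.pdf"):
              target = CONSUME_DIR / scan.name
              if not target.exists():  # copy each scan exactly once
                  shutil.copy2(scan, target)
      finally:
          subprocess.run(["umount", str(LOCAL_MOUNT)], check=False)

  if __name__ == "__main__":
      # Needs root for mount/umount; run as a systemd service in practice.
      while True:
          sync_new_scans()
          time.sleep(POLL_SECONDS)

A real version would also have to handle partially written files and non-PDF scan formats, but a mount/copy/unmount loop like this is the core of the bridge.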

I, too, have started to think differently about the projects I take on. Projects that were previously relegated to "I should do that some day when I actually have time to dive deeper" now feel a lot more realistic.

What will truly change the game for me is when it's reasonable to run GPT-4o level models locally.

liotier

Please, I would be delighted if you published that code... Just yesterday I was thinking that a two-faced Samba share/USB Mass Storage dongle Pi would save me a lot of shuttling audio samples between my desktop and my Akai MPC.

haswell

I've been thinking about writing up a blog post about it. Might have to do a Show HN when time allows.

This guide was a huge help: https://github.com/thagrol/Guides/blob/main/mass-storage-gad...

thierrydamiba

Please do. I think this is a great example of how AI can be helpful.

We see so many stories about how terrible AI coding is. We need more practical stories of how it can help.

teeray

I was also writing a SANE-to-Paperless bridge to run on an RPi recently, but ran into issues getting it to detect my ix500. Would love to see the code!

genewitch

Well, R1 is runnable locally for under $2500, so I guess you could pool money and share the cost with other people who think they need that much power, rather than settling for a quantized model with fewer parameters (or a distill).

downboots

Would you have paid someone to do it rather than solving the challenge yourself?

klabb3

As a mostly LLM-skeptic, I reluctantly agree this is something AI actually does well. When approaching unfamiliar territory, LLMs (1) use simple language (an improvement over academia, but also over much intentionally obfuscated professional literature), (2) use the right abstraction (they seem good at "zooming out" to the big picture of things), and (3) let you move laterally between topics and "zoom in" quickly. Another way of putting it is "picking the brain" of an expert in order to build a rough mental model.

Its downsides, such as hallucinations and lack of reasoning (yeah), aren’t very problematic here. Once you’re familiar enough, you can switch to better tools and know what to look for.

mdp2021

My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known (e.g. a standard task in some technology used by many), and terrible where the problem has not been tackled much by the public.

About language (point (1)), I get a lot of "hypnotism from salesmen to non-technical managers" and roundabout comments (e.g. "Which wire should I cut? I have a red one and a blue one" // "It is mission critical to cut the right wire; in order to decide which wire to cut, we must first get acquainted with the idea that cutting the wrong wire will make the device explode..." // "Yes, which one?" // "Cutting the wrong one can have critical consequences...")

klabb3

> and terrible where the problem has not been tackled much by the public

Very much so (I should have added this as a downside in the original comment). Before I even ask a question I ask myself "does it have training data on this?". Also, having a bad answer is only one failure mode. More commonly, I find that it drifts towards the "center of gravity", i.e. the mainstream or most popular school of thought, which is like talking to someone with a strong status-quo bias. However, before you've familiarized yourself with a new domain, the "current state of things" is a pretty good bargain to learn fast, at least for my brain.

marcosdumay

> My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known

Yes, that's a necessary condition. If there isn't some well known solution, LLMs won't give you anything useful.

The point, though, is that the solution was not well known to the GP. That's where LLMs shine: they "understand" what you are trying to say and give you the answer you need, even when you don't know the applicable jargon.

keeptrying

Yes. LLMs are the perfect learning assistant.

You can now do literally anything. Literally.

Going to take a while for everyone to figure this out, but they will, given time.

whartung

> You can now do literally anything. Literally.

In theory.

In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.

And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly in my case) to get something accomplished, and after hours over days of work, the projects don’t work. I shelve them and treat them like cryogenically frozen heads: “Sometime in the future I’ll try again.”

It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.

But higher than that, no, I’ve not had success with it.

It’s also nice as a general purpose wizard code generator. But that’s just rote work.

YMMV

tqwhite

First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.

Maybe you are running into the problem I did early on. I used to tell it what I wanted; now I tell it what I want done. I use Claude Code and have it do its things one at a time, and for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since I've been more procedural with it, I get pretty good results.
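To make that concrete, a hypothetical goal-plus-steps prompt in this style (the task itself is invented for illustration) might look like:

  Goal: add retry logic to the flaky S3 upload helper.
  Step 1: wrap the existing upload call in a loop with exponential backoff.
  Step 2: cap retries at five and log each failure with the attempt number.
  Step 3: re-raise the original exception if all attempts fail.
  Do step 1 only, show me the change, and wait before moving to step 2.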

I hope that helps.

keeptrying

You just aren’t delving deep enough.

For every problem that stops you, ask the LLM. With enough context it’ll give you at least a mediocre way to get around your problem.

It’s still a lot of hard work. But the only person that can stop you is you. (Which it looks like you’ve done.)

List the reasons you’ve stopped below and I’ll give you prompts to get around them.

ch4s3

They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.

sunami-ai

LLMs don't reason the way we do, but there are similarities at the cognitive pre-conscious level.

I made a challenge to various lawyers and the Stanford Codex (no one has taken the bait yet) to find critical mistakes in the "reasoning" of our Legal AI. One former attorney general told us that he likes how it balances the intent of the law. Sample output (scroll and click on the stats and the donuts on the second slide):

Samples: https://labs.sunami.ai/feed

I built the AI using an inference-time scaling approach that I evolved over a year's time. It is based on Llama for now, but that could be replaced with any major foundation model.

Presentation: https://prezi.com/view/g2CZCqnn56NAKKbyO3P5/

8-minute video: https://www.youtube.com/watch?v=3rib4gU1HW8&t=233s

info sunami ai

elicksaur

.


sunami-ai

The sensitivity can be turned up or down; that's why we are asking for input. If you're talking about the Disney EULA, it has the context that it is a browsewrap agreement. The setting for material omission is very greedy right now, and we could find a happy medium.

sunami-ai

A former attorney general is taking it for a spin, and has said great things about it so far. One of the top 100 lawyers in the US. HN has turned into a pit of hate. WTF is all this hate for? People just seem really angry at AI. JFC, grow up.

wewewedxfgdf

[flagged]

mdp2021

> invested

Very probably not somebody who blindly picked a position; more likely somebody who is quite wary of the downsides of the current state of the technology, as already expressed explicitly in the post:

> Its downsides, such as hallucinations and lack of reasoning

shitloadofbooks

I know you’re being disparaging by using language like “bake into their identity”, but everyone is “something” about “something”.

I’m “indifferent” about “roller coasters” and “passionate” about “board games”.

To answer the question (though I’m not OP): I’m skeptical about LLMs. “These words are often near each other” vastly exceeds my expectations at being fairly convincing that the machine “knows” something, but it’s dangerously confident when it’s hilariously incorrect.

Whatever we call the next technological leap where there’s actual knowledge (not just “word statistics”), I’ll be less skeptical about that.

fasbiner

Your framing is extrapolative and mendacious, adding what could charitably be called your interpersonal problems to a statement which is perfectly neutral and intended as an admission against general inclination in order to lend credibility to the observation that follows.

Someone uncharitable would say things about your cognitive abilities and character that are likely true but not useful.

layer8

They didn’t say that they were invested in it.

the13

Probably all the hype and bs.

simonw

I wrote something similar about this effect almost two years ago: https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen... "AI-enhanced development makes me more ambitious with my projects"

With an extra 23 months of experience under my belt since then, I'm comfortable saying that the effect has stayed steady for me over time, and even increased a bit.

fallinditch

Around that time you highlighted the threat of prompt injection attacks on AI assistants. Have you also been able to make progress in this area?

shmoogy

100% agree with this. Sometimes I feel I'm becoming too reliant on it, but then I step back and see how much more ambitious the projects I take on are, and how quickly I still finish them, thanks to it.


CosmicShadow

The exciting thing about AI is that it lets you go back to any project or idea you've ever had; they are now possibly doable, even if they seemed impossible or too much work back then. Some of the key missing pieces have become trivial, and even if you don't know how to do something, AI will help you figure it out, or just let you come up with a solution that may seem dirty but actually works, whereas before it was impossible without expert systems and grinding out so much code. It's opened so many doors. It's hard to remember the ideas you wrote off before; there are so many blind spots that are now opportunities.

wruza

It doesn’t do that for things rarely done before though. And it’s poisoned with opinions from the internet. E.g. you can convince it that we have to remove bullshit layers from programming and make it straightforward. It will even print a few pages of vague bullet points about it, if not yet. But when you ask it to code, it will dump a react form.

I’m not trying to invalidate experiences itt, cause I have a similar one. But it feels futile as we are stuck with our pre-AI bloated and convoluted ways of doing things^W^W making lots of money and securing jobs by writing crap nobody understands why, and there’s no way to undo this or to teach AI to generalize.

I think this novelty is just blindness to how bad things are in the areas you know little about. For example, you may think it solves the job when you ask it to create a button and a route. And it does. But the job wasn’t to create a route, load and validate data, and render it on screen across a few pages and files. The job was to take a query and get it on screen in a couple of lines. Yes, it helps write pages of our nonsense, but it’s still nonsense. It works, but it feels like we have fooled ourselves twice now. It also feels like people will soon create AI playbooks for structuring and layering their output, because the ability to code review it will deteriorate in just a few years, with fewer seniors and many more barely-coders getting into it now.

saltcod

Found the same thing. I was toying with a Discord bot a few weeks ago that involved setting up and running a node server, deployed to Fly via docker. A bunch of stuff a bit out of my wheelhouse. All of it turned out to be totally straightforward with LLM assistance.

Thinking bigger is a practice to hone.

autocole

Can you describe how you used LLMs for deployment? I'm actually doing this exact thing, but I'm feeling annoyed by the DevOps and codebase setup work. I wonder if I'm just being too particular about which tools I'm using rather than just going with the flow.

mindwok

This article is strangely timed for me. About a year ago a company reached out to me about doing an ERP migration. I turned it away because I thought it’d just be way, way too much work.

This weekend, I called my colleague and asked him to call them back and see if they’re still trying to migrate. AI definitely has changed my calculus around what I can take on.

pmarreck

Insufficient Storage

The method could not be performed on the resource because the server is unable to store the representation needed to successfully complete the request. There is insufficient free space left in your storage allocation.

Additionally, a 507 Insufficient Storage error was encountered while trying to use an ErrorDocument to handle the request.

boznz

Bugger! More than two visitors to my web site and it falls apart. I might fork out the $10 for the better CPU and more memory option before I post something in future.

SamPatt

For me, it isn't just about complexity, but about customization.

I can have the LLMs build me custom bash scripts or make me my own Obsidian plugins.

They're all little cogs in my own workflow. None of these individual components are complex, but putting all of them together would have taken me ages previously.

Now I can just drop all of them into the conversation and ask it for a new script that works with them to do X.

Here's an example where I built a custom screenshot hosting tool for my blog:

https://sampatt.com/blog/2025-02-11-jsDelivr

101008

> I am now at a real impasse, towards the end of my career and knowing I could happily start it all again with a new insight and much bigger visions for what I could take on. It feels like winning the lottery two weeks before you die

I envy this optimism. I am not the opposite (I'm a senior engineer with more than 15 years of experience), but I am scared about my future. I invested so much time in learning concepts and theory and getting a Master's degree, and in a few years all of my knowledge could be useless in the market.

xp84

I certainly feel uneasy. To whatever extent “AI” fulfills its promise of enabling regular people to get computers to do exactly what needs doing, that’s the extent that the “priest class” like me who knows how to decide what’s feasible and design a good system to do it, will be made redundant. I guess I hope it moves slowly enough that I can get enough years in on easy mode (this current stage where technical people can easily 5-10x their previous output by leveraging the AI tools ourselves).

But if the advancement moves too slowly, we will have some serious pipeline problems filling senior engineer positions, caused by the destruction that AI (combined with the end of ZIRP) has caused to job prospects for entry level software engineers.

tqwhite

I could not disagree more. Those concepts, theories, and all that knowledge are what make it so powerful. I feel successful with AI because I know what to do (I’m older than you by a lot). I talk to younger people and they don’t know how to think about a big system or have the ability to communicate their strategies. You do. I’m 72 and was bored. Now that Claude will do the drudgery, I am inspired.

101008

I understand your point of view and I do agree that with the current state of affairs I am kind of OK. It's useful for me, and I am still needed.

But seeing the progress and adoption, I wonder what will happen when that valuable skill (how to think about a big system, etc.) is also replicated by AI. And then, poof.

boznz

IT is never static. I have had to take several forks in my career, with languages and technologies often leading to dead ends and re-training. It is amazing how much of what you learn doing one thing directly translates to another, and it can often keep you from developing a specific/narrow mindset too.

Having an LLM next to you means there is never a stupid question. I ask the AI the same stupid questions repeatedly until I get it. That is not really possible with a smart human; even if they have the patience, you are often afraid to look dumb in their eyes.

101008

I'm worried about being replaced by an LLM. If it keeps evolving to the point where a CTO can ask an LLM to do something and deploy it, why would he pay for a team of engineers?

Forking to different technologies and languages is one thing (I've been there; I started with PHP and haven't touched it for almost a decade now), but being replaced by a new tech is something different. I don't see how I could pivot to still be useful.

wruza

I see it more as “if an LLM can do that, why would I need an employer?”

This coin has two sides. If a CTO can live without you, you can live without an expensive buffer between you and your clients. He’s now just a guy like you, and adds little value compared to everyone else.

doug_durham

In what reality can a CTO talk to an LLM and deploy the result? It takes engineers to understand the requirements and to iterate with the CTO. The CTO has better things to do with their time than wrestle with an LLM all day.

curious_cat_163

Yes -- LLMs can write a lot of code, and after some review it can also go to prod -- but I have not seen nearly enough applications of LLMs in the post-prod phase, like dealing with evolving requirements, ensuring security as zero-days get discovered, etc.

Would love to hear folks' experience around "managing" all this new code.

lordnacho

This is much like other advances in computing.

Being able to write code that compiled into assembly, instead of directly writing assembly, meant you could do more. Which soon meant you had to do more, because now everyone was expecting it.

The internet meant you could take advantage of open source to build more complex software. Now, you have to.

Cloud meant you could orchestrate complicated apps. Now you can't not know how it works.

LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"

And they won't be wrong, if you can get the lower level components of a system done easily by LLM, you need to be looking at a higher level.

8373746439

> LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"

Not everyone wants to be a "prompt engineer", or let their skills rust and be replaced with a dependency on a proprietary service. Not to mention the potentially detrimental cognitive effects of relegating all your thinking to LLMs in the long term.

crent

I agree that not everyone wants to be. I think OP's point, though, is that the market will make “not being a prompt engineer” a niche, like being a COBOL programmer in 2025.

I’m not sure I entirely agree, but I do think the paradigm is shifting enough that I feel bad for my coworkers who intentionally don’t use AI. I can see a new skill developing in myself that augments my ability to perform, while they are still taking ages doing the same old thing. Frankly, now is the sweet spot, because expectations haven’t risen enough to meet the output, so you can either squeeze out time to tackle that tech debt or find time to kick up your feet until the industry catches up.

TooTony

I use Cursor to write Python programs for tasks in my daily work that need to be solved with programming. It's very convenient, and I no longer need to ask the company's programmers for help. Large language models are truly revolutionary productivity tools.