
Protect your consciousness from AI

38 comments · November 9, 2025

Chance-Device

I was expecting something about how to protect your consciousness from (or during) AI use, but I got a short 200-word note rehashing common sentiments about AI. I guess it’s not wrong, it’s just not very interesting.

andy99

Yeah, I found it slightly ironic that an argument against using AI is made as an empty social-media-style post. Ironically, AI could have written a better one.

dingnuts

it'd be worse, just longer

candiddevmike

I still feel "weird" trying to reason about GenAI content or looking at GenAI pictures sometimes. Some of it is so off-putting in a my-brain-struggles-to-make-sense-of-it way.

AstroBen

To me the answer was fairly obvious—default to using your own thinking first

zwnow

It is very interesting because it tackles things people love to forget when using AI. A little over a decade ago it was a scandal how big tech companies were using people's data; now people knowingly give it to them via all kinds of bullshit apps. So you have to repeat the obvious over and over, and even then it won't click for many.

roxolotl

So wild to think Cambridge Analytica was a scandal worthy of congressional hearings. LLMs are personalized persuasion on steroids.

nvllsvm

> In the professional world, I see software developers blindly copying and pasting code suggestions from LLM providers without testing it, or understanding it.

When you see that, call them out on it. Not understanding copy+pasted code is one thing, but not testing it is a whole other level of garbage.

simonw

Seriously. The job of a software developer is to deliver working software. If the software doesn't work that's a dereliction of duty.

thundergolfer

This isn't a new problem at all. If you only started noticing it as a problem with "AI", as the author apparently did, then you were blind to how our mediums and tools have always shaped us, alienated us from the world and each other, and made us dependent on mechanism. This has happened hugely already with television.

You can go back and read McLuhan, he's great, but a recent and more approachable book on this is _God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning_.

Way back in 1969 the utopian vision of technology put humans at the centre. The Whole Earth Catalogue's slogan was “access to tools.” Just _tools_. That same year, technology put a man on the moon.

Unfortunately, if you realize the extent and the history of the problem, you see we're so far gone, miles away from getting a grip.

mrloba

It is everywhere. Even on birthday invites for my kids there's nonsense from an LLM. At work I review PRs with code that doesn't even run. Doing research is harder than ever as more and more references are completely made up.

We're too lazy and too obsessed with getting ahead to use this technology responsibly in my opinion.

simonw

How do those PR authors react when you point out that the code doesn't run and block the merge? Any signs of them improving their work ethic over time based on your feedback?

mrloba

Well they then use more AI to try to fix the PR, which leads to many more rounds of the same. It's like I'm coding using an AI except through a real person who mangles the prompt. I've had some success as well in talking people out of it, but it feels like I'm gonna lose eventually

i_love_retros

Of course they are providing that feedback! No one gives a shit though. Our industry and society at large have basically given approval for people to submit AI slop. Managers and executives consider it working smart and efficiently. So telling someone "this code doesn't run" results in more slop in an attempt to fix it. Eventually it will run and get merged, and the code base gets even shittier. There's only so much gatekeeping and quality control the few people who actually give a damn can be expected to do when swimming against the tide. Mental health is a thing. And to quote Dan Ashcroft, the idiots are winning.

simonw

Do managers genuinely not care if the code they are paying to have written works or not?

dingnuts

How do I provide feedback to the asshole that published a fake vibe coded library on GitHub and wasted half an hour of my life this weekend because I was duped into thinking it was real until I tried the examples?

This stuff has poisoned the whole ecosystem and IT'S YOUR FAULT PERSONALLY, SIMON.

simonw

You could name the library so other people who search for it find your review.

whiplash451

Savvy researchers/engineers have an opportunity to arbitrage here: working without LLMs on something hard leads to better outcomes than what your "AI-enabled" peers achieve (after all, Karpathy could not resort to any AI to build nano-chat). It's a sad state of affairs, but it really is there.

xanderlewis

> too obsessed with getting ahead

or perhaps with others (potentially) getting ahead of us.

lbrito

Or management outright mandating the use of LLM.

noir_lord

I'm far too lazy to be able to responsibly use a machine that can give me semi-sensible answers.

I saw the danger of it as a form of learned helplessness down the line and swore off using LLMs for that reason. That, and I feel no need to delegate my thinking to a machine that can't think, and I like thinking.

Same reason snacks are upstairs in the kitchen and not in my office on the ground floor - I'm too lazy and if they are easily available I'll eat them.

saaaaaam

Most people are lazy and stupid. Lazy stupid people use naked LLM outputs. Their brains were already rotting.

Don’t be lazy and stupid.

tyleo

I’ve found the hesitation to shovel text into AI weird given the _lack_ of hesitation to shovel text into search engines.

Either case is weird in absolute terms, but in relative terms it all goes to the same place. The human-like nature of AI seems to make people realize this more.

subquantum2

Agree that this LLM stuff 'dumbs down', or to use a better phrase, 'changes the human skill set'. Your real skills are reduced over time. The LLM is like a broken mirror of your own skills, and because the mirror is biased, at some point you stop learning, or you learn the biased world. It becomes brain rot; you become the LLM's pet. You cannot function without your owner...

On the negative side: the LLM uses fancy language to make disinfo convincing. The danger is that you do not see this disinfo, and it will shape your consciousness; that is the trap.

However, if you are lucky, you learn to distrust the LLMs; it's not an educated AI.

On the positive side: you can still use it as a search engine or to get some ideas, but you should continue on your own to increase your creative skills.

Your consciousness/attention is stolen on a daily basis to keep you occupied doing stuff. However, this was already ongoing before LLMs, before the computer age.

I think at some point your consciousness will detect this brain rot and evolve beyond it.

Our bodies have evolved to copy traits from others from childhood on; moreover, it's in our DNA itself, which was made to copy. So the LLM is not any different, but you should be aware of which traits to copy.

tmaly

I am perfectly okay with offloading low value mental work to an LLM just to recoup time to spend with my family. The modern world has way too many demands that just suck up time.

isodev

When you meet a model that actually saves you time (instead of shifting the work to something else), write about it

ChrisArchitect

Thought this would be something more about being AI-pilled... the increasing effect of contact with AI systems and the content they create, which leads to a mindset we're seeing more of where one constantly questions everything about their reality. Protect your consciousness from that.

Bukhmanizer

Culturally we are going through a phase where thought is getting massively devalued. It’s all well and good to say “I’m using AI responsibly”, but it won’t matter if at the end of the day no one values your opinion over whatever ChatGPT spewed out.

satisfice

I always thought hacker culture was independent and skeptical. Somehow AI has turned a lot of them into drooling fanboys.

It’s embarrassing. Don’t rely on AI, guys. Have pride in yourselves.

simonw

I thought a big part of hacker culture involved taking interest in new technology, exploring the edges of it (and beyond those edges) and figuring out what works and what breaks - and how to break it.

I don't understand why many software engineers are so resistant to exploring AI. It's fascinating!

xanderlewis

That's because a lot of commenters here are not hackers in any real sense; rather, they're software engineers. Perhaps this hasn't always been the case.