Gaslight-driven development
86 comments
·July 17, 2025
Bluestein
> "513: Your Coding Assistant Is Wrong"
You made me chuckle. Well played. Great stuff :)
May I, simply, also suggest:
HTTP 407 Hallucination
Meaning: The server understands the request but believes it to be incongruous with reality.-
snthpy
+1 for "513: Your Coding Assistant Is Wrong"
If we have 418, why not 513?
hi_hi
I humbly request, if you are going to do this, please, please...use the 418 response. It deserves wider adoption :-)
Dilettante_
Bit of a pet peeve: 418 is clearly defined as "I am a teapot", not "whatever I want it to mean".
Please do not use it for anything other than its specified purpose, even if it is a joke.
hi_hi
Is one being a little precious about one being a teapot!?
defrost
Core identity panic response?
latentsea
I think it's a good representation of a hallucination.
Bluestein
(on that note, I'm putting the kettle on :)
hamish-b
I like seeing which users are currently viewing the same page, but man, the constant jostling of users coming and going made it hard to read the post.
seanlinehan
I have this little bookmarklet in my bookmarks bar that I use constantly. It removes all fixed or sticky elements on the page and re-enables y-overflow if it was disabled:
javascript: (function () {document.querySelectorAll("body *").forEach(function(node){["fixed","sticky"].includes(getComputedStyle(node).position)&&node.parentNode.removeChild(node)});var htmlNode=document.querySelector("html");htmlNode.style.overflow="visible",htmlNode.style["overflow-x"]="visible",htmlNode.style["overflow-y"]="visible";var bodyNode=document.querySelector("body");bodyNode.style.overflow="visible",bodyNode.style["overflow-x"]="visible",bodyNode.style["overflow-y"]="visible";document.querySelectorAll('.tp-modal-open').forEach(function(node){node.classList.remove('tp-modal-open');});}())
JimDabell
They have been called “dickbars” before [0].
> Kill-sticky, a bookmarklet to remove sticky elements and restore scrolling (174 comments)
— https://news.ycombinator.com/item?id=32998091
[0] https://daringfireball.net/linked/2017/06/27/mcdiarmid-stick...
zoom6628
Huge fan of killsticky and using it everywhere!
consumer451
Same here. Right-click the page and choose Inspect (or Inspect Element). Click the Console tab, paste this code, and press Enter:
document.getElementById("presence")?.remove();
If you want to know why this is happening in your brain, it's likely a prey/predator identification thing. I would like to think that being so distracted by this just means I have excellent survival instincts :)
theendisney
Can just right click remove node.
consumer451
I thought my instructions would work universally, across all desktop browsers. I have also been known to overthink things.
HexDecOctBin
Reminded me so much of a game called Chess Royale that I used to play, the avatars and the flags (screenshot [1]). It was really good too; and then Ubisoft being Ubisoft, they killed it even though the game had bots and could have been made single-player.
[1]: https://game-guide.fr/wp-content/uploads/2020/02/Might-and-M...
krackers
isn't this the page that used to have cursors everywhere in the background? I think the distracting design is some intentional running joke at this point
YesBox
I tried uBlock's element zapper and ended up playing a furious game of whac-a-mole :D
paulmooreparks
Same here. I don't have the time or patience to hack the page like the sibling comments suggest. There are more articles on the web than I will ever be able to consume in my lifetime, so I just close the tab and move on when the UX is aggressively bad.
akst
I ended up using Safari's feature for hiding distracting content, which seemed to work nicely.
jaredcwhite
Sorry, we will reach the heat death of the universe before I alter a single line of code simply because some LLM somewhere extruded incorrect synthetic text. That is so bonkers, I feel offended I even need to point out how bonkers it is.
delifue
> for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
If a function can both insert and update, it should be called "put". Using "update" is misleading.
loloquwowndueo
Upsert?
theendisney
Let's just do all the variations and have the LLM guess it right the first time.
bigiain
Implement all of them, with slightly different edge cases that result in glaringly obvious RCE when two or three of them are misused in place of each other.
(New startup pitch: Our agentic AI scans your access and error logs, and automatically vibe codes new API endpoints for failed API calls and pushes them to production within seconds, all without expensive developers or human intervention! Please form an orderly queue with your termsheets and Angel Investment cheques.)
eggn00dles
put implies overwriting instead of updating.
upsert is for your insert/update.
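The distinction between update, put, and upsert in the comments above can be sketched against a toy in-memory store. The names and semantics here are illustrative assumptions, not Instant's actual tx API:

```javascript
// Toy in-memory store; the three verbs differ only in how they treat
// existing records. (Illustrative sketch, not Instant's actual API.)
const store = new Map();

// update: modify an existing entity; fails if it does not exist
function update(id, fields) {
  if (!store.has(id)) throw new Error(`no entity ${id}`);
  store.set(id, { ...store.get(id), ...fields });
}

// put: overwrite the whole record, discarding any previous fields
function put(id, record) {
  store.set(id, { ...record });
}

// upsert: merge into the existing record, or insert if missing
function upsert(id, fields) {
  store.set(id, { ...(store.get(id) ?? {}), ...fields });
}
```

Under these semantics, `put("x", {a: 1})` followed by `put("x", {b: 2})` leaves only `{b: 2}`, while replacing the second call with `upsert` would leave `{a: 1, b: 2}` — which is the overwriting-vs-updating distinction being drawn.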
abtinf
> We see the same at Instant: for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
Good. Think of all the dev hours that must’ve been wasted by humans who were confused by this too.
RandallBrown
If tx.create didn't exist, why would any hours be wasted by this?
tdstein
I don't agree with the thesis of this post. It begs the question of whether we have to do what computers want.
> Millions of people create accounts, confirm emails, ... not because they particularly want to or even need to.
These were design choices made by humans, not computers.
debarshri
Recently I had an interesting chat with my team around coding principles of the future.
I think the way people write code will no longer revolve around following SOLID principles, keeping cyclomatic complexity low, or whether your code is readable.
I think future coding principles will be about whether your agentic IDE can index the code well enough to become context aware, and whether it fits into the context window or not. They will be about the model you use and the code it can generate. We will index less on maintainability, as code becomes disposable and the rate of change increases dramatically. They will be about whether your vibed prompts match the code that's already been generated, to reach some accuracy or generate enough serendipity.
bjornsing
This feels like the beginning of a wonderful friendship between me and the LLMs. I work as a fractional CTO. One of the things that frustrates me is when my clients have various idiosyncratic naming conventions, e.g. there's a ”dev” and a ”prod” environment on AWS, but then a ”test” and ”production” environment in Expo. It just needlessly consumes brain cycles, especially when you're working with multiple clients. I guess it's the same for the LLMs, just on a massive scale.
In general I think it’s great whenever some weight / synapse strength bits can be reallocated from idiosyncratic API naming / behavior towards real semantics.
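One low-tech fix for the mismatch described above is a single canonical vocabulary with per-provider aliases folded into it. The alias table below is an assumption for illustration:

```javascript
// Map each provider's idiosyncratic environment names onto one
// canonical pair. The aliases are illustrative assumptions.
const ENV_ALIASES = {
  dev: "development",
  development: "development",
  test: "development", // Expo's "test" plays the role of AWS's "dev"
  prod: "production",
  production: "production",
};

function canonicalEnv(name) {
  const canonical = ENV_ALIASES[name.toLowerCase()];
  if (!canonical) throw new Error(`unknown environment name: ${name}`);
  return canonical;
}
```

Tooling can then compare `canonicalEnv(x)` values instead of raw names, so ”test” in one system and ”dev” in another stop costing brain cycles.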
PaulRobinson
As the old joke goes: there are two hard problems in computer science - cache invalidation, naming things and off-by-one errors
Naming things doesn’t get easier just because you bring an LLM to do it based on an incoherent stochastic process.
Have you asked why those environments have not been renamed to align? As a former CTO I’d see it immediately as a signal of poor communication, poor standards adoption, or both. It’s this low hanging stuff that you can fix relatively easily where you’re actually using that work to make the culture better and make people care more.
Don’t outsource things you should care about a lot. Naming things is something you shouldn’t be hand waving away to a model.
sfvisser
> Like it or not, we are already serving the machines.
The machines don’t give a shit, it’s the lawyers and bureaucrats you’re serving :)
Better or worse?
rexpop
In postmodern societies, reality itself is structured by simulation—"codes, models, and signs are the organizing forms of a new social order where simulation rules".
The bureaucratic and legal apparatus you invoke are themselves caught up in this regime. Their procedures, paperwork, and legitimacy rely on referents—the "models" and "simulacra" of governance, law, and knowledge—that no longer point back to any fundamental, stable reality. What you serve, in effect, is the system of signification itself: simulation as reality, or—per Baudrillard—hyperreality, where "all distinctions between the real and the fictional, between a copy and the original, disappear".
"The spectacle is not a collection of images but a social relation among people, mediated by images." (Debord) Our social relations, governance, and even dissent become performances staged for the world's endless mediated feedback loop.
In this age, according to Heidegger, "everything becomes a 'picture', a 'set-up' for calculation, representation, and control." The machine is not just a device or a bureaucratic protocol—it is the mode of disclosure through which the world appears, and your sense of selfhood and agency are increasingly products (and objects) within this technological enframing.
Yada, yada, yada; the Matrix is real.
ie, you don't know the half of it, compadre.
Waterluvian
Is there a general name and framing we could apply to these “AI” that is equally as accurate but sheds all of the human biases associated with the terms?
Like… it’s just a really, really, really good autocomplete and sometimes I find thinking of it that way cleans up my whole mental model for its use.
silveri
I like something related to "interns" (artificial interns?) because it keeps the implication that you still always have to double-check, review and verify the work they did.
cpeterso
AInterns?
xboxnolifes
Does that actually clean up your mental model though? At some number of "reallys" that autocomplete starts to sound like intelligence. Like, what is "taking customer requirements and turning them into working code" if not just really really really really really really really good autocomplete with this mental model?
Waterluvian
A lot of people are just doing the job of a really good autocomplete, not being asked to make many, if any, nontrivial decisions in their job.
Taking requirements and making working code is something some models are adequate at. It’s all the stuff around that, which I think holds the value, such as deciding things like when the requirements are wrong.
catach
It's really difficult because many of the task types we use AI for are those that are linguistically tied to concepts of human actions and cognition. Most of our convenient language use implies that AI are thinking people.
kelvinjps10
I had to use the reader mode to be able to read this article
Tcepsa
Maybe it's spite-driven development, but I'd love to hear about someone who, upon learning that LLMs are suggesting endpoints in their API that don't exist, implements them specifically to respond with a status code[0] of "421: Misdirected Request". Or, for something less snarky and more in keeping with the actual intent of the code, "501: Not Implemented". If the potentially-implied "but it might be, later" of 501 is untenable, I humbly propose this new code: "513: Your Coding Assistant Is Wrong"
[0]: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
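The spite-driven idea above could be sketched as a tiny dispatcher that serves real routes, answers known LLM-hallucinated paths with the proposed (entirely unofficial) 513, and everything else with 501. The route table and hallucinated-path list are made up for illustration:

```javascript
// Known-good routes and a list of paths LLMs keep inventing.
// Both tables are illustrative assumptions.
const routes = {
  "/api/items": () => ({ status: 200, body: "ok" }),
};
const hallucinated = new Set(["/api/item", "/api/items/create"]);

function dispatch(path) {
  if (routes[path]) return routes[path]();
  if (hallucinated.has(path)) {
    // 513 is not a registered HTTP status code; that is the joke.
    return { status: 513, body: "Your Coding Assistant Is Wrong" };
  }
  return { status: 501, body: "Not Implemented" };
}
```

Returning 501 for everything else keeps the "but it might be implemented later" reading intact, while 513 is reserved for the endpoints you know an assistant invented.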