
No AI December Reflections

41 comments

February 9, 2025

zamadatix

I think there is a 3rd mode: "I know exactly what I want, I just need this thing to autocomplete 90%+ of it in one shot so I don't have to type it all". Like when you're building out a class and you know it needs to have certain variations of constructor, certain types, certain public and private methods, certain ways to iterate and deep copy, certain ways to pretty print and build up the plumbing to tie those to the default printing methods of the language, and so on.

It's not really the "I don't care" mode, as you care very much that it matches exactly what you want to build out, rather than being "something which seems to work as a class if I paste it in". It's also not really "I want to learn something here", as you already know exactly what you want and you're not looking for it to deviate; you're just looking to have it appear a couple times faster than if you typed it out. This is, more or less, "I want faster autocomplete for this task" usage.
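For illustration, a minimal sketch (in TypeScript, with a hypothetical Point class) of the kind of scaffolding I mean - every member is predictable, and none of it is interesting to type:

  class Point implements Iterable<number> {
    constructor(public x = 0, public y = 0) {}

    // "Certain variations of constructor": common factory variants.
    static fromArray([x, y]: [number, number]): Point {
      return new Point(x, y);
    }

    // Deep copy.
    clone(): Point {
      return new Point(this.x, this.y);
    }

    // A way to iterate over the fields.
    *[Symbol.iterator](): Iterator<number> {
      yield this.x;
      yield this.y;
    }

    // Plumbing to tie pretty printing into the language's default printing.
    toString(): string {
      return `Point(${this.x}, ${this.y})`;
    }
  }

None of it is hard, but multiplied over every class in a codebase it adds up, which is exactly what "faster autocomplete" buys back.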

torlok

Can you give an example? I've never produced or worked on code that required me to routinely write boilerplate like this. The whole situation feels wrong.

TeMPOraL

You must be a Lisp programmer then, because of all the many languages I've worked with so far, all but Common Lisp had a bad ratio - way above 50% - of annoying boilerplate and decoration to code that means something (and with Lisp you only beat that once you start using metaprogramming facilities extensively, which is frowned upon in projects with more than 2 people working on them).

You can solve some of the issues with snippets, but once your snippets start looking like tiny scripts it stops being funny.

joenot443

I'm waiting for one to compile as I type this comment. I'm working on a C++/ImGui/OpenGL FOSS application with lots of user-configurable state. I wanted to add to my existing top menubar a new dropdown which allows for managing options already defined in AudioSourceService and LayoutStateService. It took me about 10s to type out the prompt, far less time than it's taken me to type this comment. For posterity, the prompt was -

"Add a new menu dropdown for Audio which has options for Start/Stop Analysis, a selector for Audio Source, and a toggle for Enable Ableton Link"

Automatically included in the prompt were the entire bodies of the relevant .cpp files, maybe ~2000 lines in total.

It produced 25 lines of code which would otherwise have taken me ~2-5min to type. The code is effectively deterministic; I knew exactly what I wanted, but decided to apply my finite mental battery to writing a prompt instead of those 25 lines. The code was instantly inserted into the file I already had open and has since built and run, doing exactly what I wanted.

--

I think tools like Cursor are best employed by devs working on solo projects, or on projects for which they already have a good mental model of the entire system and can ask for code whose correctness they're immediately able to validate. I understand that many devs, perhaps yourself included, don't work on projects where that's the case, so I definitely understand why one wouldn't find these tools useful.

I also think they're best employed by people who find writing English very effortless. It's unusual for me to write code that I couldn't describe out loud faster than I could type it; I appreciate this isn't a trait all devs are blessed with. I've worked with plenty who would take longer writing a detailed description for a ticket than they would coding the PR itself. As with nearly every software tool, YMMV. Hope this has been helpful!

TheCapeGreek

I am in this use case bucket for things I don't use frequently enough to care to properly learn - e.g. infrastructure tooling (k8s, docker, etc.), bash scripts, SPA frameworks, etc. I know the outcome I want, and I know when reading the docs has gotten me nowhere, so I just need to hash it out with an oracle that has a better memory of these resources.

nicbou

Usually code that is straightforward but requires you to look up a few things in the standard library docs. For example, error handling with fetch in JavaScript.
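A minimal sketch of what I mean (getJson is a hypothetical helper): fetch only rejects on network failure, so HTTP errors have to be checked by hand - exactly the detail you end up re-reading the docs for:

  async function getJson(url: string): Promise<unknown> {
    let response: Response;
    try {
      response = await fetch(url);
    } catch (err) {
      // Network error, DNS failure, CORS, etc. - the only cases where fetch rejects.
      throw new Error(`Request failed: ${(err as Error).message}`);
    }
    if (!response.ok) {
      // HTTP-level errors (404, 500, ...) do NOT reject the fetch promise.
      throw new Error(`HTTP ${response.status} for ${url}`);
    }
    return response.json();
  }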

scotty79

Java's DTOs maybe?

OccamsMirror

I do this a lot for React components. I could type it all out myself but that would essentially be busy work. Nothing is being solved by the AI other than me having to do less typing.
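A rough sketch of the kind of component I mean (hypothetical names, TypeScript/JSX): the shape is entirely predictable, only the typing is tedious:

  import React from "react";

  interface User {
    name: string;
    email: string;
  }

  interface UserCardProps {
    user: User;
    onSelect?: (user: User) => void;
  }

  // A plain presentational component - pure busy work to type out by hand.
  export function UserCard({ user, onSelect }: UserCardProps) {
    return (
      <div className="user-card" onClick={() => onSelect?.(user)}>
        <h3>{user.name}</h3>
        <p>{user.email}</p>
      </div>
    );
  }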

Once you start hoping the AI is going to solve your problems, that's when you're asking for trouble.

bluefirebrand

How is describing it to the AI in detail any faster than writing it yourself, or copy + pasting from a template?

mathieuh

If you use something like Copilot, a lot of the time you don't explicitly have to tell it to do anything; you just start typing the signature and it autocompletes it. In my use I almost never touch the conversation feature of Copilot; I'm just typing e.g. function signatures or the start of a for loop or switch statement, etc.
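For example (a hypothetical case, not Copilot's actual output): you type only the signature on the first line below, and the tool fills in the predictable body:

  // Developer types the signature; the completion supplies the rest.
  function groupBy<T, K extends string>(items: T[], key: (item: T) => K): Record<K, T[]> {
    const groups = {} as Record<K, T[]>;
    for (const item of items) {
      const k = key(item);
      (groups[k] ??= []).push(item);
    }
    return groups;
  }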

mercer

tbh I can see some benefit even just in the need to think ahead and put it into 'regular' words, and then not having to type all the non-ergonomic markup-soup that isn't entirely avoidable.

EDIT: On the least-spectacular end of things, I guess it would just be a template with enough magic to not need the inevitable adjustments.

sandruso

I agree that there are moments where the AI "reads your mind", where everything aligns perfectly. I've had a couple of these, and it was magical.

3vidence

This is how I use it for work and so far it's the main way I have found AI to actually improve productivity.

wruza

The fact that most of you didn’t have that before AI is, idk, crazy?

There are shortcuts, snippets, :!ipc, ahk, etc.

It took 25 years of the internet and a whole multi-trillion bubble to provide boilerplate completion? Seriously? Editing 101. And then you hear "I don't need Vim/Emacs/etc, I just use zero-config VSCode". I guess that's what one gets by not programming their programming environment. Imagine being a programmer and not automating your job for years. Digital Amish style.

williamcotton

Do your snippets write this kind of boilerplate?

https://github.com/williamcotton/webdsl/commit/34862739f6fe9...

I may have tweaked a couple of things in that (and refactored a hell of a lot since), but here's a Python script for which I definitely did not write a single line of code:

https://github.com/williamcotton/webdsl/blob/main/tools/gene...

Embedding things into C is nothing new of course but I spent about 45 seconds getting the solution I needed:

https://github.com/williamcotton/webdsl/blob/main/src/server...

All of this is just boilerplate.

I spent most of my mental energies with this project on the grammar, the memory architecture, and fitting all of the pieces together. Cursor and Claude did most of the typing.

Keep in mind I'm iterating over the grammar and making rather large changes in the runtime all the while:

https://github.com/williamcotton/webdsl/commit/54efbb50c2e95...

FWIW, pipelines ended up being implemented like this:

https://github.com/williamcotton/webdsl/blob/main/src/server...

And used in this very simple example like:

  website {
      port 3445
      database "postgresql://localhost/express-test"
      api {
          route "/api/v1/team"
          method "GET"
          pipeline {
              jq { { sqlParams: [.query.id] } }
              sql { SELECT * FROM teams WHERE id = $1 }
              jq { { data: (.data[0].rows | map({id: .id, name: .name})) } }
          }
      }
  }

wruza

This is not boilerplate; what are you comparing it with? The root commenter and their replies clearly stated that they use AI for things which they know how to do but find too tedious to type. General code generation abilities are outside this scope.

000ooo000

Last time I asked AI something, it started its response with "Yes, it's certainly possible to x with y." and closed its response with "in conclusion, unfortunately, it's not possible to x with y". In the same session, it told me one must press shift to get the number 1. I'm simultaneously amazed at its ability to generate what it can and disappointed at how it falls short so routinely. It'll get there eventually I'm sure, but I'm pretty dubious when people say they get a lot of value out of it.

zamadatix

People are used to Googling something and reading whole threads about how something can/can't be done before reaching an actually sound conclusion, how to do something in a way that's actually completely wrong, and other similar behaviors, so it's not really that big a leap to do the same with an LLM giving you a single response.

That said, single-pass LLMs tend to do this kind of thing, but a lot of the more useful work is best done with chain-of-thought models, where they're given some time to reflect on options before they have to start generating the final response.

AnonymousPlanet

Those comment sections have dates, sometimes even version numbers, and often caveats like "this worked for me, don't know if it will for you" or "it'll work until the next update". During my interactions with LLMs so far, none of them offered any of these caveats, or specifics about which version of the software the answer applies to, even when asked explicitly.

DanHulton

People keep saying "it will get there eventually," or some variety thereof, and I just gotta keep reminding them: that is an as-yet unproven claim. It may never get there! Just because we've seen some pretty astounding leaps in capabilities so far _does not mean_ that we will continue to see them, nor that we'll ever hit the fabled land of "no more mistakes," or hell, even "no more obvious mistakes."

I'm not saying I think it won't (though I _suspect_ it won't), I'm just saying we don't have any actual proof that it _will,_ we're all just running on assumptions right now.

malfist

To me, LLMs are the same type of revolution that Dragon NaturallySpeaking was for speech-to-text.

A huge leap forward over existing models - but we've spent the last two (three?) decades trying to close the remaining gap left by Dragon in the speech-to-text problem space, and don't have much progress to show for it.

I think LLMs are likely to be like that. They are a huge jump over previous models of NLP, but I don't see them improving enough to indicate they'll ever make it to AGI.

kohee

The limitation lies in the Transformer architecture itself, so AGI was never a possibility. It's wonderful, but not miraculous; and that's okay. At this point big tech is just milking the hype of something that has already reached its boundaries.

thaumasiotes

> but we've spent the last two (three?) decades trying to close the remaining gap left by Dragon in the speech-to-text problem space, and don't have much progress to show for it

Who says it's possible to close the gap? Humans are certainly not capable of doing perfect speech-to-text. You can sit someone down with a song recording and the ability to replay it as much as they want and there's no guarantee they'll ever be able to tell you what the lyrics are.

the_snooze

LLMs remind me of this line from Borat:

>Filtration system a marvel to behold

>It remove 80% of human solid waste

I mean, yeah, it's impressive. But it fails noticeably often enough to be unusable, especially in repeat cases.

layer8

> one must press shift to get the number 1

This is true for French keyboards: https://en.m.wikipedia.org/wiki/AZERTY ;)

muglug

I’ve been using O1 for some question/answering stuff around CMake and symbol resolution — stuff I know little about, yet stuff the internet knows a ton about.

O1 has been really useful, but just the practice of putting my convoluted question into words has often helped me figure out the answer without even clicking submit.

torlok

This practice is called Rubber Duck Debugging.

vladde

To me, the problem has always seemed to be that people who use ChatGPT and the like default to the "I don't care" mode, and copy-paste blindly.

Personally I think this is the root cause of most sloppy AI code. If you just look at the code that was generated and you don't think "I would've come up with that", then the code is probably wrong.

askonomm

It's one thing to see senior engineers turning to brainrot, seemingly overnight forgetting how to do basic programming, to the point where if ChatGPT is down they suddenly have no idea how to work. It's another thing to have an entire generation of junior engineers who never learned programming in the first place, because they finished uni via prompting, somehow got a job via prompting, and are now getting fired en masse for obvious reasons, creating a huge void in the job market and leaving very disappointed seniors (those who haven't succumbed to brainrot just yet).

I'm not sure how to feel about any of this. On the one hand it clearly shows, yet again, how gullible people are. I wonder if the job-market value of senior engineers (those who can actually solve novel problems) will go up as a result? Or will the market be saturated with so much AI-enabled waste that it brings the entire field's salaries down as a whole? I feel bad for the end consumer who has to tolerate lower and lower quality products year over year, as the general software engineering practice seemingly burns to the ground and becomes a Chinese sweatshop churning out counterfeit sneakers.

bruce511

This past year or so I've had ChatGPT open a lot. It's been super useful as I explore a "new" [1] field in a lot more depth.

Interestingly, though, I don't get it to write code. It's no good at the language I write in, so it's useless there.

As a "tutor" though it's been really useful. I'm asking a lot of (probably simple) questions, and the answers are "right enough". Occasionally I'm not sure why something is failing, and it's usually helpful there too.

So, less brain-rot and more "helpful senior who helps me along".

[1] the work I'm doing is related to SQL, which I've used here and there before, but not to the depth or degree I am now. I don't need it to write SQL, but rather to answer more general questions, compare SQL databases, discuss efficiency, and so on.

askonomm

But that right there is the difference. You are using it to put knowledge in your brain, whereas what I'm seeing most people do is skip that part entirely, acting as a sort of clipboard-like vessel that takes information from ChatGPT and puts it into a code editor, with no intermediate thought or analysis - meaning the brain stores none of it, because it never actually thought about any of it.

And if there is any thinking involved, it's not in trying to figure out the code, but in trying to figure out why the AI can't figure out the code. It's a subtle difference, but it results in a huge change in quality.

miningape

I use them as "research assistants" when Google or the documentation fails me. But I always treat it as a less reliable Stack Overflow - there's no guarantee that what I'm reading is correct, and it rarely includes caveats like "doesn't work before version X" or "must have Y flag enabled."

I've particularly enjoyed converting terse documentation into a .md file and feeding it into the LLM's context window, then using the LLM to "query" the underlying document.
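Roughly like this - a minimal sketch assuming the OpenAI Node SDK and a hypothetical terse-docs.md file:

  import OpenAI from "openai";
  import { readFileSync } from "node:fs";

  // The terse documentation, previously converted to markdown.
  const docs = readFileSync("terse-docs.md", "utf8");
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  // "Query" the document by pinning it into the context window.
  async function askDocs(question: string): Promise<string | null> {
    const completion = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: `Answer only from this document:\n\n${docs}` },
        { role: "user", content: question },
      ],
    });
    return completion.choices[0].message.content;
  }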

Where it's always fallen short for me is code generation, and frankly that feature just doesn't interest me.

smileysteve

Agree; I'm seeing the same people who don't unit test or define types using AI to program, and ... even though AI (Copilot) can write 90% of the happy path and set up mocks, it's still too much for a developer (and exec team) who doesn't care.

NooneAtAll3

For a reflection on the period /without/ AI, this text spends a looot of time on the time /with/ AI.

I understand that your goal was to review the "default" you got into, but I'd love to know a lot more about the struggles (and counters to them) you experienced during No AI December itself.

noodletheworld

Zero memory of chat conversations resonates with me.

At a practical level, this is a good reason to run your own AI plugin, even if it's just a wrapper around some API.

You can log your requests and the responses, and then use a similarity score to periodically see what sorts of things you’re asking.
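Something like this, sketched with a crude bag-of-words cosine similarity over previously logged prompts (no real embedding model; names are hypothetical):

  function tokenize(text: string): Map<string, number> {
    const counts = new Map<string, number>();
    for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
    return counts;
  }

  function cosineSimilarity(a: string, b: string): number {
    const ta = tokenize(a), tb = tokenize(b);
    let dot = 0, na = 0, nb = 0;
    for (const [word, count] of ta) {
      dot += count * (tb.get(word) ?? 0);
      na += count * count;
    }
    for (const count of tb.values()) nb += count * count;
    return na && nb ? dot / Math.sqrt(na * nb) : 0;
  }

  // Flag new prompts that closely resemble something already asked.
  function findRepeats(log: string[], newPrompt: string, threshold = 0.8): string[] {
    return log.filter((old) => cosineSimilarity(old, newPrompt) >= threshold);
  }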

I may even update mine to hassle me and be like “you’re asking this a lot, maybe you should remember it…”

(If you can be bothered, rather than just reaching for Copilot.)

deadbabe

Give a fuck about what you're doing. You get paid a lot of money to write quality software. No engineer's default mode should ever be "I don't care, I just want the end result". We're talking about pressing some keys on a keyboard. Do you want other engineering professions to take similar attitudes? Want to trust your life to some machine or structure designed by someone who just threw some prompts into an LLM and skimmed the results briefly before submitting to production?

Don’t rot your brain on this AI autocomplete stuff, learn how to apply AI to do things that were previously impossible or unfeasible, not as a way to just save time or do things cheaper as so many are tempted to.

aicoding

"This site was made with some AI tools on November 27, 2024"

sandruso

Yes, that's supposed to be funny. The text was written by us humans.