Low-background Steel: content without AI contamination
267 comments
· June 10, 2025 · gojomo
io84
Just like with food: there will be a market value in content that is entirely “organic” (or in some languages “biological”). I.e. written, drawn, composed, edited, and curated by humans.
Just like with food: defining the boundaries of what’s allowed will be a nightmare, it will be impossible to prove content is organic, certifying it will be based entirely on networks of trust, it will be utterly contaminated by the thing it professes to be clean of, and it may even be demonstrably worse while still commanding a higher price point.
godelski
The entire world operates on trust of some form. Often people are acting in good faith. But regulation matters too.
If you don't go after offenders then you create a lemon market. Most customers/people can't tell, so they operate on what they can. That doesn't mean they don't want the other things; it means they can't signal what they want. It is about available information: information asymmetry is what causes lemon markets.
It's also just a good thing to remember since we're in tech and most people aren't tech literate. Makes it hard to determine what "our customers" want
eru
> If you don't go after offenders then you create a lemon market.
Btw, private markets are perfectly capable of handling 'markets for lemons'. There might be good excuses for introducing regulation, but markets for lemons aren't one of them.
As a little thought exercise, you can take two minutes and come up with some ways businesses can 'fix' markets for lemons and make a profit in the meantime. How many can you find? How many can you find already implemented somewhere?
bitmasher9
I do wonder what would be an acceptable level of guarantee to trigger a “human written” bit.
I actually think a video of someone typing the content, along with the screen the content is appearing on, would be an acceptably high bar at this present moment. I don’t think it would be hard to fake, but I think it would very rarely be worth the cost of faking it.
I think this bar would be good for about 60 days, before someone trains a model that generates authentication videos for incredibly cheap and sells access to it.
kijin
Pen on paper, written without consulting any digital display. Just like exams used to be, before the pandemic.
Of course, the output will be no more valuable to the society at large than what a random student writes in their final exam.
short_sells_poo
Fully in agreement with you. There'll be ultimately two groups of consumers of "organic" content:
1. Those who just want to tick a checkbox will buy mass produced "organic" content. AI slop that had some woefully underpaid intern in a sweatshop add a bit of human touch.
2. People who don't care about virtue signalling but genuinely want good quality will use their network of trust to find and stick to specific creators. E.g. I'd go to the local farmer I trust and buy seasonal produce from them. I can have a friendly chat with them while shopping, they give me honest opinions on what to buy (e.g. this year was great for strawberries!). The stuff they sell on the farm does not have to go through the arcane processes and certifications to be labelled organic, but I've known the farmer for years, I know that they make an effort to minimize pesticide use, they treat their animals with care and respect and the stuff they sell on the farm is as fresh as it can be, and they don't get all their profits scalped by middlemen and huge grocery chains.
io84
You're capturing nicely how the relationship with the farmer is an essential part of the "product" you buy when you buy high-end organic. I think that will continue to be true in culture/info markets.
thih9
> emits text in these ranges that was AI generated
How would you define AI generated? Consider a homework and the following scenarios:
1. Student writes everything themselves with pen & paper.
2. Student does some research with an online encyclopedia, proceeds to write with pen and paper. Unbeknownst to them, the online encyclopedia uses AI to answer their queries.
3. Student asks an AI to come up with the structure of the paper, its main points and the conclusion. Proceeds with pen and paper.
4. Student writes the paper themselves, runs the text through AI as a final step, to check for typos, grammar and some styling improvements.
5. Student asks the AI to write the paper for them.
The first one and the last one are obvious, but what about the others?
Edit, bonus:
6. Student writes multiple papers about different topics; later asks an AI to pick the best paper.
juancroldan
7. Student spent the entire high school and bachelor's degree learning from content that teachers generate using AI and using it to do homework, hence becoming AI-contaminated
WithinReason
This is about the characters themselves, therefore:
1. Not AI 2. Not AI 3. Not AI 4. The characters directly generated by AI are AI characters 5. AI 6. Not AI
ljlolel
The student dictates a paper word for word exactly
The student is missing arms and so dictates a paper word for word exactly
Applejinx
6 is extremely interesting, in that it's tantamount to asking a panel of innumerably many people to give an opinion on which paper is best for a general audience.
It's hard to imagine that NOT working unless it's implemented poorly.
dmsnell
Unicode has a range of Tag Characters, created for marking regions of text as coming from another language. These were deprecated for this purpose in favor of higher level marking (such as HTML tags), but the characters still exist.
They are special because they are invisible and sequences of them behave as a single character for cursor movement.
They mirror ASCII so you can encode arbitrary JSON or other data inside them. Quite suitable for marking LLM-generated spans, as long as you don’t mind annoying people with hidden data or deprecated usage.
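A minimal sketch of that encoding (the JSON payload and marker format here are purely illustrative, not a standard):

```python
# U+E0020-U+E007E mirror printable ASCII and render as nothing in most environments.
TAG_OFFSET = 0xE0000

def hide(payload: str) -> str:
    """Map printable ASCII to the corresponding invisible tag characters."""
    return "".join(chr(TAG_OFFSET + ord(c)) for c in payload if 0x20 <= ord(c) <= 0x7E)

def reveal(text: str) -> str:
    """Recover any hidden payload from a marked string."""
    return "".join(chr(ord(c) - TAG_OFFSET) for c in text
                   if 0xE0020 <= ord(c) <= 0xE007E)

marked = "An ordinary-looking sentence." + hide('{"generated_by":"llm"}')
print(len(marked), repr(reveal(marked)))  # longer than it looks; payload recoverable
```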
akoboldfrying
Can't I get around this by starting my text selection one character after the start of some AI-generated text and ending it one character before the end, Ctrl-C, Ctrl-V?
ema
There are many ways to get around this since it is trivial to write code that strips those tags.
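For example, a few lines of Python strip the whole Tag Characters block:

```python
import re

def strip_tags(s: str) -> str:
    # Remove every character in the Tag Characters block (U+E0000-U+E007F).
    return re.sub(r"[\U000E0000-\U000E007F]", "", s)

tagged = "Visible text" + "".join(chr(0xE0000 + ord(c)) for c in "hidden payload")
print(strip_tags(tagged))  # -> "Visible text"
```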
crubier
Twelve milliseconds after this law goes into effect, typing factories will open in India, where human operators hand-recopy text from AI sources to perform "data laundering".
miki123211
If somebody writes in a foreign language and asks Chat GPT to translate to English, is that AI generated content? What about if they write on paper and use an LLM to OCR? What if they give the AI a very detailed outline, constantly ask for rewrites and are ruthless in removing any facts they're not 100% sure of if they slip in? What if they only use AI to fix the grammar and rewrite bad English into a proper scientific tone?
My answer would be a clear "no" to all of these, even though the content ultimately ends up fully copy-pasted from an LLM in all those cases.
theamk
My answer is clear "yes" to most of those.
Yes, machine translations are AI-generated content - I read foreign-language news sites that sometimes have machine-translated articles, and the quality stands out, and not in a good way.
"Maybe" for "writing on paper and using LLM for OCR". It's like an automatic meeting transcript - if the speaker has perfect pronunciation, it works well. If they don't, the meeting notes still look coherent but have little relationship to what the speaker said and/or will miss critical parts. Sadly there is no way for the reader to know that from reading the transcript, so I'd recommend labeling it "AI edited" just in case.
Yes, even if "they give the AI a very detailed outline, constantly ask for rewrites, etc." it's still AI generated. I am not sure how you can argue otherwise - it's not their words. Also, it's really easy to convince yourself that you are "ruthless in removing any facts they're not 100% sure of" while actually you are anything but.
"What if they only use AI to fix the grammar and rewrite bad English into a proper scientific tone?" - I'd label it "AI-edited" if the rewrites are minor or "AI-generated" if the rewrites are major. This one is especially insidious because people may not expect rewrites to change meaning, so they won't inspect them closely, making it easier for hallucinations to slip in.
fho
> they give the AI a very detailed outline […]
Honestly, I think that's a tough one.
(a) It "feels" like you are doing the work; without you the LLM would not even start. (b) It is very close to how texts are generated without LLMs, be it in academia, with the PI guiding the process of grad students, or in industry, with managers asking for documentation. In both cases the superior takes (some) credit for work that is in large part done by others.
a57721
It really depends on the context, e.g. if you need texts for a database of word frequencies, then the answer is a clear "yes", and LLMs have already ruined everything [1]. The only exception from your list would be OCR where a human proofreads the output.
[1] https://github.com/rspeer/wordfreq/blob/master/SUNSET.md
diffeomorphism
For the translate part let me just point out the offensively bad translations that reddit (sites with an additional ?tl=foo) and YouTube automatic dubbing force upon users.
These are immediately, negatively obvious as AI content.
For the other questions the consensus of many publications/journals has been to treat grammar/spellcheck just like non-AI but require that other uses have to be declared. So for most of your questions the answer is a firm "yes".
zdc1
If the purpose is to identify text that can be used as training data, in some ways it makes sense to me to mark anything and everything that isn't hand-typed as AI generated.
Like for your last example: to me, the concept "proper scientific tone" exists because humans hand-typed/wrote in a certain way. If we use AI edited/transformed text to act as a source for what "proper scientific tone" looks like, we still could end up with an echo chamber where AI biases for certain words and phrases feed into training data for the next round.
Being strict about how we mark text could mean a world where 99% of text is marked as AI-touched and less than 1% is marked as human-originated. That's still plenty of text to train on, though such a split could also arguably introduce its own (measurable) biases...
lazyasciiart
> we still could end up with an echo chamber where AI biases for certain words and phrases feed into training data for the next round.
That’s how it works with humans too. “That sounds professional because it sounds like the professionals”.
RodgerTheGreat
All four of your examples are situations where an LLM has potential to contaminate the structure or content of the text, so in all four cases it is clear-cut that the output poses the same essential hazards to training or consumption as something produced "whole cloth" from a minimal prompt; post-hoc human supervision will at best reduce the severity of these risks.
gojomo
OK, sure, there are gradations.
The new encoding can contain a FLOAT32 side channel on every character, to represent its proportional "AI-ness" – kinda like the 'alpha' transparency channel on pixels.
BugheadTorpeda6
Yes yes yes yes
c-linkage
Stop ruining my simple and perfect ideas with nuance and complexity!
theamk
Nuance and complexity are a thing, but many of the GP's examples should be clearly AI labeled...
> What if they give the AI a very detailed outline, constantly ask for rewrites and are ruthless in removing any facts they're not 100% sure of if they slip in?
slashdev
I’ll take the contrarian view. I don’t care if content is generated by a human or by an AI. I care about the quality of the content, and in many cases, the human does a better job currently.
I would like a search engine algorithm that penalizes low quality content. The ones we currently have do a piss poor job of that.
andsoitis
> I would like a search engine algorithm that penalizes low quality content. The ones we currently have do a piss poor job of that.
Without knowing the full dataset that got trimmed to the search result you see, how do you evaluate the effectiveness?
sethhochberg
You’re asking a fair question but I think you’re approaching it from a POV that’s maybe a bit more of an engineering mindset than the person you’re responding to is using
A brilliant algorithm that filters out some huge amount of AI slop is still frustrating to the user if any highly ranked AI slop remains. You still click it, immediately notice what it is, and wonder why the algorithm couldn't figure it out when you did so quickly.
It’s like complaining to a waiter that there’s a fly in your soup, and the waiter can’t understand why you’re upset because there were many more flies in the soup before they brought it to the table and they managed to remove almost all of them
slashdev
It doesn’t matter how much it filters out, if the top results are still spam.
I barely use Google anymore. Mostly just when I know the website I want, but not the URL.
ianburrell
Maybe have the glyph be zero-width by default but have a way to show them? I think begin-end markers would work better to mark a whole range. It would need support from the editor to manage the ranges and to mark edited AI-generated text as mixed.
What might make sense is source marking. If you copy and paste text, it becomes a citation. An AI source is always cited.
I have been thinking that there should be provenance metadata in images. Maybe a list of hashes of source images. Real cameras would include the raw sensor data. Again, an AI image would be cited.
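A rough sketch of what such a provenance record might look like (field names and file names are illustrative, not an existing standard):

```python
import hashlib
import json

def sha256_file(path: str) -> str:
    """Content hash used to identify a source image."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical record to embed in the derived image's metadata (e.g. an XMP/EXIF block).
provenance = {
    "tool": "some-editor-or-model",
    "ai_generated": True,
    "sources": [sha256_file(p) for p in ["original_photo.jpg", "background.png"]],
}
print(json.dumps(provenance, indent=2))
```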
andrewflnr
It would be much less disruptive to require that any network traffic containing AI generated content must have the IP evil bit set.
K0balt
AI-generated content is inherently a regression to the mean and harms both training and human utility. There is no benefit in publishing anything that an AI can generate; just ask the question yourself. Maybe publish all AI content with <AI generated content> tags, but other than that it is a public nuisance much more often than a public good.
px1999
Following this logic, why write anything at all? Shakespeare's sonnets are arrangements of existing words that were possible before he wrote them. Every mathematical proof, novel, piece of journalism is simply a configuration of symbols that existed in the space of all possible configurations. The fact that something could be generated doesn't negate its value when it is generated for a specific purpose, context, and audience.
pickledoyster
> William Shakespeare is credited with the invention or introduction of over 1,700 words that are still used in English today
https://www.shakespeare.org.uk/explore-shakespeare/shakesped...
Aeolun
He invented ‘undress’? Like he invented ‘undo’ or ‘unwell’? Come on, that’s silly.
K0balt
Following that logic, we should publish all unique random orderings of words. I think there is a book about a library like that, but it is a great read and is not a regression to the mean of ideas.
Writing worth reading as a non-child surprises, challenges, teaches, and inspires. LLM writing tends towards the least surprising, worn-out tropes that challenge only the patience and attention of the reader. The eager learner, however, will tolerate that, so I suppose I'll give them teaching. They are great at children's stories, where the goal is to rehearse and introduce tropes and moral lessons with archetypes, effectively teaching the listener the language of story.
FWIW I am not particularly a critic of AI and am engaged in AI related projects. I am quite sure that the breakthrough with transformer architecture will lead to the third industrial revolution, for better or for worse.
But there are some things we shouldn’t be using LLMs for.
gojomo
This was an intuitively-appealing belief, even with some qualified experimental support, as of a few years ago.
However, since then, a bunch of capability breakthroughs from (well-curated) AI generations has definitively disproven it.
DennisP
AI generates useful stuff, but unless it took a lot of complicated prompting, it's still true that you could "just ask the question yourself."
This will change as contexts get longer and people start feeding large stacks of books and papers into their prompts.
Swizec
> you could "just ask the question yourself."
Just like googling, AIing is a skill. You have to know how to evaluate and judge AI responses. Even how to ask the right questions.
Especially asking the right questions is harder than people realize. You see this difference in human managers where some are able to get good results and others aren’t, even when given the same underlying team.
gojomo
No, new more-capable and/or efficient models have been forged using bulk outputs of other models as training data.
These improved models do some valuable things better & cheaper than the models, or ensembles of models, that generated their training data. So you could not "just ask" the upstream models. The benefits emerge from further bulk training on well-selected synthetic data from the upstream models.
Yes, it's counterintuitive! That's why it's worth paying attention to, & describing accurately, rather than remaining stuck repeating obsolete folk misunderstandings.
wahern
> a bunch of capability breakthroughs from (well-curated) AI generations has definitively disproven it.
How much work is "well-curated" doing in that statement?
gojomo
Less than you might think! Some of the frontier-advancing training-on-model-outputs ('synthetic data') work just uses other models & automated-checkers to select suitable prompts and desirable subsets of generations.
I find it (very) vaguely like how a person can improve at a sport or an instrument without an expert guiding them through every step up, just by drilling certain behaviors in an adequately-proper way. Training on synthetic data somehow seems to extract a similar iterative improvement in certain directions, without requiring any more natural data. It's somehow succeeding in using more compute to refine yet more value from the original non-synthetic-training-data's entropy.
nicbou
How will AI write about a world it never experiences? By training on the work of human beings.
gojomo
The training sets can already include direct data series about the world, where the "work of human beings" is just setting up the collection devices. So models can absolutely "experience the world".
But I'm not suggesting they'll advance much, in the near term, without any human-authored training data.
I'm just pointing out the cold hard fact that lots of recent breakthroughs came via training on synthetic data - text prompted by, generated by, & selected by other AI models.
That practice has now generated a bunch of notable wins in model capabilities – contra the upthread post's sweeping & confident wrongness alleging "AI-generated content is inherently a regression to the mean and harms both training and human utility".
K0balt
I didn't mean to imply that -no- AI-generated content is useful, only that the vast, vast majority is pollution. The problem is that it is so cheap to produce garbage content with AI that writing actual content is disincentivized, and doing web searches has become an exercise in sifting through AI-generated slop.
That at least will add extra work to filter usable training data, and costs users minutes a day wading through the refuse.
jbc1
If I ask the question myself then there's no step where a human expert has vetted the content and put their name on it. That curation and vouching is of value.
Now your mind might have immediately gone "pffff, as if they're doing that" and I agree, but only to the extent that it largely wasn't happening prior to AI anyway. The vast majority of internet content was already low quality and rushed out by low-paid writers who lacked expertise in what they were writing about. AI doesn't change that.
flir
Completely agree. We are used to thinking of authorship as the critical step. We're going to have to adjust to thinking of publication as the critical step. In an ideal world, publication of a piece would be seen as vouching for that piece. Putting your reputation on the line.
I wonder if we'll see a resurgence in reputation systems (probably not).
tehjoker
This is basically already how publications work.
sneak
What about AI modified or copy edited content?
I write blog posts now by dictating into voice notes, transcribing it, and giving it to CGPT or Claude to work on the tone and rhythm.
theamk
So IMHO the right thing is to add an "AI rewritten" label to your blog.
Hm, I wonder where this kind of label should live? For a personal blog, putting it on every post seems redundant: if the author uses it, they likely use it for all posts. And many blogs don't have a dedicated "about this blog" section.
I wonder if things will end up like organic food labeling or "made in .." labels. Some blogs might say "100% by human", some might say "Designed by human, made by AI" and some might just say nothing.
sneak
AI is just an inanimate tool.
Do I need to disclose that I used a keyboard to write it, too?
The stuff I edit with AI is 100% made by a human - me.
null
SamPatt
Nonsense. Have you used any of the deep research tools?
Don't fall for the utopia fallacy. Humans also publish junk.
krapht
Yes, and deep research was junk for the hard topics that I actually needed to sit down and research. Anything shallower I can usually reach by search engine use and scan; deep research saves me about 15-30 minutes for well-covered topics.
For the hard topics, the solution is still the same as pre-AI: search for popular survey papers, then start crawling through the citation network and keeping notes. The LLM output had no sense of what was actually impactful vs. what was a junk paper in the niche topic I was interested in, so I had no alternative but quality time with Google Scholar.
We are a long way from deep research even approaching a well-written survey paper written by grad student sweat and tears.
triceratops
> deep research saves me about 15-30 minutes for well-covered topics.
Most people are capable of maybe 4 good hours a day of deep knowledge work. Saving 30 minutes is a lot.
SamPatt
Not everything is hard topics though.
I've found getting a personalized report for the basic stuff is incredibly useful. Maybe you're a world class researcher if it only saves you 15-30 minutes, I'm positive it has saved me many hours.
Grad students aren't an inexhaustible resource. Getting a report that's 80% as good in a few minutes for a few dollars is worth it for me.
cobbzilla
Steel-man angle: A desire for data provenance is a good thing with benefits that are independent of utopias/humans vs machines kinds of questions.
But, all provenance systems are gamed. I predict the most reliable methods will be cumbersome and not widespread, thus covering little actual content. The easily-gamed systems will be in widespread use, embedded in social media apps, etc.
Questions: 1. Does there exist a data provenance system that is both easy to use and reliable "enough" (for some sufficient definition of "enough")? Can we do bcrypt-style more-bits=more-security and trade time for security?
2. Is there enough of an incentive for the major tech companies to push adoption of such a system? How could this play out?
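On the first question, one possible shape is a signed attestation over a content hash. A minimal sketch, assuming the Python `cryptography` package; the field names are illustrative, and it only addresses the "easy to use" half. Nothing stops a publisher from signing a false claim, which is exactly the gaming problem above.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # the publisher's key pair

content = b"An entirely human-written paragraph."
attestation = json.dumps({
    "sha256": hashlib.sha256(content).hexdigest(),
    "claim": "human-authored",               # the signer's claim, not a proof
}, sort_keys=True).encode()

signature = signing_key.sign(attestation)

# Verification raises InvalidSignature if the attestation was altered.
signing_key.public_key().verify(signature, attestation)
```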
cryptonector
Yes, but GP's idea of segregating AI-generated content is worth considering.
If you're training an AI, do you want it to get trained on other AIs' output? That might be interesting actually, but I think you might then want to have both, an AI trained on everything, and another trained on everything except other AIs' output. So perhaps an HTML tag for indicating "this is AI-generated" might be a good idea.
RandomBK
My 2c is that it is worthwhile to train on AI generated content that has obtained some level of human approval or interest, as a form of extended RLHF loop.
thephyber
I can see the value of labeling all AI content so that models can be trained on purely non-AI-generated content.
But I don't think that's a reasonable goal. Pragmatic example: there are almost no optional HTML tags or optional HTTP headers that are used anywhere close to 100% of the time they apply.
Also, I think the field is already muddy, even before the game starts. Spell checkers, Grammarly, and translation all had AI contributions and likely affect most human-generated text on the internet. The heuristic of "one drop of AI" is not useful. And any heuristic more complicated than "one drop" introduces too much subjective complexity for a Boolean data type.
IncreasePosts
Shouldn't there be enough training content from the pre-ai era that the system itself can determine whether content is AI generated, or if it matters?
munificent
The observation that humans poop is not sufficient justification for spending millions of dollars building an automated firehose that pumps a torrent of shit onto the public square.
SamPatt
People are paying millions for access to the models. They are getting value from them or wouldn't be paying.
It's just not accurate to say they only produce shit. Their rapid adoption demonstrates otherwise.
protocolture
I like how the chosen terminology is perfectly picked to paint the concern as irrelevant.
"Since the end of atmospheric nuclear testing, background radiation has decreased to very near natural levels, making special low-background steel no longer necessary for most radiation-sensitive uses, as brand-new steel now has a low enough radioactive signature that it can generally be used."
I don't see that:
1. There will be a need for "uncontaminated" data. LLM data is probably slightly better than the natural background reddit comment. Falsehoods and all.
2. "Uncontaminated" data will be difficult to find. What with archive.org, gutenberg etc.
3. That LLM output is going to infest everything anyway.
fer
>2. "Uncontaminated" data will be difficult to find. What with archive.org, gutenberg etc.
But recent uncontaminated data is hard to find. https://github.com/rspeer/wordfreq/blob/master/SUNSET.md
protocolture
>Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies.
I really do just bail out whenever anyone uses the word slop.
>As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.
Should run the same analysis against the word slop.
jbs789
Umm… we stopped nuclear testing, which is what allowed the background radiation to reduce.
protocolture
And cars replaced horses in london, rendering forecasts of london being buried under a mountain of horse manure irrelevant too.
Change really is the only constant. The short term predictive game is rigged against hard predictions.
Legend2440
I'm not convinced this is going to be as big of a deal as people think.
Long-run you want AI to learn from actual experience (think repairing cars instead of reading car repair manuals), which both (1. gives you an unlimited supply of noncopyrighted training data and (2. handily sidesteps the issue of AI-contaminated training data.
AnotherGoodName
The hallucinations get quoted and then sourced as truth unfortunately.
A simple example. "Which MS Dos productivity program had connect four built in?".
I have an MS-DOS emulator and know the answer. It's a little obscure, but it's amazing how I get a different answer from all the AIs every time. I never saw any of them give the correct answer. Try asking the above. Then ask if it's sure about that (it'll change its mind!).
Now remember that these types of answers may well end up quoted online and then learned by AI, with that circularly referenced source as the source. We have no truth at that point.
And seriously try the above question. It's a great example of AI repeatedly stating an authoritative answer that's completely made up.
dwringer
When I asked, "Good afternoon! I'm trying to settle a bet with a friend (no money on the line, just a friendly "bet"!) Which MS DOS productivity program had a playable version of the game Connect Four built in as an easter egg?", it went into a very detailed explanation of how to get to the "Hall of Tortured Souls" easter egg in Excel 5.0, glossing over the fact that I said "MS DOS" and also conflating the easter eggs by telling me specifically that the "excelkfa" cheat code would open a secret door/bridge to the connect four game.
So, I retried with, "Good afternoon! I'm trying to settle a bet with a friend (no money on the line, just a friendly "bet"!) Which *MS DOS* [not Win95, i.e., Excel 5] productivity program had a playable version of the game Connect Four built in as an easter egg?". I got Lotus 1-2-3 once, Excel 4 twice, and Borland Quattro Pro three different times, all from that prompt.
The correct answer you point out in another subthread was never returned as a possibility, and the responses all definitely came across as confident. Definitely a fascinating example.
MostlyStable
Claude 4 Sonnet gave the (reasonable given the obscurity, but wrong) answer that there was no such easter egg:
>I'm not aware of any MS-DOS productivity program that had Connect Four as a built-in easter egg. While MS-DOS era software was famous for including various easter eggs (like the flight simulator in Excel 97, though that was Windows-era), I can't recall Connect Four specifically being hidden in any major DOS productivity applications.
>The most well-known DOS productivity suites were things like Lotus 1-2-3, WordPerfect, dBase, and later Microsoft Office for DOS, but I don't have reliable information about Connect Four being embedded in any of these.
>It's possible this is a case of misremembered details - perhaps your friend is thinking of a different game, a different era of software, or mixing up some details. Or there might be an obscure productivity program I'm not familiar with that did include this easter egg.
>Would you like me to search for more information about DOS-era software easter eggs to see if we can track down what your friend might be thinking of?
That seems like a pretty reasonable response given the details, and included the appropriate caveat that the model was not aware of any such easter egg, and didn't confidently state that there was none.
SlowTao
>It's possible this is a case of misremembered details - perhaps your friend is thinking of a different game, a different era of software, or mixing up some details. Or there might be an obscure productivity program I'm not familiar with that did include this easter egg.
I am not a fan of this kind of communication. It doesn't know, so it tries to deflect the shortcoming onto the user.
I'm not saying that isn't a valid concern, but it can be used as an easy out for its gaps in knowledge.
nfriedly
Gemini 2.5 Flash gave me a similar answer, although it was a bit more confident in its incorrect answer:
> You're asking about an MS-DOS productivity program that had ConnectFour built-in. I need to tell you that no mainstream or well-known MS-DOS productivity program (like a word processor, spreadsheet, database, or integrated suite) ever had the game ConnectFour built directly into it.
Aeolun
> didn't confidently state that there was none
And better. Didn’t confidently state something wrong.
ziml77
Whenever I ask these AIs "Is the malloc function in the Microsoft UCRT just a wrapper around HeapAlloc?", I get answers that are always wrong.
They claim things like the function adds size tracking so free doesn't need to be called with a size or they say that HeapAlloc is used to grab a whole chunk of memory at once and then malloc does its own memory management on top of that.
That's easy to prove wrong by popping ucrtbase.dll into Binary Ninja. The only extra things it does beyond passing the requested size off to HeapAlloc are: handle setting errno, change any request for 0 bytes to requests for 1 byte, and perform retries for the case that it is being used from C++ and the program has installed a new-handler for out-of-memory situations.
Legend2440
ChatGPT 4o waffles a little bit and suggests the Microsoft Entertainment pack (which is not productivity software or MS-DOS), but says at the end:
>If you're strictly talking about MS-DOS-only productivity software, there’s no widely known MS-DOS productivity app that officially had a built-in Connect Four game. Most MS-DOS apps were quite lean and focused, and games were generally separate.
I suspect this is the correct answer, because I can't find any MS-DOS Connect Four easter eggs by googling. I might be missing something obscure, but generally if I can't find it by Googling I wouldn't expect an LLM to know it.
AnotherGoodName
ChatGPT in particular will give an incorrect (but unique!) answer every time. At the risk of losing a great example of AI hallucination, it's Autosketch.
Not shown fully, but see https://www.youtube.com/watch?v=kBCrVwnV5DU&t=39s - note the game in the file menu.
overfeed
> I might be missing something obscure, but generally if I can't find it by Googling I wouldn't expect an LLM to know it.
The Google index is already polluted by LLM output, albeit unevenly, depending on the subject. It's only going to spread to all subjects as content farms go down the long tail of profitability, eking out profits; Googling won't help because you'll almost always find a result that's wrong, as will LLMs that resort to searching.
Don't get me started on Google's AI answers, which assert wrong information and launder fanfic/reddit/forum content, elevating all sources to the same level.
dowager_dan99
It gave me two answers (one was Borland Sidekick). I then asked "are you sure about that?"; it waffled and said actually it was neither of those, it's IBM Handshaker. I said "I don't think so, I think it's another productivity program" and it replied that on further review it's not IBM Handshaker, and there are no productivity programs that include Connect Four. No wonder CTOs like this shit so much: it's the perfect bootlick.
relaxing
If I can find something by Googling I wouldn’t need an LLM to know it.
kbenson
So, like normal history just sped up exponentially to the point it's noticeable in not just our own lifetime (which it seemed to reach prior to AI), but maybe even within a couple years.
I'd be a lot more worried about that if I didn't think we were doing a pretty good job of obfuscating facts the last few years ourselves without AI. :/
spogbiper
just tried this with gemini 2.5 flash and pro several times, it just keeps saying it doesn't know of any such thing and suggesting it was a software bundle where the game was included alongside the productivity application or I'm not remembering correctly.
not great (assuming there actually is such a software) but not as bad as making something up
Bjartr
AIs make knowledge work more efficient.
Unfortunately that also includes citogenesis.
tough
Probably ChatGPT's search function will find this thread soon and answer correctly; the hn domain does well on SEO and shows up in search results soon enough.
abeppu
> which both (1. gives you an unlimited supply of noncopyrighted training data and (2. handily sidesteps the issue of AI-contaminated training data.
I think these are both basically somewhere between wrong and misleading.
Needing to generate your own data through actual experience is very expensive, and can mean that data acquisition now comes with real operational risks. Waymo gets real world experience operating its cars, but the "limit" on how much data you can get per unit time depends on the size of the fleet, and requires that you first get to a level of competence where it's safe to operate in the real world.
If you want to repair cars, and you _don't_ start with some source of knowledge other than on-policy roll-outs, then you have to expect that you're going to learn by trashing a bunch of cars (and still pay humans to tell the robot that it failed) for some significant period.
There's a reason you want your mechanic to have access to manuals, and have gone through some explicit training, rather than just try stuff out and see what works, and those cost-based reasons are true whether the mechanic is human or AI.
Perhaps you're using an off-policy RL approach -- great! If your off-policy data is demonstrations from a prior generation model, that's still AI-contaminated training data.
So even if you're trying to learn by doing, there are still meaningful limits on the supply of training data (which may be way more expensive to produce than scraping the web), and likely still AI-contaminated (though perhaps with better info on the data's provenance?).
nradov
There is an enormous amount of actual car repair experience training data on YouTube but it's all copyrighted. Whether AI companies should have to license that content before using it for training is a matter of some dispute.
AnotherGoodName
>Whether AI companies should have to license that content before using it for training is a matter of some dispute.
We definitely do not have the right balance of this right now.
eg. I'm working on a set of articles that give a different path to learning some key math knowledge (just comes at it from a different point of view and is more intuitive). Historically such blog posts have helped my career.
It's not ready for release anyway, but I'm hesitant to release my work in this day and age since AI can steal it and regurgitate it to the point where my articles appear unoriginal.
It's stifling. I'm of the opinion you shouldn't post art, educational material, code or anything that you wish to be credited for on the internet right now. Keep it to yourself or else AI will just regurgitate it to someone without giving you credit.
Legend2440
The flip side is: knowledge is not (and should not be!) copyrightable. Anyone can read your articles and use the knowledge it contains, without paying or crediting you. They may even rewrite that knowledge in their own words and publish it in a textbook.
AI should be allowed to read repair manuals and use them to fix cars. It should not be allowed to produce copies of the repair manuals.
smikhanov
Prediction: there won’t be any AI systems repairing cars before there will be general intelligence-capable humanoid robots (Ex Machina-style).
There also won’t be any AI maids in five-star hotels until those robots appear.
This doesn’t make your statement invalid, it’s just that the gap between today and the moment you’re describing is so unimaginably vast that saying “don’t worry about AI slop contaminating your language word frequency databases, it’ll sort itself out eventually” is slightly off-mark.
sebtron
I don't understand the obsession with humanoid robots that many seem to have. Why would you make a car repairing machine human-shaped? Like, what would it use its legs for? Wouldn't it be better to design it tailored to its purpose?
TGower
Economies of scale. The humanoid form can interact with all of the existing infrastructure for jobs currently done by humans, so that's the obvious form factor for companies looking to churn out robots to sell by the millions.
numpad0
They want a child.
smikhanov
Legs? To jump into the workshop pit, among other things. Palms are needed to hold a wrench or a spanner, fingers are needed to unscrew nuts.
Cars are not built to accommodate whatever universal repair machine there could be, cars are built with an expectation that a mechanic with arms and legs will be repairing it, and will be for a while.
A non-humanoid robot in a human-designed world populated by humans looks and behaves like this, at best: https://youtu.be/Hxdqp3N_ymU
ToucanLoucan
It blows my mind that some folks are still out here thinking LLMs are the tech-tree towards AGI and independently thinking machines, when we can't even get copilot to stop suggesting libraries that don't exist for code we fully understand and created.
I'm sure AGI is possible. It's not coming from ChatGPT no matter how much Internet you feed to it.
Legend2440
Well, we won't be feeding it internet - we'll be using RL to learn from interaction with the real world.
LLMs are just one very specific application of deep learning, doing next-word-prediction of internet text. It's not LLMs specifically that's exciting, it's deep learning as a whole.
bravesoul2
Long-run you want AGI then? Once we get AGI, the spam will be good?
ACCount36
Currently, there is no reason to believe that "AI contamination" is a practical issue for AI training runs.
AIs trained on public scraped data that predates 2022 don't noticeably outperform those trained on scraped data from 2022 onwards. Hell, in some cases, newer scrapes perform slightly better, token for token, for unknown reasons.
numpad0
Yeah, the thinking behind the "low background steel" concept is that AI training on synthetic data could lead to a "model collapse" that renders the AIs completely mad and useless. That either didn't happen, or all the AI companies internally hold a working filter to sieve out AI data. I'd bet on the former. I still think there might be a chance of model collapse happening to humans after too much exposure to AI-generated data, but that's just my anecdotal observations and gut feelings.
demosthanos
> AIs trained on public scraped data that predates 2022 don't noticeably outperform those trained on scraped data from 2022 onwards. Hell, in some cases, newer scrapes perform slightly better, token for token, for unknown reasons.
This is really bad reasoning for a few reasons:
1) We've gotten much better at training LLMs since 2022. The negative impacts of AI slop in the training data certainly don't outweigh the benefits of orders of magnitude more parameters and better training techniques, but that doesn't mean they have no negative impact.
2) "Outperform" is a very loose term and we still have no real good answer for measuring it meaningfully. We can all tell that Gemini 2.5 outperforms GPT-4o. What's trickier is distinguishing between Gemini 2.5 and Claude 4. The expected effect size of slop at this stage would be on that smaller scale of differences between same-gen models.
Given that we're looking for a small enough effect size that we know we're going to have a hard time proving anything with data, I think it's reasonable to operate from first principles in this case. First principles say very clearly that avoiding training on AI-generated content is a good idea.
ACCount36
No, I mean "model" AIs, created explicitly for dataset testing purposes.
You take small AIs, of the same size and architecture, and with the same pretraining dataset size. Pretrain some solely on skims from "2019 only", "2020 only", "2021 only" scraped datasets. The others on skims from "2023 only", "2024 only". Then you run RLHF, and then test the resulting AIs on benchmarks.
The latter AIs tend to perform slightly better. It's a small but noticeable effect. Plenty of hypotheses on why, none confirmed outright.
You're right that performance of frontier AIs keeps improving, which is a weak strike against the idea of AI contamination hurting AI training runs. Like-for-like testing is a strong strike.
HanayamaTriplet
I can understand that years before ChatGPT would not have any LLM-generated text, but how much does the year actually correlate with how much LLM text is in the dataset? Wouldn't special-purpose datasets with varying ratios of human and LLM text be better for testing effects of "AI contamination"?
rjsw
I don't think people have really got started on generating slop, I expect it to increase by a lot.
schmookeeg
I'm not as allergic to AI content as some (although I'm sure I'll get there) -- but I admire this analogy to low-background steel. Brilliant.
jgrahamc
I am not allergic to it either (and I created the site). The idea was to keep track of stuff that we know humans made.
ris
> I'm not as allergic to AI content as some
I suspect it's less about phobia, more about avoiding training AI on its own output.
This is actually something I'd been discussing with colleagues recently. Pre-AI content is only ever going to become more precious because it's one thing we can never make more of.
Ideally we'd have been cryptographically timestamping all data available in ~2015, but we are where we are now.
abound
One surprising thing to me is that using model outputs to train other/smaller models is standard fare and seems to work quite well.
So it seems to be less about not training AI on its own outputs and more about curating some overall quality bar for the content, AI-generated or otherwise
jgrahamc
Back in the early 2000s, when I was doing email filtering using naive Bayes in my POPFile email filter, one of the surprising results was that taking the output of the filter as correct and retraining on a message as if it had been labelled by a human worked well.
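A minimal sketch of that retraining loop, with scikit-learn standing in for POPFile's own Bayes code (the message lists and labels here are assumed to already exist):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

vec = CountVectorizer()
clf = MultinomialNB()

# Assumed inputs: human-labelled messages plus a batch of new, unlabelled mail.
X = vec.fit_transform(human_labeled_messages)
clf.fit(X, human_labels)

pseudo_labels = clf.predict(vec.transform(new_messages))   # the filter's own output

# Retrain as if the filter's labels had come from a human.
clf.fit(vec.transform(human_labeled_messages + new_messages),
        list(human_labels) + list(pseudo_labels))
```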
glenstein
>more about avoiding training AI on its own output.
Exactly. The analogy I've been thinking of is if you use some sort of image processing filter over and over again to the point that it overpowers the whole image and all you see is the noise generated by the filter. I used to do this sometimes with IrfanView and its sharpen and blur filters.
And I believe that I've seen TikTok videos showing AI constantly iterating over an image and then iterating over its output with the same instructions and seeming to converge on a style of like a 1920s black and white cartoon.
And I feel like there might be such a thing as a linguistic version of that. Even a conceptual version.
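A toy version of the image experiment, assuming Pillow and a local photo.jpg:

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")               # assumed input file
for _ in range(100):
    img = img.filter(ImageFilter.SHARPEN)   # each pass amplifies the previous pass's artifacts
img.save("filter_artifacts.jpg")            # eventually the artifacts, not the photo, dominate
```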
seadan83
I'm worried about humans training on AI output. Example: a viral AI image was made of a rare fish. The image is completely fake, yet when you search for that fish, that image is what comes up, repeatedly. It is hard to know it is fake; it looks real. Content fabrication at scale has a lot of second-order impacts.
smikhanov
It’s about keeping different corpuses of written material that was created by humans, for research purposes. You wouldn’t want to contaminate your human language word frequency databases with AI slop, the linguists of this world won’t like it.
koolba
I feel oddly prescient today: https://news.ycombinator.com/item?id=44217676
saberience
I heard this example made at least a year ago on hackernews, probably longer ago too.
See (2 years ago): https://news.ycombinator.com/item?id=34085194
zargon
This has been a common metaphor since the launch of ChatGPT.
glenstein
Nicely done! I think I've heard of this framing before, of considering content to be free from AI "contamination." I believe that idea has been out there in the ether.
But I think the suitability of low background steel as an analogy is something you can comfortably claim as a successful called shot.
echelon
I really think you're wrong.
The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.
It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of systems, slight errors introduced and taste-based curation will steer the systems to better performance and more generality.
It's no different than genetics and biology adapting to every ecological niche if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speed running the same thing here.
stevenhuang
I agree with you.
I voiced this same view previously here https://news.ycombinator.com/item?id=44012268
If something looks like AI, and if LLMs are that great at identifying patterns, who's to say this won't itself become a signal LLMs start to pick up on and improve through?
nialv7
Does this analogy work? It's exceedingly hard to make new low-background steel, since those radioactive particles are everywhere. But it's not difficult to make AI-free content - well, just don't use AI to write it.
nwbt
Even if not impossible, it is entirely impracticable to prove any work is AI-free. So no one but you can be sure.
lurk2
Who is going to generate this AI-free content, for what reason, and with what money?
arjie
People do. I do, for instance. My blog is self-hosted, entirely human-written, and it is done for the sake of enjoyment. It doesn't cost much to host. An entirely static site generator would actually be free, but I don't mind paying the 55¢/kWh and the $60/month ISP fee to host it.
wahern
That only begs the question of how to verify what content is AI-free. Was this comment generated by a human? IIRC, one of the big AI startups (OpenAI?) used HN as a proving ground - a sort of Turing Test platform - for years.
vouaobrasil
I make all my YouTube videos and for that matter, everything I do AI free. I hate AI.
lurk2
Once your video is out in the wild there’s as of yet no reliable way to discern whether it was AI-generated or not. All content posted to public forums will have this problem.
Training future models without experiencing signal collapse will thus require either 1) paying for novel content to be generated (they will never do this as they aren’t even licensing the content they are currently training on), 2) using something like mTurk to identify AI content in data sets prior to training (probably won’t scale), or 3) going after private sources of data via automated infiltration of private forums such as Discord servers, WhatsApp groups, and eventually private conversations.
absurdo
Clickbait title, that's all.
submeta
I have started to write „organic“ content again, as I am fed up with ultra polished super noisy texts by colleagues.
I realise that when I write (not so perfect) „organic“ content my colleagues enjoy it more. And as I am lazy, I get right to the point. No prelude, no „Summary“, just a few paragraphs of genuine ideas.
And I am sure this will be a trend again. Until maybe LLMs are trained to generate this kind of non-perfect, less noisy text.
heavensteeth
> I would have written a shorter letter, but I did not have the time.
- Blaise Pascal
i'm also unfortunately immediately wary of pretty, punctuated prose now. when something is thrown together and features quips, slang, and informalities it makes it feel a lot more human.
gorgoiler
This site is literally named for the Y combinator! Modulo some philosophical hand waving, if there's one thing we ought to demand of our inference models it's the ability to find the fixed point of a function that takes content and outputs content, then consumes that same content!
I too am optimistic that recursive training on data that is a mixture of both original human content and content derived from original content, and content derived from content derived from original human content, …ad nauseam, will be able to extract the salient features and patterns of the underlying system.
vunderba
Was the choice to go with a very obviously AI generated image for the banner intentional? If I had to guess it almost looks like DALL-E version 2.
blululu
Gratuitous AI slop is really not a good look. tai;dr is becoming my default response to this kind of thing. I want to hear someone's thoughts, not an LLM's compression artifacts.
juancroldan
Love that term and gonna adopt it! My default tai;dr response to colleagues is asking AI to write a response for me and pasting it back without reading it.
Ekaros
Wouldn't actually curated content still be better? That is, content where, say, a lot of blogspam and other content potentially generated by certain groups was removed? I distinctly remember that a lot of content, even before AI, was of very poor quality.
On the other hand, a lot of poor-quality content could still be factually valid enough, just not well edited or formatted.
Look, we just need to add some new 'planes' to Unicode - that mirror all communicatively-useful characters, but with extra state bits for...
guaranteed human output - anyone who emits text in these ranges that was AI generated, rather than artisanally human-composed, goes straight to jail.
for human eyes only - anyone who lets any AI train on, or even consider, any text in these ranges goes straight to jail. Fnord, "that doesn't look like anything to me".
admittedly AI generated - all AI output must use these ranges as disclosure, or – you guessed it - those pretending otherwise go straight to jail.
Of course, all the ranges generate visually-indistinguishable homoglyphs, so it's a strictly-software-mediated quasi-covert channel for fair disclosure.
When you cut & paste text from various sources, the provenance comes with it via the subtle character encoding differences.
I am only (1 - epsilon) joking.