AI LLMs can't count lines in a file
34 comments · June 4, 2025

Was starting to mess around with the latest LLM models and found that they're not great at counting lines in files.

I gave Gemini 2.5 Flash a Python script and asked it to tell me what was at line 27, and it consistently got it wrong. I tried repeatedly to prompt it the right way, but had no luck.

https://g.co/gemini/share/0276a6c7ef20

Is this something that LLMs are still not good at? I thought they had gotten past the "strawberry" counting problems.

Here's the raw file: https://pastebin.com/FBxhZi6G

Kranar
This is true. In general, LLMs are not great at counting because they don't see individual characters; they see tokens.
Imagine you spoke perfect English, but you learned to write it using Mandarin characters, picking the closest-sounding characters to spell out English words. Then someone asks you how many letter o's are in the sentence "Hello how are you?". Well, you don't read using English characters, you read using Mandarin characters, so you read it as "哈咯,好阿优?" because that's the closest-sounding way to spell "Hello how are you?" with Mandarin characters.
So now if someone asks you how many letter o's are in "哈咯,好阿优?", you don't really know. You are conceptually aware that the letter o exists, you know that if you spelled the sentence in English it would contain the letter o, and you can maybe make an educated guess about how many there are, but you can't actually count them out because you've never seen actual English letters before.
The same thing goes for an LLM: it doesn't see characters, it only sees tokens. It is aware that characters exist, and it can reason about their existence, but it can't see them, so it can't really count them either.
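You can see the chunking for yourself with OpenAI's tiktoken library; a minimal sketch, assuming tiktoken is installed:

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
    tokens = enc.encode("Hello how are you?")
    print(tokens)                             # a short list of integer IDs
    print([enc.decode([t]) for t in tokens])  # the text chunk behind each ID
    # The model only ever receives the integer IDs; the letters inside
    # each chunk are invisible to it.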
krackers
I see this claim repeated over and over, and while it seems plausible, this should be an easily testable hypothesis, right? You don't even need a _large_ model for this, because the hypothesis you are testing is whether transformer models [possibly with chain of thought] can count up to some "reasonable" limit (maybe it can be modeled in the TCS sense as something to do with circuit complexity), and you can easily train on synthetic strings. Is there any paper that proves or disproves that transformer networks using single-character tokenization can successfully count?
Kranar
Forget single-character tokens: you can just go on OpenAI's own tokenizer website [1] and construct tokens, then ask ChatGPT to count how many tokens there are in a given string. For example, "hello" is a single token, and if I ask ChatGPT to count how many times "hello" appears in "hellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohellohello" or variations thereof, it gets it right.
Be careful to structure your query so that every "hello" lands in its own token; you could inadvertently construct one where the first or last hello gets chunked together with the text just before or just after it.
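A quick way to check that setup, again with tiktoken (a sketch; the exact chunking depends on the encoding):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "hello" * 21
    tokens = enc.encode(text)
    # If each repeat got its own token, both of these confirm it:
    print(len(tokens))
    print(all(enc.decode([t]) == "hello" for t in tokens))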
krackers
Neat finding. Does it generalize to larger samples? Someone should randomly generate a few thousand such strings, feed them to 4o or o3, and get some accuracy results, then compare against the accuracy of counting individual letters in random strings.
I find there's a lot of low-hanging fruit and claims about LLMs that are easily testable, but for which no benchmarks exist. E.g. the common claim about LLMs being "unable" to multiply isn't fully accurate: someone did a proper benchmark and found a gradual decline in accuracy as digit length increases past 10 digits by 10 digits. I can't find the specific paper, but I also remember there was a way of training a model on increasingly hard problems at the "frontier" (GRPO-esque?) that fixed this issue, giving very high accuracy up to 20 digits by 20 digits.
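Such a benchmark is only a few lines; a minimal sketch, where ask_llm() is a hypothetical stand-in for whatever model API you're testing:

    import random

    def accuracy(digits, trials=100):
        """Fraction of n-digit by n-digit products the model gets exactly right."""
        correct = 0
        for _ in range(trials):
            a = random.randint(10**(digits - 1), 10**digits - 1)
            b = random.randint(10**(digits - 1), 10**digits - 1)
            # ask_llm() is a hypothetical stand-in for your model API.
            reply = ask_llm(f"Compute {a} * {b}. Reply with only the number.")
            correct += reply.strip() == str(a * b)
        return correct / trials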
insin
Why would you expect them to be able to, given how they work? [1]
Someone where I work was trying to get an LLM to evaluate responses to an internal multiple-choice quiz (A, B, or C), putting people into different buckets based on a combination of the total number of correct responses and whether they had answered specific questions correctly. They spent a week "prompt engineering" it back and forth, making subtle changes to their instructions on how the scoring should work, with no appreciable effect on accuracy or consistency.
That's another scenario where I felt someone was asking for something with no mechanical sympathy for how it was supposed to happen. Maybe a "thinking" model (why do "AI" companies always abuse terms like this? (rhetorical)) would have been able to get enough into the context to get closer to a better outcome, but I took their prompt, asked the model to write code instead, and got it translated into some overly-commented but simple-enough code which would do the job perfectly every time, including a comment that the instructions they'd provided had a gap where people answering with a certain combination of answers wouldn't fall into any bucket.
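The generated code was essentially deterministic bookkeeping; a minimal sketch of the shape of it (the answer key, thresholds, and bucket names here are all made up):

    # Hypothetical answer key and buckets, for illustration only.
    ANSWER_KEY = {1: "A", 2: "C", 3: "B", 4: "A", 5: "B"}

    def bucket(answers: dict[int, str]) -> str:
        score = sum(answers.get(q) == a for q, a in ANSWER_KEY.items())
        # Bucket on total score plus one specific "gatekeeper" question.
        # (A real version should also flag combinations the spec doesn't
        # cover; that's exactly the gap the generated code surfaced.)
        if score >= 4 and answers.get(3) == "B":
            return "advanced"
        if score >= 3:
            return "intermediate"
        return "beginner"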
apothegm
LLMs don’t reason or count. They predict and output next tokens. “Reasoning” models mostly just have another layer of validating actual output against predictions. Newer models, if provided with programming tools (as they are in the ChatGPT interface), will predict the tokens that make up short scripts and then call those scripts to achieve numeric results for things like counting lines or letters.
zihotki
LLMs are Turing complete (https://arxiv.org/abs/2411.01992). Or what is your definition of "count"?
yencabulator
That's not what the paper says. It says it is possible to construct weights that, when run through inference, will perform Turing-machine-equivalent operations on prompts that are specifically made for that purpose.
That does not mean weights derived from a pile of books will do such a thing.
simonw
Yeah, they're still bad at counting.
Tools like Claude Code work around this by feeding code into the LLM with explicit line numbers. Demo of that here: https://static.simonwillison.net/static/2025/log-2025-06-02-... (expand some of the "tool result" panels until you see it). More notes on where I got that trace from here: https://simonwillison.net/2025/Jun/2/claude-trace/
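The trick itself is trivial to reproduce; a sketch of the kind of preprocessing such tools do before showing a file to the model (the filename is a stand-in):

    # Prefix each line with its 1-based number so the model can read
    # positions as tokens instead of having to count newlines.
    def number_lines(source: str) -> str:
        return "\n".join(
            f"{i:>4}  {line}"
            for i, line in enumerate(source.splitlines(), start=1)
        )

    print(number_lines(open("code.py").read()))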
selcuka
Both GPT-4o and o4-mini got it right for me. They both wrote and executed a small Python program:
    # Let's read the file and get line 27
    file_path = '/mnt/data/code.py'
    line_27 = None
    try:
        with open(file_path, 'r') as f:
            lines = f.readlines()
        if len(lines) >= 27:
            line_27 = lines[26].rstrip('\n')
        else:
            line_27 = None
    except FileNotFoundError:
        line_27 = None
fasthands9
The thing is, I imagine the LLM would be able to write code that counts the lines and outputs what is on line 27. It seems inevitable (in a way that scares me) that a good model in the near future would know enough to write that script and execute it on its own.
My understanding is that early LLMs were bad at math (for similar reasons) but then got better once the model was hooked up to a calculator behind the scenes.
paulddraper
Claude 4 added Code Execution.
E.g. ask it to find the 100th prime and it will write a Python script, then run it.
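A sketch of the kind of script a model might produce for that prompt:

    def nth_prime(n):
        """Return the n-th prime by trial division against earlier primes."""
        primes = []
        candidate = 2
        while len(primes) < n:
            if all(candidate % p != 0 for p in primes):
                primes.append(candidate)
            candidate += 1
        return primes[-1]

    print(nth_prime(100))  # 541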
__fst__
I also noticed that they struggle with reversing strings. Ask one to "generate a list of the 30 biggest countries together with their names in reverse". Most of the results will be correct, but you'll likely find some weird spelling mistakes.
It's not something they can regurgitate from previously seen text. Models like Claude with background code execution might get around that.
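Code execution does get around it, since reversal is a one-liner; a minimal sketch (the country list is an arbitrary abbreviated sample):

    # Slicing with a negative step reverses the string exactly,
    # with no token-level spelling drift.
    countries = ["Russia", "Canada", "China", "Brazil", "Australia"]
    for name in countries:
        print(name, name[::-1])   # e.g. Brazil lizarB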
scarface_74
__fst__
Yup, exactly what I meant, e.g.:
5 Brazil liziarB
scarface_74
Telling it to use Python with 4o
https://chatgpt.com/share/6840a944-3bac-8010-9694-2a8b0a9c35...
Even o4-mini-high got it wrong though (Indonesia)
https://chatgpt.com/share/6840a9aa-1260-8010-ba3f-bd99fff721...
avalys
How good do you think a human brain is at doing this if you simply provided the contents of the file as a string of characters (i.e. not in a text editor with line breaks rendered, etc.)?
t-3
Why are you comparing LLMs to a human brain? Software should integrate software when solving problems. It's completely reasonable to expect an LLM given a "count lines" problem to just pipe the text through wc -l.
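For a model with shell access that's one tool call, and even from a Python sandbox it's a short subprocess call (a sketch, with a made-up filename):

    import subprocess

    # Let wc do the counting instead of the model.
    result = subprocess.run(["wc", "-l", "code.py"],
                            capture_output=True, text=True)
    print(result.stdout)   # e.g. "42 code.py"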
mhh__
Which they will do, I'd imagine, after being told they have access to a shell.
selcuka
Most LLMs have access to such tools. Well, maybe not a Unix shell, but something similar. This is from GPT 4.5's system prompt [1]:
    python

    When you send a message containing Python code to python, it
    will be executed in a stateful Jupyter notebook environment.
    python will respond with the output of the execution or time
    out after 60.0 seconds. The drive at '/mnt/data' can be used
    to save and persist user files. Internet access for this
    session is disabled. Do not make external web requests or API
    calls as they will fail.
[1] https://github.com/0xeb/TheBigPromptLibrary/blob/main/System...

tylersmith
An LLM itself can't use wc. Coding agents like Claude Code or Cursor will call out to command-line tools when the LLM detects this kind of problem.
selcuka
Well, maybe not wc directly, but they have access to sandboxed Python environments. It must be trivial for an LLM to write the Python code that calculates this.
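The wc -l equivalent is a couple of lines (using the '/mnt/data' sandbox path from the system prompt above):

    # Count lines the way wc -l does: one per newline-terminated line.
    with open('/mnt/data/code.py') as f:
        print(sum(1 for _ in f))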
I don't understand why Gemini insists that it can count the lines itself, instead of falling back to its Python tool [1].
[1] https://github.com/elder-plinius/CL4R1T4S/blob/main/GOOGLE/G...
digianarchist
I think computers are good at counting lines delimited by newline characters.
throwdbaaway
This prompt works fine with Qwen2.5-Coder-32B-Instruct-Q4_K_M:
    Add a line number prefix to each line, stopping at line 27. What's on line 27 of this program?
jazzyjackson
They're very bad at geometry. GPT-4 (the original) tried to convince me a line intersects a sphere 3 times (it can cross at most twice). Completely clueless at comparing volumes of various polyhedra.
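The two-intersection bound follows from substituting the parametric line into the sphere equation, which gives a quadratic in t; a quick sketch to check any case numerically:

    # |p + t*d - c|^2 = r^2 is quadratic in t, so a line meets a
    # sphere at 0, 1, or 2 points, never 3.
    def line_sphere_hits(p, d, c, r):
        dx, dy, dz = d
        px, py, pz = (p[i] - c[i] for i in range(3))
        a = dx*dx + dy*dy + dz*dz
        b = 2 * (dx*px + dy*py + dz*pz)
        k = px*px + py*py + pz*pz - r*r
        disc = b*b - 4*a*k
        if disc < 0:
            return []
        # A set collapses the tangent (disc == 0) case to one root.
        ts = {(-b - disc**0.5) / (2*a), (-b + disc**0.5) / (2*a)}
        return sorted(ts)

    print(line_sphere_hits((0, 0, -2), (0, 0, 1), (0, 0, 0), 1))  # [1.0, 3.0]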
dmd
I tried it in gpt-4.1, o4-mini, and claude-4-sonnet, and all got the right answer.
scarface_74
ChatGPT o4-mini got it right
https://chatgpt.com/share/683f9f73-42d8-8010-9cbc-27ad396a55...
ChatGPT 4o (the product, not the LLM) got it right with a little additional prompting
https://chatgpt.com/share/683f9fd4-e61c-8010-99be-81d25264ba...