Qwen2.5-1M: Deploy your own Qwen with context length up to 1M tokens

anotherpaulg

In my experience with AI coding, very large context windows aren't useful in practice. Every model seems to get confused when you feed them more than ~25-30k tokens. The models stop obeying their system prompts, can't correctly find/transcribe pieces of code in the context, etc.

Developing aider, I've seen this problem with gpt-4o, Sonnet, DeepSeek, etc. Many aider users report this too. It's perhaps the #1 problem users have, so I created a dedicated help page [0].

Very large context may be useful for certain tasks with lots of "low value" context. But for coding, it seems to lure users into a problematic regime.

[0] https://aider.chat/docs/troubleshooting/edit-errors.html#don...

badlogic

I concur. In my work (analysing news show transcripts and descriptions), I work with about 250k input tokens max. Tasks include:

- Summarize topics (with references to shows)
- Find quotes specific to a topic (again with references)

Anything above 32k tokens fails to have acceptable recall, across GPT-4o, Sonnet, and Google's Gemini Flash 1.5 and 2.0.

I suppose it kind of makes sense, given how large context windows are implemented via things like sparse attention etc.

arkh

My hypothesis is that code completion is not a text completion problem, but more of a graph completion one.

So we may have got to a local maximum regarding code helpers with LLMs and we'll have to wait for some breakthrough in the AI field before we get something better.

raincole

But these models don't work that well even for text when you give them a huge context. They're reasonably good at summarization, but if you ask them to "continue the story" they will write very inconsistent things (eerily similar to what a sloppy human writer does, though).

meiraleal

We should be able to provide two fields, context and prompt, so the prompt gets higher priority and doesn't get mixed in with the whole context.
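
A hypothetical sketch of what that split could look like, approximated with the usual chat-message roles (instructions stay in the nominally higher-priority system message, the bulk context goes in its own message; nothing here is an existing API):

    # Hypothetical sketch of the two-field idea: keep the prompt and the bulk
    # context separate instead of concatenating them into one string.
    def build_messages(prompt: str, context: str) -> list[dict]:
        return [
            {"role": "system", "content": prompt},                              # high-priority instructions
            {"role": "user", "content": f"Reference material:\n\n{context}"},   # large, low-priority context
        ]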

meiraleal

For this breakthrough to happen, big tech will need to hire software engineers again :)

But the good thing is that DeepSeek proved those breakthroughs are going to happen one way or another, fast.

adamgordonbell

Aider is great, but it needs specific formats from the LLM. That might be where the challenge is.

I've used the giant context in Gemini to dump a code base and say: describe the major data structures and data flows.

Things like that, overview documents, work great. It's amazing for orienting in an unfamiliar codebase.

anotherpaulg

Yes, that is true. Aider expects to work with the LLM to automatically apply edits to the source files. This requires precision from the LLM, which is what breaks down when you overload them with context.

noname120

Not true. In Aider the patch produced by the LLM is sent to a second model that is just tasked with fixing the patch — it works wonders.

NiloCK

I learned this very explicitly recently. I've had some success with project and branch prompts - feeding a bunch of context into the beginning of each dialog.

In one dialog, some 30k tokens later, Claude requested the contents of package.json... which was in the context window already - the whole file!

The strange thing was that after I pointed this out, without re-inserting the file, Claude successfully read it from context to fill the gap in what it was trying to do.

It's as if a synopsis of what already exists in the context, delivered with each message, would help. But that feels weird!

ksynwa

Any idea why this happens?

Yusefmosiah

It’s not just the quantity of tokens in context that matters, but the coherence of the concepts in the context.

Many conflicting ideas are harder for models to follow than one large unified idea.

seunosewa

The behaviour you described is what happens when you have small context windows. Perhaps you're feeding the models with more tokens than you think you are. I have enjoyed loading large codebases into AI Studio and getting very satisfying and accurate answers because the models have 1M to 2M token context windows.

dr_kiszonka

How do you get those large codebases into AI Studio? Concat everything into one big file?

social_quotient

Concat to a file, but it helps to make an ASCII tree at the top and then, for each merged file, output its path and orientation details. I've also started playing with adding line ranges to the ASCII tree, hoping that the LLMs (more specifically the agentic ones) get smart enough to jump to the relevant section.
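
Roughly what that bundling can look like, as a small sketch (the tree layout, the "=====" file markers, and the extension filter are arbitrary choices here, not any standard format):

    # Bundle a source tree into one prompt file: ASCII tree with line ranges at
    # the top, then each file prefixed by a path header.
    from pathlib import Path

    EXTS = {".py", ".ts", ".rs"}           # extensions to include; adjust as needed

    def bundle(root: str, out_file: str) -> None:
        root_dir = Path(root)
        files = sorted(p for p in root_dir.rglob("*")
                       if p.is_file() and p.suffix in EXTS)

        # Normalise each file to exactly one trailing newline so the line
        # arithmetic below stays simple.
        entries = []
        for p in files:
            text = p.read_text(errors="replace").rstrip("\n") + "\n"
            entries.append((p.relative_to(root_dir), text, text.count("\n")))

        # Header block = title line + one tree line per file + one blank line,
        # so the absolute line number of the first file header is known up front.
        cursor = len(entries) + 3
        tree, bodies = [], []
        for rel, text, n in entries:
            tree.append(f"|-- {rel}  (lines {cursor + 1}-{cursor + n})")
            bodies.append(f"===== {rel} =====\n{text}")
            cursor += n + 1                # advance past header + file body

        Path(out_file).write_text(
            "Repository layout:\n" + "\n".join(tree) + "\n\n" + "".join(bodies)
        )

    bundle(".", "codebase.txt")            # one big file to paste into AI Studio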

adamgordonbell

Basically yes, I have a helper program, but that's mainly what it does.

orbital-decay

Overall accuracy degradation on longer contexts is just one major issue. Another is that the lost-in-the-middle problem gets much worse on longer contexts, so when the context significantly exceeds the length of the model's training examples, the tokens in the middle might as well not exist.

tmcdonald

Ollama has a num_ctx parameter that controls the context window length - it defaults to 2048. At a guess you will need to set that.

anotherpaulg

This is a harsh foot-gun that seems to harm many ollama users.

That 2k default is extremely low, and ollama *silently* discards the leading context. So users have no idea that most of their data hasn’t been provided to the model.

I’ve had to add docs [0] to aider about this, and aider overrides the default to at least 8k tokens. I’d like to do more, but unilaterally raising the context window size has performance implications for users.

Edit: Ok, aider now gives ollama users a clear warning when their chat context exceeds their ollama context window [1].

[0] https://aider.chat/docs/llms/ollama.html#setting-the-context...

[1] https://github.com/Aider-AI/aider/blob/main/aider/coders/bas...

magicalhippo

There are several issues in the Ollama GitHub issue tracker related to this, like this[1] or this[2].

Fortunately it's easy to create a variant of the model with increased context size using the CLI[3] and then use that variant instead.

Just be mindful that longer context means more memory required[4].

[1]: https://github.com/ollama/ollama/issues/4967

[2]: https://github.com/ollama/ollama/issues/7043

[3]: https://github.com/ollama/ollama/issues/8099#issuecomment-25...

[4]: https://www.reddit.com/r/LocalLLaMA/comments/1848puo/comment...

neuralkoi

Thank you! I was looking for how to do this. The example in the issue above shows how to increase the context size in ollama:

    $ ollama run llama3.2
    >>> /set parameter num_ctx 32768
    Set parameter 'num_ctx' to '32768'
    >>> /save llama3.2-32k
    Created new model 'llama3.2-32k'
    >>> /bye
    $ ollama run llama3.2-32k "Summarize this file: $(cat README.md)"
    ...
The table in the Reddit post above also shows context size vs. memory requirements for the model 01-ai/Yi-34B-200K (34.395B params, inference mode):

    Sequence Length vs Bit Precision Memory Requirements
       SL / BP |     4      |     6      |     8      |     16
    --------------------------------------------------------------
           256 |     16.0GB |     24.0GB |     32.1GB |     64.1GB
           512 |     16.0GB |     24.1GB |     32.1GB |     64.2GB
          1024 |     16.1GB |     24.1GB |     32.2GB |     64.3GB
          2048 |     16.1GB |     24.2GB |     32.3GB |     64.5GB
          4096 |     16.3GB |     24.4GB |     32.5GB |     65.0GB
          8192 |     16.5GB |     24.7GB |     33.0GB |     65.9GB
         16384 |     17.0GB |     25.4GB |     33.9GB |     67.8GB
         32768 |     17.9GB |     26.8GB |     35.8GB |     71.6GB
         65536 |     19.8GB |     29.6GB |     39.5GB |     79.1GB
        131072 |     23.5GB |     35.3GB |     47.0GB |     94.1GB
    *   200000 |     27.5GB |     41.2GB |     54.9GB |    109.8GB

    * Model Max Context Size
Code: https://gist.github.com/lapp0/d28931ebc9f59838800faa7c73e3a0...

simonw

Huh! I had incorrectly assumed that was for output, not input. Thanks!

YES that was it:

  files-to-prompt \
    ~/Dropbox/Development/llm \
    -e py -c | \
  llm -m q1m 'describe this codebase in detail' \
   -o num_ctx 80000
I was watching my memory usage and it quickly maxed out my 64GB so I hit Ctrl+C before my Mac crashed.

jmorgan

Sorry this isn't more obvious. Ideally VRAM usage for the context window (the KV cache) becomes dynamic, starting small and growing with token usage, whereas right now Ollama defaults to a size of 2K which can be overridden at runtime. A great example of this is vLLM's PagedAttention implementation [1] or Microsoft's vAttention [2] which is CUDA-specific (and there are quite a few others).

1M tokens will definitely require a lot of KV cache memory. One way to reduce the memory footprint is to use KV cache quantization, which has recently been added behind a flag [3] and will cut the memory footprint to a quarter if 4-bit KV cache quantization is used (OLLAMA_KV_CACHE_TYPE=q4_0 ollama serve).

[1] https://arxiv.org/pdf/2309.06180

[2] https://github.com/microsoft/vattention

[3] https://smcleod.net/2024/12/bringing-k/v-context-quantisatio...
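
For a rough sense of why a 1M-token window is so memory-hungry: the KV cache grows linearly with context, roughly 2 (K and V) * layers * KV heads * head_dim * bytes per element * tokens. A back-of-envelope sketch with an assumed GQA-style 7B shape (the layer/head numbers below are illustrative assumptions, not the exact Qwen2.5-1M configuration):

    # Back-of-envelope KV-cache size; the default model shape is an assumed
    # GQA-style 7B configuration, for illustration only.
    def kv_cache_gib(tokens: int, bytes_per_elem: float,
                     layers: int = 28, kv_heads: int = 4, head_dim: int = 128) -> float:
        return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1024**3

    for ctx in (32_768, 262_144, 1_000_000):
        f16 = kv_cache_gib(ctx, 2.0)   # f16 cache
        q4 = kv_cache_gib(ctx, 0.5)    # ~4-bit (q4_0) cache, ignoring block overhead
        print(f"{ctx:>9} tokens: ~{f16:5.1f} GiB f16, ~{q4:4.1f} GiB q4_0")

With those assumptions the f16 cache for 1M tokens lands around 50 GiB, and roughly a quarter of that with the q4_0 cache type.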

gcanyon

I think Apple stumbled into a problem here, and I hope they solve it: reasonably priced Macs are -- by the new standards set by modern LLMs -- severely memory-constrained. MacBook Airs max out at 24GB. MacBook Pros go to 32GB for $2200, 48GB for something like $2800, and to get to 128GB requires shelling out over $4000. A Mini can get you to 64GB for $2000. A Mac Studio can get you to 96GB for $3000, or 192GB for $5600.

In this LLM era, those are rookie numbers. It should be possible to get a Mac with a lesser processor but at least 256GB of memory for $2000. I realize part of the issue is the lead time for chip design -- since Mac memory is integrated into the chip package, and the current crop was designed before running something like an LLM locally was a realistic prospect.

But I hope the next year or two show significant increases in the default (and possible) memory for Macs.

senko

> It should be possible to get a Mac with a lesser processor but at least 256GB of memory for $2000.

Apple is not known for leaving money on the table like that.

Also, projects like Nvidia DIGITS ($2k for 128GB) might make Apple unwilling to enter the market. As you said, the Studio with 192GB is $5600. For purely AI purposes, two DIGITS units are a better choice, and non-AI usage doesn't need such a ludicrous amount of RAM (maybe for video, but those customers are willing to pay more).

amrrs

This has been the problem with a lot of long-context use cases. It's not just about model support but also about having sufficient compute and acceptable inference time. This is exactly why I was excited for Mamba and now possibly Lightning attention.

That said, DCA (dual chunk attention), which these models use to provide long context, could be an interesting area to watch.

thot_experiment

Ollama is an "easymode" LLM runtime and as such has all the problems that every easymode thing has. It will assume things, and the moment you want to do anything interesting those assumptions will shoot you in the foot. I've found ollama plays so fast and loose that even first-party things that "should just work" do not. For example, if you run R1 (at least as of 2 days ago when I tried this) using the default `ollama run deepseek-r1:7b`, you will get a different context size, top_p and temperature vs. what DeepSeek recommends in their release post.

xigency

Ollama definitely is a strange beast. The sparseness of the documentation seems to imply that things will 'just work' and yet, they often don't.

rahimnathwani

Yup, and this parameter is supported by the plugin he's using:

https://github.com/taketwo/llm-ollama/blob/4ccd5181c099af963...

woadwarrior01

MLX does not yet support the dual chunk attention [1] that these models use for long contexts.

[1]: https://arxiv.org/abs/2402.17463

ilaksh

What's the SOTA for memory-centric computing? I feel like maybe we need a new paradigm or something to bring the price of AI memory down.

Maybe they can take some of those hundreds of billions and invest in new approaches.

Because racks of H100s are not sustainable. But it's clear that increasing the amount of memory available is key to getting more intelligence or capabilities.

Maybe there is a way to connect DRAM with photonic interconnects that doesn't require much data ordering for AI if the neural network software model changes somewhat.

Is there something that has the same capabilities of a transformer but doesn't operate on sequences?

If I was a little smarter and had any math ability I feel like I could contribute.

But I am smart enough to know that just building bigger and bigger data centers is not the ideal path forward.

lovelearning

I'm not sure how SOTA it is, but the sentence about connecting DRAM differently reminded me of Cerebras' scalable MemoryX and its "weight streaming" architecture, which streams weights to their custom ASIC. You may find it interesting.

[1]: https://cerebras.ai/press-release/cerebras-systems-announces...

[2]: https://cerebras.ai/chip/announcing-the-cerebras-architectur...

ilaksh

Yeah, Cerebras seems to be the SOTA. I suspect we need something more radically different for truly memory-centric computing that will be significantly more efficient.

mkroman

The AI hardware race is still going strong, but with so many rapid changes to the fundamental architectures, it doesn't make sense to bet everything on specialized hardware just yet. It's happening, but it's expensive and slow.

There's just not enough capacity to build memory fast enough right now. Everyone needs the biggest and fastest modules they can get, since it directly impacts the performance of the models.

There's still a lot happening to improve memory, like the latest Titans paper: https://arxiv.org/abs/2501.00663

So I think until a breakthrough happens or the fabs catch up, it'll be this painful race to build more datacenters.

rfoo

> Because racks of H100s are not sustainable.

Huh? Racks of H100s are the most sustainable thing we can have for LLMs for now.

mmaunder

Just want to confirm: so this is the first locally runnable model with a context length of greater than 128K and it’s gone straight to 1M, correct?

segmondy

No, this is not the first local model with a context length greater than 128k; there have been such models, for example:

https://huggingface.co/ai21labs/AI21-Jamba-1.5-Mini (256k)
https://huggingface.co/THUDM/glm-4-9b-chat-1m (1M)

and many others that supposedly extend traditional models via finetuning/RoPE scaling.

mmaunder

Thanks.

terhechte

Yes. It requires a lot of RAM, and even on an M4 with a lot of RAM, if you give it 1M tokens the prompt processing alone (that is, before you get the first response token) will probably take ~30 min or more. However, I'm looking forward to checking whether I can indeed give it a whole codebase and ask questions about it.

marci

You might want to try caching to a file with mlx.

https://github.com/ml-explore/mlx-examples/pull/956

edit: here's a quick example for Qwen2.5-1M from an MLX dev

https://x.com/awnihannun/status/1883611098081099914

simonw

I'm really interested in hearing from anyone who does manage to successfully run a long prompt through one these on a Mac (using one of the GGUF versions, or through other means).

terhechte

I gave it a 446,433-token input, then it calculated for ~4 hours and gave me a reasonable response. The content was a Rust / TypeScript codebase where TypeScript is the frontend and Rust is the backend. I asked it which backend APIs are currently not used by the frontend. I haven't checked yet, but the answer looked correct.

Running this on an M4 Max.
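
For a rough sense of the throughput implied by those numbers, taking the ~4 hour figure at face value:

    # Implied prompt-processing (prefill) speed from the reported run.
    tokens, seconds = 446_433, 4 * 3600
    rate = tokens / seconds
    print(f"~{rate:.0f} tokens/s prefill")                    # ~31 tokens/s
    print(f"~{1_000_000 / rate / 3600:.1f} h for 1M tokens")  # ~9 h at that rate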

terhechte

I bought an M4 Max with 128GB of RAM just for these use cases. Currently downloading the 7B.

rcarmo

I'm missing what `files-to-prompt` does. I have an M3 Max and can take a stab at it, although I'm currently fussing with a few quantized -r1 models...

rcarmo

You might want to use the file marker tokens that show up in the model's metadata when it is loaded by ollama:

    llm_load_print_meta: general.name     = Qwen2.5 7B Instruct 1M
    llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
    llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
    llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
    llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
    llm_load_print_meta: LF token         = 148848 'ÄĬ'
    llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
    llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
    llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
    llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
    llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
    llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
    llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
    llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
    llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
    llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
    llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
    llm_load_print_meta: max token length = 256

mmaunder

This API-only model with a 1M context window was released back in November, just for some historical context.

https://qwenlm.github.io/blog/qwen2.5-turbo/

simonw

That's a different model - the 2.5 Turbo one. Today's release is something different.

bloomingkales

I've heard rumblings about native context length. I don't know too much about it, but is this natively a 1M context length?

Even models like Llama 3 8B say they have a larger context, but they really don't in practice. I have a hard time getting past 8k on 16GB of VRAM (you can definitely set the context length higher, but the quality and speed degradation is obvious).

I'm curious how people are doing this on modest hardware.

segmondy

You can't on modest hardware. VRAM requirements are a function of model size and of the KV cache, which depends on context length and on the quantization of both the model and the K/V cache. 16GB isn't much, really. You need more VRAM; the best way for most folks is to buy a MacBook with unified memory. You can get a 128GB Mac, but it's not cheap. If you are handy and resourceful, you can build a GPU cluster.

A4ET8a8uTh0_v2

I never thought I would say it, but the 128GB MBP is probably the most cost-efficient (and probably easiest) way of doing it. New Nvidia cards (5090) are 32GB and supposedly just shy of $2k, and a used A100 40GB is about $8k.

All in all, not a cheap hobby (if you are not doing it for work).

elorant

You need a model that has specifically been extended for larger context windows. For Llama-3 there's Llama3-gradient with up to 1M tokens. You can find it at ollama.com

jkbbwr

Everyone keeps making the context windows bigger, which is nice.

But what about output? I want to generate a few thousand lines of code, anyone got any tips?

mmaunder

Repeatedly ask it for more, providing the previous output as context. (Back to context length as a limitation.)

anotheryou

Isn't that the same, limit-wise?

Now you just need to convince it to output that much :)

bugglebeetle

So context size actually helps with this, relative to how LLMs are deployed as applications. For example, if you look at how the "continue" option in the DeepSeek web app works for code gen, what they're likely doing is reinserting the prior messages (in some form) into a new one to prompt further completion. The more context a model has and can manage successfully, the better it will likely be at generating longer code blocks.
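
A minimal sketch of that pattern (the chat() helper is a hypothetical stand-in for whatever chat-completion API is in use, and the <<DONE>> marker is an arbitrary convention the task prompt has to ask for):

    # Keep feeding the model's partial output back in and ask it to resume
    # until it signals it is finished or the round limit is hit.
    def generate_long_output(task: str, chat, max_rounds: int = 10) -> str:
        messages = [{"role": "user",
                     "content": task + "\nEnd with <<DONE>> when finished."}]
        parts = []
        for _ in range(max_rounds):
            reply = chat(messages)                  # one model call
            parts.append(reply)
            if "<<DONE>>" in reply:
                break
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user",
                             "content": "Continue exactly where you left off."})
        return "".join(parts).replace("<<DONE>>", "").rstrip()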

nejsjsjsbsb

Isn't the input/output length distinction arbitrary? Under the hood, output becomes the input for the next token at each step. OpenAI may charge you more by forcing you to add output to the input and call the API again, but running locally you don't have that issue.

AyyEye

These things already produce embarrassing output. If you make it longer it's just going to get worse.

buyucu

First, this is amazing!

Second, how does one increase the context window without requiring obscene amounts of RAM? We're really hitting the limitations of the transformer architecture's quadratic scaling...
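
For a rough sense of that scaling: full self-attention compares every token with every other token, so compute per layer grows with the square of the context length, while the KV cache "only" grows linearly.

    # Going from a 32k window to a 1M window.
    short_ctx, long_ctx = 32_768, 1_000_000
    ratio = long_ctx / short_ctx
    print(f"tokens:          ~{ratio:.0f}x more")        # ~31x
    print(f"attention pairs: ~{ratio ** 2:.0f}x more")   # ~931x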

35mm

Chain of agents seems to be a promising approach for splitting up tasks into smaller parts and then synthesising the results [1].

[1] https://research.google/blog/chain-of-agents-large-language-...
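
A simplified sketch in the spirit of that idea (llm() is a hypothetical single-call helper; this is a loose reading of the approach, not the paper's exact protocol):

    # Read the long input chunk by chunk, passing running notes along, then
    # synthesise a final answer from the accumulated notes.
    def chain_answer(document: str, question: str, llm, chunk_chars: int = 50_000) -> str:
        notes = "(none yet)"
        for start in range(0, len(document), chunk_chars):
            chunk = document[start:start + chunk_chars]
            notes = llm(
                f"Question: {question}\n"
                f"Notes so far: {notes}\n"
                f"Update the notes using this excerpt:\n{chunk}"
            )
        return llm(
            f"Answer the question using these notes.\n"
            f"Question: {question}\nNotes: {notes}"
        )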

postepowanieadm

Am I the only one getting part of my answer in Chinese?