Show HN: Can I run this LLM? (locally)
51 comments
March 8, 2025
abujazar
Nice concept – but unfortunately I found it to be incorrect in all of the examples I tried with my Mac.
It'd also need to be much more precise in hardware specs and cover a lot more models and their variants to be actually useful.
Grading the compatibility is also an absolute requirement – it's rarely an absolute yes or no, but often a question of available GPU memory. There are a lot of other factors too which don't seem to be considered.
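For illustration, grading along those lines could start from a back-of-the-envelope estimate like the sketch below – rough numbers of my own, not anything the site actually does:

    # Sketch: estimate whether a model fits in VRAM, fits only with RAM offload,
    # or doesn't fit at all. All numbers are illustrative.
    def estimate_fit(params_b: float, bits_per_weight: float,
                     vram_gb: float, ram_gb: float,
                     overhead_gb: float = 2.0) -> str:
        weights_gb = params_b * bits_per_weight / 8   # e.g. 7B at 4-bit ~= 3.5 GB
        needed_gb = weights_gb + overhead_gb          # KV cache, activations, runtime
        if needed_gb <= vram_gb:
            return "fits in VRAM (fast)"
        if needed_gb <= vram_gb + ram_gb:
            return "fits with RAM offload (slow, but it runs)"
        return "does not fit"

    print(estimate_fit(params_b=32, bits_per_weight=4, vram_gb=12, ram_gb=16))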
rkagerer
> I found it to be incorrect in all of the examples I tried
Are you sure it's not powered by an LLM inside?
abujazar
I believe it'd be more precise if it used an appropriately chosen and applied LLM in combination with web research – in contrast to cobbling together some LLM-generated code.
ggerules
Confirmed. Nice idea, but it doesn't really define "run". I can run some relatively large models compared to their choices. They just happen to be slow.
codingdave
And herein lies the problem with vibe coding - accuracy is wanting.
I can absolutely run models that this site says cannot be run. Shared RAM is a thing - even with limited VRAM, shared RAM can compensate to run larger models. (Slowly, admittedly, but they work.)
lucb1e
New word for me: vibe coding
> coined the term in February 2025
> Vibe coding is a new coding style [...] A programmer can describe a program in words and get an AI tool to generate working code, without requiring an understanding of the code. [...] [The programmer] surrenders to the "vibes" of the AI [without reading the resulting code.] When errors arise, he simply copies them into the system without further explanation.
thaumasiotes
Austen Allred sold a group of investors on the idea that this was the future of everything.
avereveard
Also, quantization and allocation strategies are a big thing for local usage. 16 GB of VRAM doesn't seem like a lot, but you can run a recent 32B model in IQ3 with its full 128k context if you allocate the KV cache in system memory, with 15 t/s and a decent prompt processing speed (just above 1000 t/s on my hardware).
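For example, with llama-cpp-python – just a sketch, the model filename is hypothetical and llama.cpp's CLI exposes equivalent options:

    from llama_cpp import Llama

    # Sketch: a 32B model in an IQ3 GGUF quant with all weight layers in the
    # 16 GB of VRAM, but the KV cache kept in system RAM so the full 128k
    # context still fits.
    llm = Llama(
        model_path="some-32b-instruct.IQ3_XS.gguf",  # hypothetical filename
        n_gpu_layers=-1,    # offload every layer's weights to the GPU
        n_ctx=131072,       # full 128k context
        offload_kqv=False,  # allocate the KV cache in system memory, not VRAM
    )

    out = llm("Hello", max_tokens=16)
    print(out["choices"][0]["text"])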
asasidh
Thanks for your feedback – there is room to show how fast or slow the model will run. I will try to update the app.
asasidh
Yes, I agree that you can run them. I have personally run Ollama on a 2020 Intel MacBook Pro. It's not a problem of vibe coding, but of the choice of logic I went with.
grigio
Hmm... AMD APUs do not have a dedicated GPU but can run up to 14B models quite fast.
do_not_redeem
> Can I Run DeepSeek R1
> Yes, you can run this model! Your system has sufficient resources (16GB RAM, 12GB VRAM) to run the smaller distilled version (likely 7B parameters or less) of this model.
Last I checked DeepSeek R1 was a 671B model, not a 7B model. Was this site made with AI?
jsheard
> Was this site made with AI?
OP said they "vibe coded" it, so yes.
kennysoona
Goodness. I love getting older and seeing the ridiculousness of the next generation.
reaperman
It says “smaller distilled model” in your own quote which, generously, also implies quantized.
Here[0] are some 1.5B and 8B distilled+quantized derivatives of DeepSeek. However, I don't find a 7B model; that seems made up out of whole cloth. Also, I personally wouldn't call this 8B model "DeepSeek".
0: https://www.reddit.com/r/LocalLLaMA/comments/1iskrsp/quantiz...
sudohackthenews
> > smaller distilled version
Not technically the full R1 model – it's talking about the distillations, where DeepSeek trained Qwen and Llama models based on R1 output.
do_not_redeem
Then how about DeepSeek R1 GGUF:
> Yes, you can run this model! Your system has sufficient resources (16GB RAM, 12GB VRAM) to run this model.
No mention of distillations. This was definitely either made by AI or by someone picking numbers for the models totally at random.
sudohackthenews
Ok yeah that’s just weird
monocasa
Is it maybe because DeepSeek is an MoE and doesn't require all parameters for a given token?
That's not ideal from a token throughput perspective, but I can see minimum-working-set gains in weight memory if you can load pieces into VRAM for each token.
throwaway314155
It still wouldn't fit in 16 GB of memory. Further, there's too much swapping going on with MoE models to move expert layers to and from the GPU without bottlenecks.
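Rough arithmetic, taking R1's roughly 37B active parameters per token as a given:

    # Even just the experts activated for a single token, at an aggressive
    # 4-bit quant, already exceed 16 GB before counting the KV cache.
    active_params_b = 37    # ~37B parameters active per token in DeepSeek R1
    bytes_per_weight = 0.5  # 4-bit quantization
    print(f"{active_params_b * bytes_per_weight:.1f} GB")  # ~18.5 GB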
wbakst
lol words out of my mouth
drodgers
This doesn't mention quantisations. Also, it says I can run R1 with 128GB of RAM, but even the 1.58-bit quantisation takes 160GB.
lukev
This just isn't right. It says I can run a 400B+ parameter model on my 128GB M4. This is false, even at high quantization.
CharlesW
> One of the most frequent questions one faces while running LLMs locally is: I have xx RAM and yy GPU, can I run zz LLM model?
In my experience, LM Studio does a pretty great job of making this a non-issue. Also, whatever heuristics this site is based on are incorrect — I'm running models on a 64GB Mac Studio M1 Max that it claims I can't.
mentalgear
How exactly does the tool check? Not sure it's that useful, since simply estimating via the parameter count is a pretty good proxy, and then using Ollama to download a model for testing works out pretty nicely.
scwilbanks
I think I would like it if it also provided benchmarks. The question I have is less "can I run this model?" and more "what is the most performant (on some metric) model I can run on my current system?"
lucb1e
- When you press the refresh button, it loads data from huggingface.co/api, doing the same request seemingly 122 times within one second or so
- When I select "no dedicated GPU" because mine isn't listed, it'll just answer the same "you need more (V)RAM" for everything I click. It might as well color those models red in the list already, or at minimum show the result without having to click "Check" after selecting everything. The UX flow isn't great
- I have 24GB RAM (8GB fixed soldered, extended with 1x16GB SO-DIMM), but that's not an option to select. Instead of using a dropdown for a number, maybe make it a numeric input field, optionally with a slider like <input type=range min=1 max=128 step=2>, or mention whether to round up or down when one has an in-between value (I presume down? I'm not into this yet, that's why I'm here / why this site sounded useful)
- I'm wondering if this website can just be a table with like three columns (model name, minimum RAM, minimum VRAM). To answer my own question, I tried checking the source code but it's obfuscated with no source map available, so not sure if this suggestion would work
- Edit2: while the tab is open, one CPU core is at 100%. That's impressive: browsers are supposed to keep a page from firing code more than once per second when the tab is not in the foreground, and if it were an infinite loop then the page would hang. WTF is this doing? When I break into the debugger at a random moment, it's in scheduler.production.min.js according to the comment above the place where it drops me </edit2>.
Edit: thinking about this again...
what if you flip the whole concept?
1. Put in your specs
2. It shows a list of models you can run
The list could be sorted descending by size (presuming that loosely corresponds to best quality, per my layperson understanding). At the bottom, it could show a list of models that the website is aware of but that your hardware can't run.
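Something like this, conceptually – the model list and thresholds below are made up for illustration, not taken from the site:

    # Sketch of the flipped flow: a small table of (model, min RAM GB, min VRAM GB),
    # filtered by the user's specs. Real numbers would come from the site's data.
    MODELS = [
        ("Llama 3.1 70B Q4", 48, 24),
        ("Qwen2.5 32B Q4",   24, 12),
        ("Llama 3.1 8B Q4",   8,  6),
        ("Phi-3 Mini Q4",     4,  0),
    ]

    def models_for(ram_gb, vram_gb):
        runnable = [m for m in MODELS if ram_gb >= m[1] and vram_gb >= m[2]]
        too_big = [m for m in MODELS if m not in runnable]
        return runnable, too_big

    runnable, too_big = models_for(ram_gb=24, vram_gb=0)
    print("You can run:", [name for name, *_ in runnable])
    print("Known but too big:", [name for name, *_ in too_big])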
paulirish
UX whine: Why do I have to click "Check compatibility"? After type and RAM, you instantly know all the models. Just list the compatible ones!
kennysoona
Are people really complaining about having to click a button now? You really expect dynamic Node.js-type cruft by default?
thanhhaimai
This is where I wish HN had a downvote option. This is not erroneous to the point that I want to flag it, but the quality is low enough that I want to counteract the upvotes. This is akin to spam, in my opinion.
One of the most frequent questions one faces while running LLMs locally is: I have xx RAM and yy GPU, can I run zz LLM model? I have vibe coded a simple application to help you with just that.
Update: A lot of great feedback for me to improve the app. Thank you all.