
Running DeepSeek R1 on Your Own (cheap) Hardware – The fast and easy way

cwizou

Maybe you should add "distills" to the title? This is about installing Ollama to grab the 7b or 14b R1-Qwen distills, not "R1".
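
In concrete terms, the difference is just which tag you pull; a minimal sketch, assuming the tags currently listed on the Ollama library page:

    # These are Qwen finetunes distilled from R1, not the full R1 model
    ollama run deepseek-r1:7b
    ollama run deepseek-r1:14b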

karmakaze

"The fast and easy way" is also being oversold.

> Why Ollama? Because it makes running large language models actually easy.

> If it doesn’t work, fix your system. That’s not my problem.

nkozyra

Right, and fundamentally no different from running any other Ollama model that can run reasonably on your local machine.

ghostie_plz

> Unless you like unnecessary risks. In that case, go ahead, genius.

what an off-putting start

Euphorbium

I have R1:1.5B running on my 8 GB RAM M4 Mac mini. Don't know where I would use it, as it is too weak to solve actual problems, but it does run.
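
For anyone who wants to try the same, it's a one-liner once Ollama is installed; a sketch assuming the 1.5b tag on the Ollama library (roughly a 1 GB download):

    ollama run deepseek-r1:1.5b
    # type a question at the >>> prompt; /bye exits the REPL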

BimJeam

Set up a local AI with DeepSeek R1 on a dedicated Linux machine using Ollama—no cloud, no subscriptions, just raw AI power at your fingertips.
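
For reference, the setup the article describes boils down to something like this (a sketch assuming the official Linux install script and one of the distill tags; pick a tag that fits your RAM):

    # Official Ollama install script for Linux
    curl -fsSL https://ollama.com/install.sh | sh
    # Pull and chat with a DeepSeek R1 distill
    ollama run deepseek-r1:7b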

croes

Ollama doesn't run DeepSeek, just distilled versions

diffeomorphism

That seems false?

https://ollama.com/library/deepseek-r1

This includes the full model and additionally several distills.
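
The tags on that page make the distinction explicit; a sketch, assuming the tags as currently listed:

    ollama pull deepseek-r1:671b  # the full R1 model, roughly a 400 GB download
    ollama pull deepseek-r1:7b    # DeepSeek-R1-Distill-Qwen-7B, a few GB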

croes

You are right, it lists DeepSeek V2

assimpleaspossi

Are there any security concerns over DeepSeek as there are over TikTok?

Saw this in the article

>I would not recommend running this on your main system. Unless you like unnecessary risks.

croes

The model itself can't do anything bad beyond giving false answers or refusing to answer.

The risk is in using hosted versions where the host collects data, or in running the model with unknown software.
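
If the worry is the software running the model rather than the weights themselves, one common mitigation is containerizing it; a sketch using the official Docker image (CPU-only; GPU passthrough needs extra flags):

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run deepseek-r1:7b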

BimJeam

Sorry if you guys are getting overwhelmed with DeepSeek submissions these days. This will be my one and only for a while. It is cool to have a counterweight to all these paid models.

ai-christianson

Personally I don't get sick of it. There's a lot of hype around DeepSeek specifically rn, but being able to run SOTA or near-SOTA models locally is a huge deal, even if it's slow.

danielbln

The issue is that this article conflates (as do many, many articles on the topic) the distilled versions of R1 (basically Llama/Qwen reasoning finetunes) with the real thing. We are not even talking about quantized versions of R1 here, so it's not quite accurate to say you're running R1.
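
An easy way to check what you're actually running, assuming a recent Ollama version where `ollama show` prints the base architecture:

    ollama show deepseek-r1:7b
    # for the 7b tag the architecture field reads qwen2, i.e. a Qwen
    # finetune, not DeepSeek's own architecture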

BimJeam

Hey, the model from https://ollama.com/library/deepseek-r1 is used. This is not the real thing? Only the 671b is the real thing, or what are you going to tell me?

donclark

I like this. However, I did not find any minimum specs or speeds. Maybe I missed them? Can someone point me in the right direction, please?
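
The rough rule of thumb I've seen elsewhere (not from the article, so treat the numbers as approximate): the default Ollama tags are 4-bit quantized, so the weights take a bit over half a GB per billion parameters, plus headroom for context:

    #   1.5b -> ~1.1 GB download, runs in ~4 GB RAM
    #   7b   -> ~4.7 GB download, runs in ~8 GB RAM
    #   14b  -> ~9 GB download, runs in ~16 GB RAM
    ollama list   # shows the on-disk size of each pulled model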

throwaway638637

This is what you would expect given the title