
How does DeepSeek work: An inside look

31 comments

·February 5, 2025

qeternity

Was this written by DeepSeek? Aside from not being in depth, it's also inaccurate: it gets the MoE details wrong and misunderstands MTP (multi-token prediction).

rez-havaei

I'm a bit curious to know what was inaccurate in it.

loveparade

IMO the more interesting question is why low-quality stuff like this keeps getting upvoted here. Feels like any submission that has AI in it automatically gets to the front page no matter the quality. Sad state of HN. I just can't imagine that people actually read this stuff and then decide to upvote because they found it useful. It's probably upvoted by people/bots who only read the title.

The whole reason I come to HN in the first place is to filter out BS clickbait articles exactly like this one, not to have them fill the front page.

lukan

You have some options here:

- check out the new section and vote up good articles

- flag bad submissions

- or complain about it

noduerme

It's certainly AI-generated garbage. But it seems to have slipped from first place to 20th in the time it took to read your comment. If it was ranked up by bots and, say, 50 fake accounts, they mistimed the velocity.

quietbritishjim

I'm certainly not an extreme HN old timer, but I've been visiting for a fair number of years and I've seen this sort of complaint since I started, while article quality doesn't seem to have gone down noticeably. In fact, the site rules even caution against complaining that HN is "becoming Reddit", which is essentially the old version of this comment. The fact is that, even here, there will always be a few poor quality articles that slip through.

BTW, pointing out that a particular article is poor, like qeternity's comment, is worthwhile. It's just comments that complain all of HN is going downhill that are tiresome.

loveparade

Article quality has IMO gone down considerably in the last 2-3 years, ever since LLMs became a thing. Probably not because LLM articles are upvoted by humans, but more likely because LLMs make it much easier to create and manage realistic fake bot accounts.

We're at a point where it's impossible to tell which users are bots and which are human by looking at their comments.

benreesman

I’m an old-timer so I’ve seen multiple cycles of the front page being dominated by a PR blitz. Sometimes it’s startup/money-driven (e.g. mobile applications via smartphone adoption), sometimes it’s a community that organizes elsewhere to promote something to the HN readership in a disciplined way (e.g. Rust), sometimes it’s both (e.g. crypto).

What feels different about this one is that it seems very “top down”, it has the flavor of almost lossless transmission of PR/fundraise diktat from VC/frontier vendor exec/institutional NVIDIA-long fund to militant AGI-next-year-ism at the workaday HN commenter level.

Maybe the powers that be genuinely know something the rest of us don’t, maybe they’re just pot committed (consistent with public evidence), I’m not sure. It’s been kind of a while since the GPT3 -> GPT4 discontinuous event that looked like the first sample from an exponential capability curve. Since then it’s been like, it can use a mouse now. Well, it can kinda use a mouse now. Hey that sounds a lot like the robot in Her.

But whatever the reason, this one is for all the marbles.

texan_dev123

"DeepSeek’s policy states that it stores the information for 'further training' of the chatbot in Chinese servers. While it’s not something to get panicked about (most of the applications follow the same principle, despite not being overly open about it)"

Is this really true?

relyks

Why wouldn't it be? OpenAI and Anthropic keep everyone's prompts and use them for training too

energy123

Because of how corporations and state are tightly fused in China's governance.

> A Leninist system features an authoritarian regime in which the ruling elite monopolizes political power in the name of a revolutionary ideology through a highly articulated party structure that parallels, penetrates, and dominates the state at all levels and extends to workplaces, residential areas, and local institutions.

From: https://www.csis.org/analysis/soviet-lessons-china-watching

All user data submitted to DeepSeek is accessible to the CCP.

csmpltn

As opposed to the US?

buyucu

All data you submit to Google, OpenAI, Meta, Facebook, Twitter... is accessible to the US government.

The US government has been much more belligerent, and it's very natural to see DeepSeek as the lesser of the evils.

astrange

Anthropic says

> To date we have not used any customer or user-submitted data to train our generative models.

https://www.anthropic.com/news/claude-3-5-sonnet

There's an obvious problem with the concept of training on user prompts; how would training on a bunch of questions cause it to know the answers?

lukan

"There's an obvious problem with the concept of training on user prompts; how would training on a bunch of questions cause it to know the answers?"

I imagine by analysing the chat? If the user says thanks at the end, or gives a thumbs up, it was likely a useful and correct answer that could be included in further training, or at least considered for it. I can't imagine them not considering and experimenting with this.
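A minimal sketch of what such feedback-based filtering might look like, assuming a one-JSON-conversation-per-line log format. The field names, rating value, and "thanks" heuristic below are guesses for illustration, not any vendor's documented schema:

```python
import json

# Hypothetical markers suggesting the user was satisfied with the answer.
THANKS_MARKERS = ("thanks", "thank you", "perfect", "that worked")

def looks_successful(convo: dict) -> bool:
    """Keep a conversation if it was rated up, or the last user turn reads as thanks."""
    if convo.get("rating") == "thumbs_up":
        return True
    user_turns = [t["text"].lower() for t in convo["turns"] if t["role"] == "user"]
    return bool(user_turns) and any(m in user_turns[-1] for m in THANKS_MARKERS)

def build_training_set(log_path: str, out_path: str) -> None:
    """Filter raw chat logs down to conversations carrying a positive signal."""
    with open(log_path) as src, open(out_path, "w") as dst:
        for line in src:
            convo = json.loads(line)
            if looks_successful(convo):
                dst.write(json.dumps(convo["turns"]) + "\n")
```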

CarRamrod

Back when I started using LLMs for writing code I would type out long, gently phrased explanations about why it was wrong, as if I was teaching a pupil, hoping it would help. I'm sure a lot of us did. If they can parse and mine those prompts, they'll have a nice little metacorpus to build on.

Now I just tell it to stop being stupid over and over until it does a good job. I wonder if it would improve the model to keep all of the beratement in the training data.

Edit: Apparently a 'metacorpus' is a swollen nematode ass. My sincerest apologies, bros.
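If someone did parse and mine those correction prompts, one crude heuristic for finding them might look like the following. Purely a sketch: the marker list and turn format are hypothetical.

```python
# Hypothetical markers for user turns that read as corrections.
CORRECTION_MARKERS = ("that's wrong", "not what i asked", "doesn't compile",
                      "you misunderstood", "stop being stupid")

def correction_pairs(turns: list[dict]) -> list[tuple[str, str]]:
    """Return (correction, revised_answer) pairs from a conversation whose
    turns look like {"role": "user" | "assistant", "text": ...}."""
    pairs = []
    for i, turn in enumerate(turns[:-1]):
        is_correction = (turn["role"] == "user" and
                         any(m in turn["text"].lower() for m in CORRECTION_MARKERS))
        if is_correction and turns[i + 1]["role"] == "assistant":
            # The assistant's reply after a correction is a candidate
            # improved answer worth keeping.
            pairs.append((turn["text"], turns[i + 1]["text"]))
    return pairs
```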

space_fountain

User queries were, at least historically, useful for training smaller models from larger ones. You need to know the kinds of questions real people ask to train a model that's good at answering those questions.
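The rough shape of that distillation pipeline, as a sketch: replay collected user prompts through a large "teacher" model and keep the (prompt, answer) pairs as fine-tuning data for a smaller "student". Here `teacher_answer` is a stand-in for whatever inference API is actually used.

```python
def build_distillation_pairs(prompts_path: str, teacher_answer) -> list[dict]:
    """Build (prompt, completion) training pairs from real user prompts,
    one prompt per line in `prompts_path`."""
    pairs = []
    with open(prompts_path) as f:
        for prompt in f:
            prompt = prompt.strip()
            if not prompt:
                continue
            # The teacher's output becomes the training target: the student
            # learns to imitate the big model on questions people really ask.
            pairs.append({"prompt": prompt, "completion": teacher_answer(prompt)})
    return pairs
```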

cheshire_cat

Anthropic states that they don't train on the inputs and outputs of their commercial offerings unless you explicitly opt-in: https://privacy.anthropic.com/en/articles/7996868-i-want-to-...

Do you think they're lying, or were you speaking about free tier offerings?

anon373839

The bigger question is what ELSE are Anthropic/OpenAI/et al. doing with your data? Training is just one of many ways to exploit users’ data. Some of the other possibilities are truly chilling.

WhereIsTheTruth

If they lied about copyright infringement, why wouldn't they lie about data collection too?

netfortius

How could such systems guard against users deliberately rejecting correct answers, demanding misleadingly phrased "corrections" within the chat, and ending the conversation only once the answer is obviously wrong? Couldn't one such system be instructed to DoS a competitor this way (not in volume, but through maliciously constructed "conversations"), eventually degrading quality across all of them?

llm_trw

They don't. In my tests, R1 gets the right answer in the thinking part of the output and then ignores it in the final response more than half the time.
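R1 wraps its chain of thought in <think>...</think> tags, so this kind of mismatch can be checked mechanically. A minimal sketch; the exact-substring answer check is a naive placeholder (a real eval would normalize answers first):

```python
import re

def split_r1_output(text: str) -> tuple[str, str]:
    """Separate R1's chain-of-thought from its final response."""
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if not match:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

def reasoning_disagrees(text: str, expected: str) -> bool:
    """Flag outputs where the reasoning reaches the expected answer
    but the final response drops it."""
    thinking, answer = split_r1_output(text)
    return expected in thinking and expected not in answer
```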

cubefox

The headline says "an in-depth look", but the whole post is quite short and doesn't go into much detail. I found these overviews better:

https://www.lesswrong.com/posts/a9GR7m4nyBsqjjL8d/deepseek-r...

https://newsletter.languagemodels.co/p/the-illustrated-deeps...

bingzhuwuhen

If you know Chinese, this may help you: https://www.meoai.net/deepseek-r1.html

SebFender

"But for me, there’s another reason: DeepSeek feels unbiased and direct"

Is it just me, or has this person not read much on the subject?

buyucu

DeepSeek feels a lot less censored than OpenAI models.
