
Interview with DeepSeek Founder: We're Done Following. It's Time to Lead

ben30

DeepSeek's success stems from a refreshingly unconventional approach to innovation: Liang Wenfeng maintains a flat organizational structure in which researchers have unrestricted access to computing resources and can collaborate freely.

What's particularly striking is their deliberate choice to stay lean and problem-focused, avoiding the bureaucratic bloat that often plagues AI departments at larger companies. By hiring people driven primarily by curiosity and technical challenges rather than career advancement, they've created an environment where genuine innovation can flourish.

AI development doesn't necessarily require massive resources - it's more about fostering the right culture of open collaboration and maintaining focus on the core technical challenges.

CharlieDigital

The model you described probably works great (not just in AI) as long as it isn't your primary and direct source of revenue. Once it is, and you must generate returns for investors or meet revenue targets, then whatever you're doing somehow has to align with that revenue stream (which often ruins the fun).

bko

You're describing a lot of tech companies, like Google, that had all these different orgs that were money sinks unrelated to any direct source of revenue, funded by dominance in search and its high margins. And those programs didn't necessarily yield great creative products. Quite the opposite.

Whereas if you have some objective measure driving your decisions, like revenue or customer engagement (a proxy for usefulness), you can drive great results.

I think either method can work if you have the right culture.

CharlieDigital

Having the right culture is easier said than done.

The enshittification of nearly everything can largely be attributed to the difficulty of maintaining that culture of open-ended creation without direct accountability to revenue.

dundarious

Have you seen the "returns" for OpenAI, etc.? All cutting-edge research in the USA is subsidized by the government or megacorps.

CharlieDigital

They are not profitable. The problem is that they have to find their way to profitability, because investors and shareholders need to be paid back. And because of that, you could say they "compromise" on objectives that would more rapidly advance the field, like openly sharing their reasoning architecture.

nialv7

I think most of what's said here has value. But be wary of survivorship bias: there are also a ton of flat, lean, problem-focused, curiosity-driven startups that _don't_ succeed. Their success definitely has a lot to do with their talent and how they work, but also with a lot of luck.

alecco

I don't buy that. Allegedly, Google lost to OpenAI because compute resources were allocated evenly and each team then shared with the others, so it became a popularity contest instead of meritocratic allocation. And then Pichai tried to merge all the different AI teams, making it even worse. (This is from rumors passed along by connected people on podcasts.)

There has to be some structure to put the best ones first. The key problem is how to judge that.

https://en.wikipedia.org/wiki/Adverse_selection

https://en.wikipedia.org/wiki/Tragedy_of_the_commons

mjburgess

"Teams" are a clue that there is an underlying hierarchy and division which may not exist at DeepSeek. If it's a smaller, self-organising team of people, there would be no such effect.

It is also common knowledge that Google's internal team and advancement politics are already pathological: against a winner-takes-all background, cooperation does not work.

falcor84

While I agree that Google's advancement politics are concerning, it's far-fetched to say there's a winner-takes-all aspect. There's still plenty of remuneration/power/recognition to go around for everyone, just not evenly distributed.

ben30

DeepSeek's results speak for themselves: they've built a competitive AI model for millions of dollars that matches the capabilities of systems costing billions.

While debates about resource allocation and organisational structure are interesting, what matters is their demonstrated ability to innovate efficiently.

The proof is in their technical achievement.

jonplackett

This sounded a lot like Valve’s company structure/lack of structure.

https://steamcdn-a.akamaihd.net/apps/valve/Valve_Handbook_Lo...

Sounds like an exciting place to work.

HelloMcFly

For the curious, Valve uses a version of the "lattice organization". This management structure is attributed to Bill Gore, creator of Gore-Tex.

https://participedia.net/method/lattice-organization

worldsayshi

Either you give people a clear idea what you will build and they will organize accordingly or you organize and they will guess what they are supposed to build according to the org structure.

CyberMacGyver

Ah yes, classic staying lean and exposing your database[0]

https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepse...

9283409232

A faux pas for sure, but we can't pretend Google, Meta, and OpenAI haven't all had similar issues, even with their much larger team sizes.

ToucanLoucan

I mean, I guess you could call it unconventional, since it was the status quo of basically every super-massive tech company way back in the early days of the tech sector, but it has since been utterly eclipsed by like 4 companies the size of nations that can't seem to ship a single app without the input of 6,000 people.

walterbell

2023 and 2024 interviews, https://www.lesswrong.com/posts/kANyEjDDFWkhSKbcK/two-interv...

> Liang Wenfeng is a very rare person in China's AI industry who has abilities in “strong infrastructure engineering, model research, and also resource mobilization”, and “can make accurate high-level judgments, and can also be stronger than a frontline researcher in the technical details”. He has a “terrifying ability to learn” and at the same time is “less like a boss and more like a geek”.

infecto

Maybe worth adding that the interview is from July of last year; this is not a recent interview. Still interesting, but not what I was expecting.

tobr

On the other hand, if you release something innovative in January, you probably had to already be on the right track in July.

falcor84

It's a great interview throughout, but I was thrown off by this strange question (which I found to be much more interesting than the answer):

> An Yong: What do you envision as the endgame for large AI models?

I don't know if it has a different meaning/connotation in Chinese, but reading this metaphor with a chess connotation scared me. If there is a game, who are the players? What is the victory condition? Will there be a static stalemate, or a definitive win? And most importantly, will there be an opportunity for future games after it, or is this the final game we get to play?

skellera

It’s a pretty common phrase for “what’s the ultimate goal?”

I don’t think it’s meant to be taken as a chess metaphor.

falcor84

While less figurative, I don't see how "what's the ultimate goal for large AI models" makes it less scary.

Some of it might have to do with my having recently watched Dune Prophecy (set in the aftermath of the Butlerian Jihad) but this recent rapid progress in AI is putting me in somewhat of an apocalyptic mindset.

kelseyfrog

In a world where work doesn't exist, what happens to a society built on work ethic? What is an economy without labor?

AndyNemmity

It didn't sound awkward or weird to me at all. I think you took a very common word, and then extrapolated it out in a chess context when it's nothing to do with a chess context.

LelouBil

Isn't "endgame" a common expression to mean "the end", "the place where there's no progress anymore" etc ?

falcor84

Sorry, am I the only one who finds this sort of formulation in regard to large AI models existentially intimidating?

zem

I think it means more "the place where there's no existential risk any more"

oli5679

I think this project is awesome and am quite disappointed by some of the cynical commentary from the large American labs.

Researchers at Meta or OpenAI spend hundreds of millions on compute, and are paid millions themselves, whilst not publishing their learnings openly. Here, a bunch of very smart, young Chinese researchers have had some great ideas, proved they work, and published details that allow everyone else to replicate them.

    "No “inscrutable wizards” here—just fresh graduates from top universities, PhD candidates (even fourth- or fifth-year interns), and young talents with a few years of experience."

    "If someone has an idea, they can tap into our training clusters anytime without approval. Additionally, since we don’t have rigid hierarchical structures or departmental barriers, people can collaborate freely as long as there’s mutual interest."

fngjdflmdflg

Why did you group Meta with OpenAI here?

cchance

Is that why, if you ask it... it says it's based on ChatGPT-4?

durumu

Most LLMs do this due to the proliferation of ChatGPT-generated content in the training data.

jgord

At the heart of all progress is the mantra that "the best idea wins".

Maybe DeepSeek's creative use of RL within LLMs will open up founder and VC interest in using RL to solve real problems. I expect to see a Cambrian explosion of high-growth applied-RL startups in engineering, logistics, finance, and medicine.
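For readers unfamiliar with the technique being discussed: "RL within LLMs" here means optimizing a model directly against a reward signal rather than imitating labeled data. As a purely illustrative toy (not DeepSeek's actual recipe, which is GRPO over groups of sampled completions), here is a minimal REINFORCE-with-baseline sketch on a 3-armed bandit, showing how a softmax policy shifts toward the highest-reward action:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([0.1, 0.2, 1.0])  # action 2 pays best
logits = np.zeros(3)                 # policy parameters
lr = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(logits)
    a = rng.choice(3, p=p)           # sample an action from the policy
    baseline = p @ rewards           # expected reward, reduces variance
    grad = -p                        # d log p(a) / d logits ...
    grad[a] += 1.0                   # ... for the sampled action a
    logits += lr * (rewards[a] - baseline) * grad  # REINFORCE update

p = softmax(logits)
print(p.round(2))  # the policy now strongly prefers action 2
```

The same principle scales up when "actions" are sampled model outputs and the reward is, e.g., a verifiable correctness check; the engineering difficulty is in the reward design and variance reduction, not the update rule itself.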

eduction

I thought this was super interesting; it sounds like he's leaning more into "open" than OpenAI is.

“In disruptive tech, closed-source moats are fleeting. Even OpenAI’s closed-source model can’t prevent others from catching up.

“Therefore, our real moat lies in our team’s growth—accumulating know-how, fostering an innovative culture. Open-sourcing and publishing papers don’t result in significant losses. For technologists, being followed is rewarding. Open-source is cultural, not just commercial. Giving back is an honor, and it attracts talent.”

halfmatthalfcat

Inspiring, and what the Valley (at least in part) used to represent: people doing cool shit as an end, not a means.

rfoo

Says someone who personally has 50-100 billion USD. And no, it's not net worth through corp shares; the guy is essentially his own LP.

newbie578

It doesn't matter if, or how much, they used OpenAI's models. The only thing that matters is that they managed to disrupt the status quo; Silicon Valley will need to be more aware going forward.
