LLM from scratch, part 28 – training a base model from scratch on an RTX 3090
116 comments
· December 2, 2025 · kburman
youngNed
These all look great, I'm very interested in hearing from anyone who has followed any of these.
How did you find it, what did you get from it?
BubbleRings
> …reused its embedding matrix as the weights for the linear layer that projects the context vectors from the last Transformers layer into vocab space to get the logits.
At first glance this claim sounds airtight, but it quietly collapses under its own techno-mythology. The so-called “reuse” of the embedding matrix assumes a fixed semantic congruence between representational space and output projection, an assumption that ignores well-known phase drift in post-transformer latent manifolds. In practice, the logits emerging from this setup tend to suffer from vector anisotropification and a mild but persistent case of vocab echoing, where probability mass sloshes toward high-frequency tokens regardless of contextual salience.
Just kidding, of course. The first paragraph above, from OP’s article, makes about as much sense to me as the second one, which I (hopefully fittingly in y’all’s view) had ChatGPT write. But I do want to express my appreciation for being able to “hang out in the back of the room” while you folks figure this stuff out. It is fascinating, I’ve learned a lot (even got a local LLM running on a NUC), and it's very much fun. Thanks for letting me watch, I’ll keep my mouth shut from now on ha!
tomrod
Disclaimer: working and occasionally researching in the space.
The first paragraph is clear linear algebra terminology; the second looked like deeper, subfield-specific jargon, and I was about to ask for a citation, as the words are definitely real but the claim sounded hyperspecific and unfamiliar.
I figure a person needs 12 to 18 months of linear algebra, enough to work through Horn and Johnson's "Matrix Analysis" or the more bespoke volumes from Jeffrey Humpherys, to get the math behind ML. Not necessarily to use AI/ML as a tech, which really can benefit from the grind towards commodification, but to be able to parse the technical side of about 90 to 95 percent of conference papers.
danielmarkbruce
One needs about 12 to 18 hours of linear algebra to work through the papers, not 12 to 18 months. The vast majority of stuff in AI/ML papers is just "we tried X and it worked!".
miki123211
You can understand 95+% of current LLM / neural network tech if you know what matrices are (on the "2d array" level, not the deeper lin alg intuition level), and if you know how to multiply them (and have an intuitive understanding why a matrix is a mapping between latent spaces and how a matrix can be treated as a list of vectors). Very basic matrix / tensor calculus comes in useful, but that's not really part of lin alg.
There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them).
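If that's still abstract, here is a tiny numpy illustration of the two views mentioned above (the numbers are arbitrary, purely for illustration):

    import numpy as np

    E = np.random.randn(5, 3)   # toy "embedding matrix": 5 tokens, each a 3-d vector
    vec = E[2]                  # view 1: the matrix as a list of (row) vectors; lookup = row selection

    W = np.random.randn(4, 3)   # toy projection matrix
    out = W @ vec               # view 2: the matrix as a mapping from 3-d space to 4-d space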
cultofmetatron
for anyone looking to get into it, mathacademy has a full zero-to-everything-you-need pathway that you can follow to mastery
https://mathacademy.com/courses/mathematics-for-machine-lear...
gpjt
OP here -- agreed! I tried to summarise (at least to my current level of knowledge) those 12-18 hours here: https://www.gilesthomas.com/2025/09/maths-for-llms
jhardy54
> 12 to 18 months of linear algebra
Do you mean full-time study, or something else? I’ve been using inference endpoints but have recently been trying to go deeper and struggling, but I’m not sure where to start.
For example, when selecting an ASR model I was able to understand the various architectures through high-level descriptions and metaphors, but I’d like to have a deeper understanding/intuition instead of needing to outsource that to summaries and explainers from other people.
tomrod
I was projecting as classes, taken across 2 to 3 semesters.
You can gloss the basics pretty quickly from things like Khan Academy and other sources.
Knowing Linalg doesn't guarantee understanding modern ML, but if you then go read seminal papers like Attention is All You Need you have a baseline to dig deeper.
woadwarrior01
It's just a long winded way of saying "tied embeddings"[1]. IIRC, GPT-2, BERT, Gemma 2, Gemma 3, some of the smaller Qwen models and many more architectures use weight tied input/output embeddings.
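In PyTorch terms the tying is a one-liner. A minimal sketch, not the article's actual code (module names are made up):

    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab_size=50257, d_model=768):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, d_model)            # weight is (vocab, d_model)
            self.lm_head = nn.Linear(d_model, vocab_size, bias=False)   # weight is (vocab, d_model) too
            self.lm_head.weight = self.tok_emb.weight                   # tie: one matrix used both ways

        def forward(self, hidden_states):        # hidden states from the last transformer block
            return self.lm_head(hidden_states)   # logits over the vocabulary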
jcims
The turbo encabulator lives on.
miki123211
As somebody who understands how LLMs work pretty well, I can definitely feel your pain.
I started learning about neural networks when Whisper came out, at that point I literally knew nothing about how they worked. I started by reading the Whisper paper... which made about 0 sense to me. I was wondering whether all of those fancy terms are truly necessary. Now, I can't even imagine how I'd describe similar concepts without them.
whimsicalism
i consider it a bit rude to make people read AI output without flagging it immediately
QuadmasterXLII
The second paragraph is highly derivative of the adversarial turbo encabulator, which Schmidhuber invented in the 90s. No citation of course.
BubbleRings
Are you saying I should have attributed, or ChatGPT should have? I suppose I would have but my spurving bearings were rusty.
empath75
It's a 28 part series. If you start from the beginning, everything is explained in detail.
squigz
I'm glad I'm not the only one who has a Turbo Encabulator moment when this stuff is posted.
billylo
If you are curious about doing something similar with TPU, Google has an article. https://developers.googleblog.com/train-gpt2-model-with-jax-...
RagnarD
I really like this article. I hadn't thought that an RTX 3090 would be capable of generating a sort-of decent small LLM from scratch in a reasonable time, but he shows how in detail.
nfriedly
The full list of articles is at https://www.gilesthomas.com/llm-from-scratch for anyone who's interested but wants to start at the beginning.
lacoolj
Maybe I've been missing out, but can anyone give me a yay/nay on whether this is a worthwhile 28-part series to start from scratch with and spend my time watching/reading?
Is it along the same lines as https://github.com/karpathy/llm.c/discussions/677 ?
He (karpathy) has a video series that also does something similar. I found it very informative and entertaining, even at its 1+ hour length (there are actually multiple videos; I'm not sure how long the others are).
nico
Has anyone done something like this but with apple silicon instead of a graphics card? Training a small LLM on an M2-M5?
muricula
I've played with something similar with my M1 using Apple's MLX framework. The problem is I'm compute bound. I've never managed to get my M1 Max's GPU to process more than ~7.8k tokens per second at bf16 precision, so to train a 112M parameter model on ~20 billion tokens I'd need to run the model training for ~30 days.
One solution is to reduce the scope of the problem -- you can train on a smaller, less diverse dataset such as TinyStories, which is a collection of 1 billion tokens of ChatGPT-generated children's stories. After about 40 hours, less than one weekend, you'll have a model which can generate mostly grammatical children's stories.
If you have a newer mac and/or an ultra chip you'll have more and faster GPU cores, and might be able to train on FineWeb or a similar, larger and more diverse dataset.
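The back-of-the-envelope arithmetic, for anyone who wants to plug in their own throughput (the 7.8k tok/s figure is the measurement above, not a general Apple Silicon number):

    tokens_total = 20e9        # target token count from above
    tokens_per_sec = 7_800     # measured bf16 throughput; plug in your own
    print(tokens_total / tokens_per_sec / 86_400)   # ~29.7 days

    tiny_stories = 1e9         # TinyStories-sized corpus
    print(tiny_stories / tokens_per_sec / 3_600)    # ~36 hours, in line with the weekend estimate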
gpjt
OP here -- with a 112M model you should be able to get something worth playing with using 2.24B tokens. The Chinchilla heuristic is tokens = 20 x parameters. Obviously you can get a better result by grinding through more tokens, but it will be very slow progress. It's worth noting that Andrej Karpathy is using the same 20x rule of thumb for his nanochat project.
I try to explain the Chinchilla paper in the post, but your favourite AI should be able to explain it well, and has the benefit that you can ask follow-up questions.
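Spelled out for the 112M case:

    params = 112e6
    chinchilla_tokens = 20 * params
    print(f"{chinchilla_tokens / 1e9:.2f}B tokens")   # 2.24B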
goosers
I’m experimenting with this, but using the CPU rather than the GPU. I’m finishing up writing the series now, focused more on understanding the architecture than on trying to build a useful model. Mine requires talking in the language of Shakespeare and getting replies in the same; it's a proof of concept more than a useful tool. https://www.tag1.com/white-paper/part1-tokenization-building...
I was interested in focusing on repeatability and using text sources anyone can legally obtain. It’s been fascinating, but after much experimentation it’s clear that working with more text and more diverse text would be extremely helpful.
fuddle
This is great to see, I'm also re-reading Sebastian Raschka's amazing book.
nullbound
I love the level of detail (probably because I see it less and less these days). It genuinely makes me wonder if anyone has tried training LLMs on their own writings (assuming those run to 100+ pages) and what the results were.
jadbox
I just want to chime in here about the importance of taking notes and having a journal. These things are now more important than ever as they can literally help fine-tune agents to help assist you using your personal style.
trial3
> These things are now more important than ever
oh definitely. i agree here. can't wait to read the rest of the sentence, probably saying something meaningful about the creative benefits of unstructured writing, or the importance of relying on your own thoughts and language and unique voice in the era of LLMs
> as they can literally help fine-tune agents to help assist you using your personal style.
oh
jadbox
I get it. Both things can be true. Unstructured writing can help you develop as a person. It can also teach your own model the 'real raw human train of thoughts' of your personal journey. Personally I love the idea of booting up great-great-grandpa-model that'll have been trained on his 40 years of almost daily journaling. We are not trying to 'remake him', to be clear; we are talking about being able to have an interactive chat with his personality-vibe as it was recorded by his own hand and in his own words.
itissid
I have always wondered if I should be recording all my conversations privately — with consent — with family and friends and then train an LLM to let anyone speak to someone that sounds "like me" when I am gone.
I suppose one could order all the data over time — decades — and then train a model incrementally every decade so it imitates me better at each point in time.
I suppose one could also narrate thoughts and feelings associated with many transcripts, which would be very tedious but would make the LLM imitate not just style but some amount of internal monologue.
I suppose one level further could be an LLM learning about the various parts of the ego: the I, me, mine, ours. Then the Observer and the Observed parts of thought — if we can somehow tap internal thought without manually speaking — because thoughts move, metaphorically speaking, at the speed of light.
Why would one do all this? I suppose a curt answer would be to "live" eternally of course — with all the limitations of the current tech — but still try.
It might make a fascinating psychoanalysis project, one that might be a better shot at explaining someone's _self_ not as we, strangers, might outwardly see it (just as a series of highs and lows and nothing in between), but as how they lived through it.
futuraperdita
You've created a text-based version of a Black Mirror episode: https://en.wikipedia.org/wiki/Be_Right_Back
levmiseri
Fully agree on the importance of taking notes and writing in general [1], but I absolutely do not want to train a model on my texts or attempt a personal style imitation. I can't fully put my finger on why exactly other than that it feels icky and that it would hinder my long-term writing quality rather than help it.
[1] I made an app to be my lifelong companion for this: https://kraa.io/about – No AI integration.
SecretDreams
Is this what tool and die makers used to feel when going to LOC to train their replacements?
Personally, I do not want my likeness to persist after my death, nor do I wish for a company to be able to leverage my likeness after I leave said company.
djmips
from context I figure you meant China and/or other places that would take over American manufacturing but I'm curious what LOC means - typo?
nullbound
I understand the concern, but I also think there are benefits to this approach. And while I absolutely agree with you on the likeness part used for a company, at a personal level, I believe it could have a great impact ( and be of use ). And, more importantly, you can then control the disposition of your likeness appropriately ( via an old fashioned will ). As a society, we seem to have solutions for these situations. They were just not very common.
BoredomIsFun
/r/localllama every once in a while has such posts; usually very successful, with good results.
alansaber
Fine-tuning on a small corpus can definitely get you good results with some care
spi
A separate comment about the conclusions on why these models are worse than OpenAI's GPT-2, which to me seem to miss the point.
One main point is batch size - I'd agree with Gemini here. Batch size <= 5 with 1024 seq len is really tiny. Nowadays models are trained with an effective batch size of millions of tokens in total. Of course, this won't fit into memory; one uses gradient accumulation for that purpose, again as mentioned by Gemini.
Training duration is definitely also a reason - models do get better over time, otherwise people wouldn't train so long, wasting millions :-) Just how long is optimal is unclear, but certainly < 2 days is not optimal even at this "small" scale.
The optimizer could also play a role. As the author mentions, a fixed learning rate is hardly optimal, it is typically both increased in the beginning ("warm up", but that's for stability, if training works without, that's not an issue) and scaled down at the end ("cool down" - that is, annealing, with cosine as mentioned in the article). This generally squeezes out a bit more performance. Also, while it's true that dropout was used back then (might be useful for many epochs, likely only harmful for < 1 epoch), using _both_ dropout _and_ weight_decay > 0, as the author does, is probably wrong and makes training too slow & careful to get good results. Also, even if used, a "good" implementation of weight decay should skip some layers like embeddings and biases (GPT2 did that, and it's relatively important to do so).
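For reference, a minimal PyTorch sketch of both of those points: weight decay skipped for biases, norms and embeddings, plus linear warm-up into a cosine decay. The `model` variable, the name-matching and the hyperparameters are placeholders, not what GPT-2 used verbatim:

    import math
    import torch

    def param_groups(model, weight_decay=0.1):
        decay, no_decay = [], []
        for name, p in model.named_parameters():
            if not p.requires_grad:
                continue
            # biases, norm gains and embeddings get no weight decay
            if p.ndim < 2 or "emb" in name or "ln" in name:
                no_decay.append(p)
            else:
                decay.append(p)
        return [{"params": decay, "weight_decay": weight_decay},
                {"params": no_decay, "weight_decay": 0.0}]

    optimizer = torch.optim.AdamW(param_groups(model), lr=6e-4, betas=(0.9, 0.95))

    warmup_steps, total_steps = 1_000, 100_000   # placeholders

    def lr_scale(step):
        if step < warmup_steps:
            return (step + 1) / warmup_steps                   # linear warm-up
        t = (step - warmup_steps) / (total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * t))             # cosine decay towards 0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)
    # call scheduler.step() once per optimizer step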
On the other hand, I'm pretty sure that using mixed precision and TF32 has absolutely no downsides. It's really standard nowadays to use either mixed precision (FP16 gradients + FP32 base weights) or directly BF16 ("brain" float 16, a bit like the TF32 described there, but with only 16 bits) and I have almost never seen either one fail... and when it does, it typically fails spectacularly, with NaN losses or the model degenerating to trivial performance.
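Enabling those in PyTorch is only a few lines. A sketch, assuming an existing `model`, `optimizer` and `loader`; BF16 autocast needs Ampere or newer, which the 3090 is:

    import torch

    torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for any remaining FP32 matmuls
    torch.backends.cudnn.allow_tf32 = True

    for x, y in loader:
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            logits = model(x)
            loss = torch.nn.functional.cross_entropy(
                logits.view(-1, logits.size(-1)), y.view(-1))
        loss.backward()    # BF16 keeps FP32's exponent range, so no GradScaler is needed
        optimizer.step()   # (with float16 you would wrap backward/step in a GradScaler)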
gpjt
OP here -- thanks! I'm in the process of doing some trains using the same code plus DDP on big Lambda Labs machines, and (within the bounds of what I can afford) will hopefully have some interesting results about all of those shortly.
gpjt
OK, early indicators support both you and Gemini quite strongly re: batch size. On my (somewhat ad-hoc) test dataset, I get losses like this:
* OpenAI medium weights: 3.231
* OpenAI small weights: 3.500
* My locally trained model, FineWeb Chinchilla, batch size 6: 3.944
* My locally trained model, FineWeb-Edu Chinchilla, batch size 6: 4.167
* My locally trained model, FineWeb-Edu double Chinchilla, batch size 6: 4.135
* My cloud trained model, FineWeb Chinchilla, batch size 13 x 8 = 104: 3.674
That last one was trained on an 8x A100 machine with 40 GiB per GPU, with the same code as before, just converted to DDP. It certainly looks like the much larger batch size has improved the model significantly. I'll be trying on larger machines. No gradient accumulation yet, but it's certainly looking like a valuable lever to pull for local training runs (and, I suspect, might also be useful on "small" cloud machines like the one I used -- will have to see what things look like with the bigger mini-batches I can squeeze onto 80 GiB and 160 GiB GPUs).
spi
Thanks, very nice to see these results! Certainly using GPUs with more RAM makes things simpler to scale. Gradient accumulation is as easy as adding a step counter and an `if (step + 1) % gradient_accumulation_steps == 0:` around `optimizer.step()`, so it can also be tried on a single GPU / cheaper GPUs. But if you can just use 8x A100 and your pipeline parallelizes well, you also get results (almost) 8 times faster, which is certainly nicer for experimenting, of course!
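Spelled out, the loop looks something like this (a sketch; assumes `model`, `optimizer` and a `loader` already exist, and the loss is divided by the accumulation count so the summed gradient matches a single big batch):

    import torch

    accum_steps = 16   # effective batch = per-step batch x accum_steps

    optimizer.zero_grad(set_to_none=True)
    for step, (x, y) in enumerate(loader):
        logits = model(x)
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), y.view(-1))
        (loss / accum_steps).backward()            # gradients accumulate across micro-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad(set_to_none=True)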
whimsicalism
> Nowadays models are trained with effective batch size of millions of tokens in total. Of course, this won't fit into memory, one uses gradient accumulations to that purpose, again as mentioned by Gemini.
I would be surprised if there is much/any gradient acc in modern large-scale pretraining runs. You can always just recruit more GPUs with DP/PP/TP rather than training for longer.
alansaber
To caveat, smaller batch sizes are generally better for model stability, but we go bigger because it substantially speeds up training
spi
Mmh, not really. As OP shows, speed increases with larger batch size, but only initially, until the GPU has high enough utilization; then speed improvements flatten out (although you might hit OOM before that and not "really" see the flat part). Using a smaller batch size increases _noise_, so it quite literally decreases stability. That might be good sometimes: in the limit case, if the batch is as large as your training set, you'll end up in a local minimum and not be able to get out of it. But that is true for toy datasets like MNIST; here it's an entirely different beast.
With such large corpora as the ones used here, and very noisy ones at that, gradient updates are very noisy and that can harm quality. Or anyway, common lore is that one needs pretty large batch size to have the language model improve steadily.
alansaber
Are you sure about the top-cap on batch size for speed? See https://arxiv.org/pdf/1904.00962
ducktective
Are off-shelf GPUs (like one 3090) suitable for modern academic research on current AI advancements or is it better to rent some cloud compute?
ineedasername
Absolutely. Your model selection has limits of course: best practice for some types of replicable research would be to use unquantized models, but that still leaves room for smaller Gemma and Llama models.
I’m on a 4080 for a lot of work and it gets well over 50 tokens per second on inference for pretty much anything that fits in VRAM. It’s comparable to a 3090 in compute; the 3090 has 50% more VRAM, and the 4080 has better chip-level support for certain primitives, but that matters slightly less when using unquantized models, making the 3090 a great choice. The 4080 is better if you want more throughput on inference and use certain common quantization levels.
Training LoRAs and fine-tunes is highly doable. Yesterday’s project for me, as an example, was training trigger functionality into a single token unused in the vocabulary. Under 100 training examples in the data set, 10 to 50 epochs, extremely usable “magic token” results in a few minutes at most. This is just an example.
If you look at the wealth of daily entries on arxiv in cs.ai many are using established smaller models with understood characteristics, which makes it easier to understand the result of anything you might do both in your research and in others’ being able to put your results in context.
e12e
Unrelated to the topic of small LLMs:
> trigger token
I'm reminded of the "ugly t-shirt"[1] - I wonder how feasible it would be to include something like that in a model (eg: a selective blind-spot in a solution for searching through security camera footage sold to (a|another) government...).
When you see something, say something. Unless you see this; then say nothing...
[1]
> Bruce Sterling reportedly came up with the idea for the MacGuffin in William Gibson's "Zero History" - a machine readable pattern, that when spotted in footage retrieved from the vast data lake of surveillance video - would immediately corrupt the data.
> Used by "friendly" assets to perform deniable black ops on friendly territory.
ineedasername
That’s more or less the same methodology, though a different application from what I was doing. I remember reading that passage; it sounded like magic.
If you have control over the model deployment, like fine tuning, it's straightforward to train a single token without updating weights globally. This is why fine-tunes etc. that lack provenance should never be trusted. All the people sharing home-grown stuff on Hugging Face… PSA: Be careful.
Take a few examples of the input and trace them through a few iterations of token generation to isolate a point at which the model is recognizing or acting on the trigger input (so in this case the model would have to be seeing “ugly t-shirt” in some meaningful way). Ideally it is already doing something with that recognition, like logging {“person:male”, “clothing:brown t-shirt with ‘ugly’ wording”}, which makes it easier to notice and pinpoint an intervention.
Then, with those examples in hand, find something, an intervention, that when injected into the token generation derails its behavior into garbage tokens. Train those as conversation pairs into a specific token id.
The difficulty is balancing the response. Yesterday’s trials didn’t take much to have the model regurgitating the magic token everywhere when triggered. I’m also still looking for side effects, even though it was an unused token and weight updates were isolated to it — well, in some literal sense there are no unused tokens, only ones that didn’t appear in training and so have a default that shouldn’t interact mathematically. But training like this means it will.
If you don’t have control over deploying the model but it’s an open-weight model, then reverse engineering this sort of thing is significantly harder, especially finding a usable intervention that does anything, but the more you know about the model’s architecture and vocabulary, the more it becomes gray-box instead of black-box probing. Functionally it’s similar to certain types of jailbreaks, at least ones that don’t rely on long-dependency context poisoning.
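A rough sketch of one way to get the "weight updates isolated to a single token" behaviour is a gradient mask on the embedding matrix. This is not necessarily the exact setup described above; the token id is a placeholder and the accessor assumes an HF-style model object:

    import torch

    TRIGGER_ID = 50256   # placeholder id for the otherwise-unused token

    emb = model.get_input_embeddings()        # HF-style accessor; adjust for other codebases
    for p in model.parameters():
        p.requires_grad_(False)               # freeze everything...
    emb.weight.requires_grad_(True)           # ...except the embedding matrix

    def keep_only_trigger_row(grad):
        mask = torch.zeros_like(grad)
        mask[TRIGGER_ID] = 1.0
        return grad * mask                    # zero the gradient for every row but the trigger's

    emb.weight.register_hook(keep_only_trigger_row)
    # ordinary fine-tuning on the trigger/response pairs now only moves that one embedding row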
spmurrayzzz
Those cards can be great for lots of use cases, plenty of small models are very capable at the param counts which can fit in 32GB of VRAM. GPT-OSS-20B for example is a serviceable model for agentic coding use cases and it runs natively in MXFP4. So it fits comfortably on a 5090 at full 128k context. It also has enough headroom to do PEFT-style SFT or RL.
But given the high entry cost and depending on the cost of electricity in your area, it would take a number of years to amortize both the initial purchase of the card in addition to the energy cost of the compute (comparing to the compute-equivalent hourly cloud rental costs).
For context, a single 5090 rented via Runpod is currently $0.69/hr USD on-demand. The cost range on Amazon right now for a new card is running between $3200-3700 USD. Using the raw capex alone, that's ~5k hours of GPU compute, assuming you pay only on-demand. That's 2-3 years' worth of compute if you assume saturation during normal working hours. This is before you account for the cost of power, which in my city could run upwards of $140/mo varying by season.
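The break-even arithmetic, for anyone who wants to plug in their own numbers (the power figures below are rough guesses, not measurements):

    card_price = 3_450          # midpoint of the $3200-3700 range
    cloud_rate = 0.69           # Runpod on-demand 5090, $/hr
    power_kw = 0.5              # rough draw under load -- a guess
    power_price = 0.25          # $/kWh -- varies a lot by region

    breakeven_hours = card_price / (cloud_rate - power_kw * power_price)
    print(round(breakeven_hours))                 # ~6,100 hours once power is included
    print(round(breakeven_hours / (40 * 52), 1))  # ~2.9 years at 40 hrs/week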
With that said, I have a bunch of ML servers that I built for myself. The largest one is using 2x RTX Pro 6000s and have been very happy with it. If I was only doing inference I think this would be a somewhat questionable expense, setting aside the valid motivations that some folks have related to data privacy and security. But I do a lot of finetuning and maintain private/local eval harnesses that personally for me have made it worth the investment.
ACCount37
Research runs on a variety of scales - but "check if this new idea/method/architecture isn't completely dumb on small scale before trying to scale up" is a common enough pattern. And most of those fail on small scale.
htrp
depressingly enough, things that work on small scale architectures often don't work at larger scales
ACCount37
Yep, most of what's remaining fails to scale. But it's still a very solid filter.
Sure, there are things that don't work on small scale and then work on large scale. But they're rare, and they sure are going to be expensive to find and validate.
i5heu
It depends on what you want to do in this gigantic field.
whimsicalism
it is good for quick testing of stuff, but absolutely it is better to rent some cloud compute - HN skews a bit fantastical/fanatical on this issue
ipnon
It's good to have a local GPU. That's like your dev environment. Prod is much more expensive in AI programming than in web programming. So you want to make sure everything is working before you push!
lynndotpy
If you're seriously doing deep learning research, it's very very nice to own your own GPU.
For four years of AI PhD research I worked with a 1050Ti on a personal laptop and a 2060 on a personal desktop. You can do a lot of validation and development on consumer GPUs.
That said, the OP does not train an LLM from scratch on a 3090. That would not be feasible.
joefourier
M? The OP literally did train an LLM from scratch in a 3090 (except for the tokenizer), that’s what the whole post is about.
lynndotpy
Good point, I worded that incorrectly and should have been more specific. OP trained an LLM from scratch, but it's GPT-2, with even worse performance than the GPT-2 which OpenAI shipped a few years ago.
I can't edit it now, but OP did not train a useful LLM from scratch. In editing for clarity and tone I think I edited that nuance away. Somebody searching for a reproducible way to produce a usable model on their own 3090 won't find it in this post. But someone looking to learn how to produce a usable model on their own 3090 will be well educated by this post.
"Not a useful LLM" is not a knock on the OP! This is an _excellent_ educational and experiential post. It includes the experimentation with different models that you'll never see in a publication. ANd it showcases the exact limitations you'll have with one 3090. (You're limited in training speed and model size, and you're also limited in how many ideas you can have cooking at once).
The "experiment at home, train a model, and reproduce or fine-tune on someone elses better GPU" is tried and true.
(Again, I want to re-iterate that I'm not knocking OP for not producing a "usable LLM" at the end of this post. That's not the point of the post, and it's a good post. My only point is that it's not currently feasible to train a useful general-purpose LLM on one 3090.)
deskamess
I have an old 2060 with 6GB (I think). I also have a work laptop 3060 with 6GB (shared to 8GB). What can I do with those? I dabble a bit here and there but I would like to run my own local LLM for 'fun'.
Thanks!
sosodev
If you just want to run a local LLM you could download ollama and do it in minutes. You'll be limited to small models (I would start with qwen3:1.7b) but it should be quite fast.
Havoc
> When you’re looking at a pre-training dataset in the frontier lab and you look at a random internet document, it’s total garbage. I don't even know how this works at all. It’s [stuff] like stock tickers, symbols, it's a huge amount of slop and garbage from like all the corners of the internet
Seems like there would be low-hanging fruit in heavier preprocessing, then? Something deterministic like a reading-level score, or even a tiny model trained for the task to pick out good data?
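Something as dumb as the sketch below would already knock out a lot of the ticker/symbol soup. This is just an illustration of the kind of cheap heuristic I mean, nothing like what the labs actually run:

    def looks_like_prose(text, min_words=50):
        words = text.split()
        if len(words) < min_words:
            return False
        alpha_frac = sum(c.isalpha() or c.isspace() for c in text) / len(text)
        avg_word_len = sum(len(w) for w in words) / len(words)
        has_sentences = any(w.endswith((".", "!", "?")) for w in words)
        # ticker dumps and symbol soup fail the alphabetic-fraction test;
        # keyword spam tends to fail on word length / punctuation
        return alpha_frac > 0.8 and 3 <= avg_word_len <= 9 and has_sentences

    # kept = [doc for doc in corpus if looks_like_prose(doc)]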
qrios
"low hanging" is relative. At least from my perspective. A significant part of my work involves cleaning up structured and unstructured data.
An example: more than ten years ago a friend of mine was fascinated by the German edition of the book "A Cultural History of Physics" by Károly Simonyi. He scanned the book (600+ pages) and created a PDF with (nearly) the same layout.
Against my advice he used Adobe tools for it instead of creating an epub or something like DocBook.
The PDF looks great, but the text inside is impossible to use as training data for a small LLM. The lines from the two columns are mixed and a lot of spaces are randomly placed (makes it particularly difficult because mathematical formulas often appear in the text itself).
After many attempts (with RegEx and LLMs), I gave up and rendered each page and had a large LLM extract the text.
azath92
For small models this is for sure the way forward; there are some great small datasets out there. Check out the TinyStories dataset, which limits vocabulary to a certain age level but keeps the core reasoning inherent in even simple language: https://huggingface.co/datasets/roneneldan/TinyStories https://arxiv.org/abs/2305.07759
I have fewer concrete examples, but my understanding is that dataset curation is for sure where many improvements are gained at any model size. Unless you are building a frontier model, you can use a better model to help curate or generate that dataset. TinyStories was generated with GPT-4, for example.
gpjt
OP here: one thing that surprised me in this experiment was that the model trained on the more curated FineWeb-Edu dataset was worse than the one trained on FineWeb. That is very counterintuitive to me.
embedding-shape
Makes me wonder what kind of model we could get if we just trained on Wikidata and similar datasets, but pre-processed to be natural language rather than just triplets of data.
haolez
If you can create this filtering model, you have created Skynet and solved AGI :D
ACCount37
Data filtering. Dataset curation. Curriculum learning. All already in use.
It's not sexy, it's not a breakthrough, but it does help.
Havoc
> All already in use.
At the big labs that makes sense. I'm a bit more puzzled by why it isn’t used in the toy projects. Certainly more complexity, but it seems like it would make a big difference.
famouswaffles
Curriculum learning is not really a thing for these large SOTA LLM training runs (specifically pre-training). We know it would help, but ordering trillions of tokens of data in this way would be a herculean task.
ACCount37
I've heard things about pre-training optimization. "Soft start" and such. So I struggle to believe that curriculum learning is not a thing on any frontier runs.
Sure, it's a lot of data to sift through, and the time and cost to do so can be substantial. But if you are already planning on funneling all of that through a 1T LLM? You might as well pass the fragments through a small classifier before you do that.
Anyone interested can also follow these amazing playlists:
1. Building LLMs from scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgC...
2. Reasoning LLMs from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSijcbUrRZHm...
3. Build a SLM from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZShuk6u31pgj...
4. Build DeepSeek from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSiOpKKlHCyO...