Building makemore Part 4: Becoming a Backprop Ninja
15 chapters with key takeaways — read first, then watch
Video Details & AI Summary
Published Oct 11, 2022
Analyzed Jan 21, 2026
AI Analysis Summary
This video, part of the 'makemore' series, walks through manually implementing backpropagation for a neural network. It derives gradients step by step for each operation and layer (log, softmax, linear, tanh, batch normalization, embeddings), then presents more efficient analytical gradients for the cross-entropy loss and batch normalization. The lecture culminates in a complete training loop driven entirely by the hand-derived gradients, giving a thorough understanding of neural network internals and debugging.
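To make the "analytical gradient" idea concrete: the lecture derives that backpropagating through softmax plus cross-entropy collapses to `softmax(logits) - one_hot(targets)`, averaged over the batch. Below is a minimal NumPy sketch of that result, checked against a finite-difference estimate; the video itself uses PyTorch tensors, so the function names here are illustrative, not the lecture's code.

```python
import numpy as np

def cross_entropy_backward(logits, targets):
    # logits: (n, vocab), targets: (n,) integer class indices.
    # Analytical gradient of mean cross-entropy loss w.r.t. logits:
    # dL/dlogits = (softmax(logits) - one_hot(targets)) / n
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)        # softmax
    dlogits = probs.copy()
    dlogits[np.arange(len(targets)), targets] -= 1.0  # subtract one-hot
    return dlogits / len(targets)                     # average over batch

def mean_nll(logits, targets):
    # Mean negative log-likelihood (cross-entropy) for the check below.
    z = logits - logits.max(axis=1, keepdims=True)
    logprobs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logprobs[np.arange(len(targets)), targets].mean()

# Verify the analytical gradient against finite differences.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
targets = np.array([1, 0, 3, 2])
analytic = cross_entropy_backward(logits, targets)

eps = 1e-5
numeric = np.zeros_like(logits)
for i in range(logits.shape[0]):
    for j in range(logits.shape[1]):
        bumped = logits.copy()
        bumped[i, j] += eps
        numeric[i, j] = (mean_nll(bumped, targets) - mean_nll(logits, targets)) / eps

assert np.allclose(analytic, numeric, atol=1e-4)
```

The same compare-to-reference pattern (analytical gradient vs. a trusted numerical or autograd gradient) is how the lecture validates each manually derived backward pass.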
Title Accuracy Score
10/10 (Excellent)
Processing time: 1.1m
Model: gemini-2.5-flash