All Topics
Explore individual deep learning topics at your own pace. Each topic has focused lessons on one concept, perfect for targeted learning or review.
Topics are self-contained units covering one area of deep learning. Unlike learning paths (which are ordered sequences), topics can be explored in any order, though some build on others. Each topic includes lessons, code examples, and references to the original research.
ML Basics
Beginner
The foundations every ML practitioner needs: problem framing, data splits, evaluation metrics, and debugging common issues like data leakage and overfitting. Start here if you're new.
- Regression vs classification
- Train / validation / test splits
- Bias-variance tradeoff
- Evaluation metrics (F1, ROC-AUC)
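The metrics above reduce to a few ratios of confusion-matrix counts. A minimal pure-Python sketch (the counts here are made up for illustration, not from any course dataset):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical classifier: 8 true positives, 2 false positives, 4 false negatives.
p, r, f1 = precision_recall_f1(8, 2, 4)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.667 0.727
```

Note that F1 ignores true negatives entirely, which is why it is preferred over accuracy on imbalanced data.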
Neural Networks
Beginner
How neural networks actually work: the neuron model, activation functions, forward propagation, backpropagation, and the universal approximation theorem. The single most important topic to master.
- Perceptrons and multi-layer networks
- Activation functions (ReLU, sigmoid, softmax)
- Backpropagation algorithm
- Universal approximation theorem
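Forward propagation is just alternating linear maps and nonlinearities. A toy sketch with hand-picked weights (the weights and input are arbitrary, chosen only to make the arithmetic easy to follow):

```python
def relu(v):
    """Elementwise ReLU: max(0, x)."""
    return [max(0.0, x) for x in v]

def linear(W, b, x):
    """y_i = sum_j W[i][j] * x[j] + b[i]"""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

# Toy 2-2-1 network with illustrative weights.
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

x = [2.0, -3.0]
h = relu(linear(W1, b1, x))   # hidden activations: [2.0, 0.0]
y = linear(W2, b2, h)         # output: [2.0]
print(y)  # [2.0]
```

Backpropagation runs the same graph in reverse, applying the chain rule layer by layer.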
Training & Optimization
Beginner
How models learn from data: loss functions, gradient descent variants (SGD, Adam), learning rate scheduling, batch normalization, regularization, and practical training recipes.
- Loss functions (MSE, cross-entropy)
- Optimizers (SGD, Adam, AdamW)
- Learning rate scheduling
- Batch norm, dropout, weight decay
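All of the optimizers above elaborate on the same core loop: step against the gradient of the loss. A minimal sketch on a 1-D quadratic (the loss function and learning rate are made up for illustration):

```python
# Minimize f(w) = (w - 3)^2 with plain gradient descent; grad f = 2 * (w - 3).
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad
print(round(w, 4))  # 3.0
```

SGD uses noisy minibatch gradients in place of the exact one; Adam additionally rescales each step by running estimates of the gradient's first and second moments.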
Convolutional Neural Networks
Intermediate
How deep learning processes images: convolution operations, pooling, feature maps, and landmark architectures (LeNet → AlexNet → VGG → ResNet → EfficientNet). The backbone of computer vision.
- Convolution and cross-correlation
- Pooling and strided convolutions
- Architecture evolution (1998–2024)
- Transfer learning and fine-tuning
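The convolution/cross-correlation distinction in the first bullet is worth seeing concretely: deep learning libraries slide the kernel without flipping it, which is technically cross-correlation. A 1-D sketch (signal and kernel values are arbitrary):

```python
def conv1d_valid(signal, kernel):
    """'Valid' cross-correlation -- what deep-learning libraries call convolution."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference-detecting kernel applied to a ramp signal.
print(conv1d_valid([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

2-D convolution is the same idea with the kernel sliding over both spatial axes, producing a feature map per filter.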
RNNs & Sequence Models
Intermediate
Processing sequential data: vanilla RNNs, the vanishing gradient problem, LSTMs, GRUs, and bidirectional models. Essential background for understanding why transformers replaced them.
- Recurrent neural network architecture
- Vanishing / exploding gradients
- LSTM and GRU gating mechanisms
- Sequence-to-sequence models
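A vanilla RNN reuses one set of weights at every timestep, folding each input into a hidden state. A scalar sketch (the weights 0.5/0.5 are made up; real RNNs use weight matrices):

```python
import math

def rnn_step(h_prev, x, w_x, w_h, b):
    """One vanilla RNN step: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)."""
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0  # initial hidden state
for x_t in [1.0, 1.0, 1.0]:
    h = rnn_step(h, x_t, w_x=0.5, w_h=0.5, b=0.0)
print(round(h, 3))
```

Because the same `w_h` multiplies the state at every step, gradients flowing back through time pick up repeated factors of it, which is the root of the vanishing/exploding gradient problem.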
Transformers & Attention
Intermediate
The architecture behind GPT, BERT, and modern AI: self-attention, multi-head attention, positional encoding, encoder-decoder design, and the key insight of "Attention Is All You Need."
- Attention mechanism (Bahdanau, 2014)
- Self-attention and multi-head attention
- Positional encoding
- BERT, GPT, and T5 architectures
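Self-attention is a weighted average of value vectors, with weights from a softmax over scaled query-key dot products. A minimal sketch (the two-token inputs are made up; a real layer would first project the inputs into Q, K, V with learned matrices):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two identical tokens: each attends equally (0.5/0.5) to both, so the
# output reproduces the shared value vector.
Q = K = V = [[1.0, 0.0], [1.0, 0.0]]
print(attention(Q, K, V))  # [[1.0, 0.0], [1.0, 0.0]]
```

Multi-head attention runs several such maps in parallel on lower-dimensional projections and concatenates the results.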
Generative AI
Advanced
Models that create: variational autoencoders, generative adversarial networks, diffusion models, and large language models. The theory behind Stable Diffusion, DALL·E, and ChatGPT.
- Autoencoders and VAEs
- GANs (generator-discriminator framework)
- Diffusion models (DDPM)
- LLM architectures and RLHF
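The DDPM forward process has a closed form for noising a clean sample at any step. A sketch of that one formula (the x₀, ε, and ᾱ values are made up; in training, ε is drawn from a standard Gaussian and ᾱ follows a fixed schedule):

```python
import math

def q_sample(x0, alpha_bar, eps):
    """DDPM forward process: x_t = sqrt(alpha_bar)*x0 + sqrt(1 - alpha_bar)*eps."""
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

# Clean value x0 = 1.0, one fixed noise draw eps = 0.5.
for alpha_bar in (1.0, 0.64, 0.0):  # 1.0 = no noise ... 0.0 = pure noise
    print(round(q_sample(1.0, alpha_bar, 0.5), 3))
```

A denoising network is then trained to predict ε from the noisy x_t; sampling reverses the process step by step.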
Reinforcement Learning
Advanced
Learning from interaction: Markov decision processes, Q-learning, policy gradients, actor-critic methods, and deep RL. How AlphaGo and game-playing AI agents work.
- MDPs, rewards, and policies
- Q-learning and DQN
- Policy gradient methods
- Actor-critic and PPO
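Tabular Q-learning is a one-line update rule. A sketch on a made-up two-state, two-action table (the states, reward, α, and γ are all illustrative):

```python
def q_update(q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Tabular Q-learning: Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])."""
    target = r + gamma * max(q[s_next])
    q[s][a] += alpha * (target - q[s][a])

# Two states, two actions, all values start at zero.
Q = [[0.0, 0.0], [0.0, 0.0]]
q_update(Q, s=0, a=1, r=1.0, s_next=1)  # reward of 1 for action 1 in state 0
print(Q[0][1])  # 0.5
```

DQN replaces the table with a neural network that maps states to Q-values, trained toward the same target.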
Tools & Frameworks
Beginner
Practical skills: PyTorch fundamentals, TensorFlow basics, Jupyter workflows, GPU training, experiment tracking, and model deployment. The engineering side of deep learning.
- PyTorch tensors and autograd
- Building models with nn.Module
- GPU training and mixed precision
- Experiment tracking (W&B, TensorBoard)
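The core idea behind `torch.autograd` is reverse-mode differentiation over a recorded computation graph. A dependency-free sketch of that idea on scalars (this mirrors the concept, not PyTorch's actual API; the `Scalar` class and its names are invented for illustration):

```python
class Scalar:
    """A scalar that records its computation graph for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # (parent, local_gradient) pairs

    def __add__(self, other):
        return Scalar(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Scalar(self.value * other.value,
                      ((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        """Accumulate gradients via the chain rule, walking the graph backward."""
        self.grad += upstream
        for parent, local in self._parents:
            parent.backward(upstream * local)

x, y = Scalar(2.0), Scalar(3.0)
out = x * y + x            # d(out)/dx = y + 1, d(out)/dy = x
out.backward()
print(out.value, x.grad, y.grad)  # 8.0 4.0 2.0
```

In PyTorch the same bookkeeping happens on tensors: operations on tensors with `requires_grad=True` build the graph, and calling `.backward()` on a scalar loss fills in `.grad`.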