All Topics

Explore individual deep learning topics at your own pace. Each topic has focused lessons on one concept, perfect for targeted learning or review.

Topics are self-contained units covering one area of deep learning. Unlike learning paths (which are ordered sequences), topics can be explored in any order, though some build on others. Each topic includes lessons, code examples, and references to the original research.

📊

ML Basics

Beginner

The foundations every ML practitioner needs: problem framing, data splits, evaluation metrics, and debugging common issues like data leakage and overfitting. Start here if you're new.

  • Regression vs classification
  • Train / validation / test splits
  • Bias-variance tradeoff
  • Evaluation metrics (F1, ROC-AUC)
📚 6 lessons ⏱ ~2.5 hrs
Key ref: Hastie et al., The Elements of Statistical Learning
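As a preview of the evaluation-metrics lessons, here is a minimal F1 computation from raw binary predictions. This is a plain-Python sketch for illustration; the function name and data are not taken from the course materials.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
score = f1_score(y_true, y_pred)  # tp=2, fp=1, fn=1 -> F1 = 2/3
print(round(score, 3))
```

Unlike accuracy, F1 stays informative when the positive class is rare, which is why it appears alongside ROC-AUC in the metrics lesson.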
🧠

Neural Networks

Beginner

How neural networks actually work: the neuron model, activation functions, forward propagation, backpropagation, and the universal approximation theorem. The single most important topic to master.

  • Perceptrons and multi-layer networks
  • Activation functions (ReLU, sigmoid, softmax)
  • Backpropagation algorithm
  • Universal approximation theorem
📚 7 lessons ⏱ ~3 hrs
Key refs: Nielsen, Neural Networks and Deep Learning; Goodfellow et al., Deep Learning, Ch. 6
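The forward pass and backpropagation can be sketched end to end in a few lines of NumPy. This is an illustrative two-layer network (ReLU then sigmoid, squared-error loss), not code from the course; the finite-difference check at the end is the standard way to verify a hand-derived gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))            # batch of 4 inputs, 3 features
y = rng.random(size=(4, 1))            # targets in [0, 1]
W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 1)) * 0.1

def forward(x, W1, W2):
    z1 = x @ W1
    h = np.maximum(z1, 0.0)            # ReLU
    z2 = h @ W2
    out = 1.0 / (1.0 + np.exp(-z2))    # sigmoid
    return z1, h, out

z1, h, out = forward(x, W1, W2)
loss = 0.5 * np.mean((out - y) ** 2)

# Backpropagation: apply the chain rule layer by layer, output to input.
d_out = (out - y) / out.size           # dL/d(out) for the mean squared error
d_z2 = d_out * out * (1 - out)         # sigmoid'(z2) = out * (1 - out)
d_W2 = h.T @ d_z2
d_h = d_z2 @ W2.T
d_z1 = d_h * (z1 > 0)                  # ReLU' is 1 where z1 > 0, else 0
d_W1 = x.T @ d_z1

# Verify one entry against a finite-difference approximation.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.mean((forward(x, W1p, W2)[2] - y) ** 2)
num = (loss_p - loss) / eps
print(abs(num - d_W1[0, 0]) < 1e-4)    # analytic and numeric gradients agree
```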
⚡

Training & Optimization

Beginner

How models learn from data: loss functions, gradient descent variants (SGD, Adam), learning rate scheduling, batch normalization, regularization, and practical training recipes.

  • Loss functions (MSE, cross-entropy)
  • Optimizers (SGD, Adam, AdamW)
  • Learning rate scheduling
  • Batch norm, dropout, weight decay
📚 8 lessons ⏱ ~3 hrs
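The difference between plain SGD and Adam comes down to a few update equations, sketched below on a toy quadratic. The hyperparameters are the common defaults, and the problem is illustrative, not from the course.

```python
def grad(w):
    """Gradient of f(w) = (w - 3)^2, minimized at w = 3."""
    return 2.0 * (w - 3.0)

# Plain SGD: step against the gradient with a fixed learning rate.
w_sgd = 0.0
for _ in range(200):
    w_sgd -= 0.1 * grad(w_sgd)

# Adam: momentum (m) plus a per-parameter scale (v), with bias correction.
w_adam, m, v = 0.0, 0.0, 0.0
beta1, beta2, lr, eps = 0.9, 0.999, 0.1, 1e-8
for t in range(1, 201):
    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment (scale) estimate
    m_hat = m / (1 - beta1 ** t)           # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    w_adam -= lr * m_hat / (v_hat ** 0.5 + eps)

print(round(w_sgd, 4))  # both optimizers approach the minimum at 3.0
```

On real losses the per-parameter scaling is what lets Adam use one learning rate across layers with very different gradient magnitudes.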
🖼️

Convolutional Neural Networks

Intermediate

How deep learning processes images: convolution operations, pooling, feature maps, and landmark architectures (LeNet → AlexNet → VGG → ResNet → EfficientNet). The backbone of computer vision.

  • Convolution and cross-correlation
  • Pooling and strided convolutions
  • Architecture evolution (1998–2024)
  • Transfer learning and fine-tuning
📚 6 lessons ⏱ ~2.5 hrs
Key refs: LeCun et al. (1998) LeNet, He et al. (2015) ResNet
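The convolution-versus-cross-correlation distinction from the first lesson fits in one small function. A NumPy sketch (not course code): a naive "valid" cross-correlation, which is what deep learning frameworks actually compute under the name convolution, applied with a Sobel kernel as a vertical-edge detector.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation: slide the kernel without flipping it."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# An image whose right half is bright; the edge runs vertically.
image = np.zeros((5, 5))
image[:, 2:] = 1.0
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

response = conv2d(image, sobel_x)
print(response.shape)   # (3, 3): output shrinks by kernel_size - 1 per axis
```

The response is large only where the window straddles the dark-to-bright edge, which is exactly the "feature map" idea: each learned kernel becomes a detector for one local pattern.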
🔁

RNNs & Sequence Models

Intermediate

Processing sequential data: vanilla RNNs, the vanishing gradient problem, LSTMs, GRUs, and bidirectional models. Essential background for understanding why transformers replaced them.

  • Recurrent neural network architecture
  • Vanishing / exploding gradients
  • LSTM and GRU gating mechanisms
  • Sequence-to-sequence models
📚 6 lessons ⏱ ~2.5 hrs
Key ref: Hochreiter & Schmidhuber (1997) Long Short-Term Memory
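The LSTM gating mechanism is compact enough to write out directly. Below is a single LSTM time step in NumPy, with the four gates stacked into one matrix multiply; shapes and initialization are illustrative, not taken from the course.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b pack the four gates (input, forget,
    candidate, output) stacked along the first axis."""
    n = h.shape[0]
    z = W @ x + U @ h + b              # all four gate pre-activations at once
    i = sigmoid(z[0:n])                # input gate: how much new info to write
    f = sigmoid(z[n:2 * n])            # forget gate: how much old state to keep
    g = np.tanh(z[2 * n:3 * n])        # candidate cell state
    o = sigmoid(z[3 * n:4 * n])        # output gate
    c_new = f * c + i * g              # additive update -> gradients survive long spans
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                     # unroll over a short sequence
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```

The additive `f * c + i * g` update is the fix for the vanishing-gradient problem covered in the lessons: gradients flow through the cell state without repeated squashing.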
🔮

Transformers & Attention

Intermediate

The architecture behind GPT, BERT, and modern AI: self-attention, multi-head attention, positional encoding, encoder-decoder design, and the key insight of "Attention Is All You Need."

  • Attention mechanism (Bahdanau, 2014)
  • Self-attention and multi-head attention
  • Positional encoding
  • BERT, GPT, and T5 architectures
📚 7 lessons ⏱ ~3 hrs
Key ref: Vaswani et al. (2017) Attention Is All You Need
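The core equation of the paper, softmax(QKᵀ/√d_k)V, is a few lines of NumPy. A single-head sketch with illustrative shapes (multi-head attention runs several of these in parallel and concatenates the results):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of every query to every key
    weights = softmax(scores)         # each row is a distribution over positions
    return weights @ V, weights       # output: weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k, d_v = 4, 8, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_v))

out, weights = attention(Q, K, V)
print(out.shape)                      # (4, 8): one mixed value vector per position
```

In self-attention, Q, K, and V are all projections of the same input sequence, so every position can attend to every other in one step, with no recurrence.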
✨

Generative AI

Advanced

Models that create: variational autoencoders, generative adversarial networks, diffusion models, and large language models. The theory behind Stable Diffusion, DALL·E, and ChatGPT.

  • Autoencoders and VAEs
  • GANs (generator-discriminator framework)
  • Diffusion models (DDPM)
  • LLM architectures and RLHF
📚 6 lessons ⏱ ~3 hrs
Key refs: Goodfellow et al. (2014) GANs, Ho et al. (2020) DDPM
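The DDPM forward (noising) process has a closed form worth seeing concretely: x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε, where ᾱ_t is the cumulative product of (1 − β_t) and the model learns to predict ε. A NumPy sketch using the linear β schedule from the DDPM paper; the 2×2 "image" is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (Ho et al., 2020)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def q_sample(x0, t, noise):
    """Jump straight to noising step t without simulating every step."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.normal(size=(2, 2))         # stand-in for an image
noise = rng.normal(size=x0.shape)
x_early = q_sample(x0, 10, noise)    # mostly signal
x_late = q_sample(x0, T - 1, noise)  # nearly pure Gaussian noise

print(alpha_bar[-1] < 1e-3)          # by t = T the signal is essentially destroyed
```

Training samples a random t, noises x_0 with this formula, and asks the network to recover ε; generation then runs the learned denoising in reverse from pure noise.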
🎮

Reinforcement Learning

Advanced

Learning from interaction: Markov decision processes, Q-learning, policy gradients, actor-critic methods, and deep RL. How AlphaGo and game-playing AI agents work.

  • MDPs, rewards, and policies
  • Q-learning and DQN
  • Policy gradient methods
  • Actor-critic and PPO
📚 6 lessons ⏱ ~2.5 hrs
Key ref: Sutton & Barto, Reinforcement Learning: An Introduction (2nd ed.)
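Tabular Q-learning, the starting point of the topic, fits in a short loop. The environment below is an illustrative 5-state chain (reward only at the rightmost state), not one from the course; the one-line update rule is the standard Q-learning rule from Sutton & Barto.

```python
import random

# 5-state chain: start at state 0, action 0 moves left, action 1 moves right,
# reward 1.0 only on reaching the terminal goal state 4.
N, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N)]       # Q[state][action]

random.seed(0)
for _ in range(500):                     # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: bootstrap off the best next-state value
        best_next = 0.0 if s2 == GOAL else max(Q[s2])
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(GOAL)]
print(greedy)   # the learned greedy policy should head right toward the goal
```

Deep Q-networks (DQN) replace the table with a neural network over states, but the update rule is recognizably the same.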
🛠️

Tools & Frameworks

Beginner

Practical skills: PyTorch fundamentals, TensorFlow basics, Jupyter workflows, GPU training, experiment tracking, and model deployment. The engineering side of deep learning.

  • PyTorch tensors and autograd
  • Building models with nn.Module
  • GPU training and mixed precision
  • Experiment tracking (W&B, TensorBoard)
📚 5 lessons ⏱ ~2 hrs
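A taste of the PyTorch lessons: a minimal `nn.Module` and one full training step. Requires PyTorch; the layer sizes, learning rate, and random data are illustrative, not from the course.

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """A two-layer MLP built from standard building blocks."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(3, 8),
            nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.layers(x)

torch.manual_seed(0)
model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.randn(16, 3)                     # a random batch of 16 examples
y = torch.randn(16, 1)

pred = model(x)
loss = nn.functional.mse_loss(pred, y)
opt.zero_grad()                            # clear gradients from the last step
loss.backward()                            # autograd fills every parameter's .grad
opt.step()                                 # optimizer applies the update

print(pred.shape)                          # torch.Size([16, 1])
```

This zero_grad / backward / step loop is the skeleton every PyTorch training script shares; the lessons build from here to GPU placement, mixed precision, and experiment tracking.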