Learn Deep Learning from Zero

Structured paths that take you from complete beginner to confident practitioner.

Deep learning can feel overwhelming: hundreds of papers, dozens of frameworks, and no clear starting point. These paths fix that. Each one is a curated sequence of free lessons that build on each other, so you always know what to learn next.

Every lesson references real research papers and textbooks. We don't simplify to the point of inaccuracy; we simplify to the point of clarity. The goal is to give you the minimum viable understanding needed to read papers, train models, and build things.

📌 Suggested Learning Order

1. Fundamentals
→
2. Choose a specialization ↓
Computer Vision
NLP & Transformers
Generative AI

Start with Fundamentals (required). Then pick whichever specialization interests you; they're independent of each other.

🚀 Start Here · Beginner

Deep Learning Fundamentals

Build rock-solid foundations. Learn how neural networks actually work, from single neurons to multi-layer networks, backpropagation, gradient descent, and regularization. No PhD required; just curiosity and basic Python.

📚 5 modules ⏱ ~12 hours 📋 Prerequisites: basic Python

Topics Covered

ML Basics · Neural Networks · Backpropagation · Training & Optimization · Regularization
Based on: Nielsen's Neural Networks and Deep Learning, Goodfellow et al.'s Deep Learning (Ch. 6–8), and Karpathy's Zero to Hero lectures.
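As a taste of what this path covers, here is a minimal sketch of the core training loop: one sigmoid neuron fitted by gradient descent on a toy threshold task. All names and hyperparameters below are illustrative, not from any specific lesson.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=1.0, epochs=2000):
    """Fit one sigmoid neuron to (x, y) pairs with plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            a = sigmoid(w * x + b)         # forward pass
            # chain rule on squared error 0.5 * (a - y)^2
            delta = (a - y) * a * (1 - a)  # dL/dz
            w -= lr * delta * x            # dL/dw = dL/dz * x
            b -= lr * delta                # dL/db = dL/dz
    return w, b

# Toy task: output 1 when the input is above roughly 0.5.
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
w, b = train(data)
print(sigmoid(w * 0.9 + b) > 0.5)  # a high input is classified as 1
```

The Fundamentals lessons build exactly this loop up from scratch before generalizing it to multi-layer networks and full backpropagation.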
πŸ‘οΈ Intermediate Β· Coming Soon

Computer Vision

Learn to teach machines to see. From convolutional neural networks to modern vision transformers, covering image classification, object detection, semantic segmentation, and image generation. Builds directly on the Fundamentals path.

📚 6 modules ⏱ ~15 hours 📋 Prerequisites: Fundamentals path

Topics Covered

Convolutions · CNNs · AlexNet / VGG / ResNet · Object Detection · Segmentation · Vision Transformers
Based on: Stanford CS231n, LeCun et al. (1998) Gradient-Based Learning, He et al. (2015) Deep Residual Learning, and Dosovitskiy et al. (2020) An Image is Worth 16x16 Words.
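The operation at the heart of this path can be sketched in a few lines: slide a small kernel over an image and take dot products. This toy pure-Python version (real code would use a framework) detects a vertical edge; the image and kernel are made up for illustration.

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation (what DL libraries call convolution)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # dot product of the kernel with the patch under it
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Tiny image: left half dark, right half bright.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]  # responds where brightness jumps left-to-right
print(conv2d(image, kernel))  # the edge shows up at the boundary column
```

A CNN learns kernels like this one from data instead of hand-coding them, which is the jump the first modules of this path walk through.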
💬 Intermediate · Coming Soon

NLP & Transformers

Understand how machines process language. From word embeddings and RNNs to the transformer architecture that powers GPT, BERT, and every modern LLM. One of the most in-demand skills in AI today.

📚 7 modules ⏱ ~18 hours 📋 Prerequisites: Fundamentals path

Topics Covered

Word Embeddings · RNNs & LSTMs · Attention Mechanism · Transformers · BERT & GPT · Fine-tuning
Based on: Stanford CS224N, Vaswani et al. (2017) Attention Is All You Need, Devlin et al. (2018) BERT, and Jurafsky & Martin's Speech and Language Processing.
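The centerpiece of this path, scaled dot-product attention from Vaswani et al. (2017), is short enough to sketch with plain lists: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. The tiny Q, K, V below are made-up numbers chosen so one key clearly dominates.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:  # one output row per query
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]            # similarity of q to every key
        weights = softmax(scores)        # attention distribution
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])  # weighted sum of values
    return out

# One query that matches the first key far more than the second:
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0], [20.0]]
print(attention(Q, K, V))  # output pulled toward the first value, 10
```

Stacking this operation with learned projections of Q, K, and V is essentially what the Transformers module builds up to.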
✨ Intermediate · Coming Soon

Generative AI

Create new data from learned distributions. Covers variational autoencoders, GANs, diffusion models, and large language models. Learn the architectures behind DALL·E, Stable Diffusion, and ChatGPT.

📚 6 modules ⏱ ~16 hours 📋 Prerequisites: Fundamentals + NLP or CV

Topics Covered

Autoencoders & VAEs · GANs · Diffusion Models · Large Language Models · RLHF · Prompt Engineering
Based on: Goodfellow et al. (2014) Generative Adversarial Networks, Ho et al. (2020) Denoising Diffusion Probabilistic Models, Kingma & Welling (2013) Auto-Encoding Variational Bayes, and d2l.ai Ch. 17–20.
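One trick from this path fits in a few lines: the reparameterization trick from Kingma & Welling (2013), which lets a VAE backpropagate through sampling. Instead of sampling z ~ N(μ, σ²) directly, sample ε ~ N(0, 1) and set z = μ + σ·ε, so the randomness is isolated from the learned parameters. This is a standalone illustration, not code from any lesson.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1)."""
    sigma = math.exp(0.5 * log_var)  # log-variance keeps sigma positive
    eps = rng.gauss(0.0, 1.0)        # randomness separated from mu, sigma
    return mu + sigma * eps

random.seed(0)
samples = [reparameterize(mu=2.0, log_var=0.0) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # sample mean lands close to mu = 2.0
```

Because z is now a deterministic function of μ, log σ², and an external noise source, gradients flow through μ and log σ² normally, which is what makes the VAE objective trainable end to end.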