Tools & Frameworks
Build the mental model before writing model code: data setup, task framing, metrics, and failure modes.
What this topic covers
You will learn how to define supervised learning problems, split datasets correctly, choose evaluation metrics, and debug common issues like data leakage and overfitting.
Core Lessons
Regression vs Classification
When to predict a number, when to predict a class, and how the loss function and model output change as a result.
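A minimal sketch of that loss-function difference, using toy values (squared error for regression, log loss for binary classification; these specific examples are illustrative, not from the lesson):

```python
import math

# Regression: predict a number; squared error penalizes distance from the target.
def squared_error(y_true, y_pred):
    return (y_true - y_pred) ** 2

# Classification: predict a class probability; log loss penalizes
# confident predictions of the wrong class far more than unsure ones.
def log_loss(y_true, p_pred, eps=1e-12):
    p = min(max(p_pred, eps), 1 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

print(squared_error(3.0, 2.5))  # small numeric miss -> small loss
print(log_loss(1, 0.9))         # confident and correct -> low loss
print(log_loss(1, 0.1))         # confident and wrong -> high loss
```

Note how log loss grows without bound as a wrong prediction becomes more confident, while squared error grows only with numeric distance.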
Train/Validation/Test Splits
How to split data in a way that gives honest model performance estimates.
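One common way to produce such a split is a single seeded shuffle followed by disjoint slices; a sketch (the 70/15/15 fractions are an assumed convention, not the lesson's prescription):

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=0):
    # Shuffle indices once with a fixed seed so the split is reproducible,
    # then carve off disjoint test and validation sets.
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = [data[i] for i in idx[:n_test]]
    val = [data[i] for i in idx[n_test:n_test + n_val]]
    train = [data[i] for i in idx[n_test + n_val:]]
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The key property is that the three sets are disjoint and together cover the data; touching the test set during model selection forfeits the honest estimate.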
Bias, Variance, and Generalization
The core tradeoff behind underfitting and overfitting.
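The tradeoff can be seen by fitting polynomials of increasing degree to noisy data; a sketch (the sine target, noise level, and degrees are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)
x_tr, y_tr = x[::2], y[::2]    # even-index points: train
x_va, y_va = x[1::2], y[1::2]  # odd-index points: validation

errors = {}
for degree in (1, 4, 15):
    # Higher degree means lower bias but higher variance.
    coeffs = np.polyfit(x_tr, y_tr, degree)
    train_mse = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    errors[degree] = (train_mse, val_mse)
    print(degree, round(train_mse, 3), round(val_mse, 3))
```

Training error keeps falling as capacity grows, but validation error is what reveals the sweet spot: degree 1 underfits (high bias), while very high degrees chase the noise (high variance).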
Evaluation Metrics That Matter
Accuracy, precision, recall, F1, ROC-AUC, and when each can mislead.
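A classic case where accuracy misleads is class imbalance; a sketch with made-up numbers (95% negatives, a model that always predicts the majority class):

```python
def confusion_counts(y_true, y_pred):
    # Tally the four cells of the binary confusion matrix.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# 5 positives among 100 examples; the model predicts 0 every time.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)
recall = tp / (tp + fn)
print(accuracy, recall)  # 0.95 accuracy, 0.0 recall
```

A 95%-accurate model that never finds a single positive is useless for the task, which is exactly why precision, recall, and F1 exist.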
Feature Scaling and Normalization
Why features on wildly different scales slow or destabilize gradient-based optimization, and how standardization fixes it.
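A sketch of standardization done correctly: statistics are fit on the training set only and then applied to both splits (the two-feature toy data is invented for illustration):

```python
import numpy as np

def standardize(train, test):
    # Fit mean and std on the training set only, then apply to both,
    # so no test-set statistics leak into preprocessing.
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sigma, (test - mu) / sigma

# Two features on very different scales (~1 vs ~100).
train = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 200.0]])
test = np.array([[2.0, 250.0]])
train_s, test_s = standardize(train, test)
print(train_s.mean(axis=0))  # ~[0, 0]
print(train_s.std(axis=0))   # ~[1, 1]
```

After standardization each training feature has mean 0 and unit variance, so no single feature dominates the gradient simply because of its units.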
Data Leakage and Silent Failure Modes
Common mistakes that inflate metrics and break production models.
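One such mistake is letting identical rows land in both train and test, which lets a model "memorize" test answers. A sketch of a simple overlap check (the helper name and toy rows are hypothetical, not from the lesson):

```python
def check_split_leakage(train_rows, test_rows):
    # Rows present in both splits inflate offline metrics:
    # the model has effectively seen the test answers during training.
    return set(map(tuple, train_rows)) & set(map(tuple, test_rows))

train = [(1, 2), (3, 4), (5, 6)]
test = [(5, 6), (7, 8)]
print(check_split_leakage(train, test))  # {(5, 6)}
```

A non-empty result means the offline score is optimistic; deduplicate before splitting, or split by a grouping key (user, session, time) so near-duplicates stay on one side.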