# Watch: Basics of Machine Learning

A video series on machine learning, progressing from basics such as Naive Bayes, decision trees, and generalization and overfitting to more advanced topics such as hierarchical agglomerative clustering.

This video series covers the basics of machine learning and was created by Dr. Victor Lavrenko of the University of Edinburgh.
Below is a listing of the lectures and the topics covered in each:

**Lecture 5: Naive Bayes**

- The Formula
- Conditional Independence
- Gaussian Example
- Decision Boundary
- Non-separable example
- Spam Detection
- The Zero-Frequency Problem
- Missing Attribute Values
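
The zero-frequency problem listed above is commonly handled with add-one (Laplace) smoothing. Here is a minimal spam-detection sketch in that spirit; the toy corpus and function names are invented for illustration, not taken from the lectures:

```python
import math
from collections import Counter

# Invented toy corpus for illustration; not from the lectures.
spam_docs = ["win money now", "free money offer"]
ham_docs = ["meeting schedule today", "project status update"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts = word_counts(spam_docs)
ham_counts = word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    # Add-one (Laplace) smoothing: a word never seen in this class
    # gets count 1 instead of 0, which avoids log(0) and keeps a
    # single unseen word from vetoing the whole message.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def classify(message):
    # Equal class priors here, so comparing likelihoods suffices.
    spam_ll = log_likelihood(message, spam_counts)
    ham_ll = log_likelihood(message, ham_counts)
    return "spam" if spam_ll > ham_ll else "ham"
```

With unequal class priors, one would add the log prior of each class to its score before comparing.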

**Lecture 6: Decision Tree**

- How it Works
- ID3 Algorithm
- Which Attribute to Split On?
- Information Gain
- Overfitting and Pruning
- Degenerate Splits and Gain Ratio
- Continuous, Multiclass, Regression
- Random Forests
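
The ID3 algorithm mentioned above picks the attribute with the highest information gain, i.e. the drop in entropy after splitting. A minimal sketch of that computation (the helper names are ours, not from the lectures):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    # Gain = H(parent) - weighted average entropy of the children
    # produced by splitting on the attribute, as in ID3.
    n = len(labels)
    children = {}
    for value, label in zip(attribute_values, labels):
        children.setdefault(value, []).append(label)
    remainder = sum(len(ys) / n * entropy(ys)
                    for ys in children.values())
    return entropy(labels) - remainder
```

An attribute that splits the data into pure children gets gain equal to the parent entropy; an uninformative attribute gets gain 0. The gain ratio discussed in the lecture divides this by the entropy of the split itself to penalize degenerate many-valued attributes.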

**Lecture 7: Generalization and Overfitting**

- Over-fitting and Under-fitting
- Training vs. Future Error
- Confidence Interval for Error
- Training, Validation, Testing

**Lecture 9: Nearest Neighbor Method**

- Nearest Neighbor Algorithm
- Classification and Regression
- How Many Neighbors?
- Which Distance Function?
- Resolving Ties and Missing Values
- Parzen Windows and Kernels
- How to Make it Faster
- K-D Tree Algorithm
- Inverted Index
- Pros and Cons
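
The nearest neighbor algorithm itself is short: rank training points by distance and let the k closest vote. A brute-force sketch in Python, without the k-d tree or inverted-index speedups the lecture covers (the example points are invented):

```python
import math
from collections import Counter

def knn_classify(train_points, train_labels, query, k=3):
    # Rank all training points by Euclidean distance to the query,
    # then take a majority vote among the k nearest.
    ranked = sorted(zip(train_points, train_labels),
                    key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Brute force costs O(n) distance computations per query, which is exactly why the lecture turns to k-d trees and inverted indexes.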

**Lecture 16: K-means Clustering**

- Monothetic vs. Polythetic
- Soft vs. Hard Clustering
- Overview of Methods
- K-means Algorithm
- K-means Objective and Convergence
- How many Clusters?
- Intrinsic vs. Extrinsic Evaluation
- Alignment and Pair-Based Evaluation
- Image Representation
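
The K-means algorithm alternates between assigning each point to its nearest centroid and recomputing centroids as cluster means (Lloyd's iteration). A minimal pure-Python sketch; the toy points and random initialization are illustrative choices, not from the lectures:

```python
import math
import random

def kmeans(points, k, iterations=20, seed=0):
    # Lloyd's algorithm: alternate assigning each point to its nearest
    # centroid and recomputing each centroid as its cluster's mean.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ever comes up empty.
        centroids = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts))
            if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters
```

Each step can only lower the sum of squared distances to centroids, which is why the iteration converges, as the lecture's objective-and-convergence topic discusses.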

**Lecture 17: Mixture Models and the EM Algorithm**

**Lectures 18-19: Principal Component Analysis**

- Curse of Dimensionality
- Dimensionality Reduction
- Direction of Greatest Variance
- Principal Components = Eigenvectors
- Finding Eigenvalues and Eigenvectors
- Coordinates in Low-Dimensional Space
- Eigenvector = Greatest Variance
- Eigenvalue = Variance Along Eigenvector
- How Many Dimensions?
- Eigen-Faces
- Linear Discriminant Analysis
- Pros and Cons
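
The direction of greatest variance is the top eigenvector of the covariance matrix, and one simple way to find it is power iteration. A pure-Python sketch for low-dimensional data (power iteration is our choice of method here; the lectures may compute the eigenvectors differently):

```python
import math

def top_principal_component(data, iterations=100):
    # Center the data, build the covariance matrix, then run power
    # iteration: repeatedly multiply a vector by the covariance matrix
    # and renormalize. The vector converges to the eigenvector with
    # the largest eigenvalue, i.e. the direction of greatest variance.
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    d = len(means)
    cov = [[sum(row[i] * row[j] for row in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iterations):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

For points lying along the line y = x, the returned direction is (1/√2, 1/√2) up to sign, matching the intuition that the first principal component points along the data's spread.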

**Lecture 20: Hierarchical Agglomerative Clustering**

Dr. Victor Lavrenko is a Lecturer in Informatics at the University of Edinburgh. He works on developing better algorithms for search engines, with a particular focus on interaction, multimedia, and scalability. You can find more information about him at his homepage.