Watch: Basics of Machine Learning
A video series on machine learning, ranging from basics like Naive Bayes, decision trees, and generalization and overfitting to more advanced topics like hierarchical agglomerative clustering.
This video series covers the basics of machine learning and was created by Dr. Victor Lavrenko of the University of Edinburgh.
Below is a listing of the lectures and the topics covered in each:
Lecture 5: Naive Bayes
Lecture 6: Decision Tree
Lecture 7: Generalization and Overfitting
Lecture 9: Nearest Neighbor Method
Lecture 16: K-means Clustering
Lecture 17: Mixture Models and the EM Algorithm
Lectures 18-19: Principal Component Analysis
Lecture 20: Hierarchical Agglomerative Clustering
Lecture 5: Naive Bayes
 The Formula
 Conditional Independence
 Gaussian Example
 Decision Boundary
 Non-separable Example
 Spam Detection
 The Zero-Frequency Problem
 Missing Attribute Values
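The "Zero-Frequency Problem" above is typically handled with Laplace (add-one) smoothing, so that a word never seen in a class does not zero out the whole product. As a rough illustration of the ideas in this lecture (a sketch, not code from the videos), a minimal multinomial Naive Bayes spam classifier might look like:

```python
import math
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes with Laplace (add-alpha) smoothing,
    which sidesteps the zero-frequency problem."""
    classes = set(labels)
    vocab = {w for d in docs for w in d}
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d)
    loglik = {}
    for c in classes:
        total = sum(counts[c].values()) + alpha * len(vocab)
        loglik[c] = {w: math.log((counts[c][w] + alpha) / total)
                     for w in vocab}
    return prior, loglik

def predict_nb(model, doc):
    """Pick the class maximizing log P(c) + sum of log P(w|c).
    Unseen words are simply skipped (one way to handle missing values)."""
    prior, loglik = model
    scores = {c: math.log(prior[c]) + sum(loglik[c].get(w, 0.0) for w in doc)
              for c in prior}
    return max(scores, key=scores.get)
```

Working in log space avoids underflow when multiplying many small word probabilities.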
Lecture 6: Decision Tree
 How it Works
 ID3 Algorithm
 Which Attribute to Split On?
 Information Gain
 Overfitting and Pruning
 Degenerate Splits and Gain Ratio
 Continuous, Multiclass, Regression
 Random Forests
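To make the "Information Gain" topic concrete: ID3 picks the attribute whose split most reduces the entropy of the class labels. A minimal sketch of that criterion (an illustration, not code from the lectures):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Expected entropy reduction from splitting on attribute `attr`;
    ID3 greedily splits on the attribute with the highest gain."""
    base = entropy(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g)
                    for g in groups.values())
    return base - remainder
```

A perfectly predictive attribute yields gain equal to the dataset's entropy; an irrelevant one yields gain near zero (the "degenerate splits" topic covers why raw gain favors many-valued attributes, motivating gain ratio).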
Lecture 7: Generalization and Overfitting
 Overfitting and Underfitting
 Training vs. Future Error
 Confidence Interval for Error
 Training, Validation, Testing
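The "Confidence Interval for Error" topic uses the fact that test-set mistakes are a binomial sample; a normal approximation gives a quick interval for the true error rate. A hedged sketch (my illustration, not the lecture's code):

```python
import math

def error_confidence_interval(errors, n, z=1.96):
    """Approximate 95% confidence interval (z = 1.96) for the true
    error rate, via the normal approximation to the binomial."""
    p = errors / n
    half = z * math.sqrt(p * (1 - p) / n)
    # clamp to [0, 1] since an error rate cannot leave that range
    return max(0.0, p - half), min(1.0, p + half)
```

For example, 10 errors on a 100-example test set gives a point estimate of 0.10 but an interval of roughly (0.04, 0.16), which is why small test sets say little about future error.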
Lecture 9: Nearest Neighbor Method
 Nearest Neighbor Algorithm
 Classification and Regression
 How Many Neighbors?
 Which Distance Function?
 Resolving Ties and Missing Values
 Parzen Windows and Kernels
 How to Make it Faster
 K-D Tree Algorithm
 Inverted Index
 Pros and Cons
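The core of the nearest-neighbor method fits in a few lines: store the training data, and classify a query by majority vote among its k closest points. A minimal sketch using Euclidean distance (an illustration, not code from the lectures):

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among the k training points
    nearest to it under Euclidean distance."""
    dists = sorted((math.dist(x, query), y)
                   for x, y in zip(train_x, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

This brute-force version scans all training points per query; the KD tree and inverted-index topics above are about avoiding exactly that linear scan.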
Lecture 16: K-means Clustering
 Monothetic vs. Polythetic
 Soft vs. Hard Clustering
 Overview of Methods
 K-means Algorithm
 K-means Objective and Convergence
 How many Clusters?
 Intrinsic vs. Extrinsic Evaluation
 Alignment and Pair-Based Evaluation
 Image Representation
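The "K-means Algorithm" topic refers to Lloyd's iteration: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points, repeating until nothing changes. A minimal 2D sketch (my illustration, not the lecture's code):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate between assigning points to the
    nearest centroid and moving each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        new = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
               else centroids[i] for i, cl in enumerate(clusters)]
        if new == centroids:  # converged: assignments stopped changing
            break
        centroids = new
    return centroids, clusters
```

Each step can only decrease the within-cluster sum of squared distances, which is why the iteration converges (the "Objective and Convergence" topic above); the result is still a local optimum that depends on the initialization.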
Lecture 17: Mixture Models and the EM Algorithm
Lectures 18-19: Principal Component Analysis
 Curse of Dimensionality
 Dimensionality Reduction
 Direction of Greatest Variance
 Principal Components = Eigenvectors
 Finding Eigenvalues and Eigenvectors
 Coordinates in Low-Dimensional Space
 Eigenvector = Greatest Variance
 Eigenvalue = Variance Along Eigenvector
 How Many Dimensions?
 Eigenfaces
 Linear Discriminant Analysis
 Pros and Cons
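The "Direction of Greatest Variance" and "Principal Components = Eigenvectors" topics can be demonstrated in a few lines: the first principal component is the top eigenvector of the data's covariance matrix, which power iteration finds. A 2D pure-Python sketch for illustration only (real use would rely on a linear-algebra library, and power iteration can stall if the start vector happens to be orthogonal to the top eigenvector):

```python
import math

def top_principal_component(points, iters=200):
    """Power iteration on the 2x2 covariance matrix: the fixed
    direction is the first principal component, and its Rayleigh
    quotient is the variance along that direction (the eigenvalue)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    vx, vy = 1.0, 1.0  # arbitrary (non-degenerate) starting vector
    for _ in range(iters):
        wx = sxx * vx + sxy * vy  # multiply by covariance matrix
        wy = sxy * vx + syy * vy
        norm = math.hypot(wx, wy)
        vx, vy = wx / norm, wy / norm
    # eigenvalue = v . (Sigma v) for the unit eigenvector v
    eigenvalue = vx * (sxx * vx + sxy * vy) + vy * (sxy * vx + syy * vy)
    return (vx, vy), eigenvalue
```

For points lying on the line y = x, the component comes out as the diagonal direction and the eigenvalue equals all of the data's variance, matching the "Eigenvalue = Variance Along Eigenvector" topic.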
Lecture 20: Hierarchical Agglomerative Clustering
Dr. Victor Lavrenko is a Lecturer in Informatics at the University of Edinburgh. He works on developing better algorithms for search engines, with a particular focus on interaction, multimedia and scalability. You can find more information about him at his homepage.