
Blogging the Stanford Machine Learning Class, week 2


 
  
The adventures of a Slate writer in the world of the lazy hiker (known to nerds as the gradient descent algorithm)


Slate.com, By Chris Wilson, Oct. 25, 2011

At Week 2 of Stanford's machine learning course, the childishly simple homework assignments are growing more complex. ...

So far, we're still more occupied with student learning than machine instructing, though the path ahead is getting clearer. We've started learning a little programming in a language called Octave, which appears to be a graphing calculator on steroids; mostly we use it to manipulate the matrices where we store all the values for the mathy part of this class. I understand that Professor Ng has to introduce us to the really groundbreaking stuff gradually, once we have a strong command of the traditional ways that statistical modeling works, but right now I confess I feel more like I'm qualifying to be an actuary than a machine overlord.
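For anyone curious what "graphing calculator on steroids" means in practice, here is a rough sketch, with made-up house sizes and parameter values rather than the course's actual assignments, of the kind of matrix work the early homework involves, typed at the Octave prompt:

    % A few of the week-2 matrix basics at the Octave prompt.
    % The house sizes and parameters below are invented for illustration.
    X = [1 2104; 1 1416; 1 1534];   % one row per house: a column of ones plus size in sq ft
    theta = [50; 0.1];              % parameters: a base price and a price per square foot
    predictions = X * theta         % one predicted price per house from a single multiplication
    size(X)                         % ans = 3 2 -- three examples, two columns
    X'                              % the transpose; Octave makes these one-liners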

This problem seems unavoidable; after all, Ng is teaching tens of thousands of people from different backgrounds. Online learning allows students to skip lectures that seem slow and review the PDFs of the notes instead. Personally, I prefer to watch and take notes, because understanding the professor's teaching style makes understanding the material a lot easier. For now, I'm trying to stay patient with the slow progress. Neural networks and decision trees are just around the corner, according to the course schedule.

This week, we found out that you can make a model for, say, houses based on many data points, thousands if you like, down to the age and material of the roof, the number of previous canine tenants, and so forth. We also learned that a straight line isn't always the best sort of model to fit these data, which shouldn't come as a surprise. Out here in the real world, reality rarely arranges itself according to straight lines.
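To make the straight-line point concrete, here is a toy Octave sketch with invented numbers (not the course's data) that fits both a line and a curve to the same handful of houses:

    % Toy data: house size in 1000s of square feet vs. price in $1000s.
    sizes  = [1.0 1.5 2.0 2.5 3.0]';
    prices = [200 260 350 480 640]';

    straight = polyfit(sizes, prices, 1);   % best-fit straight line
    curved   = polyfit(sizes, prices, 2);   % quadratic fit, which can bend with the data

    plot(sizes, prices, 'o', ...
         sizes, polyval(straight, sizes), '-', ...
         sizes, polyval(curved, sizes), '--');

With prices that rise faster and faster, the curved fit hugs the points while the straight line misses at both ends.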

This brings us to the "lazy hiker principle." The real name for this technique is the "gradient descent algorithm," but as I mentioned last week, the machine-learning professors could use a little help in marketing their material.

When you try to shoehorn real data into math world, you can try a huge number of possible models before selecting the best one. Without getting into all the formulas and Greek symbols, consider this lovely graph of undulating hills of blue and red.

(Figure: a model fitting the data)
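For readers who want to watch the lazy hiker actually take steps, here is a bare-bones Octave sketch of gradient descent on the same sort of toy data; the numbers, step size, and iteration count are assumptions for illustration, and a real assignment would add feature scaling and a convergence check:

    % The lazy hiker in code: start somewhere on the cost surface and
    % keep stepping downhill until the steps stop mattering.
    X = [ones(5,1), [1.0 1.5 2.0 2.5 3.0]'];   % intercept column plus house sizes (1000s of sq ft)
    y = [200 260 350 480 640]';                % invented prices in $1000s
    theta = zeros(2,1);                        % start the hike with both parameters at zero
    alpha = 0.1;                               % learning rate: the length of each downhill stride

    for iter = 1:1000
      gradient = (X' * (X*theta - y)) / length(y);   % which way is downhill, and how steep
      theta = theta - alpha * gradient;              % take one step in that direction
    end
    theta                                            % the parameters the hiker settles on

Each pass computes the slope of the cost surface at the current spot and moves a little way downhill, which is the whole trick behind the blue-and-red hills above.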

Read more.


 
Related
Education in Analytics and Data Mining
Webcasts and Webinars
Courses
Meetings
