About Zachary Chase Lipton

Zachary Chase Lipton is a PhD student in the Computer Science and Engineering department at the University of California, San Diego. Funded by the Division of Biomedical Informatics, he is interested in both the theoretical foundations and applications of machine learning. In addition to his work at UCSD, he has interned at Microsoft Research Labs. He also blogs at Approximately Correct.

Zachary Chase Lipton Posts (31)

  • Machine Learning Meets Humans – Insights from HUML 2016 - 06 Jan 2017
    Report from an important IEEE workshop on Human Use of Machine Learning, covering trust, responsibility, the value of explanation, safety of machine learning, discrimination in human vs. machine decision making, and more.
  • The Foundations of Algorithmic Bias - 16 Nov 2016
    We might hope that algorithmic decision making would be free of biases. But increasingly, the public is starting to realize that machine learning systems can exhibit these same biases and more. In this post, we look at precisely how that happens.
  • The Deception of Supervised Learning - 13 Sep 2016
    Do models or offline datasets ever really tell us what to do? Most applications of supervised learning are predicated on this deception.
  • Stop Blaming Terminator for Bad AI Journalism - 11 Aug 2016
    Too often, we blame The Terminator for the public's misconceptions concerning machine learning. But do James Cameron and the Austrian Oak stand wrongfully accused?
  • The Hard Problems AI Can’t (Yet) Touch - 11 Jul 2016
    It's tempting to consider the progress of AI as though it were a single monolithic entity, advancing towards human intelligence on all fronts. But today's machine learning only addresses problems with simple, easily quantified objectives.
  • Are Deep Neural Networks Creative? - 12 May 2016
    Deep neural networks routinely generate images and synthesize text. But does this amount to creativity? Can we reasonably claim that deep learning produces art?
  • The ICLR Experiment: Deep Learning Pioneers Take on Scientific Publishing - 15 Feb 2016
    Deep learning pioneers Yann LeCun and Yoshua Bengio have undertaken a grand experiment in academic publishing. Embracing a radical level of transparency and unprecedented public participation, they've created an opportunity not only to find and vet the best papers, but also to gather data about the publication process itself.
  • TensorFlow is Terrific – A Sober Take on Deep Learning Acceleration - 30 Dec 2015
    TensorFlow does not change the world. But it appears to be the best, most convenient deep learning library out there.
  • Deep Learning Transcends the Bag of Words - 07 Dec 2015
    Generative RNNs are now widely popular, many modeling text at the character level and typically using an unsupervised approach. Here we show how to generate contextually relevant sentences and explain recent work that does it successfully.
  • MetaMind Mastermind Richard Socher: Uncut Interview - 20 Oct 2015
    In a wide-ranging interview, Richard Socher opens up about MetaMind, deep learning, the nature of corporate research, and the future of machine learning.