Top /r/MachineLearning Posts, Mar 29-Apr 4: Andrew Ng AMA, Deep Learning for NLP, and OpenCL Convnets
Andrew Ng's upcoming AMA, scikit-learn updates, Richard Socher's Deep Learning NLP videos, Criteo's huge new dataset, and convolutional neural networks on OpenCL are the top topics discussed this week on /r/MachineLearning.
Grant Marshall
This week on /r/MachineLearning, we have some big AMA announcements, library changes, and datasets.
1. Andrew Ng will be doing an AMA in /r/MachineLearning on April 14 9AM PST +296
This week we have another big AMA announcement. Andrew Ng will be doing an AMA together with Adam Coates, Director of Baidu Research’s Silicon Valley AI Lab, on April 14th. Before the AMA, a question thread will be posted to the subreddit, so be sure to check in if you’d like to ask either of them a question.
2. scikit-learn 0.16.0 is out with more scalable clustering & PCA, approximate NN, probability calibration and more +70
The newest version of the Python library scikit-learn has been released, featuring performance improvements in clustering and PCA along with additions like approximate nearest neighbor search and probability calibration. As a user of the library’s clustering tools, I find this an exciting release. Many more changes are detailed in the linked announcement.
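To give a flavor of one of the new features, here is a minimal sketch of probability calibration with CalibratedClassifierCV, added in 0.16. The synthetic dataset and parameter choices are illustrative assumptions, not taken from the announcement:

```python
from sklearn.datasets import make_classification
from sklearn.calibration import CalibratedClassifierCV
from sklearn.naive_bayes import GaussianNB

# Synthetic two-class problem (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

# Naive Bayes is often overconfident; CalibratedClassifierCV rescales its
# probabilities with sigmoid (Platt) calibration fit on internal folds.
raw = GaussianNB().fit(X_train, y_train)
calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
calibrated.fit(X_train, y_train)

print("raw mean max probability:       ",
      raw.predict_proba(X_test).max(axis=1).mean())
print("calibrated mean max probability:",
      calibrated.predict_proba(X_test).max(axis=1).mean())
```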
3. Richard Socher's Deep Learning for NLP course video +61
This video is from Richard Socher’s course on deep learning for NLP. It’s part of a larger Stanford course whose webpage can be found here. It’s a great resource because it provides lecture notes, videos, problem sets, and more. So if you’ve been wanting to learn how to apply deep learning to NLP, give it a shot.
4. Criteo releases "largest ever" machine learning dataset +60
Criteo has announced the release of its largest-ever machine learning dataset. One caveat is that the columns lack meaningful labels, so it may not be useful for every kind of work. But if you’re interested in testing the performance or scalability of your algorithms, it could be a good dataset to have.
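For anyone who wants to poke at it, here is a rough sketch of streaming one file of the dataset with pandas. The file name and the column layout (a click label, 13 integer features, and 26 hashed categorical features, tab-separated, as in Criteo’s earlier Kaggle release) are assumptions; check the download page for the actual format:

```python
import pandas as pd

# Hypothetical local path to one day of the Criteo click logs.
PATH = "day_0.gz"

# Assumed layout: click label, 13 integer features, 26 hashed categoricals.
columns = (["label"]
           + ["int_{}".format(i) for i in range(1, 14)]
           + ["cat_{}".format(i) for i in range(1, 27)])

# Stream the file in chunks -- the full dataset is far too large for memory.
reader = pd.read_csv(PATH, sep="\t", names=columns, header=None,
                     compression="gzip", chunksize=100000)

clicks, rows = 0, 0
for chunk in reader:
    clicks += chunk["label"].sum()
    rows += len(chunk)
    if rows >= 1000000:  # sample roughly the first million rows only
        break

print("click-through rate on sample: {:.4f}".format(clicks / float(rows)))
```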
5. DeepCL: convnets with OpenCL. Cmd line and Python. Runs on Windows and Linux. +55
DeepCL is an implementation of convolutional neural networks using OpenCL. This is interesting because there are many CUDA-based implementations, but far fewer in the OpenCL ecosystem. It’s good to see that change, because having more options when it comes time to implement can only be a good thing.
Related: