- OpenAI Releases Two Transformer Models that Magically Link Language and Computer Vision - Jan 11, 2021.
OpenAI has released two new transformer architectures that combine image and language tasks in a fun and almost magical way. Read more about them here.
- Compute Goes Brrr: Revisiting Sutton’s Bitter Lesson for AI - Nov 19, 2020.
"It's just about having more compute." Wait, is that really all there is to AI? As Richard Sutton's 'bitter lesson' sinks in for more AI researchers, a debate has emerged over a potentially more subtle relationship between advances in AI driven by ever-more-clever algorithms and those driven by massively scaled computational power.
- Can AI Learn Human Values? - Oct 27, 2020.
OpenAI believes that the path to safe AI requires the social sciences.
- A Curious Theory About the Consciousness Debate in AI - Aug 31, 2020.
Dr. Michio Kaku has formulated a very interesting theory of consciousness that applies to AI systems.
- Must-read NLP and Deep Learning articles for Data Scientists - Aug 21, 2020.
NLP and deep learning continue to advance, nearly on a daily basis. Check out these recent must-read guides, feature articles, and other resources to keep you on top of the latest advancements and ahead of the curve.
- Exploring GPT-3: A New Breakthrough in Language Generation - Aug 10, 2020.
GPT-3 is the largest natural language processing (NLP) transformer released to date, eclipsing the previous record, Microsoft Research’s Turing-NLG at 17B parameters, by about 10 times. This has resulted in an explosion of demos: some good, some bad, all interesting.
- GPT-3, a giant step for Deep Learning and NLP? - Jun 9, 2020.
Recently, OpenAI announced a new successor to their language model, GPT-3, now the largest model trained to date with 175 billion parameters. Training a language model this large has its merits and limitations, so this article covers some of its most interesting and important aspects.
- The Double Descent Hypothesis: How Bigger Models and More Data Can Hurt Performance - Apr 20, 2020.
OpenAI research reveals a phenomenon that challenges both traditional statistical learning theory and the conventional wisdom of machine learning practitioners.
- OpenAI Open Sources Microscope and the Lucid Library to Visualize Neurons in Deep Neural Networks - Apr 17, 2020.
The new tools show the potential of data visualization for understanding features in a neural network.
- OpenAI is Adopting PyTorch… They Aren’t Alone - Jan 31, 2020.
OpenAI is moving to PyTorch for the bulk of their research work. This might be a high-profile adoption, but it is far from the only such example.
- What just happened in the world of AI? - Dec 12, 2019.
AI advanced, and made news, so rapidly during 2019 that it is now imperative to step back and place these events in order and perspective. It's important to separate the interest any one advancement initially attracts from its actual gravity and its consequential influence on the field. This review unfolds the parallel threads of the year's AI stories and isolates their significance.
- OpenAI Tried to Train AI Agents to Play Hide-And-Seek but Instead They Were Shocked by What They Learned - Oct 7, 2019.
OpenAI trained agents in a simple game of hide-and-seek, and the agents learned many other skills in the process.
- Scaling a Massive State-of-the-art Deep Learning Model in Production - Jul 15, 2019.
A new NLP text writing app based on OpenAI's GPT-2 aims to write with you -- whenever you ask. Find out how the developers set up and deployed their model into production from an engineer working on the team.
- KDnuggets™ News 19:n24, Jun 26: Understand Cloud Services; Pandas Tips & Tricks; Master Data Preparation w/ Python - Jun 26, 2019.
Happy summer! This week on KDnuggets: Understanding Cloud Data Services; How to select rows and columns in Pandas using [ ], .loc, iloc, .at and .iat; 7 Steps to Mastering Data Preparation for Machine Learning with Python; Examining the Transformer Architecture: The OpenAI GPT-2 Controversy; Data Literacy: Using the Socratic Method; and much more!
- Examining the Transformer Architecture: The OpenAI GPT-2 Controversy - Jun 20, 2019.
GPT-2 is a generative model, created by OpenAI, trained on 40GB of Internet text to predict the next word. And OpenAI found this model to be SO good that they did not release the fully trained version, citing concerns about malicious applications of the technology.
- KDnuggets™ News 19:n10, Mar 6: What no one will tell you about data science job applications; The rise of ML Engineering - Mar 6, 2019.
Also most impactful AI trends of 2018: The rise of ML Engineering; How to do Everything in Computer Vision; GANs Need Some Attention, Too; OpenAI GPT-2.
- OpenAI’s GPT-2: the model, the hype, and the controversy - Mar 4, 2019.
OpenAI recently released a very large language model called GPT-2. Controversially, they decided not to release the data or the parameters of their biggest model, citing concerns about potential abuse. Read this researcher's take on the issue.
- 5 things that happened in Data Science in 2018 - Jan 8, 2019.
We review 5 things that happened in Data Science in 2018 and offer 20% discount on Reinforce AI Conference, Mar 20-22 in Budapest.
- NLP Breakthrough Imagenet Moment has arrived - Dec 14, 2018.
A comprehensive review of the current state of Natural Language Processing, covering the process from shallow to deep pre-training, what's in an ImageNet, the case for language modelling, and more.
- Join AI experts from Google Brain, Open AI & Uber AI Labs in San Francisco - Nov 1, 2018.
Join us at the Deep Learning Summit, San Francisco, 24 - 25 Jan 2019. Learn from industry experts in speech & pattern recognition, neural networks, image analysis and NLP, and explore how deep learning will impact all industries.
- Key Takeaways from AI Conference SF, Day 1: Domain Specific Architectures, Emerging China, AI Risks - Oct 29, 2018.
Highlights and key takeaways include Domain Specific Architectures – the next big thing, Emerging China – evolving from copying ideas to true innovation, and Addressing Risks in AI – Security, Privacy, and Ethics.
- Top /r/MachineLearning Posts, August: Andrew Ng is back at it; Reinforcement Learning makes a splash; Fixing your ANN - Sep 8, 2017.
Andrew Ng announces new Deep Learning specialization on Coursera; DeepMind and Blizzard open StarCraft II as an AI research environment; OpenAI bot beat best Dota 2 players in 1v1 at The International 2017; My Neural Network isn't working! What should I do?; Deep Learning Neural Networks Play Path of Exile
- Advances in AI & Deep Learning: DeepMind, Facebook & OpenAI - May 4, 2017.
RE•WORK would like to update KDnuggets readers on their upcoming European events, as discounted tickets end next week, and share their on-demand content and expert interviews! For 20% off pass prices for all RE•WORK events, use discount code KDNUGGETS.
- Eat Melon: A Deep Q Reinforcement Learning Demo in your browser - Jan 20, 2017.
Check out the "Eat Melon" demo, a fun way to gain familiarity with the Deep Q-Learning algorithm, which you can run right in your browser.
- Top /r/MachineLearning Posts, December: OpenAI Universe; Deep Learning MOOC For Coders; Musk: Tesla Gets Awesome-er - Jan 5, 2017.
OpenAI Universe; Deep Learning For Coders—18 hours of lessons for free; Elon Musk on Twitter: Tesla Autopilot vision neural net now working well; Apple to Start Publishing AI Research; Duolingo's "half-life regression" method for modeling human memory
- Up to Speed on Deep Learning: July Update, Part 2 - Sep 7, 2016.
Check out this second installation of deep learning stories that made news in July. See if there are any items of note you missed.
- Top /r/MachineLearning Posts, January: Google Masters Go, Deep Learning Laughs, OpenAI AMA - Feb 1, 2016.
In January on /r/MachineLearning: Go gets mastered, deep learning laughs, an OpenAI team AMA, convolutional neural nets colorize black and white photos, and the AI community loses a leader.