- 20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 2) - Mar 2, 2020.
We explain important AI, ML, and Data Science terms you should know in 2020, including Double Descent, Ethics in AI, Explainability (Explainable AI), Full Stack Data Science, Geospatial, GPT-2, NLG (Natural Language Generation), PyTorch, Reinforcement Learning, and Transformer Architecture.
- Observability for Data Engineering - Feb 10, 2020.
Understanding whether a system is working as intended requires going beyond traditional monitoring techniques and goals toward a newer DevOps concept called Observability. Learn more about this essential approach to bringing more context to your system metrics.
- Do You Trust and Understand Your Predictive Models? - Feb 4, 2020.
To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition”. Download this report now.
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP - Jan 21, 2020.
This article demonstrates explainability of the decisions made by LightGBM and Keras models in classifying a transaction as fraudulent, using two state-of-the-art open source explainability techniques, LIME and SHAP.
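The core idea behind LIME mentioned above can be sketched without the library itself: perturb an instance, query the black-box model, and fit a distance-weighted linear surrogate whose coefficients act as local feature attributions. The model, data, and kernel width below are illustrative, not from the article.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # toy non-linear target

# Any opaque classifier stands in for the "black box".
black_box = GradientBoostingClassifier().fit(X, y)

# Perturb one instance, get black-box probabilities for the samples,
# and weight each sample by its proximity to the original instance.
x0 = X[0]
perturbed = x0 + rng.normal(scale=0.3, size=(500, 4))
preds = black_box.predict_proba(perturbed)[:, 1]
weights = np.exp(-np.sum((perturbed - x0) ** 2, axis=1) / 0.5)

# The weighted linear surrogate is faithful only near x0;
# its coefficients are the local explanation.
surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
local_attr = surrogate.coef_  # one attribution per feature for x0
```

The production LIME package adds discretization, feature selection, and text/image variants, but the weighted local surrogate is the heart of the method.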
- Introducing Generalized Integrated Gradients (GIG): A Practical Method for Explaining Diverse Ensemble Machine Learning Models - Jan 7, 2020.
There is a need for a new way to explain complex, ensemble ML models for high-stakes applications such as credit and lending. This is why we invented GIG.
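GIG itself is the authors' proprietary extension, but the Integrated Gradients idea it generalizes is standard: average the model's gradients along a straight path from a baseline to the input and scale by the input-baseline difference, so that attributions sum to the change in output. A minimal NumPy sketch on a toy differentiable function (everything here is illustrative):

```python
import numpy as np

def f(x):
    # toy differentiable "model": f(x) = x0^2 + 3*x1
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):
    # analytic gradient of f
    return np.array([2 * x[0], 3.0])

def integrated_gradients(x, baseline, grad, steps=100):
    # Average gradients at midpoints along the straight path
    # baseline -> x, then scale by (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([2.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline, grad_f)
# Completeness axiom: attributions sum to f(x) - f(baseline).
```

Real models replace `grad_f` with automatic differentiation; GIG's contribution is handling the non-differentiable, mixed-model ensembles where this path integral is not directly defined.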
- Google’s New Explainable AI Service - Dec 20, 2019.
Google has started offering a new service for “explainable AI,” or XAI, as it is fashionably called. The tools currently offered are modest, but the intent is in the right direction.
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
- 10 Free Top Notch Machine Learning Courses - Dec 6, 2019.
Are you interested in studying machine learning over the holidays? This collection of 10 free top-notch courses will let you do just that, with something for every approach to improving your machine learning skills.
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
- Why the ‘why way’ is the right way to restoring trust in AI - Oct 8, 2019.
As many more organizations now rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.
- Beyond Explainability: A Practical Guide to Managing Risks in Machine Learning Models - Sep 20, 2019.
This white paper provides the first-ever standard for managing risk in AI and ML, focusing on both practical processes and technical best practices “beyond explainability” alone. Download now.
- Introducing AI Explainability 360: A New Toolkit to Help You Understand what Machine Learning Models are Doing - Aug 27, 2019.
Recently, AI researchers from IBM open-sourced AI Explainability 360, a new toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
- Two Major Difficulties in AI and One Applied Solution - Feb 22, 2019.
Some of AI’s biggest problems can be solved by focusing on modelling our own human abilities instead of admiring the “intelligence” of neural networks and ML. We present an example, in the form of chess, that takes us in that direction.