- Machine Fairness: How to assess an AI system’s fairness and mitigate observed unfairness issues - May 26, 2020.
Microsoft is bringing the latest research in responsible AI to Azure (both Azure Machine Learning and its open-source toolkits), to empower data scientists and developers to understand machine learning models, protect people and their data, and control the end-to-end machine learning process.
- Graph Neural Network model calibration for trusted predictions - Mar 24, 2020.
In this article, we’ll talk about calibration in graph machine learning, and how it can help to build trust in these powerful new models.
- Do You Trust and Understand Your Predictive Models? - Feb 4, 2020.
To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition”. Download this report now.
- Top 7 Data Science Use Cases in Trust and Security - Dec 2, 2019.
What are trust and security? What is their role in the modern world? Read this overview of 7 data science use cases in the realm of trust and security.
- Reproducibility, Replicability, and Data Science - Nov 19, 2019.
As cornerstones of scientific processes, reproducibility and replicability ensure results can be verified and trusted. These two concepts are also crucial in data science, and as a data scientist, you must follow the same rigor and standards in your projects.
- Why the ‘why way’ is the right way to restoring trust in AI - Oct 8, 2019.
As so many more organizations now rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.
- Ethical AI: EU’s New Guidelines and the Future of AI Trustworthiness - May 10, 2019.
The EU has issued a set of guidelines, "Ethics Guidelines for Trustworthy AI" to help tech companies steer towards ethical and inclusive AI as we come to terms with the potential of this technology.
- Delivering Trusted AI with DataRobot and Microsoft - Apr 26, 2019.
In this webinar, Apr 30 @ 1 PM ET, attendees will learn more about how their organizations can add AI to BI, making more predictive decisions along the way.
- [Upcoming Webinar] 5 Steps to Building Responsible AI Systems - Apr 10, 2019.
What does responsible AI mean? This webinar, Apr 18 @ 11 AM ET, will cover the essential steps to building AI systems that are responsible.
- Overcoming distrust on the path to productive analytics - Mar 18, 2019.
We outline the importance of overcoming distrust in data and analytics, with tips on aligning all stakeholders, being a data optimist, streamlining the process, and more.
- Interpretability is crucial for trusting AI and machine learning - Nov 30, 2018.
We explain what exactly interpretability is and why it is so important, focusing on its use for data scientists, end users and regulators.
- Dr. Data Show Video: How Can You Trust AI? - Oct 20, 2018.
This new web series breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- When Do We Trust Machines? - Apr 16, 2018.
We propose a "trust heatmap" framework, showing how trust in machines depends on two key elements: their error rate and the cost of mistakes, and examine the automation frontier.
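The two-element framing above (error rate times cost of mistakes) can be sketched as a tiny heatmap computation. This is an illustrative assumption of how such a heatmap might be built, not the article's actual method; the function name and the loss tolerance are invented for the example.

```python
def trust_heatmap(error_rates, costs, loss_tolerance):
    """Mark each (error rate, cost-per-mistake) cell as trustable
    when the expected loss per decision stays under the tolerance.
    Illustrative sketch only; thresholds are hypothetical."""
    return [[e * c <= loss_tolerance for c in costs] for e in error_rates]

# Low-stakes mistakes tolerate high error rates; high-stakes mistakes
# push the automation frontier toward very low error rates.
grid = trust_heatmap(
    error_rates=[0.001, 0.01, 0.1],
    costs=[1, 100, 10_000],
    loss_tolerance=1.0,
)
for row in grid:
    print(row)
```

The boundary between the `True` and `False` cells is what the article calls the automation frontier: the curve where expected loss equals what we are willing to absorb.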
- How Not To Lie With Statistics - Jan 11, 2018.
Darrell Huff's classic How to Lie with Statistics is perhaps more relevant than ever. In this short article, I revisit this theme from some different angles.
- Challenges in Machine Learning for Trust - May 29, 2017.
With explosive growth in the number of transactions, fraud can no longer be detected manually, and Machine Learning-based methods are required. We examine the main challenges of using Machine Learning for Trust.
- Big Data Desperately Needs Transparency - Mar 6, 2017.
If Big Data is to realize its potential, people need to understand what it is capable of, what information is out there and where every piece of data comes from. Without such transparency and understanding, it will be difficult to persuade people to rely on the findings.
- Cooperative Trust Among Neural Networks Drives Deeper Learning - Feb 28, 2017.
Machine learning developers need to model a growing range of multi-partner scenarios where many learning agents and data sources interact under varying degrees of trustworthiness. This IBM site helps you take the next step toward continuous intelligence.
- Machine Learning Meets Humans – Insights from HUML 2016 - Jan 6, 2017.
Report from an important IEEE workshop on Human Use of Machine Learning, covering trust, responsibility, the value of explanation, safety of machine learning, discrimination in human vs. machine decision making, and more.
- How Much Will A.I. Surprise Us? - Jun 15, 2016.
Why think about what neural networks (and AI in general) can do that we can already do, when the real question we should be asking is this: What will A.I. be able to do that we can’t even dream of?
- Trust and Analytics in the Banking Sector - May 26, 2016.
This post explores the intricate relationship between customers, trust, and analytics in the banking sector, and offers actions that banks may need to take to assess the way they assure trust across the analytics lifecycle.
- The Anchors of Trust in Data Analytics - Mar 14, 2016.
An exploration of some of the critical questions and challenges emerging around trust in data and analytics. The four anchors of trust that will shape public confidence in D&A in the age of the analytical enterprise are highlighted.
- The Perpetual Quest for Digital Trust - Jul 22, 2015.
Digital trust is at a deficit, concludes the 2015 Accenture Digital Consumer Survey report, “Digital Trust in the IoT Era”.
- Innocentive: INSTINCT – The IARPA Trustworthiness Challenge - Mar 16, 2014.
This challenge investigates novel statistical techniques to identify neurophysiological correlates of trustworthiness. Deadline: May 5.