
What is missing when AI makes a decision?


We explain the need for caution when using AI in real-life situations and the importance of asking the right question to deliver the right impact.



By Arijit Sengupta, CEO, Aible

Recently, I attended the Data Science and Machine Learning Bake-Off, a session on modern data science and machine learning platforms, at the Gartner Data and Analytics Summit in Orlando.

Asking the right question to deliver the right impact

Imagine you are selling cookies. Data Science asks a prediction question: "Who is most likely to buy?" However, the business user actually asks a business optimization question: "How do I make the most money?" The answers to these two questions are not identical, and that gap is the entire problem with evaluating AI without considering business impact.

Let's do the math. The cost of a sample cookie that you give a customer for free, in hopes they buy a box of cookies, is $1, while the profit on selling a box is $10. If you follow a Conservative AI (predictive model) that tells you to give samples to 10 people (out of 100 total potential buyers) and nine buy cookies, you net $80; that is not the most money, even though this is the most Precise and Accurate AI. If you follow an Aggressive AI model and give samples to 80 people and 19 buy cookies, you net $110; still not the biggest benefit, even though this AI has the highest Recall score. But if you follow the Balanced AI and give samples to 30 people and 15 buy cookies, you earn $120, the biggest business benefit of the three. In this example, Data Science metrics favor the Conservative and Aggressive AIs, while business users would always choose the Balanced AI that makes them the most money.
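Here is a minimal Python sketch of that arithmetic. The model names and the sample/sale counts are the hypothetical figures from the example above, not the output of any real system.

    # A minimal sketch of the cookie example above. All numbers are the
    # hypothetical figures from the text, not real model output.
    SAMPLE_COST = 1   # dollars spent per free sample cookie
    BOX_PROFIT = 10   # dollars earned per box of cookies sold

    # (samples given, boxes sold) for each hypothetical model
    models = {
        "Conservative AI": (10, 9),   # most Precise and Accurate
        "Aggressive AI":   (80, 19),  # highest Recall
        "Balanced AI":     (30, 15),  # wins on no ML metric
    }

    for name, (samples, sales) in models.items():
        profit = sales * BOX_PROFIT - samples * SAMPLE_COST
        print(f"{name}: ${profit}")

    # Conservative AI: $80
    # Aggressive AI: $110
    # Balanced AI: $120  <- the biggest business benefit

The point of the sketch is that the objective being maximized is dollars, not precision or recall; once the payoff matrix is written down, the "worst" model by ML metrics wins.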

The bake-off dataset was a college scorecard that included information on how much the average student earned 5 or 10 years after graduation, along with characteristics of the college attended. I felt a bit of nostalgia, because this was the same dataset that helped my last company, BeyondCore (now part of Salesforce), break through at the same Summit in 2016, when we surfaced insights that the BI vendors in the BI Bake Off had missed. See Rita Sallam's blog post describing that bake-off at https://blogs.gartner.com/rita-sallam/2016/08/17/two-acquisitions-in-two-weeks/; the old demo video is still live at https://vimeo.com/160139406. Rita wrote at the time,

“But what they all missed, however, is that the biggest indicator of a student’s future earning power in the data was not their university. It was the student’s parent’s income! We gave all the vendors on the show floor the data set and that is exactly the key finding BeyondCore was able to uncover after a few seconds of ingesting the data, automatically analyzing it and generating a narrative about the results.”

So how did the Data Science bake-off participants do on this point in 2019? I was encouraged to see that SAS and Databricks had both picked up on how parents' income affected the students' income. But DataRobot did not. Thankfully, an audience member asked a clarifying question, and the DataRobot presenter explained that he had removed that variable because the student could not do anything about their parents' income level. He had retained only the variables the student could control, such as the school they went to or the major they studied.

This may sound reasonable, and it is perfectly OK in an artificial setting like a bake-off. But stop and think through the real-world implications with me. Imagine a student from a poor background hoping to go to a rich kids' school. The DataRobot model would overestimate how much money they would make 10 years after college, so the student may take on a high debt burden to attend a college that will never pay off. Why is that? As an alum of Stanford and Harvard, I can confirm that some of these top schools have a truly disproportionate number of rich kids. (I was on academic scholarship, in case you were wondering, not a rich kid.) The moment the parents' income variable is removed, the AI concludes that graduates of these top schools make so much money because they went to these schools, ignoring the parents' income part of the equation. The toy simulation below makes this concrete.
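What the presenter ran into is classic omitted-variable bias. Here is a toy simulation of it; every number in it is invented for illustration. Parental income drives both admission to an "elite" school and later earnings, so a regression that drops parental income credits the school with earnings it did not cause.

    # A toy simulation of omitted-variable bias; all numbers are invented.
    # Parental income drives both elite-school admission and later earnings.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000

    parent_income = rng.lognormal(mean=11, sigma=0.5, size=n)  # ~$60K median
    # Richer kids are far more likely to attend the elite school.
    p_elite = np.clip(parent_income / 200_000, 0.02, 0.90)
    elite_school = (rng.random(n) < p_elite).astype(float)
    # Ground truth: earnings depend mostly on parental income; the school
    # itself is worth only $5,000 a year.
    earnings = (30_000 + 0.4 * parent_income + 5_000 * elite_school
                + rng.normal(0, 5_000, n))

    # Model A keeps parental income; Model B drops it, as in the bake-off.
    X_full = np.column_stack([parent_income, elite_school])
    model_a = LinearRegression().fit(X_full, earnings)
    model_b = LinearRegression().fit(elite_school.reshape(-1, 1), earnings)

    print(f"school effect, parent income kept:    ${model_a.coef_[1]:,.0f}")  # ~ $5,000
    print(f"school effect, parent income dropped: ${model_b.coef_[0]:,.0f}")  # far larger

Model B's inflated "school effect" is exactly the overestimate that would mislead the poor student in the paragraph above.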

Imagine if an AI based on the DataRobot analysis were adopted by universities, or banks, to help students decide which schools to attend or how much debt to take on. Given that there are more poor people than rich people, we would see systematic overspending on education and a worse education-loan crisis. This is precisely the danger of letting a small group of expert Data Scientists make decisions that affect all of us. An education expert would have a better understanding of the situation, and perhaps more empathy for the students, than a data scientist who might merely be optimizing the numbers (especially in a contest setting) rather than thinking about the lives impacted. I sincerely hope that we will be able to empower everyone to create their own custom AI, because absent that, we will be in a dangerous world where a small group of experts affects all of our lives through recommendations generated by AI designed by the few.

One final thought. In this case, the impact of an underestimate is very different from the impact of an overestimate. If the AI tells a student they will make a salary of $100K in 10 years, and they end up making $120K, that is not a big deal; perhaps they could have gone to a slightly better school with a more accurate prediction, but the difference would not be great. But imagine a student who takes on a heavy loan burden because the AI predicted they would make $100K after college, and they end up making $80K. Trying to pay off that loan on a lower salary can significantly affect their quality of life and career trajectory. Clearly, in this case, the cost of an overestimate is much higher than the cost of an underestimate. But none of the vendors on stage took that into consideration; they talked only about accuracy, NOT impact.
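One way to encode that asymmetry is to score predictions with an asymmetric cost function instead of a symmetric metric like mean absolute error. Here is a small sketch, assuming, purely for illustration, that an overestimate costs three times as much as an underestimate of the same size; the 3:1 ratio is my assumption, not a figure from the article.

    # A sketch of scoring predictions with asymmetric costs. The 3:1 penalty
    # ratio for overestimates vs. underestimates is an illustrative assumption.
    import numpy as np

    def asymmetric_cost(y_true, y_pred, over_penalty=3.0, under_penalty=1.0):
        """Charge overestimates (which lure students into debt) more than
        underestimates of the same size."""
        err = y_pred - y_true
        return float(np.where(err > 0, over_penalty * err, -under_penalty * err).mean())

    actual = np.array([120_000, 80_000])      # what the two students really earn
    predicted = np.array([100_000, 100_000])  # the AI told both of them "$100K"

    # A symmetric metric like MAE sees two identical $20K misses...
    print("MAE:", float(np.abs(predicted - actual).mean()))                   # 20000.0
    # ...while the asymmetric cost charges the harmful overestimate 3x as much.
    print("underestimate cost:", asymmetric_cost(actual[:1], predicted[:1]))  # 20000.0
    print("overestimate cost: ", asymmetric_cost(actual[1:], predicted[1:]))  # 60000.0

A vendor who evaluated models this way would be forced to confront the impact question directly: the model that looks best on accuracy may look worst once errors are priced by what they do to students.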

The key question when we create any AI is: “How does this affect my stakeholders?” In this case, the key stakeholder is a student deciding which college they are going to. If we do not consider the net impact of our AI and treat these decisions merely as mathematical abstractions, we will end up in an AI future that most of us would not want to be a part of. The onus is on all of us to choose the right path here. We don’t get to sit this one out.
