The Ethics of AI

Marketing scientist Kevin Gray asks Dr. Anna Farzindar of the University of Southern California about a very important subject - the ethics of AI.



By Kevin Gray and Dr. Anna Farzindar


Original art by Dr. Anna Farzindar

 

Kevin Gray: AI has become part of our daily lives, hasn’t it!

Dr. Anna Farzindar: I was working on my laptop when my daughter, a college student, said, “Mom, please don’t do anything wrong with AI!” Then two days later, during our family dinner, my younger daughter, a high school freshman, told a story about a video on social media showing a small home-care robot that tricked its owner and lied. She asked me, “Mom, aren’t you afraid of robots?”

These short conversations made me think about how the new generation is a big consumer of technology but, at the same time, is concerned and worried about the future of AI.

 
KG: Getting back to basics, what is AI?

AF: From talking to the virtual assistant on your smartphone (like Siri), watching a recommended movie on Netflix, searching on Google, following suggested Instagram posts, using sophisticated automated stock-trading methods, relying on decision-making systems for loan approval, or (soon) sitting in a self-driving car, AI algorithms are so embedded in our daily life that it is hard to imagine living a single day without them! AI is like a close friend who serves us. But is AI a best friend looking out for our interests, or could it turn into an enemy?

Artificial intelligence (AI) is the field of computer science that creates human-like intelligence, even with the capacity to predict the future. AI algorithms give machines the ability to perform tasks by learning from experience and data, then refining that learning with new input to adapt to new situations.

Intelligent robots and simulations of the human brain have been the topic of fiction for decades. Is it true that AI will take over the human race and become a danger to humanity? How many people will lose their jobs because of robots? Will smartphones keep children occupied for hours and even replace parenting? Is AI an imminent threat to humankind? Who actually controls AI, and what are their goals?

It is time for everyone to be aware of the impact and ethics of artificial intelligence, whether we are users of technology or programmers who create the algorithms. The question is where we are heading as humans in this AI world.

 
KG: Can AI really become as intelligent as humans?

AF: The Oxford dictionary defines intelligence as “the ability to learn, understand and think in a logical way about things; the ability to do this well.” By this definition, if a machine can learn and perform tasks in a logical way, then it could be intelligent!

In 1950 Alan Turing, an English mathematician and computer scientist, proposed a method to determine whether a machine can think like a human. In this test, a questioner communicates in natural language with two separate rooms, one containing a computer and the other a human, and receives answers from each. After a number of questions, if the human evaluator cannot decide whether the answers came from the human or the AI for half of the test runs, the machine is said to have passed the test.

Some researchers have shown that AI can pass the Turing test in many complex natural language processing (NLP) tasks, such as machine translation and automatic summarization, simulating a human in 60% of cases.

To assess the quality of AI performance, we need to define evaluation metrics.

Intrinsic evaluations directly judge the quality of a machine's output by analyzing it against some set of norms. For example, in stock market prediction we can train a model on historical financial data (e.g., from the past ten years) and predict the next day's opening price by reading a window of data from the last 50 days. The intrinsic evaluation measures how much the predicted price differs from the actual price.

Extrinsic evaluations assess the performance of AI system components by how they affect the completion of some other task. In our stock market example, if, in addition to predicting the stock price, we analyze aspects such as positive and negative sentiment in tweets and the impact of news, and measure other financial technical indicators, then all of these components can be evaluated by how they contribute to the performance of the whole system.
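To make the distinction concrete, here is a minimal sketch in Python, assuming a synthetic price series and a simple linear model (both invented for illustration, not a real trading system). The intrinsic metric scores the prediction itself against the actual price; the extrinsic metric scores a naive downstream trading decision built on that prediction.

```python
import numpy as np

# Illustrative only: a synthetic random-walk series stands in for real
# historical prices, and a linear least-squares fit stands in for a real model.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))

WINDOW = 50  # read the last 50 days to predict the next day's opening price
X = np.array([prices[t - WINDOW:t] for t in range(WINDOW, len(prices))])
y = np.array([prices[t] for t in range(WINDOW, len(prices))])

# Fit on the first 80% of windows, evaluate on the rest.
split = int(0.8 * len(X))
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred, actual = X[split:] @ coef, y[split:]

# Intrinsic evaluation: how far the predicted price is from the actual price.
mae = np.mean(np.abs(pred - actual))
print(f"intrinsic (mean absolute error): {mae:.2f}")

# Extrinsic evaluation: how well a downstream task built on the prediction
# performs; here, a naive "buy if predicted to rise" trading rule.
last_close = X[split:, -1]
directional_accuracy = np.mean((pred > last_close) == (actual > last_close))
print(f"extrinsic (directional accuracy): {directional_accuracy:.2%}")
```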

In some applications, AI performance is merely good enough for the specific tasks requested. But in others, AI performance is vastly superior to humans' because of access to big data, powerful computing machines, processing speed, powerful algorithms, the contributions of teams of engineers, and billions of dollars of investment in creating AI systems.

 
KG: What is emotional intelligence?

AF: Emotional intelligence is the ability to perceive and understand emotions and to integrate emotion to facilitate thought and responses to a task. Some AI applications are designed to interpret human emotion. For example, AI can help a company understand its customers better by measuring consumer emotion in reviews or on social media. AI can also analyze spoken expressions from customer service calls and find voice patterns. Some systems can automatically recognize facial expressions in videos and capture emotional reactions in real time.
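As a toy illustration of the review-measurement idea, here is a minimal lexicon-based sentiment scorer. Real emotional-AI systems learn from large labeled datasets; the tiny word lists below are invented stand-ins for illustration only.

```python
# Hypothetical mini-lexicons; production systems learn these signals from
# large labeled datasets rather than hand-written word lists.
POSITIVE = {"love", "excellent", "great", "happy", "fast"}
NEGATIVE = {"terrible", "hate", "slow", "broken", "angry"}

def sentiment_score(review: str) -> float:
    """Score a review from -1 (negative) to +1 (positive)."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

reviews = [
    "I love this product and the shipping was fast",
    "Terrible experience and the item arrived broken",
]
for review in reviews:
    print(f"{sentiment_score(review):+.2f}  {review}")
```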

But there are biases in the emotional AI technologies used to interpret human emotion. For example, a system could assign more negative emotions to people of certain ethnicities or cultures, which could mislead an organization's understanding of customer satisfaction and, consequently, lead to wrong decisions.

 
KG: How dangerous really is AI? What are some of the ways it now is and potentially could be seriously abused?

AF: Most AI algorithms are developed with poorly specified goals yet are rapidly deployed at large scale. The impact of "super intelligent" machines is unknown, and it could be difficult or impossible for humans to control them. Past experience, such as that of the atomic scientists whose work led to catastrophe, shows an urgent need to take responsibility for AI before humans are surprised by their own creation.

Therefore, we need to develop specific legal and ethical guidelines for everyone: for technology developers, to be aware of their algorithms and biases; for industry, on how to use AI and connect data across applications with different objectives; for governments, to help them plan better for AI; and for consumers, to be aware of AI and of its decision-making through transparent and explainable processes.

We can create unconscious and implicit biases in algorithms. Imagine a brilliant young computer scientist from the Middle East who applies for jobs and sends her resume to companies but gets no answers from HR. This points to one of the current inequalities in the field of AI: the use of machine learning in recruitment for candidate assessment and preselection.

If the company has no records of past Middle Eastern female engineers, then the model will never select a candidate with this profile! In this case, AI makes an unfair decision by rejecting a qualified candidate. "Fairness" is the behavior of an AI model that neither privileges nor discriminates against an individual or group of users, for example based on their gender or race.

 
KG: What can we do to prevent abuse of AI? Are there things ordinary citizens can do?

AF: The spread of false information and fake news is an important case of AI abuse. Recently, "DeepFake" apps raised concerns about candidates in the 2020 US election. DeepFake, a blend of "deep learning" and "fake," refers to AI software that can merge a digital face into existing video and audio of a person.

The manipulation of behavior by AI can affect vulnerable groups, such as teenagers or racial and ethnic minorities, when AI systems unintentionally promote hate. For example, "Black Lives Matter" recently drew attention to the problem of racism, but some people opposed to the movement posted racist videos on social media. When a video gets a certain number of views, the AI model promotes it as interesting content for more exposure to viewers, resulting in more hate in society. Fake news and misinformation about COVID-19 are another example of AI abuse: social media allows people across the globe to spread false information and conspiracy theories. AI is not intelligent enough to distinguish good values from bad values for human beings, but people can.

It is also important to study the long-term effects of AI, for example its impact on the next generation. Some concerns are the possibility of addiction to Instagram or social media, raising pampered children with physical and mental problems, and lost connections between parents and children.

AI ethics must go beyond news headlines and theoretical discussions. It is imperative that we develop the capacity to learn about the consequences of our work, whether as developers of AI systems or as consumers of such technologies, in influencing the wider world.

 
KG: Could you elaborate more on the ethics of AI?

AF: The ethics of AI concerns the moral obligations attached to tasks performed by machines and their impact on humans. For example, many companies integrate AI to improve their business by collecting data on users' behavior and analyzing patterns. Do they use the data, or sell them to third parties, in a responsible manner? What guidelines should industry follow in making ethical choices?

AI ethics covers many aspects, such as fairness, bias in AI, explainable artificial intelligence, manipulation of behavior, human-robot interaction, AI safety, adversarial attacks, and data privacy.

  • Fairness is the behavior of AI without privileging one arbitrary group of users over others, e.g. based on their age, gender or race. For example, in the hiring process many AI systems fail to give equal opportunity to some candidates.
  • Bias in AI refers to errors in a system that produce unfair outcomes. For example, some facial recognition algorithms falsely identify African-American and Asian faces more often than white faces, due to bias and a lack of data in training the machine learning models.
  • Explainable and Transparent AI concerns how easily a machine's results can be understood by humans. Especially with recent AI technologies and deep learning techniques, it is very hard to understand how a system predicts the future from millions of patterns in past experience and makes automatic decisions based on them. For example, a credit card or loan approval made by AI is hard to track in a transparent way.
  • Manipulation of Behavior can happen in business or gaming, for example. Most social media platforms and games are designed around human psychology to leave users "feeling" in control. In gambling, the system gives users the "illusion of control."
  • Ethics of Human-Robot Interaction concerns how AI can be used to manipulate humans into believing and doing things. For example, elderly care must clarify the purpose a robot will serve.
  • AI safety can be defined as the effort to ensure that AI is deployed in ways that do not harm humans and that humans can control. For example, in designing and deploying self-driving cars, human safety must be the primary objective.
  • Ethical Adversaries - Adversarial examples are inputs to AI models that attackers have intentionally designed to push AI decision-making towards mistakes, causing the AI to behave unexpectedly and make unfair decisions. For example, adding a small amount of noise or wrong data to the input can cause errors, similar to optical illusions for machines (see the toy sketch after this list).
  • Data Privacy is responsibly collecting, using and storing data pertaining to an individual or group. Data ethics is doing the right thing with data for humans and society. For example, when an organization processes personal data and sells those data to a third party, it must remain responsible for data protection if the third party is vulnerable to security breaches or privacy violations.
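As a small illustration of the adversarial idea above, here is a toy sketch: a tiny logistic-regression classifier whose decision is flipped by a small, deliberately chosen noise pattern. The weights and input are hypothetical, and this is a simplified fast-gradient-sign-style perturbation, not an attack on any specific real system.

```python
import numpy as np

def predict(w, b, x):
    """Probability that input x belongs to class 1 (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

w = np.array([1.0, -2.0, 0.5, 1.5])  # hypothetical trained weights
b = -0.1
x = np.array([0.9, 0.3, 0.2, 0.1])   # a legitimate input, classified as 1

# Fast-gradient-sign-style perturbation: nudge each feature a small step in
# the direction that lowers the class-1 score. For logistic regression, the
# sign of the score's gradient with respect to x is simply the sign of w.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)

print(f"clean input:       p(class 1) = {predict(w, b, x):.2f}")      # ~0.61
print(f"adversarial input: p(class 1) = {predict(w, b, x_adv):.2f}")  # ~0.18
# A small targeted change flips the decision, even though the input still
# looks nearly the same: an "optical illusion" for the machine.
```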

 
KG: I found Stuart Russell’s Human Compatible: Artificial Intelligence and the Problem of Control very informative. Are there other books, articles or podcasts you can recommend to those who’d like to learn more about this topic?

AF: I am giving a webinar on the Ethics of AI on October 21st, 2020, and the video will be available on YouTube.

There are some resources for a better understanding of AI, such as Artificial Intelligence and Life in 2030.

There are several interesting TED talks about the ethics of AI, such as Stuart Russell's "3 principles for creating safer AI."

More details about the ethics of artificial intelligence and robotics can be found at: https://plato.stanford.edu/entries/ethics-ai/

Some information about bias in AI can be eye-opening, such as "The Truth About Algorithms" by Cathy O'Neil.

I think it is time to create a new scientific field to investigate artificial intelligence and its influence on people, their communities, society, and humanity: a discipline to study the short-term and long-term goals and impacts of AI, which could be called AIology!

KG: Thank you, Anna!

 
Kevin Gray is President of Cannon Gray, a marketing science and analytics consultancy.

Anna Farzindar, Ph.D. is a faculty member of the Department of Computer Science, Viterbi School of Engineering, University of Southern California. Her Instagram art page is https://www.instagram.com/annafarzindar and her personal website is www.farzindar.com.

About the painting: 
Title:  Ethics of Technology
Technique: watercolor on rice paper
Size: 37in x 25in
Year: 2020
Artist: Anna Farzindar

 