Ethical AI: EU’s New Guidelines and the Future of AI Trustworthiness

The EU has issued a set of guidelines, the "Ethics Guidelines for Trustworthy AI," to help tech companies steer toward ethical and inclusive AI as we come to terms with the potential and pitfalls of this technology.

By Nathan Sykes


Artificial intelligence is a part of our daily lives now, and it represents an unprecedented combination of promise and potential harm. You're almost certainly familiar with the sci-fi-tinged worst-case scenarios concerning malevolent AIs overtaking and replacing us. However, the "black box" of AI development and behavior represents somewhat more mundane, though no less worrying, problems as well. Whether the rollout of AI around the globe spells liberation or disaster is a question we can begin answering by introducing formal standards for how this technology is designed and deployed.

To that end, the European Union has issued a set of guidelines, called the "Ethics Guidelines for Trustworthy AI." The goal is to help EU member nations and their tech companies steer a course toward ethical and inclusive AI as we come to terms with the potential and pitfalls of this technology. Here's a look at what these guidelines mean for the future of AI development.


What Do the EU's AI Guidelines Entail?

The EU is not the first governing body in the world to lay out recommendations for the ethical development of artificial intelligence, although its efforts may be some of the most specific to date. During the presidency of Barack Obama, the National Science and Technology Council — with participation from dozens of relevant government agencies — provided its own set of broad guidelines called "Preparing for the Future of Artificial Intelligence."

The European Union's efforts appear to be somewhat more actionable, since they contain a checklist — a "practical assessment list" — for companies engaged in the development of artificial intelligence. How were they created, and what do they say?

For a start, the EU collaborated with 52 experts on the subject of AI and drew on comments submitted by 500 members of the public. It's important to note that these guidelines are, presently, not legally binding. However, they do cover an impressive amount of ground in several major categories:

  • Transparency: Any time an AI system makes decisions on a user's behalf, that person should be aware of it. The reasoning behind decisions should be easily explainable.
  • Safety: AI systems should be designed to withstand attempted hijacking and other attacks performed by hackers.
  • Fairness: Decisions made by AI systems should not be influenced by gender, race or other personal identifiers. They should be as impartial as possible and not reflect human biases.
  • Environmental stewardship: Not all the stakeholders in AI development are human. The development of these platforms, and the sustainability implications of their decision-making, should take into account the needs of the larger environment and other forms of life.

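The fairness tenet, in particular, lends itself to a concrete check. The guidelines themselves prescribe no specific metric or code, but as an illustrative sketch, one common approach is to compare a system's positive-decision rates across demographic groups (a "demographic parity" check); the function names below are our own, not anything from the EU document:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the positive-decision rate for each group.

    `decisions` is a list of (group, approved) pairs, where
    `approved` is True if the system granted the outcome.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups.

    A gap near 0 suggests outcomes are not skewed by group
    membership; a large gap flags a potential fairness problem.
    """
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())
```

A gap like this is only one coarse signal among many a development team might monitor, but it shows how an abstract tenet can be turned into something measurable.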
If the ultimate purpose of creating artificial intelligence is to improve human life on earth, these tenets seem like a solid foundation on which to build it. The EU goes further by providing specific and actionable guidelines for current and future architects of AI systems.


How Specific Do the EU AI Guidelines Get?

It's worth noting that the EU's "Ethics Guidelines for Trustworthy AI" has not yet reached its final form. The EU refers to the guidelines as a living document, and it has issued an open invitation to technology companies and public advocacy groups to offer their own input and help shape future drafts. The EU appears cognizant of the fact that our rules for governing AI's development and use should be as flexible and open to change as the technology itself.

The checklist for trustworthy AI, in its current form, is a plain-English set of questions that any chief technology officer, CEO or member of the public should be able to understand. Here's a small handful of them, lightly paraphrased for brevity:

  • Does the AI system have a kill switch to immediately cease its operations and delegate control to a human operator?
  • Was a comprehensive risk assessment performed to safeguard against vulnerabilities and cyberattacks?
  • What are the ramifications, and types of harm, that may occur if the system makes an inaccurate prediction?
  • Will human operators receive notice if the system begins making a potentially dangerous number of inaccurate predictions?
  • Did the system's designers incorporate a way to prevent human biases — on the basis of race, creed, etc. — from entering the decision-making algorithms?
  • How easily usable is this system for individuals with developmental or physical disabilities or other special needs?
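Several of these checklist items, notably the kill switch and the notice when inaccurate predictions pile up, describe straightforward engineering safeguards. As a minimal sketch (the class name, thresholds, and API here are illustrative assumptions, not anything specified in the EU document), a prediction service could track its own error rate and halt itself, handing control back to a human, once errors become too frequent:

```python
class MonitoredModel:
    """Wrap a prediction function with a kill switch and an
    error-rate alarm. Names and thresholds are illustrative only."""

    def __init__(self, predict_fn, max_error_rate=0.2, min_samples=10):
        self.predict_fn = predict_fn
        self.max_error_rate = max_error_rate  # tolerated fraction of errors
        self.min_samples = min_samples        # don't judge on too few outcomes
        self.errors = 0
        self.total = 0
        self.halted = False                   # the "kill switch" state

    def predict(self, x):
        if self.halted:
            raise RuntimeError(
                "System halted; control delegated to a human operator.")
        return self.predict_fn(x)

    def record_outcome(self, was_correct):
        """Feed back ground truth; halt if errors grow too frequent."""
        self.total += 1
        if not was_correct:
            self.errors += 1
        if (self.total >= self.min_samples
                and self.errors / self.total > self.max_error_rate):
            self.halt()

    def halt(self):
        """Kill switch: immediately cease operations."""
        self.halted = True
```

Real systems would need far more nuance (alerting, audit logs, graceful degradation), but the checklist's questions map quite directly onto mechanisms a team can actually build and test.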

It's not difficult to imagine some of the specific cases the EU's slate of experts had in mind as they drew up these guidelines. Given the number of potential applications of AI in human life, there's an emerging sense of urgency when it comes to formulating common-sense guidelines — followed closely by enforceable laws — for how technology companies engage in the design of AI systems.


We Need Sensible AI Guidelines With the Weight of Law Behind Them

Tesla promises fully autonomous functionality in its cars by the end of 2019, and Elon Musk is on record saying his car company will accept liability in the event of an accident, provided the software made a mistake or an erroneous leap in logic. This means AI-powered driverless cars must collect a wide variety of data at all times while the vehicle is in motion.

In parts of the U.S., artificial intelligence is being actively explored as a means to predict whether offenders will commit crimes again in the future. Closer studies of the accuracy of these systems revealed that they awarded higher "crime likelihood scores" to Black defendants than to white defendants.

China's social credit system relies on artificially intelligent algorithms to judge citizens' creditworthiness and grant or restrict privileges and rights based on their public behavior.

Artificial intelligence even has the potential to supplant the research of human geneticists and chemists on the hunt for life-saving medications and to help pharmaceutical companies bring drugs to market faster. The focus of such efforts must be the greatest good rather than the greatest profitability.

The EU signals its belief that the public has a right to an accounting of the types of data these systems gather from the world around them, the potential for human biases to infiltrate their governing algorithms, the explainability of the logic behind an AI system's decisions, and much more. Other groups throughout the world are lending their own voices to this timely conversation. It's a good sign for things to come, but governing bodies need to follow through by turning guidelines into laws to keep our innovators honest and protect the public from potential harm.

Bio: Nathan Sykes is a business and technology freelancer and blogger from Pittsburgh, PA. To read his latest articles, check out his blog, Finding an Outlet.