The Ethics of AI: Navigating the Future of Intelligent Machines

Why does the continued growth and future of intelligent machines raise ethical concerns?





Depending on your background and experiences, everyone has a different opinion on artificial intelligence and its future. Some believed it was just another fad that would soon die out, whilst others saw huge potential to implement it in our everyday lives. 

At this point, it’s safe to say that AI is having a big impact on our lives and is here to stay. 

With recent advancements in AI technology such as ChatGPT, and autonomous systems such as Baby AGI, we can count on artificial intelligence continuing to advance in the future. This is nothing new; it is the same drastic change we saw with the arrival of computers, the internet, and smartphones.

A few years ago, a survey of 6,000 consumers across six countries found that only 36% were comfortable with businesses using AI, and 72% expressed some fear about its use.

As interesting as this is, it can also be concerning. With much more expected from AI in the future, the big question is: what are the ethics around it?

The most rapidly developing and widely implemented area of AI is machine learning. It allows models to learn and improve from past experience by exploring data and identifying patterns with little human intervention. Machine learning is used in different sectors, from finance to healthcare. We have virtual assistants such as Alexa, and now we have large language models such as ChatGPT. 
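To make that idea concrete, here is a minimal sketch in Python of a model learning patterns from labelled examples with little human intervention. The choice of scikit-learn, a random forest, and the built-in breast cancer dataset are illustrative assumptions, not something this article prescribes:

```python
# A minimal sketch, assuming scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Past experience": historical records with known outcomes
# (a built-in healthcare-style dataset, used purely for illustration).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# The model explores the training data and identifies patterns on its own.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# It is then judged on examples it has never seen before.
print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```

No rules were hand-written here; the model's behaviour comes entirely from the data it was shown, which is exactly why the ethical questions below centre on that data and on how the resulting decisions are used.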

So how do we determine the ethics around these AI applications, and how will they affect the economy and society? 


The Ethical Concerns of AI


There are a few ethical concerns surrounding AI:

1. Bias and Discrimination

Although data is the new oil and we have a lot of it, there are still concerns about AI being biased and discriminatory based on the data it is trained on. For example, facial recognition applications have proven to be highly biased and discriminatory towards certain ethnic groups, such as people with darker skin tones. 

Even though some of these facial recognition applications showed high racial and gender bias, companies such as Amazon refused to stop selling the product to governments in 2018. 

2. Privacy

Another concern around the use of AI applications is privacy. These applications require vast amounts of data to produce accurate outputs and achieve high performance, which raises concerns about how that data is collected, stored, and used. 

3. Transparency

Although AI applications are fed data, there is significant concern about the transparency of how these applications reach their decisions. This lack of transparency raises the question of who should be held accountable for the outcome. 

4. Autonomous Applications

We have seen the birth of Baby AGI, an autonomous task manager. Autonomous applications have the ability to make decisions with little or no human involvement. This naturally opens the public's eyes to leaving decisions to be made by technology, which could be deemed ethically or morally wrong in society's eyes. 

5. Job Security

This concern has been an ongoing conversation since the birth of artificial intelligence. With more and more people seeing that technology can do their jobs, such as ChatGPT creating content and potentially replacing content creators, what are the social and economic consequences of implementing AI into our everyday lives? 


The Future of Ethical AI


In April 2021, the European Commission published its proposal for the Artificial Intelligence Act (AI Act). The act aims to ensure that AI systems respect fundamental rights and provide users and society with trust. It contains a framework that groups AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. You can learn more about it here: European AI Act: The Simplified Breakdown.

Other countries, such as Brazil, also passed a bill in 2021 creating a legal framework for the use of AI. We can therefore see that countries and regions around the world are looking further into the use of AI and how it can be used ethically. 

The fast pace of AI advancement will have to align with these proposed frameworks and standards. Companies building or implementing AI systems will have to follow ethical standards and assess their applications to ensure transparency and privacy and to account for bias and discrimination. 
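To give a flavour of what one small part of such an assessment could look like, here is a hypothetical sketch of a bias check that compares a model's accuracy across demographic groups. The data, group names, and simulated error rates are invented purely for illustration; they do not come from any real system or from this article:

```python
# A hypothetical bias-audit sketch: the groups, labels, and "model" below
# are simulated only to show the shape of such a check.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated demographic group membership and true outcomes.
groups = rng.choice(["group_a", "group_b"], size=n)
y_true = rng.integers(0, 2, size=n)

# Simulate a model that makes far more mistakes on group_b.
error_rate = np.where(groups == "group_b", 0.30, 0.05)
flipped = rng.random(n) < error_rate
y_pred = np.where(flipped, 1 - y_true, y_true)

# The audit itself: compare accuracy per group and look at the gap.
accuracies = {}
for g in ("group_a", "group_b"):
    mask = groups == g
    accuracies[g] = float((y_pred[mask] == y_true[mask]).mean())
    print(f"{g}: accuracy = {accuracies[g]:.2%}")

gap = abs(accuracies["group_a"] - accuracies["group_b"])
print(f"Accuracy gap between groups: {gap:.2%}")  # a large gap is one signal of bias
```

A real assessment would go much further, looking at multiple fairness metrics, data provenance, and documentation, but even a simple per-group comparison like this makes hidden disparities visible.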

These frameworks and standards will need to focus on data governance, documentation, transparency, human oversight, and robust, accurate, and cyber-secure AI systems. Companies that fail to comply will, unfortunately, face fines and penalties. 


Wrapping it up


The launch of ChatGPT and the development of general-purpose AI applications have prompted scientists and politicians to establish legal and ethical frameworks to avoid any potential harm from AI applications. 

This year alone, many papers have been released on the use of AI and the ethics surrounding it, for example, Assessing the Transatlantic Race to Govern AI-Driven Decision-Making through a Comparative Lens. We will continue to see more and more papers released until governments publish a clear and concise framework for companies to implement. 

Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice, tutorials, and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills while helping guide others.