European AI Act: The Simplified Breakdown
The AI act aims to ensure excellence in the EU, provide the correct conditions for the development of AI and guarantee that AI systems are beneficial to people.
On the 19th of February 2020, the European Commission published a White Paper on AI, “A European approach to excellence and trust”. Later, on the 21st of April 2021, the Commission published its proposal for legislation regulating the use of AI: the AI Act.
According to the EU, some AI systems are complex, unpredictable, and opaque. The Act aims to ensure that these different types of AI systems respect fundamental rights and earn users' trust, whilst reducing market fragmentation.
The main objectives of the Act are to ensure that:
- AI systems available on the European market are safe
- These AI systems respect the values of the EU and the rights of its citizens
- These AI systems provide legal certainty to facilitate investment and innovation in AI
- These AI systems are trustworthy, so that the market does not fail
- Existing legislation on the safety and rights requirements of these systems is improved
However, this law's influence may not stop at the EU; other countries have already looked into new ways to provide more transparent AI systems. In September 2021, Brazil passed a bill that creates a legal framework for artificial intelligence.
AI Risk Framework
The law categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
Minimal or No Risk
Minimal-risk or no-risk AI systems are permitted with no restrictions, although providers are encouraged to adhere to voluntary codes of conduct. The Commission predicts that most AI systems will fall into this ‘minimal or no risk’ category.
These types of AI systems include spam email filtering.
Limited Risk
These AI systems are also permitted; however, they are subject to transparency obligations, such as providing more in-depth information and resources like technical documentation. Providers may also choose to adhere to voluntary codes of conduct.
These types of AI systems include chatbots.
High Risk
These AI systems pose the greatest permitted risk. They are allowed on the market, but only if they comply with strict requirements and undergo ex-ante/ex-post conformity assessments. The conformity assessment must be performed before the system enters the market.
These AI systems will either be:
- AI systems used as a safety component of a product, for example, medical devices
- Stand-alone high-risk AI systems, for example, those used in law enforcement
Unacceptable Risk
These types of AI systems are completely prohibited, as they pose an ‘unacceptable risk’ to people's safety and rights.
An example is the exploitation of children or mentally disabled persons, such as a child's doll with an integrated voice assistant that could encourage the user to behave dangerously.
The Requirements For High-Risk AI Systems
High-risk AI systems face the strictest requirements. These include:
Data and Data Governance - ensuring that these AI systems use high-quality data that is relevant and representative
Documentation and Record-Keeping - create documentation and logging features to help with traceability and auditability, as well as to ensure that the AI systems are compliant.
Transparency and Provision of Information to Users - provide users with information, such as how to use the system, to ensure transparency.
Human Oversight - human intervention is imperative during the building phase of the AI system, as well as during its implementation.
Robustness, Accuracy, and Cybersecurity - these qualities are important to any AI system, to protect both the business and the user.
Companies that breach the Act, from manufacturers of these AI systems to distributors, can face severe fines. These are split into three levels of sanctions, depending on the severity of the breach.
Up to 10 Million Euros
This is the lowest level of fine stated in the AI Act, and can be imposed for supplying incomplete or false information to the authorities. It can be up to 10 million Euros or 2 percent of a firm's worldwide annual turnover.
Up to 20 Million Euros
This is the next level of fine and can be imposed for a breach of the Act's requirements for an AI system, for example, failing to provide the technical documentation needed for transparency. It can be up to 20 million Euros or 4 percent of a firm's worldwide annual turnover.
Up to 30 Million Euros
This is the maximum fine and can be imposed for the use of a prohibited AI system, or for an AI system whose quality does not meet the criteria. It can be up to 30 million Euros or 6 percent of a firm's worldwide annual turnover.
Although there is more to review and nothing is set in stone yet, the AI Act has already raised a lot of concern for businesses, manufacturers, and distributors both inside and outside of the EU.
Nisha Arya is a Data Scientist and Freelance Technical Writer. She is particularly interested in providing Data Science career advice, tutorials, and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner seeking to broaden her tech knowledge and writing skills, whilst helping guide others.