
The AI Act is coming – what’s in store?

By Arnoud Engelfriet


The widespread and fast adoption of AI has prompted serious concerns. From algorithmic bias to erosion of personal autonomy, AI technologies have had unintended consequences that impact the lives of people across the globe. Such risks have stirred public and policy debates, leading to an increasingly urgent call for effective regulation. The European drive to regulate AI was sparked by these ethical concerns, although it is also part of a broader approach, called the Digital Decade 2030.


The call for ethics in AI

There is a growing realization that the impact of AI goes beyond technological efficiency and enters the realm of human rights, societal values, and fundamental ethical considerations. The last five years in particular have prompted an increasing public and academic dialogue on the need for ethical frameworks that ensure these technologies are designed and deployed responsibly. This wave of attention, while partly driven by academic interest and foresight, has been propelled into mainstream discussion by several high-profile incidents that have exposed the darker side of AI, notably the 2018 Facebook/Cambridge Analytica scandal.


The ‘techlash’ that resulted from these and other incidents ushered in a period of increased attention to ‘honest’ or ‘ethical’ AI. Initially, the main effect was to produce codes of ethics that made grand promises of fairness and transparency (“virtue signaling”) but did little to change the actual products or services. As is well known from the field of business ethics, changing a company’s behavior requires more than ethical statements: “[w]hen ethical ideals are at odds with a company’s bottom line, they are met with resistance”. Hence the logical next step – calls for legislation.


The EU first signaled its intention to regulate AI in a Commission strategy document in April 2018. The strategy had three aims: to boost the EU's technological and industrial capacity and AI uptake, to encourage the modernization of education and training, and to ensure an appropriate ethical and legal framework based on the Union's values and in line with the Charter of Fundamental Rights of the EU. This work resulted in an ethical framework for “Trustworthy AI”, the key term underlying all AI regulatory efforts in the EU. These Ethics Guidelines for Trustworthy AI systematically analyze ethical concerns in AI and propose a concrete framework for addressing them. The framework consists of three components:


  1. AI should be lawful, complying with all applicable laws and regulations;

  2. AI should be ethical, ensuring adherence to ethical principles and values; and

  3. AI should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.


The AI Act seeks to enforce each of these components, and does so from a risk-based perspective. Its provisions do not merely ban or restrict certain actions; they tie compliance requirements to risk management, or ban activities that present specific risks to human beings.


For implementors of AI, the Guidelines translate these principles into seven concrete requirements:


  1. Human agency and oversight, including fundamental rights, human agency and human oversight.

  2. Technical robustness and safety, including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.

  3. Privacy and data governance, including respect for privacy, quality and integrity of data, and access to data.

  4. Transparency, including traceability, explainability and communication.

  5. Diversity, non-discrimination and fairness, including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.

  6. Societal and environmental wellbeing, including sustainability and environmental friendliness, social impact, society and democracy.

  7. Accountability, including auditability, minimisation and reporting of negative impact, trade-offs and redress.


Autonomy as a definitional key

While these ethical guidelines give a clear indication of which aspects of AI are to be regulated and to what end, a key issue is missing – what exactly is Artificial Intelligence? The leading handbook AI: A Modern Approach alone presents eight different definitions of AI organized into four categories: thinking humanly, acting humanly, thinking rationally, and acting rationally.


In its 2018 Communication that kicked off the process that would ultimately produce the AI Act, the European Commission gave a loose definition of AI: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” While this definition still uses the problematic term ‘intelligence’, it did provide a hint for a new approach to defining AI: autonomy. The AI Act is poised to adopt the 2019 OECD definition of “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.


The given justification is to align with international developments and to provide a definition that is flexible enough to accommodate the rapid technological developments in this field. The choice of autonomy as the key criterion underlines the risk-based approach of the AI Act: generally speaking, it is precisely because AI systems can operate autonomously that many of the associated risks arise. It is true that this definition captures many algorithm-driven systems that would not directly be called “AI” in their marketing literature. However, given the risk-management nature of the AI Act, this outcome is unavoidable.


Three levels of risk

The AI Act seeks to regulate AI from a risk-based perspective. The term ‘risk’ here refers to risks to humanity: health, safety, fundamental rights, democracy and the rule of law, and the environment. AI systems thus cannot be considered in isolation but must be evaluated after identifying potential risks to any of these aspects.


In broad terms, the AI Act divides AI systems into three categories:

  1. Unacceptable risk AI. This type of AI threatens basic human values or dignity to such an extent that it simply may not be put on the European market. Alongside subliminal manipulation and social credit systems, real-time biometric monitoring is a prominent example.

  2. High risk AI. This category of AI poses significant risks, but its benefits may outweigh those risks if sufficient precautions are taken. The bulk of the compliance work under the AI Act relates to this type of AI system.

  3. Low risk AI. An AI system that does not fall into the high-risk category is deemed “low risk” and only faces minimal requirements, such as transparency regarding its status as an AI and not being allowed to make decisions on human behavior or actions.


There is also a “regulatory sandbox”, the purpose of which is to foster AI innovation by establishing a controlled experimentation and testing environment for the development and pre-marketing phase, with a view to ensuring that innovative AI systems comply with the AI Act and other legislation. Participation by human subjects should only be possible with specific informed consent.


Some of the biggest debates surrounding the AI Act focused on which AI systems should be classified where. The approach chosen is to list all unacceptable-risk AI practices in the AI Act itself, to provide maximum clarity and a clear barrier to change: amending the list would require a new version of the Act. High-risk AI is defined in a two-step approach: an annex lists certain “critical areas and use cases”, and the Act deems an AI practice “high risk” if a use or area “poses a significant risk of harm to the health, safety or fundamental rights of natural persons”.


Examples of high-risk areas include employment, access to essential services, law enforcement and the administration of justice. An AI system in any of these areas must therefore be checked for the specific risks it may cause: a job applicant screening algorithm may be biased against certain ethnic groups, access to services may be made harder for the poor, the environment may be unfairly threatened, and so on. If such a risk is identified, the AI system is high risk and faces a mountain of compliance items. However, if the risk can be minimized (or is not present to begin with), the AI system would not be high risk even if it operates in a high-risk area. There is still some debate on the burden of proof: may an AI provider simply self-certify that no risks are present, would an impact assessment or other formal analysis be required, or would prior involvement of a supervisory authority be necessary?
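

To make this two-step logic concrete, here is a minimal sketch in Python of how the risk tiers described above could be modeled. It is purely illustrative: the practice and area lists are abbreviated stand-ins for the Act's actual enumerations, and the poses_significant_risk flag stands in for a documented risk assessment.

    # Illustrative sketch only: models the risk tiers described in this article.
    # The sets below are placeholders, not the Act's actual lists and annexes.

    PROHIBITED_PRACTICES = {
        "subliminal manipulation",
        "social credit scoring",
        "real-time biometric monitoring",
    }

    HIGH_RISK_AREAS = {  # stand-in for the annex of critical areas and use cases
        "employment",
        "essential services",
        "law enforcement",
        "administration of justice",
    }

    def classify(practice: str, area: str, poses_significant_risk: bool) -> str:
        """Return the risk tier of an AI system under this simplified logic."""
        if practice in PROHIBITED_PRACTICES:
            return "unacceptable risk: may not be placed on the EU market"
        # Step 1: is the system used in a listed critical area?
        # Step 2: does it pose a significant risk to health, safety or fundamental rights?
        if area in HIGH_RISK_AREAS and poses_significant_risk:
            return "high risk: full compliance regime applies"
        return "low risk: minimal (mainly transparency) requirements"

    # Example: a CV screening tool where bias against applicants cannot be ruled out
    print(classify("cv screening", "employment", poses_significant_risk=True))

In this sketch, a system in a critical area that demonstrably poses no significant risk falls back to the low-risk tier, which is exactly where the burden-of-proof debate mentioned above comes in.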


Geographical application

The AI Act is a European law, but there is a peculiarity with such legislation: the Brussels effect. As one of the largest and most integrated economies in the world, the EU's stringent regulations often set the standard for global norms. An early example is the 2007 REACH Regulation (Registration, Evaluation, Authorization, and Restriction of Chemicals): chemical companies must identify and manage the risks linked to the substances they manufacture and market in the EU. Its influence has extended beyond EU borders, leading chemical companies worldwide to adopt similar practices to ensure market access in the EU. The 2016 General Data Protection Regulation (GDPR) is also widely cited as exhibiting a similar effect in other countries.


The AI Act clearly follows the lead of the GDPR in setting a worldwide scope. The schematic below illustrates the key terminology of the AI Act. A provider of AI systems creates and/or releases an AI system under its own brand, and a deployer puts that system to use. (These can be one and the same entity, of course.) A deployer may need an importer or other distributors to get the AI system to its market. Finally, the use of an AI system may impact certain affected persons, who may or may not be aware of the AI system's very existence.


Under the AI Act, any provider established in the European Union is subject to its regulations. The same applies to a deployer, as well as to importers and distributors who first introduce the AI system into the European market (e.g. Google’s Play Store or the Apple App Store). These are the clear-cut cases. More controversial is the case where the deployer is outside the EU and the AI system’s output “is intended to be used in the Union”. This may apply to a European firm outsourcing certain work to a third party outside the EU, with the third party employing an AI to do the work.
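

To illustrate how this territorial scope plays out, the following minimal Python sketch models the rules just described. The Actor structure and its attribute names are invented for this example; the Act's actual scope provisions are more nuanced.

    # Illustrative sketch of the AI Act's territorial scope as described above.
    # The Actor class and its fields are invented for this example.

    from dataclasses import dataclass

    @dataclass
    class Actor:
        role: str                 # "provider", "deployer", "importer" or "distributor"
        established_in_eu: bool
        output_used_in_eu: bool = False

    def in_scope(actor: Actor) -> bool:
        """Rough test of whether the AI Act applies to this actor."""
        if actor.established_in_eu:
            return True   # providers and deployers established in the EU
        if actor.role in ("importer", "distributor"):
            return True   # they introduce the AI system onto the EU market by definition
        # Non-EU providers and deployers are caught when the system's output
        # is intended to be used in the Union.
        return actor.output_used_in_eu

    # Example: a non-EU outsourcing partner whose AI-generated work product
    # is delivered to a European client.
    print(in_scope(Actor(role="deployer", established_in_eu=False, output_used_in_eu=True)))

The final return line captures the controversial “output used in the Union” criterion discussed above.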


A remote surveillance or analytics operation would also fall under this provision. The third party doing so would be required to appoint a representative in the EU, who would be legally liable for any failures to comply with the law by the non-EU provider and/or deployer. (This mechanism was borrowed directly from the GDPR.)

If no representative is appointed, any use of the AI system is banned in the EU.


Compliance work

If and when an AI system is deemed to be high risk, its introduction on the European market is subject to stringent requirements. These are aimed at reducing the risk, e.g. through formal assessments of any bias and documented steps taken to explicitly balance output. Data management and risk management processes must be put in place and personnel must be trained to diligently work with AI systems, rather than blindly accepting their output. Full transparency of the system’s design (in terms of data provenance and algorithm operation) is required, as is the use of the highest standards in cybersecurity and robustness. A quality management system serves as a backstop and stimulates constant improvement and error correction. Full documentation and information should be available to both supervisory authorities and affected persons.
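

Teams preparing for this can treat the obligations just listed as a checklist. The sketch below shows one possible way to track them in Python; the item names are a paraphrase of the requirements described in this section, not an official template.

    # Illustrative compliance checklist for a high-risk AI system,
    # paraphrasing the obligations described above. Not an official template.

    HIGH_RISK_CHECKLIST = {
        "risk management": "documented, repeatable assessment and mitigation of risks",
        "data governance": "provenance, quality and bias checks on data",
        "human oversight": "trained personnel who can question and override outputs",
        "transparency": "design documentation covering data provenance and algorithm operation",
        "robustness and cybersecurity": "highest standards of accuracy, resilience and security",
        "quality management": "continuous improvement and error-correction processes",
        "documentation": "information available to supervisory authorities and affected persons",
    }

    def open_items(status: dict) -> list:
        """Return the checklist items that have not yet been marked as done."""
        return [item for item in HIGH_RISK_CHECKLIST if not status.get(item, False)]

    # Example: only the quality management system is in place so far
    print(open_items({"quality management": True}))

Such a checklist is of course no substitute for the formal conformity assessment described next, but it makes the scope of the work visible early.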


Readers may be familiar with the old CE logo; “Conformité Européenne” is a 1992 EU standard for certification that a system meets European regulatory standards. The AI Act will elevate that standard: an AI system may only be brought onto the market if it bears the CE logo, and to be allowed to affix it, full documentation must prove that a proper assessment of all risks has been made, with steps to reduce impact and processes to maintain quality.


The CE logo also triggers a different aspect of EU law: a product that turns out to carry defects despite bearing the logo is considered noncompliant, and its producer is subject to civil claims for damages from purchasers. The burden of proof is reversed: if the purchaser can establish a reasonable link between the damage and the apparent defect, the producer must show convincingly that no such link is in fact present. Otherwise, the producer will have to pay for the damages in full – and no small print or terms of service can prevent that. Combine this with the rising trend of mass tort claims in European countries and you understand why AI providers are getting worried.


Worries exist all the more because of the planned system of supervisory authorities, with powers to impose fines of up to 10 million euros for compliance violations and up to 40 million euros for releasing unauthorized high-risk or prohibited AI systems onto the European market. Additional powers have not been settled yet but are likely to include a ban on further use of the AI system or even the destruction of the underlying dataset.


Moving forward in the age of the AI Act

The AI Act is a wake-up call for businesses, developers, and stakeholders to prioritize ethical considerations, human rights, and societal values in their AI endeavors. The Act's risk-based approach, with its clear categorization of AI systems, underscores the EU's commitment to ensuring that AI serves humanity responsibly and safely. The message is clear: passive observation is no longer an option. The challenges posed by the AI Act are manifold, from ensuring compliance with its stringent requirements to navigating the potential legal and financial repercussions of non-compliance. Yet, these challenges also present an opportunity. By actively working towards compliance, organizations can not only mitigate risks but also position themselves as leaders in the responsible development and deployment of AI. The time to act is now. Embrace the AI Act's principles, invest in ethical AI practices, and lead your organization into a future where AI is not just technologically advanced but also ethically sound.


 

About the Author

Arnoud Engelfriet is director and Chief Knowledge Officer of ICTRecht Legal Services in Amsterdam, the Netherlands. He is the author of the upcoming book “AI and Algorithms: Mastering Legal & Ethical Compliance”, from which the above text is an edited pre-publication excerpt. The book will appear in January 2024 and can be pre-ordered through MasteringAIcompliance.com.
