24/10/2024
Artificial Intelligence (AI) has swiftly revolutionised industries, economies, and societies across the globe. Acknowledging both the vast potential and the intrinsic risks of AI technology, the European Union (EU) has introduced comprehensive legislation to govern the development and use of AI. Below, Rob Nidschelm delves into the history, milestones, and objectives of the EU’s new AI legislation, explores its relationship with other regulatory frameworks (such as GDPR, NIS2, and DORA), and examines similar initiatives outside the EU.
A Brief History of AI Legislation in the EU
The EU’s legislative efforts on AI began in earnest in the late 2010s, aiming to nurture innovation while safeguarding citizens’ rights and safety. Key milestones include:
- April 2018: The European Commission unveiled the European Strategy on Artificial Intelligence, concentrating on amplifying public and private investments in AI, preparing for socio-economic transformations, and ensuring an appropriate ethical and legal framework.
- December 2018: Adoption of the Coordinated Plan on Artificial Intelligence, encouraging collaboration among member states to maximise the impact of AI investments at both EU and national levels.
- April 2019: The High-Level Expert Group on AI published the Ethics Guidelines for Trustworthy AI, outlining principles such as transparency, accountability, and human oversight.
- February 2020: Release of the White Paper on Artificial Intelligence by the European Commission, proposing policy options to enable trustworthy and secure development of AI in Europe.
- April 2021: Introduction of the Artificial Intelligence Act (AI Act) proposal, aiming to establish a legal framework for AI that balances innovation with the protection of fundamental rights.
Understanding the Artificial Intelligence Act
The AI Act represents a landmark piece of legislation that regulates AI technologies according to their potential risks. It adopts a risk-based approach, categorising AI systems to ensure appropriate levels of regulation without stifling innovation.
At the top of the hierarchy are AI systems that pose an “unacceptable risk”. These are banned outright due to their potential to threaten safety, livelihoods, or fundamental rights. Examples include systems that manipulate human behaviour to circumvent users’ free will, and systems that enable social scoring by governments.
Next are “high-risk” AI applications, which are subject to stringent obligations before they can be placed on the market. Such systems are typically used in critical sectors like healthcare, transportation, and law enforcement. The obligations include conducting risk assessments, ensuring high-quality datasets, maintaining activity logs, and allowing for human oversight.
“Limited risk” systems are those with specific transparency obligations. For example, chatbots must inform users that they are interacting with a machine, to ensure informed consent.
Finally, “minimal risk” AI systems can be developed and used freely; they remain subject to existing legislation but face no additional requirements under the Act.
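To make the four tiers concrete, here is a minimal, purely illustrative Python sketch of how an organisation might encode the categories and their headline obligations in an internal triage tool. The tier names follow the Act, but the OBLIGATIONS mapping and the compliance_checklist helper are hypothetical simplifications of the legal text, not anything the legislation itself prescribes.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the AI Act's four categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping from tier to internal compliance actions;
# the Act defines its obligations in legal text, not code.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy: prohibited practice"],
    RiskTier.HIGH: [
        "conduct and document a risk assessment",
        "validate training data quality",
        "log system activity for traceability",
        "provide for human oversight",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with an AI system"],
    RiskTier.MINIMAL: ["apply existing legislation; no additional AI Act obligations"],
}

def compliance_checklist(tier: RiskTier) -> list[str]:
    """Return the internal checklist for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in compliance_checklist(RiskTier.HIGH):
        print(f"- {item}")
```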
The AI Act in Relation to GDPR, NIS2, and DORA
The AI Act does not exist in a vacuum; it complements and intersects with other significant EU regulatory frameworks, creating a cohesive environment for technology governance.
The General Data Protection Regulation (GDPR), enforced since May 2018, is the bedrock of data protection and privacy in the EU. The AI Act builds upon GDPR’s principles by addressing data quality and governance in AI systems. Both regulations emphasise the protection of personal data, transparency, and individuals’ rights. For instance, AI systems must ensure data minimisation and lawful processing, aligning with GDPR requirements.
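As a simple illustration of the data-minimisation principle the two frameworks share, the sketch below filters out fields that are not needed for a declared processing purpose before a record enters an AI pipeline. The purposes, field names, and ALLOWED_FIELDS allow-list are hypothetical; in practice, minimisation decisions rest on a documented legal basis, not merely a filter.

```python
# Illustrative data minimisation: retain only the fields required
# for a declared processing purpose before records enter an AI pipeline.
# The purposes and field names below are hypothetical examples.
ALLOWED_FIELDS = {
    "credit_scoring": {"income", "outstanding_debt", "payment_history"},
    "chat_support": {"message_text", "session_id"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not required for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "A. Example",       # not needed for scoring: removed
    "income": 42_000,
    "outstanding_debt": 3_500,
    "payment_history": "on_time",
    "religion": "undisclosed",  # special-category data: removed
}
print(minimise(record, "credit_scoring"))
```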
The Network and Information Security Directive 2 (NIS2) aims to reinforce cybersecurity across the EU, covering critical sectors and essential services. The AI Act aligns with NIS2 by ensuring that AI systems, particularly high-risk ones, are secure and resilient against cyber threats. This alignment is crucial because AI systems can become targets of, or tools for, cyberattacks, potentially compromising safety and privacy.
The Digital Operational Resilience Act (DORA) focuses on the financial sector’s ability to withstand and recover from ICT-related disruptions. The AI Act complements DORA by ensuring that AI systems used in finance are reliable and secure. Together, they promote operational resilience, emphasising risk management, incident reporting, and robust oversight.
By aligning the AI Act with GDPR, NIS2, and DORA, the EU creates a unified regulatory environment that addresses data protection, cybersecurity, and operational resilience, fostering a trustworthy ecosystem for AI development and deployment.
Objectives of the New AI Legislation
The EU’s AI legislation aspires to achieve several key objectives. A leading objective is the protection of fundamental rights and safety, ensuring that AI systems are developed and utilised in ways that respect principles such as non-discrimination, privacy, and data protection.
Promoting trustworthy AI is another central aim. By establishing clear rules and standards, the legislation seeks to build public trust in AI technologies, which is essential for their adoption and acceptance. The EU also wants to foster innovation by creating a single market for lawful, safe, and trustworthy AI applications, limiting market fragmentation and providing legal certainty for businesses and innovators.
Ensuring transparency and accountability is crucial. The legislation mandates transparency measures, such as disclosing when individuals are interacting with AI systems, and ensuring that systems are auditable and accountable. This openness is designed to empower users and maintain public confidence in AI technologies.
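The sketch below pairs the two ideas in this paragraph, assuming a simple chatbot setting: a user-facing disclosure that the counterpart is a machine, and an append-only log entry recording that the disclosure was shown. The disclosure text, log format, and file-based storage are hypothetical simplifications; a production system would need durable, tamper-evident logging.

```python
import json
import time

# Hypothetical disclosure wording; the Act requires disclosure but
# does not prescribe specific text.
DISCLOSURE = "You are chatting with an automated assistant, not a human."

def audit_log(event: str, detail: dict) -> None:
    """Append a timestamped, machine-readable record for later audit."""
    entry = {"ts": time.time(), "event": event, **detail}
    with open("audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")

def start_session(session_id: str) -> str:
    # Disclose the AI nature of the system up front and record that
    # the disclosure was shown, so the interaction remains auditable.
    audit_log("disclosure_shown", {"session_id": session_id})
    return DISCLOSURE

print(start_session("session-001"))
```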
Impact on Businesses
For companies operating within the EU or dealing with EU citizens’ data, the AI Act presents both challenges and opportunities. Businesses will need to assess their AI systems to determine their risk category and ensure that they are compliant with the relevant obligations. This may involve significant adjustments to their development and deployment processes.
Compliance may require investment in new systems and processes, particularly for those deploying high-risk AI systems. However, clear regulations can provide a stable environment for innovation, encouraging investment in AI technologies that are compliant and trustworthy. As the EU often sets precedents in regulatory standards (as seen with GDPR), businesses stand to benefit from aligning with EU regulations, potentially gaining a competitive advantage in global markets.
Similar Legislation Outside the EU
The recognition of AI’s profound impact is not limited to Europe; globally, countries are developing their own frameworks to regulate AI, reflecting a worldwide trend towards responsible AI governance.
In the United States, a sectoral approach has been adopted, with various federal agencies issuing guidelines specific to their domains. The Algorithmic Accountability Act has been proposed to require companies to assess the impacts of automated decision systems and mitigate any risks. While not yet enacted, this legislation signifies a growing awareness of the need for AI oversight.
The United Kingdom, post-Brexit, is crafting its AI strategy with a focus on pro-innovation regulation. The UK plans to introduce a regulatory framework that encourages innovation while addressing risks associated with AI. This approach seeks to balance the UK’s position as a leader in AI development with the necessity of safeguarding public interests.
In China, the government has implemented regulations on AI, particularly focusing on data security and the ethical use of AI. China’s approach combines strict government oversight with an aggressive push for technological leadership in AI. Regulations emphasise the need for AI to align with social values and national security interests.
Moving to Canada, the government has proposed the Artificial Intelligence and Data Act (AIDA), aiming to regulate high-impact AI systems and ensure they are developed and deployed responsibly. The act would require organisations to adopt measures to mitigate risks and establish oversight mechanisms.
In Australia, the government released the AI Ethics Framework, providing voluntary principles to guide businesses and governments in designing, developing, and implementing AI. While not legally binding, it reflects Australia’s commitment to ensuring AI technologies are safe, secure, and reliable.
In Japan, the government has promoted the Social Principles of Human-Centric AI, focusing on principles such as human rights, privacy, and the promotion of innovation. Japan’s approach emphasises the harmonious coexistence of humans and AI, aiming to foster public trust and acceptance.
In Brazil, the government is considering the Legal Framework for Artificial Intelligence, which seeks to establish principles, rights, and duties for the development and application of AI. The framework focuses on promoting innovation while ensuring respect for ethical standards and fundamental rights.
In South Africa, as part of the African continent’s growing interest in AI, the government has begun exploring AI’s potential and the necessary regulatory responses. The Presidential Commission on the Fourth Industrial Revolution has recommended developing a comprehensive policy and legislative framework for AI, focusing on inclusive growth and ethical considerations.
These initiatives span continents and underscore a global movement towards regulatory frameworks that balance innovation with ethical considerations and risk management. Each country’s approach reflects its unique socio-economic context, legal traditions, and strategic priorities, contributing to a diverse global landscape of AI governance.
Looking Ahead
The AI Act was formally adopted by the European Parliament and the Council in 2024 and entered into force on 1 August 2024. Its obligations take effect in phases over the following years, giving stakeholders a transition period to adapt. The integration with GDPR, NIS2, and DORA underscores the EU’s holistic approach to regulation, ensuring that AI systems are not only innovative but also secure, transparent, and respectful of individual rights.
Key Takeaways
- The EU is leading the way in creating comprehensive AI legislation with the AI Act.
- The AI Act complements other regulations like GDPR, NIS2, and DORA, creating a cohesive framework for data protection, cybersecurity, and operational resilience.
- A risk-based approach categorises AI systems into unacceptable, high, limited, and minimal risk.
- The legislation seeks to protect fundamental rights, promote trust, foster innovation, and ensure transparency.
- Businesses must prepare to comply with new obligations, which may impact development and deployment strategies.
- Similar regulatory efforts are underway globally, reflecting a worldwide recognition of the need for responsible AI governance.
- Staying informed and engaged with the legislative process is crucial for organisations affected by these changes.
By understanding the history and objectives of the EU’s AI legislation and its relation to other regulatory frameworks, companies can navigate the new regulatory environment effectively and contribute to the development of AI technologies that are both innovative and aligned with societal values.
To discover more about what the EU’s AI framework will mean for you, please contact one of our dedicated experts. We will help you prepare your business for the changes outlined above, and use AI in a legally and ethically responsible way.