Key takeaways from the EU's proposed Artificial Intelligence Act

With the rapid rise and evolution of AI technologies worldwide, the need for a comprehensive regulatory framework has never been more evident. As an entrepreneur in the AI space and a lawyer, I've been observing these developments closely. The AI Act, currently under consideration in Europe, stands out as one of the most comprehensive regulatory frameworks not just in Europe, but across the globe.

What sets the AI Act apart is its potential to serve as a cornerstone for future AI regulations in different jurisdictions, including India. The Act not only proposes stringent requirements for businesses leveraging AI—including transparency, accountability, and data protection—but also endeavors to address ethical questions and implementation challenges.

However, as with any pioneering legislation, the AI Act is not without its imperfections. Critics argue that it falls short in addressing the issue of bias in AI systems, an area that needs urgent attention. Nevertheless, the AI Act represents a significant stride towards establishing a solid foundation for AI regulation.

Aims of the Proposed AI Act

The AI Act envisages a future where AI is governed by a robust set of rules and standards:

  1. Quality and Transparency: The Act seeks to implement rules around data quality, transparency, human oversight, and accountability. By doing so, it hopes to prevent misuse and ensure responsible use of AI.

  2. Ethics and Implementation: Recognizing the ethical complexities that AI brings, the Act aims to address ethical questions and implementation challenges in sectors as diverse as healthcare, education, finance, and energy.

  3. Uniform Regulation: The Act proposes the establishment of a European Artificial Intelligence Board to oversee the uniform application of the regulation across the EU.

  4. Comprehensive Oversight: The Act seeks to regulate AI systems developed or used within the European Union, including those placed on the EU market by foreign entities.

  5. Risk Assessment: The Act plans to implement a classification system to assess the level of risk an AI technology could pose to the health and safety or fundamental rights of a person.

  6. Supporting Innovation: The Act hopes to establish the EU as a leader in the development of safe AI systems by providing the right incentives and active support to startups and SMEs.

Why Regulate AI?

Regulating AI is not just about curbing potential misuse; it's also about fostering trust and promoting ethical use:

  1. Safety Risks: AI systems, if not properly regulated, pose potential safety risks.

  2. Rights and Privacy: The misuse of AI can lead to violations of fundamental rights, including privacy.

  3. Trust and Reliability: Ensuring the reliability of AI output is crucial for fostering public trust in these systems.

  4. Preventing Discrimination: AI systems, if left unchecked, could unintentionally propagate discrimination.

  5. Public Safety: Certain AI applications could pose risks to public safety if not properly regulated.

Risk Levels in AI: A Four-Tier Classification

Recognizing the varied applications and impact of AI, the proposed Act classifies AI systems into four risk levels:

  1. Unacceptable Risk: This category includes AI systems that pose such a high level of risk that their deployment or development is prohibited, like Social Scoring systems.

  2. High Risk: AI systems in this category are allowed but are subject to strict compliance and conformity requirements. These include AI systems used in law enforcement, medical applications, education, employment, critical infrastructure, military, biometrics, the judiciary, administrative AI critical to democratic processes, and other products subject to sectoral legislation.

  3. Limited Risk (Transparency Obligations): This category covers AI systems for which humans must be notified that they are interacting with an AI, such as chatbots. The compliance requirements for these systems are broad but relatively light.

  4. Minimal / No Risk: This category includes AI systems that pose minimal or no risk. These systems are subject only to voluntary codes of conduct drafted by the company and voluntary application of self-imposed requirements (Art. 69). Examples include spam filters and video games.

Mandatory Requirements for High-Risk AI Systems

High-risk AI systems are subject to rigorous requirements under the Act:

  1. Quality Data: High-quality training, validation, and testing data must be used to ensure reliable and unbiased outputs.

  2. Documentation and Logging: Comprehensive technical documentation and logging at every step of the AI system's development and operation are required.

  3. Transparency and User Interaction: High levels of transparency and clear instructions for users to interact with the system are mandated.

  4. Human Oversight: The Act insists on human oversight, robustness, accuracy, and security of the system. The system must also undergo conformity assessment, re-assessment, and post-market monitoring.

  5. Registration and Certification: High-risk AI systems must be registered in a central EU database, affixed with CE marking (Title III, Chapter 4, Art 49), and a declaration of conformity must be provided.

  6. Adherence to Existing Laws: The Act emphasizes that existing legal obligations, like the General Data Protection Regulation (GDPR), continue to apply.

  7. Upholding EU Values: Any AI application that contradicts EU values—for example, subliminal manipulation, exploitation of vulnerable groups, or real-time remote biometric identification in public spaces (with narrow exceptions for authorized law enforcement) (Title II, Art. 5)—is prohibited outright, regardless of risk level.

Noteworthy Points of the Proposed AI Act

The Act carries some crucial considerations that businesses and AI users must be aware of:

  1. Varied Risk Levels for Same Technology: The same technology can be categorized into different risk levels depending on the use-case. For instance, a general-purpose AI system (GPAIS) such as ChatGPT could fall into different categories depending on its application.

  2. Penalties for Non-compliance: The Act proposes steep penalties for non-compliance—up to EUR 30 million or 6% of global annual turnover, whichever is higher.

  3. Wide Scope: The proposed Act governs anyone who provides a product or service that uses AI within the EU, users of AI within the EU, and providers and users located outside the EU where the system's output is used within the EU.

  4. Use of Copyrighted Data: A recent addition to the proposed Act would require companies to disclose their use of copyrighted data.

  5. Neutral Definition of AI: The Act defines AI as neutrally as possible so that it covers techniques not yet known or developed, with the definition open to updating through future amendments.
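To make the penalty ceiling concrete, the sketch below computes the "EUR 30 million or 6% of global annual turnover, whichever is higher" cap described above. This is purely illustrative arithmetic—the actual fine in any case would be set by regulators, and the function name and figures are my own:

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious infringements under the
    proposed AI Act: EUR 30 million or 6% of global annual turnover,
    whichever is higher."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# A company with EUR 200 million turnover: 6% is EUR 12 million,
# so the EUR 30 million floor applies.
print(max_penalty_eur(200_000_000))    # 30000000

# A company with EUR 1 billion turnover: 6% is EUR 60 million,
# which exceeds the floor.
print(max_penalty_eur(1_000_000_000))  # 60000000.0
```

The point of the "whichever is higher" structure is that the fixed floor bites for smaller companies, while the percentage scales the exposure for large multinationals.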

In conclusion, the AI Act heralds a new era of AI regulation. While it may not be perfect, it sets a strong precedent for comprehensive, robust, and ethical AI governance. As businesses and AI users, it's crucial to stay informed and adapt to these regulatory changes to ensure responsible use and continued innovation in the AI space.

Link to the full proposed Act: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206