The EU AI Act is the first major law to regulate AI/ML models, with significant implications for organizations operating within the EU or supplying AI technologies to EU markets. Compliance with the Act is mandatory, and non-compliance can result in significant fines and reputational damage. Organizations must carefully evaluate their AI systems to determine whether they fall under the Act's high-risk category and take the necessary steps to comply.
The EU’s Approach to AI Regulation
The Act categorizes AI applications into four risk levels: unacceptable, high, limited, and minimal risk. Unacceptable-risk AI systems are banned outright, while high-risk systems are subject to specific legal requirements. The Act aims to ensure the safety, transparency, and accountability of AI systems used in the EU, with obligations for providers and deployers scaled to the level of risk the AI poses.
What’s coming next?
The EU AI Act was adopted in March 2024 and entered into force on August 1, 2024. Its first provisions, the bans on unacceptable-risk AI systems, apply from February 2025, and organizations have a 24-month transition period before most of the Act's obligations become fully enforceable in August 2026.
We have compiled the top five actions your company can take today to make compliance easier before the Act is fully implemented:
1. Adopt a Technical Documentation Solution
Article 11 of the Act specifies the key technical documentation requirements for high-risk AI systems. These requirements are crucial for demonstrating compliance with the regulatory framework and for fostering transparency and accountability in the development and deployment of AI. Comprehensive model documentation is a strategic, fundamental element of the Act's approach to ensuring the ongoing delivery of responsible AI solutions.
The technical documentation must cover the following details about the AI system:
1) General Description of the AI System:
- Intended purpose, development personnel, system version, and release date.
- How the AI system interacts, or can be used to interact, with external hardware or software.
- Versions of relevant software or firmware and any requirements related to version updates.
- All forms in which the AI system is placed on the market or put into service.
- The hardware on which the AI system is intended to run.
- Photographs or illustrations showing the AI system where it is a component of products.
- Instructions of use for the deployer and a basic description of the user interface.
2) Detailed Description of the AI System's Development:
- Outline of the AI development process, including the use and modification of pre-trained or third-party tools by the provider.
- AI logic, key choices and rationale, target users, classification strategies, optimization goals, output expectations, and regulatory compliance trade-offs.
- System architecture and computational resources for AI development, training, testing, and validation.
- Data requirements, training methodologies, data sources, labeling, and data cleaning.
3) Monitoring, Functioning, and Control:
- Capabilities and limitations of the system in performance, including degrees of accuracy.
- Foreseeable unintended outcomes and risks to health, safety, and fundamental rights, including risks of discrimination.
- Human oversight measures and the technical measures that facilitate users' interpretation of the system's outputs.
- Input data specifications as relevant.
4) Risk Management System:
- Comprehensive description in alignment with Article 9.
5) Lifecycle Changes:
- Detailed account of modifications made to the AI system throughout its lifecycle.
6) Harmonized Standards and Technical Specifications:
- List of harmonized standards applied, or alternative solutions used to meet the requirements set out in Section 2 of Chapter III.
- Reference to other relevant standards and technical specifications if harmonized standards aren't applied.
7) EU Declaration of Conformity:
- A copy of the EU Declaration of Conformity.
8) Post-Market Monitoring System:
- Detailed description of the post-market monitoring plan.
These technical documentation requirements ensure that AI developers, providers, and regulators thoroughly understand the AI system's development, functioning, and risk management processes. (Source: https://artificialintelligenceact.eu/)
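To translate these requirements into day-to-day engineering practice, it can help to treat the documentation as structured data rather than free-form text. Below is a minimal sketch in Python of what an Annex IV-style documentation record might look like; all class and field names are illustrative assumptions, not terms defined by the Act, and a real documentation solution would map each field back to the corresponding Annex IV point.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative schema for an Article 11 / Annex IV technical documentation
# record. Names are hypothetical; the Act prescribes content, not format.

@dataclass
class GeneralDescription:
    intended_purpose: str
    development_personnel: str
    version: str
    release_date: date
    hardware_software_interactions: str  # external hardware/software touchpoints
    market_forms: list[str]              # forms in which the system is placed on the market
    target_hardware: str                 # hardware the system is intended to run on
    instructions_of_use: str             # link/path to deployer-facing instructions

@dataclass
class DevelopmentDescription:
    methods_and_steps: str               # incl. use of pre-trained/third-party tools
    design_rationale: str                # key choices, trade-offs, optimization goals
    system_architecture: str
    data_sources: list[str]              # training data provenance
    data_preparation: str                # labeling and cleaning procedures

@dataclass
class TechnicalDocumentation:
    general: GeneralDescription
    development: DevelopmentDescription
    oversight_and_risks: str             # foreseeable risks, human oversight measures
    risk_management_summary: str         # alignment with Article 9
    lifecycle_changes: list[str]         # modifications made over the system's lifecycle
    harmonized_standards: list[str]
    declaration_of_conformity: str       # reference to the EU Declaration of Conformity
    post_market_monitoring_plan: str
```

Keeping the record as structured data makes it straightforward to check for completeness and to export into whatever format an auditor or regulator expects.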
2. Work Hand-In-Hand with Legal
In light of the EU AI Act, it's increasingly important for companies to integrate legal and compliance teams early in the AI development process. This proactive approach ensures that AI innovations comply with the new regulatory frameworks designed to manage AI's ethical and societal impacts. Effective communication and collaboration between legal experts and developers become paramount in navigating these regulations without stifling innovation.
Early and Transparent Engagement: It’s crucial for development teams to engage with compliance and legal teams early in the AI development process to foresee and navigate potential regulatory issues.
View Legal as Enablers: Legal teams should be seen as enablers rather than obstacles, helping to ensure regulatory compliance and mitigate risks.
Continuous Communication: Maintain open lines of communication with legal teams, especially when plans change, to ensure continuous compliance and adapt to the rapidly evolving AI legislation landscape.
"Raise what you need to accomplish early on and keep communicating. If there are changes, such as changes in the function of your model, vendors that are starting to use AI or new vendors, or you're changing the datasets you want to use, the countries you're releasing in, or the release timeline, especially if you want to release early,” said Grace Chu, former Senior Product Counsel - AI/ML at Adobe. “These are all important things you want to let legal know. If you're not sure whether something matters for compliance, it's far better to raise it early and get confirmation." (From our panel discussion at ELC)
Collaborative Planning: Work closely with legal to create feasible plans that meet compliance needs without hindering the development process.
Proactive Compliance Integration: Integrate compliance into the AI development process from the start rather than treating it as an afterthought, to avoid delays and potential legal complications.
As the EU AI Act shapes the landscape of AI technology, adopting an integrated and proactive legal strategy is key to both compliance and competitive advantage.
3. Implement Proactive Governance to Future-Proof Your AI Initiatives
As the regulatory landscape for AI continues to evolve, it’s crucial for companies to proactively adjust their strategies to stay compliant. The steps below emphasize the importance of a proactive compliance culture, continual improvements in documentation, early involvement of legal expertise, adaptability to changing standards, and robust data governance. Each action aims to ensure that AI projects are both innovative and compliant with the latest regulatory demands.
Establish a Compliance Mindset: To secure the future of AI projects, forming an AI governance council is crucial. This council should consist of members from various departments including engineering, legal, and product teams. Their role is pivotal in ensuring AI projects align with current and forthcoming regulations.
Incremental Documentation Improvements: Thorough documentation of AI models is essential, and engaging compliance teams early in the project lifecycle helps align projects with regulatory standards and promotes transparency.
Preparing for Regulatory Changes: Companies must stay proactive by adapting to changes in the AI regulatory landscape. This forward-thinking approach helps in managing the scrutiny associated with AI deployments and supports the sustainable integration of AI technologies.
4. Review Current Models and Assess Risk
Conducting a comprehensive review and classification of existing AI applications is crucial to identifying high-risk use cases that require compliance with the EU AI Act.
This process involves:
- Inventorying all AI applications currently in use or under development within the organization.
- Categorizing these applications based on their intended use, underlying technology, and potential risks.
- Identifying high-risk applications that fall under the purview of the EU AI Act, requiring strict compliance measures.
- Documenting these high-risk applications, including their purpose, functionality, data sources, and potential risks.
By thoroughly understanding the AI landscape and classifying applications based on risk, organizations can prioritize compliance efforts and ensure that critical systems adhere to the necessary regulations from the outset.
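As a concrete starting point for the inventory step above, the sketch below shows one way to structure such records so that high-risk systems can be filtered out for prioritized compliance work. The tier names mirror the Act's four categories, but the example entry and all identifiers are hypothetical; the classification itself is a legal judgment that no script can make for you.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers mirroring the Act's four categories.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    underlying_technology: str
    data_sources: list[str]
    risk_tier: RiskTier
    notes: str = ""

# Hypothetical inventory entry: employment-related use cases are
# listed as high-risk under Annex III of the Act.
inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        intended_use="ranking job applicants",
        underlying_technology="gradient-boosted classifier",
        data_sources=["internal ATS exports"],
        risk_tier=RiskTier.HIGH,
        notes="Requires Article 11 technical documentation.",
    ),
]

# Surface the systems that need strict compliance measures first.
for system in (s for s in inventory if s.risk_tier is RiskTier.HIGH):
    print(f"{system.name}: {system.intended_use} -> {system.notes}")
```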
5. Implement an AI Strategy and Governance Framework
Transparency is a cornerstone of the EU AI Act, and enhancing reporting mechanisms is essential for compliance. Companies should develop methods to document and report on the lifecycle of each AI system, particularly those classified as high-risk. This includes detailed documentation of the AI model’s development process, data training procedures, deployment environments, and the decision-making frameworks it influences.
Building robust reporting mechanisms not only supports adherence to regulations but also builds trust among users and stakeholders by demonstrating a commitment to ethical AI practices. Companies should consider leveraging advanced documentation tools that can automate parts of this process, making it easier to generate and maintain accurate records. Such tools can help capture the lineage of data and models, providing auditors and regulators with a clear trail of accountability.
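Whatever tool you choose, the underlying mechanism is an append-only trail of lifecycle events per system. The snippet below is a minimal, tool-agnostic sketch of that idea, assuming a simple JSON-lines file; the event names, fields, and example values are all hypothetical, and a dedicated documentation platform would capture much of this automatically.

```python
import json
from datetime import datetime, timezone

# Minimal append-only lineage log, assuming a JSON-lines file.
# Event names and fields are illustrative, not a specific tool's API.
LOG_PATH = "ai_system_lineage.jsonl"

def record_event(system: str, event: str, details: dict) -> None:
    """Append a timestamped lifecycle event for an AI system."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "details": details,
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical milestones of the kind auditors and regulators look for.
record_event("credit-scoring-model", "training_completed",
             {"dataset": "loans_2023_q4", "dataset_digest": "<hash>"})
record_event("credit-scoring-model", "deployed",
             {"environment": "production-eu", "model_version": "1.4.2"})
```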
By proactively enhancing transparency and strengthening reporting structures, companies can not only comply with current regulations but also position themselves favorably for future regulatory landscapes and public scrutiny.
Key Takeaways
- Prepare Thorough Technical Documentation: The EU AI Act mandates detailed documentation for high-risk AI systems, from development descriptions to lifecycle changes. Companies should adopt comprehensive documentation solutions to ensure transparency and regulatory compliance.
- Collaborate Early with Legal Teams: Integrating legal insights early in the AI development process is crucial. This collaborative approach enables companies to navigate regulatory landscapes effectively and ensures innovations remain within legal boundaries.
- Proactive Governance and Compliance: Establishing an AI governance council and continuously adapting to regulatory changes are essential for staying compliant. Engaging compliance teams early helps align projects with regulatory requirements and enhances transparency.
- Risk Assessment of AI Models: Companies must thoroughly review their AI applications to identify and document high-risk cases that fall under the EU AI Act. This proactive assessment helps prioritize compliance efforts for critical systems.
- Implement a Robust AI Strategy and Governance Framework: Enhancing reporting mechanisms for AI systems, especially high-risk ones, is vital for compliance. Leveraging advanced documentation tools can help maintain accurate records, foster transparency, and build trust among stakeholders.
Learn more about how Vectice can help.