DORA is an industry-specific regulation that applies to financial institutions. The EU AI Act, by contrast, applies to organizations across all sectors, including financial services. With the AI Act's first obligations taking effect on 2 February 2025, shortly after DORA begins to apply on 17 January 2025, companies that have not already started addressing the AI Act's requirements have little time left.
Shared Objective: Ensuring a Secure and Resilient Financial Sector
Both DORA and the EU AI Act, although originating from different regulatory concerns, share a common objective: ensuring the stability and resilience of the financial sector in the face of emerging technological risks.
DORA focuses on bolstering the operational resilience of financial entities against ICT-related disruptions. This includes incidents such as cyberattacks, system failures, and third-party provider outages. The regulation aims to create a harmonized framework for ICT risk management, incident reporting, testing, and oversight of third-party providers.
The EU AI Act seeks to regulate the development and use of AI systems, particularly those deemed high-risk, to ensure they are trustworthy and do not pose unacceptable risks to fundamental rights, health, and safety. The regulation introduces various requirements for high-risk AI systems, including risk management, data quality, transparency, and human oversight.
The convergence of these two regulations highlights the growing recognition that AI, while offering significant opportunities for innovation in the financial sector, also introduces novel risks that need to be addressed proactively.
Overlapping Applicability: AI Systems in Financial Institutions
The EU AI Act explicitly states its applicability to providers and deployers of AI systems within the EU, regardless of whether they are established within the Union or in a third country. This means that financial institutions developing or deploying AI systems will be subject to the requirements of both DORA and the AI Act.
Financial Institutions as AI System Providers: Financial institutions developing their own AI systems for internal use or for offering new products and services will be considered "providers" under the AI Act and subject to the relevant obligations.
Financial Institutions as AI System Deployers: Financial institutions deploying AI systems developed by third parties will be considered "deployers" under the AI Act and will need to comply with the specific requirements for deploying high-risk AI systems.
This overlapping applicability underscores the need for financial institutions to carefully assess the nature of their AI systems and determine which regulations apply to which specific systems.
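To make that assessment concrete, the sketch below shows one way an institution might inventory its AI systems and map each to the obligation sets that may apply. This is a minimal illustration, not a legal determination: the field names, role labels, and obligation summaries are hypothetical simplifications of the two regimes.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    PROVIDER = auto()   # the institution developed the AI system itself
    DEPLOYER = auto()   # the institution uses an AI system built by a third party

@dataclass
class AISystem:
    name: str
    role: Role
    high_risk: bool              # classified as high-risk under the AI Act
    supports_critical_ict: bool  # within scope of DORA's ICT risk framework

def applicable_obligations(system: AISystem) -> list[str]:
    """Return an indicative list of obligation sets for one AI system."""
    obligations = []
    if system.supports_critical_ict:
        obligations.append(
            "DORA: ICT risk management, incident reporting, resilience testing"
        )
    if system.high_risk:
        if system.role is Role.PROVIDER:
            obligations.append(
                "AI Act: provider duties (risk management, data quality, "
                "transparency, human oversight)"
            )
        else:
            obligations.append(
                "AI Act: deployer duties (use per instructions, monitoring, "
                "logging, human oversight)"
            )
    return obligations

# Example: an internally developed credit-scoring model
model = AISystem("credit-scoring-v2", Role.PROVIDER,
                 high_risk=True, supports_critical_ict=True)
for obligation in applicable_obligations(model):
    print(obligation)
```

Even a simple inventory like this makes it visible which systems sit under both regimes at once and therefore warrant an integrated compliance approach.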
ICT Risk Management: A Shared Focus
Both the AI Act and DORA emphasize the importance of robust ICT risk management frameworks, though with different scopes and focuses.
DORA mandates financial entities to establish and maintain comprehensive ICT risk management frameworks that encompass risk identification, analysis, evaluation, and treatment. This framework should address various aspects of ICT risk, including security, resilience, incident management, and third-party provider oversight.
The AI Act, while focusing on regulating AI systems, also acknowledges the importance of managing risks associated with these systems. For high-risk AI systems, providers are required to establish risk management systems that address potential harms to health, safety, and fundamental rights, including those stemming from ICT vulnerabilities.
This shared emphasis on ICT risk management suggests potential synergies and areas for harmonized implementation:
Integrated Risk Management: Financial institutions developing or deploying AI systems could integrate the risk management requirements of both regulations into a single, comprehensive framework. This would streamline processes and ensure a holistic approach to managing both AI-specific and broader ICT risks.
Leveraging Existing Structures: Financial institutions with existing ICT risk management frameworks under DORA could adapt and enhance them to accommodate the specific risks associated with high-risk AI systems.
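As a rough illustration of what such an integrated framework might look like in practice, the sketch below shows a unified risk register entry that carries both DORA-oriented ICT dimensions and AI Act-oriented harm dimensions in a single record. The schema and field names are assumptions for illustration, not prescribed by either regulation.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One entry in a unified risk register spanning both regimes (illustrative)."""
    risk_id: str
    description: str
    # DORA-oriented dimensions
    ict_asset: str                # affected system or service
    resilience_impact: str        # e.g. "availability", "integrity"
    # AI Act-oriented dimensions
    ai_system: str | None = None  # linked high-risk AI system, if any
    fundamental_rights_impact: bool = False
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        risk_id="R-042",
        description="Model drift in fraud-detection AI degrades transaction screening",
        ict_asset="payments-screening-service",
        resilience_impact="integrity",
        ai_system="fraud-detector-v3",
        fundamental_rights_impact=True,  # false positives may block legitimate customers
        mitigations=["drift monitoring", "human review of blocked transactions"],
    ),
]
```

The design choice here is deliberate: rather than running two parallel registers, one record carries both regulatory views, so a single review cycle can satisfy both frameworks.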
Incident Reporting: Harmonized Requirements
Both the AI Act and DORA introduce requirements for reporting ICT-related incidents, though with different triggering events and reporting channels.
DORA mandates financial entities to report major ICT-related incidents to competent authorities. The regulation specifies criteria for classifying incidents, sets materiality thresholds, and outlines reporting procedures.
The AI Act requires reporting of serious incidents related to high-risk AI systems, particularly those involving substantial harm to health, safety, or fundamental rights. The reporting procedures and channels for these incidents are still under development but are expected to align with the approach for incident reporting under the NIS 2 Directive.
The potential for overlap in incident reporting between the two regulations necessitates careful consideration:
Alignment of Reporting Procedures: The AI Act's incident reporting framework should be designed to avoid conflicting requirements and ensure harmonized reporting procedures for financial institutions subject to both DORA and the AI Act.
Consolidated Reporting: It may be beneficial to explore possibilities for consolidated incident reporting, where a single report can fulfil the requirements of both DORA and the AI Act for incidents involving high-risk AI systems in the financial sector.
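One way to prepare for such consolidation internally, even before the AI Act's reporting channels are finalized, is to maintain a single incident record from which regime-specific notifications are derived. The sketch below assumes hypothetical payload fields; the actual reporting templates under both regulations will be defined by the competent authorities.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentRecord:
    """Single internal record from which both notifications could be derived (illustrative)."""
    incident_id: str
    detected_at: datetime
    summary: str
    affected_services: list[str]
    dora_major: bool        # meets DORA's materiality thresholds for a major incident
    ai_act_serious: bool    # serious incident involving a high-risk AI system
    ai_system: str | None = None

def regulatory_notifications(rec: IncidentRecord) -> dict[str, dict]:
    """Derive regime-specific notification payloads from one shared record."""
    out = {}
    if rec.dora_major:
        out["DORA"] = {
            "incident_id": rec.incident_id,
            "detected_at": rec.detected_at.isoformat(),
            "classification": "major ICT-related incident",
            "affected_services": rec.affected_services,
        }
    if rec.ai_act_serious:
        out["AI_Act"] = {
            "incident_id": rec.incident_id,
            "ai_system": rec.ai_system,
            "description": rec.summary,
        }
    return out
```

Keeping one authoritative record avoids the classic failure mode of dual reporting: two teams filing inconsistent accounts of the same event to different authorities.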
Oversight of Third-Party Providers
Both the AI Act and DORA impose requirements for managing third-party providers.
DORA introduces specific requirements for the oversight of ICT third-party providers, including contractual provisions, risk assessments, and monitoring.
The EU AI Act also addresses third-party involvement in the AI value chain, particularly in the context of general-purpose AI models, requiring cooperation between providers of such models and providers of high-risk AI systems built upon them.
This overlap highlights the need for a coordinated approach to managing third-party risks related to AI systems in the financial sector. This could involve:
Extending the due diligence and monitoring processes outlined in DORA to encompass AI-specific risks posed by third-party providers of high-risk AI systems or components.
Leveraging the cooperation requirements in the EU AI Act to facilitate information sharing and joint risk assessments between financial institutions and their third-party AI providers.
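As a simple illustration of the first point, a DORA-style due-diligence checklist could be extended with AI-specific items for vendors supplying high-risk AI systems or components. The item wording below is hypothetical shorthand, not taken from either regulation's text.

```python
# Illustrative: baseline DORA-style third-party checks, extended with
# AI-specific items when the vendor provides a high-risk AI system.

DORA_CHECKS = [
    "contract includes audit and access rights",
    "exit strategy and data portability defined",
    "incident notification SLAs agreed",
    "subcontracting chain disclosed",
]

AI_ACT_CHECKS = [
    "technical documentation for the high-risk AI system available",
    "training-data governance and quality measures described",
    "instructions for use and human-oversight measures provided",
    "cooperation commitments for serious-incident investigation",
]

def vendor_checklist(provides_high_risk_ai: bool) -> list[str]:
    """Combine baseline checks with AI-specific checks where relevant."""
    checks = list(DORA_CHECKS)
    if provides_high_risk_ai:
        checks += AI_ACT_CHECKS
    return checks

for item in vendor_checklist(provides_high_risk_ai=True):
    print("-", item)
```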
Addressing Potential Conflicts between NIS 2, DORA, and the AI Act
The NIS 2 framework explicitly treats DORA as a sector-specific Union legal act (lex specialis) for financial entities that fall within the scope of both.
This means that DORA's provisions on ICT risk management, incident reporting, operational resilience testing, information-sharing arrangements, and ICT third-party risk take precedence over those in NIS 2.
However, the relationship between the EU AI Act and DORA is less clear-cut, potentially leading to overlapping or conflicting requirements.
Financial institutions need clear guidance on how to navigate these three regulations and which requirements take precedence in specific situations.
Challenges and Opportunities: Navigating the Overlap
While the overlaps between the AI Act and DORA present opportunities for synergy and streamlined compliance, they also pose challenges that require careful consideration:
Scope and Applicability: Determining the precise scope and applicability of each regulation to specific AI systems in the financial sector will be crucial. Not all AI systems will be considered high-risk under the AI Act, and the specific requirements will vary depending on the system's intended purpose and potential impact.
Regulatory Complexity: Managing compliance with two overlapping regulations could increase the complexity and administrative burden for financial institutions, particularly those developing and deploying a wide range of AI systems.
Regulatory Evolution: Both the AI Act and DORA are relatively new regulations, and their implementation and enforcement are still evolving. This could lead to uncertainties and challenges in aligning compliance efforts.
Addressing these challenges effectively will require:
Close Collaboration: Regulatory authorities should closely collaborate to ensure clear guidance and harmonized implementation, minimizing potential conflicts and confusion for financial institutions.
Industry Engagement: Active engagement with the financial industry is crucial to understand the practical challenges and develop pragmatic solutions for managing overlapping compliance requirements.
Flexibility and Adaptability: Financial institutions should adopt a flexible and adaptable approach to compliance, closely monitoring regulatory developments and adjusting their frameworks and processes accordingly.
Conclusion: Towards a Harmonized Regulatory Landscape
The overlaps between the AI Act and DORA highlight the need for a coordinated and collaborative approach to ensure a harmonized regulatory landscape for AI in the financial sector. Clear guidance, aligned reporting procedures, and streamlined compliance processes will be essential to foster responsible AI innovation while effectively mitigating potential risks.