
A Comprehensive Guide for Banks to Ensure AI Act Compliance

12 min read

Following the adoption of the AI Act, it’s essential for banks to ensure their AI systems comply with the new regulation. TapiX is the industry-leading API enabling banks and fintechs to build solutions driven by enriched transaction data. Banks can use this data for their machine learning models, feeding AI systems, credit scoring, customer behavior analysis, or building custom features such as AI chatbots or intelligent contextual advisors in internal systems and customer applications. Since we are part of the AI supply chain, it made sense for us to put together these legislative insights for you. This guide breaks the process down into manageable steps, making it straightforward for digital banking experts to follow and achieve compliance.

What is the AI Act?

The EU AI Act is a significant legislative initiative designed to regulate artificial intelligence systems within the European Union. Introduced by the European Commission, the act aims to ensure that AI technologies are developed and used in a manner consistent with the EU's values and regulatory standards. It was formally adopted in May 2024 and entered into force in August 2024, with its obligations phasing in over the following two to three years.

AI Act timeline

In addition to setting safety standards for AI systems, the AI Act seeks to protect users from "bad data." A major concern is the potential for AI algorithms to spread misinformation, especially with the rise of AI-generated content. This includes addressing "hallucinations," where AI systems confidently produce incorrect or fabricated information. The AI Act aims to mitigate these risks through transparency and accountability requirements. Does it apply to your bank? Let’s find out.

TIP: How Does the EU AI Act Affect Banking?

Step 1: Determine if Your AI System Falls Under the AI Act

Identify AI Usage in Banking

First, identify whether your system qualifies as AI under the AI Act (a minimal code sketch follows this checklist):

Check for Algorithm Use: Determine if your system processes inputs (like customer transaction data) to produce outputs (like credit scores, fraud alerts, or loan approvals) using algorithms.

Automation Check: Ensure the process is automated, meaning it's done by machines, not humans. Examples include automated loan approvals and fraud detection systems.

Objective Achievement: Verify the system is designed to achieve specific objectives, such as identifying fraudulent transactions or recommending investment products.

Data-Driven Adjustments: Confirm the system adapts based on data inputs, like learning from transaction patterns to improve fraud detection.
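To make the checklist concrete, here is a minimal, purely hypothetical sketch of a fraud-detection model that would meet all four criteria. The features, data, and model choice are invented for illustration and are not part of the AI Act itself.

```python
# Hypothetical illustration: a toy fraud-detection model that meets the
# four AI-system criteria above. Not a production system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Inputs processed by an algorithm: amount, hour of day, merchant risk score
X_train = np.array([[20.0, 14, 0.1], [950.0, 3, 0.8], [35.5, 11, 0.2], [1200.0, 2, 0.9]])
y_train = np.array([0, 1, 0, 1])  # 1 = confirmed fraud, 0 = legitimate

# Data-driven adjustment: the model's parameters are learned from past transactions
model = LogisticRegression().fit(X_train, y_train)

# Automated output serving a specific objective (flagging fraud), produced without a human
new_transaction = np.array([[800.0, 4, 0.7]])
fraud_probability = model.predict_proba(new_transaction)[0, 1]
print(f"Fraud probability: {fraud_probability:.2f}")  # this output can influence a decision
```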

TIP: How are banks using AI in 2024?

AI opportunity map based on selected use cases along the value chain in banking and payments (not exhaustive; source: Arkwright / DataArt)

Match AI Act Definition

Make sure your AI system fits the definition in Article 12, which includes producing outputs that can influence decisions or environments, such as approving a loan, flagging a transaction, or providing investment advice.

Step 2: Evaluate Possible Exceptions

Before diving into compliance actions, check if your AI system qualifies for any exceptions under the AI Act. Leveraging these exceptions can reduce regulatory burdens and streamline your compliance process. Here’s a quick overview of the key exceptions and how they might apply to your banking applications.

1. Material Influence Exception (Article 53)

Determine if your AI system does not significantly influence decision outcomes. For instance, it might only provide risk assessments that are reviewed by a human officer before final decisions are made.

2. No Humans Involved Exception (Article 53)

Assess whether your AI system operates independently rather than improving the result of a human activity, for example automated trading systems that execute without human intervention.

3. Research Exception (Article 2, Point 8)

Check if the AI system is still in the research, testing, or development phase and not yet launched for public use.

4. Open-Source Exception (Articles 89, 102-104)

Verify if your AI components are available as open-source to the public, which might be the case if your bank contributes to or uses open-source AI tools for fraud detection.

5. National Security Exception (Article 2, Point 3)

Identify if your AI system is used exclusively for military or national security purposes. This is less likely for standard banking applications but could apply to certain cybersecurity measures.

6. Law Enforcement Exception (Articles 33-35, 73)

Determine if the system is used for specific law enforcement purposes like biometric identification for regulatory compliance.

7. National Interest Exception (Article 46)

Assess if the AI system is used for public security or protecting critical infrastructure, such as ensuring the security of the financial system.

8. Financial Services Exception (Article 58)

Check if the system is used for detecting financial fraud or managing systemic risks, as these are exempt from some high-risk requirements.

Step 3: Compliance Actions if the AI Act Applies

If your AI system falls under the AI Act without exceptions, it's essential to follow specific compliance actions. This step ensures that your bank's AI systems meet regulatory standards, avoiding penalties and maintaining customer trust. Here’s a concise guide to the necessary actions for compliance.

Understand Risk Levels in Banking

Identify the risk level of your AI system:

Risk levels of AI systems according to the AI Act

Unacceptable Risk: Includes social-scoring systems, manipulative techniques, and biometric categorization, which are generally not used in banking.

High-Risk Systems: Such as those handling biometric data for identity verification, credit scoring, or fraud detection.

General-Purpose AI Models with Systemic Risk: These require adherence to strict guidelines and are relevant for banks building on large general-purpose models in core systems that impact the wider financial market.

Low-Risk Systems: These come with general guidelines but still require careful consideration.

Compliance Actions for High-Risk AI Systems

Obtain CE Marking

Prepare for the conformity assessment and obtain CE marking to show compliance. Follow guidelines for high-risk systems, including documentation and registration.

Set Up Risk and Quality Management Systems

Establish a robust risk-management and quality-management system using models like ISO/IEC 42001:2023 to ensure the AI systems used for credit scoring and fraud detection are reliable and unbiased.

Monitor Data and Bias

Regularly check for and address biases in your AI system’s outputs. For example, ensure credit scoring models do not unfairly disadvantage certain demographic groups. Ensure data quality and governance by adhering to standards like ISO 8000. Utilize open-source toolkits such as AI Fairness 360 to evaluate bias across a variety of applications.
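A check of this kind can be scripted with AI Fairness 360. The sketch below is a minimal, assumed setup: the toy loan data, the column names, and the choice of "age_group" as the protected attribute are all illustrative, and the exact API may vary between aif360 versions.

```python
# Hedged sketch using the open-source AI Fairness 360 toolkit to measure
# approval-rate disparity between two groups. Data and column names are invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy loan decisions: 'age_group' is the protected attribute (1 = privileged group)
df = pd.DataFrame({
    "age_group": [1, 1, 1, 0, 0, 0],
    "approved":  [1, 1, 1, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# A disparate impact well below 1.0 is a common rule-of-thumb warning sign
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```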

Maintain Documentation and Transparency

Keep detailed records of how your AI system works and the decisions it makes. Ensure transparency by informing customers when they are interacting with an AI system, such as automated loan approval notifications.
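One practical way to support this is a structured, append-only decision log. The sketch below is only an assumption about how such a record might look; the field names and the logging backend are illustrative, not prescribed by the AI Act.

```python
# Hypothetical sketch of a per-decision audit record.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

def record_decision(model_version: str, inputs: dict, output: dict, human_reviewed: bool) -> None:
    """Append one structured record per automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # which model produced the decision
        "inputs": inputs,                  # what the system saw
        "output": output,                  # what it decided or recommended
        "human_reviewed": human_reviewed,  # whether an officer confirmed the result
    }
    audit_log.info(json.dumps(entry))

record_decision(
    model_version="credit-score-2024-06",
    inputs={"income_eur": 42000, "existing_loans": 1},
    output={"decision": "approved", "score": 0.81},
    human_reviewed=True,
)
```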

Ensure Human Oversight

Implement mechanisms for human oversight to ensure AI systems are functioning correctly and addressing impacts over time. For example, have human officers review flagged transactions from fraud detection systems.

Focus on Robustness and Accuracy

Ensure your AI system is accurate and robust, following standards like ISO/IEC TS 4213 for classification tasks. Regularly test the AI system to maintain high performance, particularly in areas like fraud detection and risk assessment. Use guidelines like ISO/IEC TR 24029-1 for assessing the robustness of neural networks.
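As a simple illustration of such testing: train a toy model, perturb the test inputs with small random noise, and flag the model for review if accuracy degrades beyond an agreed tolerance. The dataset, noise level, and tolerance below are assumptions made for the example.

```python
# Illustrative robustness check in the spirit of ISO/IEC TR 24029-1:
# compare accuracy on clean inputs vs. slightly perturbed inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

clean_acc = accuracy_score(y_test, model.predict(X_test))
noisy_acc = accuracy_score(y_test, model.predict(X_test + np.random.normal(0, 0.1, X_test.shape)))

print(f"clean accuracy: {clean_acc:.3f}, noisy accuracy: {noisy_acc:.3f}")
if clean_acc - noisy_acc > 0.05:  # agreed tolerance, illustrative only
    print("WARNING: robustness degradation above tolerance; trigger a model review")
```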

Strengthen Cybersecurity

Protect the AI system from cyber threats, including data poisoning and adversarial attacks. Follow cybersecurity standards like ISO/IEC 27001:2022 and implement measures to secure customer data and transaction information. Utilize open-source toolkits such as the Adversarial Robustness Toolbox to defend against AI-specific attacks.
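The sketch below shows one way this could look with the Adversarial Robustness Toolbox (ART): a black-box evasion attack (HopSkipJump) probes a toy classifier to see how many predictions an attacker could flip. Treat it as a starting point rather than a recipe; the model, data, and attack parameters are illustrative, and the ART API may differ slightly between versions.

```python
# Hedged sketch using the Adversarial Robustness Toolbox (ART) to probe a toy
# model with a black-box evasion attack and count flipped predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model)
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10, max_eval=500, init_eval=10)

x_adv = attack.generate(x=X[:20])  # craft adversarial variants of 20 samples
flipped = (model.predict(X[:20]) != model.predict(x_adv)).sum()
print(f"Predictions flipped by the attack: {flipped} / 20")
```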

Additional Specific Actions for High-Risk AI Systems

1. Authorized Representative:

Appoint an authorized representative to handle communications with the AI Office and national regulators (Article 82).

2. Registration in AI Database:

Register high-risk AI systems in the newly created database (Article 131).

3. Technical Documentation and Record-Keeping:

Maintain detailed technical documentation and records (Articles 71, 72, 132-134). Ensure transparency by informing users about AI interactions.

4. Human Oversight:

Implement mechanisms for human oversight of AI system operations and their impacts (Articles 66, 73).

5. Bias and Protected Attributes:

Collect protected attributes to evaluate bias, even within regulatory sandboxes (Articles 70, 138-141).

Additional Considerations

Support for SMEs

Look into simplified compliance options for small and medium banks. Utilize regulatory sandboxes to test and develop AI systems in a controlled environment without full compliance requirements initially.

Fines and Penalties

Understand the potential fines for non-compliance with the AI Act requirements (Article 99), illustrated with a quick calculation after the list below:

- Engaging in prohibited practices: up to 35 million EUR or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance with the obligations for high-risk systems: up to 15 million EUR or, if the offender is an undertaking, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher.
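To show how the "whichever is higher" rule plays out, here is a small, purely illustrative calculation for a hypothetical bank with 2 billion EUR in annual turnover.

```python
# Illustrative only: the "whichever is higher" penalty rule from Article 99,
# applied to a hypothetical bank with 2 billion EUR annual turnover.
def max_penalty(annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of the fine: fixed cap or share of worldwide turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

turnover = 2_000_000_000
print(max_penalty(turnover, 35_000_000, 0.07))  # prohibited practices -> 140,000,000.0 EUR
print(max_penalty(turnover, 15_000_000, 0.03))  # high-risk obligations -> 60,000,000.0 EUR
```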

Compliance Timeline

Ensure compliance with the prohibited-practice rules by early 2025 and with the high-risk system obligations by August 2026 to avoid penalties. Start early to ensure all systems and processes are fully compliant.

5 Things to Keep in Mind When Utilizing AI

1. Measure the success and reliability of your AI solution on an ongoing basis.

Continuously monitor the performance of your AI system to ensure it meets the desired outcomes and adapts to any changes or new requirements.
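A minimal sketch of what such monitoring could look like in practice, assuming ground-truth outcomes become available over time; the window size, baseline, and tolerance are illustrative choices, not prescribed values.

```python
# Hypothetical monitoring sketch: track rolling accuracy on recent decisions
# (once ground truth is known) and alert when it drifts below a baseline.
from collections import deque

WINDOW = 500               # most recent labelled decisions to track
BASELINE_ACCURACY = 0.95   # accuracy measured at validation time
TOLERANCE = 0.03

recent_outcomes = deque(maxlen=WINDOW)  # 1 = prediction matched the true outcome

def record_outcome(prediction_correct: bool) -> None:
    recent_outcomes.append(1 if prediction_correct else 0)
    if len(recent_outcomes) == WINDOW:
        rolling_accuracy = sum(recent_outcomes) / WINDOW
        if rolling_accuracy < BASELINE_ACCURACY - TOLERANCE:
            print(f"ALERT: rolling accuracy {rolling_accuracy:.3f} below tolerance; trigger review")

# Example: feed in outcomes as ground truth becomes available
for correct in [True] * 450 + [False] * 50:
    record_outcome(correct)
```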

2. Start with large/expensive models; optimize only after you understand the use cases you are solving

Begin with robust models to effectively handle complex tasks. Once you have a clear understanding of the specific problems you're addressing, you can fine-tune and optimize the models for better efficiency and cost-effectiveness.

3. Don't give AI more privilege than the user would have

Ensure that the AI system does not have access to more data or permissions than a typical user. This prevents potential misuse and aligns the AI's capabilities with what is acceptable for users.
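One way to enforce this is to resolve every data request from the AI assistant through the requesting user's own permissions, so the system can never see more than that user could. The permission model and helper below are hypothetical.

```python
# Hypothetical sketch: the AI assistant inherits the requesting user's permissions.
USER_PERMISSIONS = {
    "customer_42": {"own_accounts"},
    "branch_officer_7": {"own_accounts", "branch_accounts"},
}

def fetch_accounts_for_ai(user_id: str, scope: str) -> list:
    """Return only data the human user is already allowed to see."""
    if scope not in USER_PERMISSIONS.get(user_id, set()):
        raise PermissionError(f"{user_id} may not access scope '{scope}'")
    return ["ACC-001", "ACC-002"]  # placeholder for the real data-access layer

# The AI chatbot acts on behalf of customer_42 and inherits that customer's limits:
print(fetch_accounts_for_ai("customer_42", "own_accounts"))
# fetch_accounts_for_ai("customer_42", "branch_accounts")  # would raise PermissionError
```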

4. Introduce "human-in-the-loop" for critical operations

For important and sensitive tasks, involve human oversight to review and validate AI decisions. This helps in mitigating risks associated with errors and ensures more accurate outcomes.
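A minimal sketch of such a gate, assuming the model exposes a confidence score: decisions that are low-confidence or high-value are escalated to a human reviewer rather than executed automatically. The thresholds are illustrative, not values prescribed by the AI Act.

```python
# Hypothetical human-in-the-loop gate: risky or high-value decisions are
# escalated to a human reviewer instead of being executed automatically.
CONFIDENCE_THRESHOLD = 0.90
AMOUNT_THRESHOLD_EUR = 10_000

def route_decision(fraud_probability: float, amount_eur: float) -> str:
    """Decide whether the system may act automatically or must escalate."""
    if fraud_probability >= CONFIDENCE_THRESHOLD or amount_eur >= AMOUNT_THRESHOLD_EUR:
        return "escalate_to_human_review"
    return "auto_process"

print(route_decision(fraud_probability=0.97, amount_eur=250))     # escalate_to_human_review
print(route_decision(fraud_probability=0.10, amount_eur=50_000))  # escalate_to_human_review
print(route_decision(fraud_probability=0.05, amount_eur=120))     # auto_process
```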

5. AI is not infallible; there will always be a certain percentage of erroneous operations

Recognize that AI systems are not perfect and will make mistakes. Implement mechanisms to identify and correct these errors, and continuously improve the system based on feedback and observed inaccuracies.


By following this comprehensive guide, digital banking experts can systematically assess their AI systems against the AI Act requirements, maintain legal compliance, and preserve customer trust. This proactive approach will help banks avoid penalties, improve their AI systems' performance, and build a reputation for responsible, transparent use of AI.

About author

Ondřej Slivka, a marketing enthusiast, loves to share insights in the world of digital banking and fintech.

Ondřej Slivka

Senior insider

A seasoned B2B marketing enthusiast with 5+ years of experience sharing insights in the world of digital banking and fintech. My passion lies in crafting innovative strategies and engaging content that delivers desired results.
