Does Your Financial Services Firm Have an AI Register? Here Is What the Law Requires.
Every Irish financial services firm using AI tools — credit scoring software, fraud detection systems, underwriting tools — has compliance obligations under the EU AI Act. On 7 May 2026, the European Commission confirmed a political agreement to move the Annex III high-risk AI deadline from 2nd August 2026 to 2nd December 2027. The Central Bank of Ireland is the regulator that will enforce these rules for Irish financial services firms, and it has put AI governance at the top of its supervisory agenda for 2026 and 2027.
Most compliance managers at Irish SMEs have heard of the EU AI Act. Very few have built the internal AI register that the law requires them to have.
Let me explain what an AI register is, why your firm needs one now, and how to build one before the clock runs out.
The Distinction That Catches SMEs Off Guard
When most compliance officers in Irish financial services hear "AI Act registration", they picture the EU's public AI database, the official register where companies that build and sell AI systems must list their products. That database is real, but it belongs to providers, not to the firms that merely use AI tools.
The Act draws a clean line here. Providers are the companies that build and sell AI systems; they register in the EU database. Deployers are organisations that use those systems in their business; they do not appear in the EU database at all.
What deployers are legally required to have is something different: an internal AI register. This is a living record of every AI system in use within your organisation, covering what it does, who supplies it, how it has been classified for risk, and what controls you have in place to meet your obligations under Article 26 of the EU AI Act.[1]
If your firm uses third-party software to assess creditworthiness, score insurance risk, or support any customer-facing decision, you are a deployer under the AI Act. The fact that you bought the tool from a vendor does not transfer your compliance obligations to them. Article 26 applies to your organisation directly, regardless of where the software came from.[8]
In summary
Deployers do not register in the EU's public database. That is the provider's obligation. What deployers are required to have is an internal AI register: a living record of every AI system in use and the governance surrounding it.
Why Financial Services Has a Heavier Compliance Burden
The EU AI Act takes a risk-based approach: the higher the potential impact of an AI system on people's lives, the stricter the compliance requirements. Financial services firms have a heavier burden than most sectors because several of the most common AI use cases in the industry are explicitly named as high-risk in the legislation.
Annex III of the Regulation is the list of high-risk AI categories.[3] The ones most relevant to Irish financial services firms are:
- AI used to assess the creditworthiness of individuals or generate credit scores is classified as high-risk (Annex III, point 5(b))
- AI used for risk assessment and pricing in life and health insurance is also classified as high-risk (Annex III, point 5(c))
The legislation includes an exception for fraud detection AI, but it is narrow and frequently misread. More on that below.
The European Banking Authority confirmed this in its November 2025 mapping exercise. Banks and payment institutions using AI for creditworthiness assessment and credit scoring face the full high-risk obligations under the Act, on top of existing requirements under CRD, CRR, and DORA (the EU legislation governing capital requirements and digital operational resilience for financial firms).[5]
A concrete example: A Dublin mortgage broker uses a credit assessment tool purchased from a fintech vendor. The vendor is responsible for showing that their system meets EU AI Act requirements. But the broker, as the deployer, still has to maintain an internal AI register covering that tool, keep system logs for at least six months, assign a named person responsible for human oversight, monitor how the system is performing, and report any serious incident to the Central Bank. None of that is the vendor's job. None of it is optional under Article 26 of the EU AI Act.
In summary
Buying a compliant tool from a vendor does not make your deployment compliant. Article 26 obligations fall on the deployer. The register, the oversight, and the logs are your responsibility.
The Deadline Has Changed: What It Means for Your Planning
On 7 May 2026, the European Commission announced a political agreement to extend the Annex III high-risk deadline from 2nd August 2026 to 2nd December 2027.[7] The formal legislative text has not yet been published, but the political agreement is confirmed. This gives Irish financial services firms an additional 16 months to build compliant governance.
It does not change what that governance needs to look like. The Central Bank of Ireland's supervisory agenda — confirmed before this agreement — remains unchanged. Its 2026 Regulatory and Supervisory Outlook states explicitly that it will bring an AI focus to all thematic reviews throughout 2026 and 2027.[4] The extension is an opportunity to build an AI register properly, not a reason to defer.
The Regulation of Artificial Intelligence Bill 2026 designates the Central Bank of Ireland as the Market Surveillance Authority for AI used by regulated financial service providers.[6] The Central Bank will enforce those obligations using the powers it already holds under the Central Bank Act 1942. Those powers cover inspections, information requests, formal directions, and fines.
On the question of fines: Article 99 sets the upper limit at 15 million euro or 3% of global annual turnover, whichever is higher, for organisations not meeting their deployer obligations.[2] For SMEs, the lower of the two applies. For a firm with 10 million euro in global turnover, that is still a potential fine of 300,000 euro for failing to maintain adequate documentation.
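The cap arithmetic described above is simple enough to sketch in a few lines. This is a hypothetical Python helper, not legal advice; the figures come from Article 99 as summarised here:

```python
def deployer_fine_cap(global_turnover_eur: float, is_sme: bool = True) -> float:
    """Upper limit on fines for breaching deployer obligations.

    Article 99 sets the cap at EUR 15 million or 3% of global annual
    turnover. For SMEs the lower of the two applies; for larger
    organisations, the higher.
    """
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# An SME with EUR 10 million global turnover faces a cap of EUR 300,000
print(deployer_fine_cap(10_000_000))  # 300000.0
```

For any SME with global turnover below 500 million euro, the 3% figure is the binding one, which is why the turnover-based example above is the relevant number for most Irish firms.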
What Article 26 Actually Requires of You
Article 26 of the EU AI Act sets out what organisations deploying high-risk AI systems must have in place.[1] Here is what it actually says, in plain terms.
- Use the AI system according to the provider's instructions. You need those instructions in writing, and you need to be able to show you have followed them. If you do not have them, ask your vendor for them today.
- Ensure human oversight is in place. The person or persons responsible must have the competence, training, and authority to step in if something goes wrong with the system's outputs.
- Manage the quality of data fed into the system. The data going in needs to be relevant and representative of the decisions the system is supporting.
- Keep operational logs for a minimum of six months. Financial institutions can integrate this with records already maintained under EU financial services law.
- Monitor the system's operation and report serious incidents to the provider and the Central Bank without undue delay.
- In certain circumstances, tell people when they are subject to a high-risk AI system. If a loan applicant's creditworthiness is assessed by an automated tool, they have a right to know. The precise scope of this duty depends on the use case and may interact with other obligations.
- Co-operate with the Central Bank in any action it takes in relation to the system.
One practical note for firms already operating under EU financial services law: Article 26 explicitly states that organisations subject to internal governance requirements under MiFID II, CRD, or DORA may satisfy the monitoring obligation by complying with those existing rules. The AI register does not replace your existing compliance framework. It builds on top of it.
In summary
Article 26 requires operational governance, not just documentation. Human oversight must be genuinely exercisable. Log retention must actually be in place. Under the agreed extension, these obligations apply from 2nd December 2027.
Building Your AI Register: A Practical Framework
An AI register does not require specialist software or a new compliance hire. For most Irish financial services SMEs, a well-structured, version-controlled spreadsheet is the right starting point, and it is sufficient to withstand initial regulatory scrutiny.
Pick one person (your compliance lead, head of operations, or risk manager) and set aside two hours for the initial inventory. The goal is not perfection. It is a defensible, documented starting point that shows the Central Bank your firm is actively governing its AI use.
The Core Fields
Each AI system in your firm needs its own entry covering the following:
| Field | What to Record | Example |
|---|---|---|
| System name | Exact name of the tool or model | CreditAssess Pro 3.0 |
| Vendor / provider | Who supplies or built the system | FinTech Vendor Ltd |
| Business purpose | What decision or task it supports | Consumer creditworthiness assessment |
| Department | Which team uses it | Credit Risk / Lending Operations |
| Risk classification | Minimal / Limited / High (per Annex III) | High-risk: Annex III, point 5(b) |
| Named overseer | Individual responsible for human oversight | Head of Credit Risk, [Name] |
| Log retention status | Are six-month logs being retained? | Yes, integrated with CRD documentation |
Additional Fields for High-Risk Systems
For any system classified as high-risk, track these as well:
- Provider's Declaration of Conformity received: yes/no (this is the vendor's formal documentation confirming their system meets EU AI Act requirements)
- Instructions for use received: yes/no, and date reviewed
- Affected persons notification process: is there a procedure in place to inform customers subject to AI-assisted decisions?
- Incident reporting contact: named contact at the vendor for malfunction or serious incident notification
- Fundamental Rights Impact Assessment required: yes/no, and if yes, completion status (this is a structured assessment of how the AI system could affect individuals' rights; Article 27 requires deployers to carry one out in certain circumstances)
- Last governance review date
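Taken together, the core fields and the high-risk additions map naturally onto a simple record structure. The sketch below is a hypothetical Python representation; the field names are illustrative choices, not terms prescribed by the Act, and a spreadsheet column per field works just as well:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRegisterEntry:
    # Core fields: one entry per AI system in use
    system_name: str                  # exact name of the tool, e.g. "CreditAssess Pro 3.0"
    vendor: str                       # who supplies or built the system
    business_purpose: str             # what decision or task it supports
    department: str                   # which team uses it
    risk_classification: str          # "minimal" / "limited" / "high" per Annex III
    annex_iii_point: Optional[str]    # e.g. "5(b)" when high-risk, else None
    named_overseer: str               # individual responsible for human oversight
    logs_retained: bool               # six-month operational logs in place?

    # Additional fields tracked for high-risk systems
    declaration_of_conformity_received: bool = False
    instructions_for_use_reviewed: Optional[str] = None  # date last reviewed
    notification_process_in_place: bool = False          # affected-persons procedure
    incident_reporting_contact: Optional[str] = None     # named vendor contact
    fria_required: Optional[bool] = None                 # Article 27 impact assessment
    last_governance_review: Optional[str] = None

    def is_high_risk(self) -> bool:
        return self.risk_classification == "high"
```

The point of the structure is that every high-risk field has an explicit empty state, so a half-completed entry is visibly half-completed rather than silently missing.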
Finding the AI in Your Organisation
Before you can classify anything, you need to know what AI tools are actually in use across your firm. This is not always as straightforward as it sounds.
AI capabilities are embedded in CRM platforms, underwriting tools, AML screening software, and customer-facing chatbots. Often nobody has formally assessed whether any of them carry EU AI Act obligations. In the financial services firms I work with, the discovery process almost always surfaces tools that were never identified as AI systems at all.
How to find them:
- Survey all departments. Send a short questionnaire to each team lead asking what AI-enabled software their team currently uses.
- Collect vendor documentation for each identified tool: what it does, what data it processes, and the provider's stated intended purpose.
- Classify each system against the Annex III categories. The EU AI Act has four tiers: prohibited AI, high-risk AI, limited-risk AI, and minimal-risk AI.
- Prioritise your effort on confirmed high-risk systems. Document the others for completeness.
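The classification step above lends itself to a first-pass lookup before a human reviews each result. The sketch below is purely illustrative: the keyword mapping is an assumption for demonstration, and as noted throughout, actual classification depends on the system's intended purpose and deployment context, not on a string match.

```python
# First-pass triage of identified tools against the Annex III categories
# discussed above. This is a triage aid, not a legal determination.
HIGH_RISK_KEYWORDS = {
    "creditworthiness": "Annex III, point 5(b)",
    "credit scoring": "Annex III, point 5(b)",
    "life insurance pricing": "Annex III, point 5(c)",
    "health insurance pricing": "Annex III, point 5(c)",
}

def triage(business_purpose: str) -> tuple:
    """Return a provisional (tier, basis) pair for a described use case."""
    purpose = business_purpose.lower()
    for keyword, basis in HIGH_RISK_KEYWORDS.items():
        if keyword in purpose:
            return ("high", basis)
    return ("needs manual review", "no keyword match against Annex III list")

print(triage("Consumer creditworthiness assessment"))
# ('high', 'Annex III, point 5(b)')
```

Anything the lookup cannot match falls into "needs manual review" rather than defaulting to minimal risk, which mirrors the conservative posture a regulator will expect.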
In summary
The starting point is not specialist software. It is a shared spreadsheet, two hours with your compliance lead, and a systematic survey of every AI tool in use across the firm.
The Fraud Detection Risk Worth Understanding
One of the most common classification errors I see in Irish financial services SMEs involves fraud detection and AML screening tools.
Annex III contains language that appears to exclude fraud detection AI from the high-risk category. Many compliance teams have read this to mean their transaction monitoring and AML screening tools have no high-risk obligations.
That reading is too broad.
That exception applies only to AI used solely for financial fraud detection. If your fraud detection or AML tool also profiles individual customers for risk scoring, feeds outputs into credit or underwriting decisions, or generates individual risk assessments that affect access to financial services, the high-risk classification is likely to apply — but classification always depends on the system's actual intended purpose and deployment context.
The Central Bank's 2026 report names fraud prevention and detection among the AI use cases already deployed across the sector.[4] I see this misclassification regularly in firms that bought operational efficiency software without checking whether the way they actually deploy it brings it into the high-risk category. The classification follows what the tool does within your firm, not what it says on the vendor's website.
What the Central Bank Will Look For
The Central Bank's 2026 Regulatory and Supervisory Outlook sets out supervisory standards for AI deployment that integrate governance into firms' mainstream prudential and conduct risk frameworks — not as a standalone technology question. The standards it identifies include clear accountability and responsibility for AI use, human oversight of AI-assisted decisions, proportionate risk management commensurate with the scale and sensitivity of each deployment, and documented processes for meeting EU AI Act transparency requirements.[4]
In summary
When Central Bank examiners arrive for a thematic AI review, the AI register is the first thing they will ask for. It is the foundation document for everything else.
In practical terms, examiners arriving for a thematic AI review in 2026 or 2027 will expect to see:
- A documented inventory of AI systems in use
- Named accountability for each system
- Evidence of human oversight procedures
- Log retention records for high-risk tools
- Ownership of AI governance sitting within the firm's risk management structure
The AI register is the foundation document that makes all of this demonstrable. Without it, there is nothing for the rest of your governance framework to anchor to.
Your Four-Week Plan
With the revised deadline now set for 2nd December 2027, there is time to build an AI register properly rather than reactively. The following phased approach is still the right starting sequence — beginning now means you arrive at December 2027 with governance that is operational and tested, not assembled at the last minute.
Week 1: Discover and document. Survey all departments. Build the initial register in a shared spreadsheet. Assign one named owner per AI system identified.
Week 2: Classify and prioritise. Apply the Annex III risk classification to every system. Identify confirmed high-risk tools. Contact vendors of high-risk systems to obtain their Declaration of Conformity and instructions for use.
Week 3: Close the compliance gaps. For each high-risk system, verify log retention is in place, assign a named human overseer, and draft or review your affected-persons notification process. Assess whether a Fundamental Rights Impact Assessment is required under Article 27.
Week 4 and beyond: Establish a governance rhythm. Set a monthly review cycle for the register through to the December 2027 deadline. Brief your board or senior management on your AI governance position. Integrate register management into your existing risk and compliance framework.
The Business Case Beyond Compliance
Building an AI register ahead of the December 2027 deadline is not a box-ticking exercise. Done properly, it creates three practical advantages for your firm.
Audit readiness. When the Central Bank asks about your AI governance (and the 2026 Outlook makes plain they will ask), a well-maintained register is your first line of evidence.
Vendor risk management. The discovery process routinely surfaces AI tools in use without proper vendor agreements, data processing addenda, or written instructions for use. Finding these gaps now costs far less than finding them during a regulatory inquiry.
Competitive differentiation. Enterprise clients and institutional counterparties are increasingly asking for AI governance evidence in procurement and due diligence processes. An Irish financial services firm that can demonstrate a structured, documented AI governance framework is in a different position to one that cannot.
