A Fundamental Rights Impact Assessment (FRIA) is a structured governance document required under Article 27 of the EU AI Act.[1] It must be completed before deploying certain high-risk AI systems. The obligation applies to deployers (organisations that use an AI system in their own operations) rather than to providers (the organisations that built the system or placed it on the market). A FRIA is not the same as a data protection impact assessment (DPIA), and completing a DPIA does not satisfy this requirement.
Fundamental rights, in this context, are the rights set out in the EU Charter of Fundamental Rights.[3] These include the rights to human dignity, non-discrimination, privacy, a fair trial, access to social security, and an effective remedy. They are broader than data protection rights. A FRIA assesses whether an AI system could put any of these rights at risk for the people it affects.
This article explains what a FRIA covers, which organisations are required to complete one, what the process involves, and how it fits with other compliance obligations.
What does Article 27 of the EU AI Act require?
Article 27 requires deployers of high-risk AI systems to carry out a Fundamental Rights Impact Assessment before putting the system into use. The assessment must document the deployment context, identify which fundamental rights are potentially affected, and record the measures taken to address those risks.
The requirement applies to deployers, not providers. If your organisation is integrating or operating a high-risk AI system rather than building one for sale, this obligation falls on you.
Annex III is the EU AI Act's list of high-risk AI categories. It includes AI systems used in employment decisions, access to education, credit and insurance assessments, law enforcement, border control, and the administration of justice. The categories are specific. Not every AI system used in these sectors will qualify as high-risk, and Article 27 does not apply to every Annex III category: safety components in critical infrastructure, classified under Annex III point 2, are excluded from its scope. Classification depends on the function the system performs and the role it plays in the decision-making process.
In summary
Article 27 requires deployers of high-risk AI systems to assess and document the impact on people's fundamental rights before the system goes live. The obligation sits with the organisation using the system, not the organisation that built it.
Who is required to complete a FRIA?
The obligation applies to deployers that are bodies governed by public law, or private entities providing public services. It does not apply to every private commercial operator that uses a high-risk AI system.
Bodies governed by public law include government departments, public health bodies, courts, and publicly funded universities. Private entities providing public services would include, for example, private operators contracted to deliver welfare services, immigration processing, or other public-facing functions on behalf of a state body.
For Irish financial services firms, health insurers, and HR technology providers, two questions determine whether a FRIA is required. First: is the organisation acting in a public service capacity? Second: does its AI deployment fall into one of the high-risk categories in Annex III?
Some Annex III categories trigger the obligation regardless of the public or private distinction. Deployers of systems in Annex III points 5(b) and 5(c) are required to complete a FRIA whether or not they are acting in a public service capacity. Point 5(b) covers AI used to evaluate creditworthiness or establish credit scores. Point 5(c) covers AI used for risk assessment and pricing in life and health insurance. For Irish financial services firms and health insurers, these two categories are the most practical trigger to check against. Employment and worker-management systems sit in Annex III point 4, so for HR technology deployments the public service test still decides whether a FRIA is required.
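To make the scope test concrete, the decision logic can be sketched as a short function. This is an illustrative first-pass simplification, not legal advice: the names (`fria_required`, `annex_iii_point`) are our own, and real scoping requires a full Annex III classification exercise.

```python
# Illustrative first-pass sketch of the Article 27 scope test (not legal advice).
# All names are hypothetical; real scoping needs a full Annex III review.

CRITICAL_INFRASTRUCTURE = "2"        # Annex III point 2 is carved out of Article 27
ALWAYS_IN_SCOPE = {"5(b)", "5(c)"}   # credit scoring; life/health insurance pricing

def fria_required(is_public_body: bool,
                  provides_public_services: bool,
                  annex_iii_point: str | None) -> bool:
    """Rough check of whether the Article 27 FRIA obligation applies."""
    if annex_iii_point is None:                      # not high-risk under Annex III
        return False
    if annex_iii_point == CRITICAL_INFRASTRUCTURE:   # safety components excluded
        return False
    if annex_iii_point in ALWAYS_IN_SCOPE:           # applies regardless of public role
        return True
    return is_public_body or provides_public_services

# A private insurer pricing life cover (point 5(c)) is in scope despite having
# no public role; a private employer using a hiring tool (point 4) is not.
print(fria_required(False, False, "5(c)"))  # True
print(fria_required(False, False, "4"))     # False
```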
In summary
The FRIA obligation targets public-sector deployments and private operators acting in a public service role. It does not apply automatically to all commercial AI users.
What does a FRIA need to cover?
The regulation does not prescribe a fixed template, but it does define the required content. A FRIA must cover the following areas:
- The intended purpose of the AI system and the specific deployment context
- The period and frequency of intended use
- A description of the population or groups likely to be affected
- An identification of the fundamental rights that could be at risk
- An assessment of the likelihood and severity of adverse impact on those rights[1]
- The human oversight measures required by the system's instructions for use
- A description of the measures in place to mitigate each identified risk
The following rights must each be considered where relevant to the specific deployment and the people it affects:
- The right to human dignity
- The right to non-discrimination
- The right to a fair trial
- The right to access social security and essential services
- The right to an effective remedy when an AI-assisted decision causes harm
- The right to privacy and data protection (in addition to, not instead of, the GDPR obligations)
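Teams that want a working structure for the assessment can mirror the two lists above in a simple record. This is a sketch only: the field names are ours, not an official template, and the regulation does not prescribe one.

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """Working checklist mirroring the Article 27 content areas above.

    Field names are illustrative, not an official template.
    """
    intended_purpose: str                    # purpose and specific deployment context
    period_and_frequency_of_use: str         # how long and how often the system runs
    affected_groups: list[str]               # population or groups likely affected
    rights_at_risk: list[str]                # Charter rights potentially engaged
    likelihood_and_severity: dict[str, str]  # assessment per right at risk
    human_oversight_measures: list[str]      # per the system's instructions for use
    mitigation_measures: dict[str, str] = field(default_factory=dict)  # per risk
```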
In summary
A FRIA requires organisations to think systematically about who is affected by the AI system and what could go wrong for them, in terms that go beyond data privacy.
How is a FRIA different from a DPIA?
A Data Protection Impact Assessment under GDPR Article 35 addresses risks to personal data and the rights of data subjects within the data protection framework. A FRIA addresses fundamental rights more broadly, including rights that have nothing to do with personal data.
The two assessments are not interchangeable. An organisation may need both, and completing one does not satisfy the other. Where a deployment involves personal data and also falls under Article 27, both assessments will be required.
That said, a FRIA can reference a completed DPIA to avoid duplicating the analysis of data-related risks. The FRIA then adds the layer of analysis that the DPIA does not cover. This includes rights such as access to justice, non-discrimination on protected grounds, and the right to an effective remedy when AI-assisted decisions cause harm.
In summary
A DPIA covers data risks under GDPR. A FRIA covers fundamental rights broadly under the EU AI Act. Both may be required for the same deployment.
When must a FRIA be completed?
The EU AI Act is clear that a FRIA must be completed before the system is put into service or use. Retrospective completion is not permitted.
For systems already in deployment, the EU AI Act's transition rules apply. Under Article 111, the Act's obligations do not apply to a high-risk AI system already in use before 2nd August 2026 unless a significant change is made to its design after that date. Where no significant change is made, high-risk systems intended for use by public authorities must nonetheless be brought into compliance by 2nd August 2030. That period is not open-ended, and organisations should not treat it as indefinite cover for inaction.
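The timeline logic can be sketched as a small date check. This is an illustrative simplification of the transition rules described above, not legal advice; the function and constant names are our own, and it assumes a deployer already within Article 27's scope.

```python
from datetime import date

APPLICATION_DATE = date(2026, 8, 2)   # Article 27 applies from this date
LEGACY_DEADLINE = date(2030, 8, 2)    # legacy systems used by public authorities

def fria_trigger_date(put_into_service: date,
                      significant_change: date | None = None) -> date:
    """Illustrative reading of the transition rules for an in-scope deployer.

    Returns the date before which the FRIA must be completed.
    """
    if put_into_service >= APPLICATION_DATE:
        return put_into_service        # must be completed before first use
    if significant_change and significant_change >= APPLICATION_DATE:
        return significant_change      # the change brings the system into scope
    return LEGACY_DEADLINE             # public-authority legacy deadline

# A system first deployed in 2027 needs the FRIA before go-live; one running
# since 2025 with no significant change rides the legacy rule until 2030.
print(fria_trigger_date(date(2027, 3, 1)))  # 2027-03-01
print(fria_trigger_date(date(2025, 1, 1)))  # 2030-08-02
```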
For new deployments from August 2026 onwards, the position is clear.[2] Any in-scope deployer putting a high-risk AI system into service after that date must have completed a FRIA beforehand.
In summary
The FRIA must be completed before deployment. It cannot be done after the system is already in use.
Key facts summary
- A FRIA is required under Article 27 of the EU AI Act for deployers of high-risk AI systems listed in Annex III (the Act's list of high-risk AI categories).
- The obligation applies to public bodies and private entities providing public services. It does not apply automatically to all commercial deployers.
- A FRIA is not a DPIA. It covers fundamental rights broadly, not just data protection. Both may be required for the same deployment.
- The assessment must identify the population affected, the rights at risk, the severity of potential harm, and the mitigation measures in place.
- Completion must happen before the AI system is deployed. Retrospective completion is not compliant.
- The completed FRIA must be notified to the relevant national market surveillance authority.
How to assess whether your organisation needs a FRIA
For any Irish organisation, two questions come first: do its AI deployments fall into a high-risk Annex III category, and is it acting in a public service capacity (or deploying a system covered by points 5(b) or 5(c))? Together, those questions determine whether a FRIA is required.
For a broader picture of what the EU AI Act requires from Irish businesses and when, see what the August 2026 EU AI Act deadline means for Irish SMEs. If your AI tools also raise data residency questions, see EU data residency and AI tools: what every Irish SME needs to know.
A Clear Gate Systems EU AI Act Compliance Audit identifies which of your AI systems are in scope, whether a FRIA is required, and what the assessment needs to cover for your specific deployment context.
Book a discovery call to discuss what this would involve for your organisation.
