Written by Ana Yuxi Collado Lopez

Member of the Working Group on Digital Policies

Abstract  

This paper examines the use of AI-assisted systems in refugee status determination within the EU asylum framework. It argues that, although these technologies are presented as tools for efficiency and consistency, their use raises concerns regarding transparency, accountability and procedural fairness. Focusing on Article 47 of the EU Charter of Fundamental Rights, the General Data Protection Regulation and the EU AI Act, the paper assesses whether AI-mediated decision-support systems can remain compatible with effective judicial protection in asylum procedures. It concludes that AI may support administrative tasks, but only within strict legal and institutional safeguards that preserve individualised assessment and contestability.

Keywords: Artificial Intelligence (AI), refugee status determination, Article 47 Charter, EU asylum governance  

1. INTRODUCTION 

In January 2026, the European Asylum and Migration Management Strategy reaffirmed the Union’s goal to promote the potential of digitalisation and AI in migration and the asylum field through “efficient tools to improve the quality, consistency and timeliness of decision making, […] to enhance security, while improving services for people” (European Commission, 2026, p. 3). Similarly, the European Parliament highlighted how the application of AI will reduce pressure on national asylum systems and provide more consistent asylum decisions (Dumbrava, 2025). Recent EU evidence shows that AI technologies are already entering asylum administration. The European Union Agency for Asylum reports that seven Member States (Denmark, Germany, the Netherlands, Austria, Romania, Finland and Sweden, plus Switzerland) use language analysis tools, while six others are considering their deployment (EUAA, 2022). Germany’s Federal Office for Migration and Refugees (BAMF) conducted 7,808 automatic dialect analyses in the first half of 2022 alone (Biselli, 2022). These developments highlight the operational reality behind the Union’s digitalisation agenda in asylum governance. Nevertheless, the promotion of innovation should not come at the cost of legality. Although framed as complementary, AI systems can significantly influence the interpretation of material throughout the process. Biases or inaccuracies emerging from the ‘black box’ paradigm may produce erroneous outputs that jeopardise the fairness of the final decision. Together, these risks undermine applicants’ ability to obtain effective judicial review and a fair trial under Article 47 of the EU Charter of Fundamental Rights (hereinafter ‘the Charter’).

2. LITERATURE REVIEW 

The literature on AI in asylum and refugee status determination (RSD) has grown rapidly, but it remains divided in its assumptions and analytical focus. One body of literature presents AI as a solution to administrative overload, emphasising efficiency, consistency and improved case management. While this perspective highlights potential operational benefits, it tends to treat technological innovation as inherently beneficial and pays limited attention to how AI systems may reshape evidentiary practices and credibility assessment in asylum procedures. A second, more critical body of literature rejects the assumption of technological neutrality. It argues that AI systems are shaped by institutional priorities, data structures and existing power asymmetries. In the asylum context, this scholarship highlights risks related to opacity, automation bias and the reproduction of patterns of disbelief and exclusion. Concern is directed at tools such as language analysis, biometric identification, credibility scoring and predictive risk assessment, which may influence decision-making while remaining difficult for applicants to understand or contest.

A third line of scholarship focuses on the applicable legal framework, including the Charter, the General Data Protection Regulation and the EU AI Act. This literature recognises that refugee status determination engages core procedural guarantees, such as the right to be heard, the duty to state reasons, equality of arms and the right to an effective remedy under Article 47 of the Charter. At the same time, it shows that these legal frameworks provide only partial safeguards in practice, particularly where technical opacity and private control limit effective oversight and accountability.

Despite these contributions, the literature remains analytically fragmented. Efficiency-oriented and rights-based approaches often develop in parallel, without fully addressing how AI-assisted decision-support affects the practical exercise of procedural guarantees. In particular, the implications of AI for the effective remedy under Article 47 remain insufficiently theorised. This paper aims to address that gap by analysing AI as an operational layer within EU asylum governance, one whose compatibility with fundamental rights depends on transparency, contestability and meaningful human oversight.

3. METHODOLOGY 

The intersection of several legal fields – fundamental rights, data protection and AI-related law – showcases the complexity of the regulatory landscape. Hence, a layered approach is required: the core of the paper is doctrinal, starting with Article 47 of the Charter and analysing secondary legislation such as the General Data Protection Regulation (hereinafter GDPR), the Asylum Procedures Regulation and the EU AI Act. It examines how current norms (lex lata) reconcile technological integration with constitutional safeguards.

To complement the doctrinal analysis, the research incorporates selected normative policy analysis and empirical evidence. This framework captures the tension between formal legal requirements and ethical, human-centred guidelines, and questions whether the pursuit of faster processes overshadows procedural justice. The guiding research question is thus: to what extent is the use of AI systems in refugee status determination compatible with the right to an effective remedy under Article 47 of the Charter and the European data protection framework?

3.1 Limitations

While the research acknowledges the technical dimensions of machine learning, it does not provide a technical audit of algorithms. Instead, it focuses on the legal implications of their deployment. A primary limitation stems from AI opacity due to “trade secrets” and the non-public nature of many border technologies. Thus, the analysis relies on publicly available reports, impact assessments and secondary scholarly sources. Moreover, the research focuses on potential harms and structural gaps in current regulation rather than providing an exhaustive empirical survey of every AI system currently in use by Member States.

4. NORMATIVE ANALYSIS 

Since 2020, as highlighted by the European Parliament, the EU has experienced an increase in applications for international protection (Dumbrava, 2025, p. 7). In 2023, driven in part by the aftermath of the Covid-19 pandemic and Russia’s war of aggression against Ukraine, the number of first-time applicants reached 1.1 million. Although statistics show a subsequent decline – fewer than 1 million in 2024 and 669,365 in 2025 – procedures remain lengthy and costly, maintaining elevated pressure on reception systems (Eurostat, 2026). This demand has prompted the deployment of AI across different procedural stages (identification, translation, case management and decision-making). The use of these tools is thus double-edged, presenting both benefits and challenges for EU asylum governance, the human rights of vulnerable groups and their personal data. Before turning to the substance of the paper, it is necessary to present the relevant normative structure that has allowed AI integration in asylum procedures.

4.1 The Common European Asylum System

The Common European Asylum System is the legal and policy cornerstone governing asylum procedures within the EU. Although its texts do not directly refer to AI technologies, they set out the substantive and formal norms that must be considered (1). One central guarantee is the right to effective judicial protection enshrined in Article 47 of the Charter. In this context, it encompasses the right to challenge the decision ex nunc before a court or tribunal, ensuring a transparent and unbiased process. The obligation to carry out an individual assessment and the right to be heard are central elements of this protection.

In 2024, the New Pact on Migration and Asylum strengthened harmonisation by replacing important instruments, such as the Asylum Procedures Directive, with directly applicable regulations (the Asylum Procedures Regulation; European Parliament and Council of the European Union, 2024a). In addition to the Screening Regulation, the Pact seeks to unify procedures by implementing standardised stages, mandatory time limits and improved interoperability and data collection. However, this drive for digital transformation reveals a ‘trade-off effect’ between the Migration Pact and the AI Act. While the Pact – specifically through the Screening Regulation and the expansion of Eurodac – prioritises quantity by imposing mass data collection and enhanced interoperability for security purposes, Article 10 of the AI Act demands data quality. This creates a potential conflict, making it difficult to meet the strict data minimisation and accuracy principles established by the GDPR and the AI Act. The tension suggests that the pursuit of administrative efficiency may override the protection of individual rights.

4.2 Data protection

AI systems are trained on vast amounts of data, much of which identifies the person or renders them identifiable, making the GDPR applicable. The overarching purpose of the regulation is to protect the rights of individuals whose personal data are processed. Article 5 GDPR establishes a set of general principles for the processing of personal data that constrain how AI systems may be used in asylum procedures: data minimisation, purpose limitation and transparency in AI use. Refugee status determination involves the processing of special categories of data under Article 9, such as ethnicity, political opinions and religion. In the migration context, the processing of these traits underscores the vulnerability of asylum seekers should such data be leaked or used inappropriately, especially where no explicit and informed consent has been granted. This asymmetry between the data subject and the controller (or processor) calls for additional requirements to determine the lawfulness and proportionality of the collection.

In this regard, Article 22 restricts decisions based solely on automated processing that produce legal effects or similarly significant effects on the individual. Questions about algorithmic reliability arise where an asylum claim has been rejected, even when human oversight has taken place. Excessive reliance on such systems can amount to de facto automated decision-making: this occurs when caseworkers are unable to understand the reasoning behind an output or defer to it uncritically. Limitations are therefore necessary where contestability issues arise or procedural safeguards are infringed.

4.3 The EU AI Act

In June 2024, the European Parliament and the Council adopted the first ever European rules on trustworthy AI: Regulation (EU) 2024/1689 (European Parliament and Council of the European Union, 2024b, hereinafter AI Act). As described by Đuković (2024), it harmonises the rules concerning the development, marketing and deployment of AI within the EU internal market. Following a risk-based approach, technologies are classified into four categories (minimal risk, limited risk, high risk and unacceptable risk). The AI Act imposes on providers and deployers responsibility for risk management (Article 9), ensuring the high quality and reliability of datasets (Article 10) and transparency obligations (Article 13). According to Article 6(2) and point 7(c) of Annex III, AI systems used for asylum, migration and border control are classified as high risk. De Gregorio and Dunn (2022) argue that this categorisation focuses more on the existence of predetermined scenarios than on the intended use of the system in a specific context.

While Article 6(3) provides exemptions where the system does not materially influence the outcome of the decision, the profiling caveat ensures that RSD systems remain high risk. This triggers the meaningful human oversight enshrined in Article 14, which enables caseworkers to understand the outputs of the system and identify possible automation biases. In particular, Recital 60 emphasises that, due to the vulnerability of asylum applicants, AI systems should not override procedural guarantees nor lead to discriminatory results. This human-in-the-loop approach is a necessary condition for ensuring an effective remedy under Article 47 of the Charter. Furthermore, given the public nature of migration and asylum authorities, Article 27 requires a Fundamental Rights Impact Assessment (FRIA). However, the Article 6(3) exemption – further widened by the Digital Omnibus Package – creates a “backdoor” for privatisation. By allowing providers to self-classify systems as purely procedural, the Regulation risks exempting the very black-box tools that influence the applicant’s credibility assessment. As Qarssifi (2023) observes, migrants and EU citizens exist in a legal “parallel reality” under the AI Act, where migration-related AI systems benefit from exemptions that would not be available in other contexts.

5. CREDIBILITY AND RISK ASSESSMENT TOOLS 

The European Union uses AI technologies to evaluate the vulnerability of, or risk posed by, the individual, and deploys intelligence search engines to support such tasks (Dumbrava, 2025, pp. 7-8). Since the late 1980s, the debate over technological determinism has centred on the idea that machines are not neutral. From this perspective, Whelchel (1986) argues that technology cannot be considered “value free” because its design and deployment reflect “political and institutional interests” (pp. 3-4). The field of migration is no exception. Conceived as efficiency-driven, these technologies underscore the conflict between security-driven state priorities and the protection of the rights of vulnerable individuals. This cost-benefit calculus inevitably shapes the underlying principles and role of the technology. While digital evidence is often presented as “objective”, over-reliance on such data can undermine the subjective and epistemological dimensions of RSD. As Schittenhelm and Schneider (2017) suggest, the search for digital certainty may overlook the nuanced personal narratives that are central to a fair assessment (p. 1707).

To handle an applicant’s case and evidence, authorities must gather testimonies, reports and the relevant country of origin information (COI) concerning the socio-political context of the country of origin. On the one hand, procedurally, the use of AI mechanisms moves away from individualised assessment by classifying individuals through statistical profiling. On the other hand, it detects inconsistencies, retrieves COI and assists in handling evidence. The concern arises when this complementary role becomes authoritative without transparency or contestability. Substantively, such tools infer distinguishing traits through language detection mechanisms or body measurements, allowing a thorough digital examination (Alizadeh Westerling, 2022, p. 21). At EU level, systems such as the European Asylum Dactyloscopy Database (Eurodac) collect fingerprints and automated facial recognition data from refugees and irregular migrants to identify which Member State is responsible for handling the application. Member States are using automated AI systems as assistive tools to manage asylum cases efficiently and facilitate identification. Countries such as Germany and the Netherlands have already started to use language analysis to determine the origin of applicants.

Some scholars argue that the deployment of AI systems effectively reverses the burden of proof, a phenomenon that Ross (2020) identifies through the lens of the “proof paradox”. This shift creates a profound tension between a claimant’s “raw” personal testimony and the “clean” statistical data generated by technology, ultimately distorting the credibility assessment process (Alizadeh Westerling, 2022, p. 21). Hence, over-reliance on data obtained from AI systems – even if incorrect – combined with automation bias can effectively increase the burden of proof for the asylum seeker. As Palmiotto (2024) points out, this trend risks undervaluing the “benefit of the doubt” that is foundational to asylum law. While machine reasoning is based on an inductive model, asylum procedures rely on abductive reasoning – drawing on all available knowledge and facts, where all doubts are relevant – to determine international protection (Evans Cameron et al., 2021). Furthermore, in interplay with other factors, challenging the outcomes can be accompanied by a skewed judgement of the court. The tendency towards a prima facie ‘culture of disbelief’, as Souter (2011) coins it, prioritises statistical data over the asylum narrative. Instead of being treated primarily as rights holders presenting claims for protection, asylum seekers risk being transformed into data subjects whose credibility is assessed through algorithmic outputs. Hence the question no longer concerns ‘if’ AI can be applied, but rather ‘to what extent’ (eu-LISA, 2020). The gradual “datafication” of vulnerable individuals such as refugees fails to take into account the context-dependent elements of RSD, which machines often cannot comprehend. The framework underlying AI risks perpetuating epistemic violence that, together with the ‘black box’ and discrimination challenges, heightens the risks to effective judicial remedy.

6. ANALYSIS: COMPATIBILITY OF AI TECHNOLOGIES IN RSD WITH ARTICLE 47 OF THE CHARTER

To evaluate whether AI-assisted asylum procedures comply with the right to an effective remedy under the Charter of Fundamental Rights of the European Union, it is necessary to assess the procedural fairness of such systems. The analysis can be structured around three main elements: algorithmic opacity and transparency, technical reliability and the duty to state reasons. Although these factors are not formal legal tests, they represent essential conditions for ensuring that the administrative decision remains contestable.

6.1 Opacity and transparency

One of the main challenges of AI technologies is the lack of explainability embedded in machine learning systems. While an output is inferred, the line of argumentation behind it remains a mystery: “We can build these models… but we don’t know how they work” (Blume, 2017). Access to such information is often constrained by commercial interests or developers’ trade secrets, despite considerable financial investment in the AI field. In the case of RSD, the so-called ‘black box’ problem is especially concerning, since the failure to access information underpinning the final decision can hamper the right to a fair trial and subsequently impede the right to an effective remedy (Charter of Fundamental Rights of the European Union, 2012, art. 47). Such lack of transparency further impairs the principle of equality of arms, widening the asymmetric gap between migration authorities and asylum applicants, who are unable to challenge algorithmic evidence.
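To illustrate what post-hoc scrutiny of an opaque model could look like, the sketch below applies permutation importance – a standard auditing probe – to a hypothetical stand-in scoring function. Everything here (the model, the synthetic data, the features) is an illustrative assumption, not a depiction of any deployed system; it shows the kind of examination that trade secrets and restricted access currently preclude.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_model(X: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for an opaque scoring model (assumption).
    Its hidden logic weights feature 0 heavily; an auditor cannot see this."""
    return (0.8 * X[:, 0] + 0.2 * X[:, 1] > 0.5).astype(int)

def permutation_importance(model, X, y, n_repeats=20):
    """Mean accuracy drop when a feature is shuffled; a larger drop means
    the feature has more influence on the model's outputs."""
    baseline = (model(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # sever the link between feature j and the labels
            drops.append(baseline - (model(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

X = rng.random((500, 2))   # synthetic inputs (assumption)
y = black_box_model(X)     # labels the opaque model would assign
print(permutation_importance(black_box_model, X, y))
# Feature 0 shows a far larger accuracy drop, exposing its dominant role.
```

Even this minimal probe presupposes query access to the model and its inputs, which is precisely what commercial confidentiality tends to foreclose in practice.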

6.2 Right to be heard

Safeguards must not be purely substantive. They must also account for the procedural dimension, because asylum applicants must be able to understand the basis of the decision-making process and contest the evidence used against them. In this regard, transparency becomes instrumental to the right to be heard, which is recognised in Articles 41(2), 47(2) and 24 of the Charter and is reinforced by the Asylum Procedures Directive. As Reneman (2014) notes, fair asylum procedures require that applicants can effectively exercise their right to be heard and that authorities give adequate reasons for their decisions (p. 111). Where the reasoning underlying automated or AI-assisted assessments remains inaccessible, judicial oversight is weakened and the court’s ability to verify the factual and legal soundness of the decision may be compromised. The duty to state reasons requires a proper understanding and examination of the technology before a domestic court. In the case law of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR), effective judicial protection requires that the court be able to assess whether the evidence relied on is factually and legally accurate, reliable and consistent. That concern was explicitly reflected in Ligue des droits humains, where the CJEU observed that opacity in automated processing can prevent individuals from understanding how criteria operate and from deciding whether to challenge the decision. This raises the additional question of whether domestic courts can meaningfully review technical evidence without specialised expertise, and what procedural guarantees are needed to make that review effective.

6.3 Automation biases

Automation bias consists in the tendency to rely excessively on the output provided by AI technologies without adequate human verification. One of the main advantages of AI support lies in its capacity to prevent arbitrariness by scrutinising large amounts of data and mitigating systematic errors imperceptible to the human eye. Article 10 of the AI Act establishes that models used in high-risk AI systems must comply with data quality and governance requirements. If they are trained properly, their use could help prevent mistakes or inconsistencies by caseworkers.

Against this background, results may be flawed or reproduce human biases, reinforcing existing ones and generating new ones. For instance, the language biometrics assistance system (DIAS) used in Germany to determine the applicant’s country of origin showed significant margins of error in linguistic analysis, with failure rates reaching 15-20% for Arabic and 27% for Persian dialects (Dumbrava, 2025, p. 9). Moreover, automation biases not only carry important weight in the decision; biased outputs can also stem from imbalanced datasets (Kaur, 2024). Poor representation of minority classes while training the system can lead to discriminatory practices benefiting the majority group. Furthermore, given the predominance of a securitisation approach among states, technology is often deployed in ways that frame asylum seekers as threats rather than as individuals requiring protection. The case concerning the German Federal Office for Migration and Refugees (BAMF) illustrates how authorities declined to accept physical documents (tazkira, national ID or marriage certificate) and instead considered it necessary to search data on the applicant’s mobile phone. Reliance on such data not only negatively affects credibility assessment but can also contribute to return to third countries. Where an applicant’s life might be in danger, this entails a violation of human rights and of the principle of non-refoulement.
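The mechanism behind such skewed error rates can be made concrete with a minimal synthetic experiment. The sketch below is a deliberate simplification under invented assumptions – it models no real tool, and certainly not DIAS – but it shows how a classifier fitted to imbalanced ‘dialect’ data pushes its decision threshold toward the majority group, leaving the minority group to absorb most of the error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, overlapping score distributions for two dialect groups:
# majority dialect A and minority dialect B (heavy training imbalance).
n_a, n_b = 950, 50
scores = np.concatenate([rng.normal(0.0, 1.0, n_a), rng.normal(1.0, 1.0, n_b)])
labels = np.concatenate([np.zeros(n_a), np.ones(n_b)])

# A one-feature classifier picks the threshold minimising *overall* training error.
thresholds = np.linspace(-3, 4, 400)
errors = [np.mean((scores > t) != labels) for t in thresholds]
t_star = thresholds[int(np.argmin(errors))]

# On balanced test data, the minority dialect bears almost all of the error.
test_a = rng.normal(0.0, 1.0, 10_000)   # majority speakers (true label 0)
test_b = rng.normal(1.0, 1.0, 10_000)   # minority speakers (true label 1)
print(f"learned threshold: {t_star:.2f}")
print(f"error rate, majority dialect A: {np.mean(test_a > t_star):.1%}")
print(f"error rate, minority dialect B: {np.mean(test_b <= t_star):.1%}")
```

Rebalancing or reweighting the training data moves the threshold and redistributes the error, which is why the representativeness requirement in Article 10 of the AI Act has substantive rather than merely technical significance.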

6.4 Contestability and effective remedy

The compatibility of AI systems with Article 47 of the Charter and the data protection framework cannot be assessed through a binary lens. For effectiveness to be achieved, the individual must have the possibility to challenge the outcome. As seen in the BAMF mobile data extraction case, digital evidence collected through AI technologies can become a decisive element in the final determination of status. Formally, the applicant has the right to appeal, but this does not suffice where there is no knowledge of the system’s functioning. The issue is aggravated by the plurality of actors involved in border management. Although Member States remain legally responsible for RSD, AI infrastructure is outsourced to technology companies, which are prone to prioritise their economic and commercial interests over humanitarian considerations. By invoking trade secrets, they bypass transparency requirements and create the risk that data may be misused or processed for a purpose different from the initial one (‘function creep’). For providers, the linguistic ambiguity of the AI Act further widens their discretionary margin to define the parameters of the algorithms. For instance, Article 10(3) requires “appropriate statistical properties” and “appropriate levels of accuracy” without defining what “appropriate” means. Together with the Digital Omnibus package, the result is that private actors are not merely complying with the law – they are actively shaping the standards of evidence in the asylum process.

As a result, AI could shift discretionary power from decision-makers to system deployers, altering not only how decisions are made but also the reality of asylum practice. In light of the recent reform of Return Directive 2008/115/EC, the need to assess AI use in RSD through the lens of Article 47 of the Charter becomes more acute. Given its influence on credibility assessments, risk profiling and the evidentiary basis for refusal, AI outputs carry immediate and serious consequences that can lead to an applicant’s return. Effective remedy therefore relies on preserving an individualised assessment – in which the applicant is able to understand and contest the decision – together with procedural fairness and judicial scrutiny (European Commission, n.d.).

7. DATA PROTECTION 

Under Articles 7 and 8 of the Charter, individuals enjoy the right to respect for private life and the protection of personal data. These guarantees are particularly relevant in asylum procedures, where authorities process extensive amounts of personal information relating to an applicant’s identity, background and personal circumstances. However, these rights are not absolute. Limitations may be justified where there is a legitimate objective and compliance with the principles of necessity and proportionality (the Charter, 2012, art. 52).

European law implements these requirements through the general principles of data processing enshrined in Article 5 GDPR: lawfulness, purpose limitation, data minimisation and accuracy, storage limitation and accountability. Furthermore, collection usually involves the processing of special categories of data, such as ethnic origin, political opinions and religious or philosophical beliefs (Article 9 GDPR). If these data are misused or inappropriately disclosed, asylum seekers face serious risks; the GDPR therefore imposes additional safeguards and stricter requirements. The personal data gathered will shape the refugee status determination and credibility assessment. Public authorities find themselves in a dominant position, given their awareness of the applicant’s vulnerability. In practice, there is little margin for refusal, and the processing of personal data becomes a precondition for progressing with the application.

7.1 Privacy

In the asylum context, an intrusive and unlawful collection of personal data may also have procedural implications. Interfering with personal rights can indirectly affect the possibility of challenging the status determination decision, especially where the collection and processing of the data were not conducted in a transparent manner. The BAMF case illustrates how authorities conducted an extensive digital search in order to verify the applicant’s identity. The court found the measure disproportionate and therefore unlawful, noting that such intrusive data collection should be used only as a measure of last resort. Moreover, the necessity assessment was conducted ex post, once the data had already been obtained, prioritising data collection over privacy. In this instance, the authorities opted for a digital analysis despite the existence of alternative documents capable of identifying the person. Such indiscriminate collection of mobile data raises concerns about data minimisation and casts doubt on whether there was a lawful basis, such as valid consent or legitimate interest (Article 6 GDPR). The absence of information and transparency also creates tension with Article 15 GDPR, affecting the applicant’s procedural rights. Although the judgment focused primarily on data protection and privacy compliance, it illustrates the concerns raised by the increasing digitalisation of asylum procedures and the absence of ex ante human oversight.

7.2 Automated profiling

The automation of tasks that potentially influence the final decision also sits uneasily with the prohibition on automated decision-making under Article 22 GDPR. The article provides that individuals have the right not to be subject to a decision based solely on automated processing where it produces legal or similarly significant effects. Although paragraph 2 provides three exceptions (contractual necessity, authorisation by Union or Member State law, or the explicit consent of the data subject), none seems applicable in the context of refugee status determination. Considering the complexity of the procedures, complete and exclusive reliance on automated methods is non-viable under current European legislation and would infringe the GDPR.

8. DISCUSSION 

The analysis shows that the use of AI in refugee status determination creates a structural tension between administrative efficiency and procedural fairness. AI systems may support reception, case management and evidence handling, but their deployment becomes legally problematic where they shape credibility assessments, the weighting of evidence or the final decision without sufficient transparency or human scrutiny. The central issue is therefore not whether AI can be used in RSD, but whether it can be deployed in a manner that preserves effective judicial protection under Article 47 of the Charter of Fundamental Rights of the European Union.

This tension is intensified by the opacity of algorithmic systems and by asymmetries in access to information between applicants, administrations and private providers. Where trade secrecy limits disclosure and providers shape the technical parameters of decision-support systems, asylum seekers may be unable to understand or contest the basis of the decision. This weakens equality of arms and reduces the practical effectiveness of the right to appeal. Judicial review is therefore insufficient if the underlying reasoning remains inaccessible or unverifiable.

The discussion further confirms that data quality cannot be treated as a merely technical concern. In asylum procedures, AI systems process sensitive personal data and often rely on incomplete or unbalanced datasets. When such datasets are used for language analysis, risk assessment or credibility evaluation, they may produce a distorted evidentiary framework that disadvantages vulnerable applicants. While AI may improve consistency and reduce certain forms of arbitrariness, its reliability remains contingent on robust governance, effective oversight and a clear distinction between assistive functions and decisional authority.

More broadly, the paper situates AI within the wider governance of migration control. These technologies are embedded in institutional frameworks that prioritise speed, security, and interoperability. This context shapes both the design and the function of AI systems, which are increasingly used not only to assist decision-making but to structure it. The resulting shift in discretionary power from individual caseworkers to algorithmic systems and their deployers complicates the realisation of procedural guarantees and makes the effective exercise of rights under Article 47 more difficult in practice, especially where AI-supported assessments inform subsequent return or removal decisions.

A key limitation of this analysis lies in the restricted public access to many AI systems used in border and asylum management. Their opacity limits direct technical scrutiny, so the paper relies on publicly available reports, case law and secondary scholarship. Even so, the legal implications remain clear: AI in refugee status determination is compatible with EU fundamental rights only where it remains contestable, explainable and subject to meaningful human oversight. 

9. POLICY RECOMMENDATIONS 

To address the structural and procedural concerns identified in this paper, policy intervention must move beyond formal compliance and towards the operationalisation of accountability in the deployment of AI systems in refugee status determination. Accountability should be understood as a continuous process combining ex ante safeguards with ex post review, ensuring that the use of AI does not compromise the right to an effective remedy under Article 47 of the Charter of Fundamental Rights of the European Union.

First, the European Commission should establish a delegated pre-approval regime for AI systems used in refugee status determination that rely on the exemption under Article 6(3) of the EU AI Act. The central difficulty is that systems presented as merely auxiliary may, in practice, influence credibility assessments, evidentiary weighting and ultimately the outcome of the decision. To prevent strategic self-classification, national data protection authorities, in coordination with the relevant supervisory bodies, should verify prior to deployment whether the system fulfils three cumulative conditions: it performs a strictly procedural function, it has no material influence on the decision and it does not rely on profiling or special categories of data within the meaning of Article 9(1) GDPR. This mechanism would ensure that systems with substantive impact remain subject to high-risk obligations.

Second, the evidentiary role of AI-generated outputs must be clearly limited. In refugee status determination, reliance on algorithmic tools in credibility assessment risks displacing the individualised and abductive reasoning that asylum adjudication requires. The European Union Agency for Asylum should therefore issue practical guidance clarifying that AI outputs cannot constitute decisive evidence unless independently verified by a human decision-maker. Caseworkers should systematically cross-check algorithmic outputs against country of origin information, documentary evidence and the applicant’s testimony. Compliance should remain subject to supervisory review and departures from these requirements should be capable of affecting the legality of the administrative decision in subsequent judicial proceedings.
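One way such guidance could be operationalised in a case-management workflow is sketched below. The names and structures are hypothetical assumptions made for illustration, not a real system’s API; the point is simply that an AI output is recorded as provisional and becomes usable as evidence only once a caseworker logs verification against at least one non-algorithmic source.

```python
from dataclasses import dataclass, field

# Sources that count as independent, non-algorithmic verification (assumption).
NON_ALGORITHMIC_SOURCES = {"COI report", "documentary evidence", "applicant testimony"}

@dataclass
class AiFinding:
    claim: str                  # e.g. "likely region of origin: X" (hypothetical)
    confidence: float           # the system's self-reported confidence
    verified_against: list[str] = field(default_factory=list)

def usable_as_evidence(finding: AiFinding) -> bool:
    """An AI output may inform the decision only after human cross-checking
    against at least one non-algorithmic source."""
    return any(src in NON_ALGORITHMIC_SOURCES for src in finding.verified_against)

finding = AiFinding(claim="likely region of origin: X", confidence=0.82)
assert not usable_as_evidence(finding)                 # provisional until verified
finding.verified_against.append("applicant testimony")
assert usable_as_evidence(finding)                     # eligible, still reviewable
```

Recording the verification trail in this way would also serve the duty to state reasons: a reviewing court could see whether the algorithmic output was decisive and what it was checked against.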

Third, the quality of the data used to train and validate AI systems must be treated as a substantive legal safeguard. Article 10 of the EU AI Act requires datasets to be relevant, representative and sufficiently accurate, yet these standards remain under-specified in practice, particularly where sensitive data under Article 9 GDPR are processed. The Commission should therefore support the development of harmonised technical standards clarifying how dataset quality is assessed in the asylum context. These standards should include context-sensitive validation, regular statistical testing and representativeness checks that reflect the diversity and variability of applicant populations. The objective is not the imposition of rigid numerical thresholds, but the prevention of structural bias and evidentiary distortion.
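What a representativeness check might look like in practice can be sketched with a standard chi-square goodness-of-fit test. The group labels and counts below are entirely hypothetical assumptions; the test simply asks whether a training set’s composition deviates significantly from a reference applicant population.

```python
from scipy.stats import chisquare

# Hypothetical training-set composition vs. the applicant population it should mirror.
dataset_counts = {"Arabic": 620, "Persian": 90, "Kurdish": 40, "Tigrinya": 50}
population_share = {"Arabic": 0.45, "Persian": 0.20, "Kurdish": 0.15, "Tigrinya": 0.20}

observed = list(dataset_counts.values())
total = sum(observed)
expected = [population_share[group] * total for group in dataset_counts]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2g}")
if p_value < 0.05:
    print("Dataset composition deviates significantly from the applicant population.")
```

A significant deviation would not in itself establish bias, but it would trigger the kind of context-sensitive validation the recommendation envisages before a system is deployed or retained.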

Fourth, effective implementation depends on a coherent institutional framework. Oversight should be distributed across the AI Office, national supervisory authorities and data protection bodies in accordance with their respective competences. The AI Office should coordinate transparency obligations and the registration of systems within the EU database, while national authorities should conduct periodic assessments of system performance and dataset quality. The European Data Protection Board may contribute where deployment raises issues under data protection law, but it should not be treated as the central authority for AI governance. At the judicial level, non-compliance with these safeguards should inform the assessment of whether the applicant has had access to an effective remedy, enabling courts to apply heightened scrutiny where procedural guarantees are weakened. 

Finally, human oversight cannot remain merely formal. It requires both technical and institutional capacity. Asylum officers, NGO caseworkers and legal practitioners should receive mandatory and specialised training on the interpretability, limitations and risks of AI systems used in refugee status determination. Human oversight is meaningful only where decision-makers are capable of identifying automation bias, critically assessing algorithmic outputs and recognising when such outputs lack evidentiary reliability.

Taken together, these recommendations aim to reconcile the efficiency gains associated with AI with the procedural guarantees required under EU law. The compatibility of AI with refugee status determination is therefore conditional, not automatic. It depends on the existence of robust safeguards, meaningful oversight and continuous evaluation of its impact on fundamental rights.

10. CONCLUSION 

The incorporation of AI predictive systems into refugee status determination reflects the European Union’s response to rising application volumes and its broader digital transformation agenda. Yet it also reveals a constitutional tension between technological efficiency and the protection of fundamental rights in a policy field increasingly shaped by a security-driven logic. AI systems in this context are not unlawful as such. Their compatibility with Article 47 of the Charter depends on whether they remain subject to transparency, reliability and genuine contestability, together with meaningful human oversight. Where these conditions are absent, AI risks weakening procedural fairness and the practical effectiveness of judicial protection. The legality of AI in refugee status determination therefore turns not on the technology itself, but on the use and conditions under which it is deployed and the safeguards that accompany it. If the Union is to preserve individualised assessment and effective remedy, AI must remain an assistive tool, not a substitute for legal judgment. As noted by the European Commission and Deloitte in their joint report (2020, p. 25), the automation of repetitive tasks such as data registration would allow humans to focus on the ‘higher value’ tasks of risk and credibility assessment.

Note

  1. For the purpose of this paper, an AI system shall be understood in accordance with Article 3 of the EU Artificial Intelligence Act (2024) as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
  2. Resolution 78/265, adopted by the General Assembly of the United Nations (UN) on 21 March 2024, emphasised that “human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence”.

BIBLIOGRAPHY   

Alizadeh Westerling, F. (2022). Technology-related risks to the right to asylum: Epistemic vulnerability production in automated credibility assessment. European Journal of Law and Technology, 13(3).

Biselli, A. (2022, March 11). BAMF weitet automatische Sprachanalyse aus [BAMF expands automatic language analysis]. Netzpolitik.org. https://netzpolitik.org/2022/asylverfahren-bamf-weitet-automatische-sprachanalyse-aus/

Blume, B. (2017, May 22). We can build these models but we don’t know how they work. Medium. https://benb.medium.com/we-can-build-these-models-but-we-dont-know-how-they-work-ec1e2e295739

Court of Justice of the European Union. (2022, June 21). Ligue des droits humains v. Conseil des ministres, Case C-817/19, ECLI:EU:C:2022:491. https://infocuria.curia.europa.eu/tabs/affair?sort=AFF_NUM-DESC&searchTerm=%22C-817%2F19%22&publishedId=C-817%2F19

De Gregorio, G., & Dunn, P. (2022). The European risk-based approaches: Connecting constitutional dots in the digital age. Common Market Law Review, 59(2), 473–500. https://doi.org/10.2139/ssrn.4071437

Dumbrava, C. (2025). Artificial intelligence in asylum procedures in the EU (Briefing No. PE 775.861). European Parliament.

Đuković, M. (2024). Algorithmic risk in EU migration & asylum governance: Reconciling the EU AI Act and the Council of Europe Framework Convention (AFAR Policy Brief). Hertie School, Centre for Fundamental Rights.

eu-LISA. (2020). Artificial intelligence in the operational management of large-scale IT systems. https://www.eulisa.europa.eu/sites/default/files/documents/AI%20in%20the%20OM%20of%20Large-scale%20IT%20Systems.pdf

European Commission. (2026, January 29). Communication from the Commission to the European Parliament and the Council: European asylum and migration management strategy. https://home-affairs.ec.europa.eu/document/download/ce0d294e-5dd9-4e2a-bf68-53d9d16fc95a

European Commission. (n.d.). An effective, firm and fair EU return and readmission policy. https://home-affairs.ec.europa.eu/policies/migration-and-asylum/irregular-migration-and-return/effective-firm-and-fair-eu-return-and-readmission-policy_en

European Commission, Directorate-General for Migration and Home Affairs, & Deloitte. (2020). Opportunities and challenges for the use of artificial intelligence in border control, migration and security. Publications Office of the European Union. https://data.europa.eu/doi/10.2837/923610

European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88. https://eur-lex.europa.eu/eli/reg/2016/679/oj

European Parliament and Council of the European Union. (2024a). Regulation (EU) 2024/1348 of 14 May 2024 establishing a common procedure for international protection in the Union and repealing Directive 2013/32/EU. Official Journal of the European Union, L 2024/1348. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1348

European Parliament and Council of the European Union. (2024b). Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. Official Journal of the European Union, L 2024/1689. https://eur-lex.europa.eu/eli/reg/2024/1689/oj

European Union Agency for Asylum. (2022). Study on language assessment for determination of origin: Executive summary. Publications Office of the European Union. https://www.euaa.europa.eu/sites/default/files/publications/2022-09/Study_on_Language_Assessment_for_Determination_of_Origin_Executive_Summary.pdf

Eurostat. (2026). Asylum applicants by type, citizenship, age and sex – annual aggregated data (migr_asyappctza) [Data set]. European Commission. https://ec.europa.eu/eurostat/databrowser/view/migr_asyappctza/

Evans Cameron, H., Goldfarb, A., & Morris, L. (2021). Artificial intelligence for a reduction of false denials in refugee claims. https://static1.squarespace.com/static/5f03515f47274a7fa3017d54/t/60631002ddb75c5405ca0484/1617104899219/AI+in+Refugee+Law_final.pdf

Kaur, G. (2024, September 11). Class imbalance in machine learning. Train in Data. https://www.blog.trainindata.com/class-imbalance-in-machine-learning/

Palmiotto, F. (2024). Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges. Computer Law & Security Review, 55, 106065. https://doi.org/10.1016/j.clsr.2024.106065

Reneman, M. (2014). EU asylum procedures and the right to an effective remedy. Hart Publishing.

Ross, L. D. (2020). Recent work on the proof paradox. Philosophy Compass, 15(6). https://doi.org/10.1111/phc3.12667

Schaefer, P., & Dalla Vecchia, M. (2026, February 5). At Europe’s borders, AI is testing the limits. The Parliament Magazine. https://www.theparliamentmagazine.eu/news/article/at-europes-borders-ai-is-testing-the-limits-of-eu-rights

Schittenhelm, K., & Schneider, S. (2017). Official standards and local knowledge in asylum procedures: Decision-making in Germany’s asylum system. Journal of Ethnic and Migration Studies, 43(10), 1696–1713. https://doi.org/10.1080/1369183X.2017.1293592

Souter, J. (2011). A culture of disbelief or denial? Critiquing refugee status determination in the United Kingdom. Oxford Monitor of Forced Migration, 1(1), 38–54. https://www.rsc.ox.ac.uk/files/files-1/wp102-culture-of-disbelief-2014.pdf

Whelchel, R. J. (1986). Is technology neutral? IEEE Technology and Society Magazine, 5(4), 3–8. https://doi.org/10.1109/MTAS.1986.5010049
