The use of artificial intelligence in the administration of justice is a complex and sensitive regulatory reality. Regulation (EU) 2024/1689 (the AI Act) and Regulation (EU) 2016/679 (the GDPR) together form a demanding framework that conditions how AI can be developed and used in European judicial systems. The AI Act, for its part, expressly classifies AI systems intended for the administration of justice as high-risk systems.
Point 8(a) of Annex III of the AI Act is unequivocal: systems intended to be used by a judicial authority, or on its behalf, to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts, including systems used in alternative dispute resolution, are high-risk. Recital 61 reinforces this position: AI may support the decision-making power of judges, but the final decision must always remain a human act.
This classification imposes strict obligations: risk management systems (art. 9), quality of training data (art. 10), technical documentation (art. 11), transparency (art. 13) and effective human oversight (art. 14). Any system that assists a court in case-law research or evidentiary analysis must therefore undergo a conformity assessment before it is put into use.
To these requirements the GDPR adds its own discipline. The processing of personal data in a judicial context, which often involves data relating to criminal convictions (art. 10 GDPR), is subject to the international transfer regime of Chapter V. Transfers to third countries are lawful only on the basis of an adequacy decision by the Commission (art. 45), in the presence of appropriate safeguards such as standard contractual clauses (art. 46), or in the situations exhaustively listed in art. 49: the explicit consent of the data subject, contractual necessity, important reasons of public interest, the establishment, exercise or defence of legal claims, the protection of vital interests, or transfers from public registers within legal limits.
Article 48, in turn, makes clear that decisions of third-country authorities do not in themselves constitute a valid basis for transfers: such a decision may be recognised or enforced in the Union only if it is based on an international agreement in force between that third country and the Union or a Member State, as happens, for example, with mutual legal assistance instruments.
It is at this regulatory intersection that the issue of data sovereignty emerges. When a European court uses an AI system whose processing takes place on servers outside the EEA, each interaction may constitute an international data transfer subject to the GDPR. The sensitivity of judicial data, which touches on fundamental and property rights, makes the issue particularly delicate.
Taken together, these requirements make it practically inevitable that AI multinationals wishing to operate in the European justice sector will have to locate their data-processing infrastructure in the territory of the user States or, at the very least, within the EEA. Digital sovereignty, the technological translation of the principle of jurisdictional sovereignty, demands nothing less.
The challenge for the Member States, and for Portugal in particular, is twofold: to harness the transformative potential of AI while retaining control over the data that feeds a sovereign function par excellence, the administration of justice.
