JUSTICE AND THE PROPOSAL FOR REGULATION OF ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) plays an increasingly prominent role in society. On June 14th, 2023, the European Parliament approved its first amendments to the proposed Artificial Intelligence Regulation, highlighting the matters in which it wants a total ban on the use of artificial intelligence. In light of this, trainee lawyer Carolina Martins and José Pinto de Almeida, lawyer and partner at our firm, turned to the question: what scope will these prohibitions have in the field of Justice?

Artificial intelligence (AI) is playing an increasingly important role in our daily lives, and its effects are naturally beginning to be felt in the context of law and justice.

The European Union (EU) has been active in regulating and guiding AI to ensure that its development and use occur in a manner that is ethical, safe, transparent, and beneficial to citizens and society at large. The EU's approach to AI aims to balance the promotion of innovation and economic growth with the protection of the fundamental values and rights of European citizens.

In April 2021, the European Commission presented a proposal for an AI Regulation, which is now under discussion and review by the European Parliament and the Council of the European Union. There have been significant debates and adjustments to the text in order to address specific concerns and balance innovation with the protection of citizens' rights.

The most recent stage of this legislative process took place on June 14th, 2023, when the European Parliament approved its amendments to the proposed regulation, highlighting the matters in which it wants a total ban on the use of AI:

  • Real-time remote biometric identification systems in publicly accessible spaces;
  • Deferred (ex post) remote biometric identification systems, with the sole exception of the prosecution of serious crimes by law enforcement authorities, and only with prior judicial authorization;
  • Biometric categorization systems that use sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, the workplace and educational establishments; and
  • Untargeted scraping of facial images from the Internet or closed-circuit video surveillance footage to create facial recognition databases (a violation of human rights and the right to privacy).

These new rules begin to provide more concrete elements to guide and stabilize the legal framework for the developments that have emerged worldwide in the application of AI in the courts.


For example, in Brazil, the Federal Supreme Court (STF) uses an artificial intelligence tool called VICTOR, which analyses petitions for extraordinary appeals that reach the STF with the aim of identifying whether they deal with topics that have already been decided by the Court, so that the established solution can be applied to the specific case, with the case returned to the court of origin or the extraordinary appeal rejected.

In Estonia, a robot judge has been used to decide certain legal disputes in contractual matters whose value does not exceed €7,000.00. The plaintiff and the defendant electronically submit the documents they consider relevant to the resolution of the case, and the robot judge issues the decision without the parties needing to be present, although there is the option of having the decision reviewed by a human judge.

While these two examples seem compatible with the future European rules, the same cannot be said of the reality coming from the United States of America.

That country has used three main systems: COMPAS, PSA and LSI-R. Although in widespread use across the states, they present variations between different jurisdictions. The objective of these systems is to assess a defendant's risk of recidivism, and the results are used to determine the criminal sentence. COMPAS assesses variables relating to five main areas: criminal involvement, relationships and lifestyle, personality and attitudes, family circumstances and social exclusion. The LSI-R likewise draws on a wide range of factors, from criminal records to personality patterns. Finally, the PSA analyses a more restricted set of parameters, taking into account only the defendant's age and criminal record.

Here a clear conflict emerges between these systems used in the USA, which assess the risk of recidivism for the purpose of setting sentences, and the rule contained in the proposed EU Regulation, which classifies as a prohibited AI practice the use of risk assessment systems based on profiling and past criminal behaviour.

We know that the proposed regulation at issue here aims only to establish extra-contractual civil liability rules applicable to AI and is therefore not directly applicable to the essential task of administering justice, which is the responsibility of the Member States.

But precisely because it is a task of the State that forms part of an essential human right, we dare to say that it will always be very difficult for a Member State to use AI in the administration of justice in ways that are not permitted in other areas of society.

The truth is that there has been significant progress in the negotiations, and all those involved have signalled their desire to reach a swift consensus, reflecting the general feeling that the publication of rules to guide the development of AI is both necessary and urgent.

Justice will certainly not be left untouched by this revolution and will find the best ways to make use of it, within the limits of the law.