
Privacy Perspectives | Argentina issues recommendations for reliable AI

Over the last few months, much has been said about the use of artificial intelligence across industries. In particular, attention has focused on generative AI and, more precisely, ChatGPT in its different versions, as well as on an open letter signed by many technology industry leaders calling for caution in developing and deploying AI tools.

In that regard, Argentina does not have specific legislation regulating AI use, development and/or deployment. Although "artificial intelligence" appears in the recitals of different laws and regulations, there is little to no guidance. For example, Argentine Central Bank Communication A 7724 refers to certain obligations and requirements — including conducting an impact assessment — applicable to financial institutions using AI to provide their services. At the same time, Argentina has adhered to the UNESCO Recommendation on the Ethics of Artificial Intelligence.

In these circumstances, the Information Technology Subsecretariat, part of the Chief of Staff Office, recently issued Resolution 2/2023, approving a set of recommendations for trustworthy AI directed to the public sector.

The recommendations compile and provide tools for those carrying out technology-driven innovation projects, focusing specifically on projects that involve AI. They aim to provide a framework for the adoption of AI centered on individuals and their rights. As noted, the recommendations are directed specifically at the public sector but could also serve as nonmandatory guidelines for the private sector.

The paper establishes a conceptual framework for AI, describing its origins, theoretical and scientific approaches, and background, as well as the different types of AI that theory considers, including both narrow and general AI.

In general, the recommendations focus on establishing ethical principles to guarantee the protection of fundamental rights, respect democratic values, prevent or reduce risks, and foster innovation and people-centered design. The recommendations are structured and developed throughout the life cycle of artificial intelligence. 

They establish a preparatory stage that deals with how AI should be conceived and the recommended measures to take before starting the AI cycle.

On the other hand, within the AI cycle itself, the recommendations are divided into four stages: design and data modeling; verification/validation; implementation; and operation and maintenance.

After these four stages, the recommendations set out the ethical issues that should be considered outside the AI cycle.

How should AI be conceived?

The guidelines address issues that should be considered before using or working with AI. The first step is to distinguish between the concepts of responsibility and execution, making it clear that although the execution of a task or service is delegated to algorithms, the decision and, therefore, the responsibility must necessarily rest with the organization controlling the development and deployment.

The guidelines also highlight the principles of proportionality and harmlessness, safety and security, equity and nondiscrimination, sustainability, right to privacy and data protection, oversight and human decision-making, transparency and explainability, responsibility and accountability.

They also highlight a number of values and principles adopted at meetings of other bodies and institutions. The guidelines mention some of the principles and challenges addressed at the Asilomar Conference, as well as the OECD Principles on Artificial Intelligence.

Measures to take before the AI cycle begins

Build an interdisciplinary team. The recommendations state the need for a team of people from different areas, with diverse perspectives, in order not only to contribute to more complex and creative solutions but also to better address and identify biases inherent in data, algorithms and automated decisions.

Raise awareness. The guidelines recommend implementing communication campaigns, talks and training that cover the principles, the adoption model, the intended use, and control and risk management, among other actions, to foster the adoption of this technology.

Investigate other solutions. A prior exploration of different types of simpler, less risky and equally efficient technologies or solutions is proposed before choosing artificial intelligence (and, I would add, less intrusive technologies, considering the purpose of use).

Define the scope of the model. The guidelines detail that the adoption of AI can be implemented through two models:

  • The automation model consists of replacing human labor with hardware, software and/or algorithms to perform repetitive, sequential tasks, operations or processes of varying complexity but with duly typified problems.
  • The "human in the loop" model, which consists of the performance of tasks with the participation of a person in conjunction with the AI system, with the purpose of complementing the analysis functions of the AI, through the requirements of a person for decision-making.

Intended use of AI. The guidelines highlight the need to correctly identify the intended use of AI, taking into account the diversity of risks, auditability and traceability of systems.

Pre-mortem analysis. Before implementing the project, the guidelines recommend identifying the worst-case failure scenarios of the AI project and involving the interdisciplinary team to obtain a broad view of the causes of failure, the probability and impact of each, and the choice of how to deal with the risks.

Measures to take within the AI cycle

The stages within the AI cycle are divided into:

Stage 1: Design and data modeling. In this stage, the design criteria are established along with the application of ethical considerations to the following areas:

  • Team: The minimum aspects that any member must know are established, referring to principles, risks, transparency and accountability, and roles, among others.
  • Data design: Processing based on good data science practices is encouraged in order to improve the quality of the project data. The recommendations develop a series of classifications for ethical data design.
  • Model design: The elimination of biases in the designed models is encouraged, as are transparency and understanding of the models' processes, to improve decision-making and explain their functioning to third parties.

Stage 2: Verification/validation. The guidelines suggest carrying out verifications and validations of the designs implemented in the first stage, considering the principles defined by UNESCO as well as their impact on the people targeted by the model (carried out as a projection or conceptually). 

The recommendations propose:

  1. Members sign a charter of ethical commitment to the AI project.
  2. Pre-implementation validation of datasets, involvement of data science professionals and measurement of compliance with the principles set out by UNESCO.
  3. The participation of professionals and a multidisciplinary team to validate the models in conditions similar to those of their implementation, with the aim of confirming that the results match design expectations and are free of biases, among other aspects.
  4. Recording all actions and decisions within an AI project, using a formal means that allows traceability and auditing of all verification and validation actions (I would add, similar to a record of processing activities); a minimal sketch of such a record follows this list.
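
As an illustration only (the guidelines do not prescribe a format), the following Python sketch shows one simple way to keep such a formal, traceable record of verification and validation actions, here as an append-only CSV file with hypothetical field names.

```python
import csv
import datetime
from dataclasses import dataclass, asdict, fields


@dataclass
class AuditEntry:
    timestamp: str  # when the action took place (ISO 8601)
    stage: str      # e.g. "verification/validation"
    actor: str      # person or role responsible for the action
    action: str     # what was verified, validated or decided
    outcome: str    # result and rationale


def append_entry(path: str, entry: AuditEntry) -> None:
    """Append one action/decision to the project's audit trail (a CSV file)."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AuditEntry)])
        if f.tell() == 0:  # empty file: write the header row once
            writer.writeheader()
        writer.writerow(asdict(entry))


append_entry("ai_project_audit_log.csv", AuditEntry(
    timestamp=datetime.datetime.now().isoformat(timespec="seconds"),
    stage="verification/validation",
    actor="data science lead",
    action="validated training dataset against the project's ethical checklist",
    outcome="no representation gaps found; dataset approved for implementation",
))
```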

Stage 3: Implementation. The recommendations distinguish between implementation on premises (on one's own infrastructure), via cloud services or a combination of both. In all cases, the implementation must guarantee an adequate degree of information security, traceability of actions and decisions, auditability and user accessibility.

In this regard, it states, inter alia:

  • The importance of implementing best practices related to information security, including knowledge of standards, log and access management, and vulnerability testing.
  • Aspects to ensure traceability and auditability of the model and actions and decisions, depending on whether the model was deployed on its own infrastructure or whether it was deployed through contracted cloud services.
  • The implementation of accessibility best practices either for websites or mobile applications considering regulations and evaluations, among other issues.

Stage 4: Operation and maintenance. In this stage, the guidelines establish recommendations for operation and maintenance to guarantee the availability, continuity and sustainability of the service provided by this technology. The recommendations detail:

  • How to monitor and what tools to use to assess the system's performance in terms of quality of service and possible biases (an illustrative check is sketched after this list).
  • Consideration of the existence of ethical incidents and their treatment and use for system improvement.
  • Internal user control procedures, including aspects such as authentication and authorization management, change control, updates, upgrades, improvements and registration of all modifications made during the project, among other aspects.
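
As one illustrative example of such a monitoring check (the guidelines do not mandate any particular metric), the following Python sketch computes a simple demographic parity gap, the difference in positive-decision rates between groups, over a toy batch of recent decisions.

```python
from collections import defaultdict


def selection_rates(decisions, groups):
    """Share of positive decisions (1) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}


def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups (0 = parity)."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())


# Toy batch of recent decisions (1 = favorable outcome) and the group of each case.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"Selection-rate gap between groups: {gap:.2f}")  # flag for review if above a set threshold
```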

Ethical issues to consider outside the AI cycle

In the last section, the guidelines raise a number of issues related to the post-cycle of AI, with the understanding that each stage requires constant assessment of changes and risks, the designation of individuals responsible for containing and remedying the harms generated by the tech, and the proper recording of accountability and responsibility actions for learning and process improvement.

These guidelines are the first set of recommendations aimed at the public sector, and they follow, in many respects, different sets of international principles, such as those of UNESCO. It remains to be seen how and whether the public sector will follow them, and whether they will carry any weight in the private sector. At the same time, many expect the different public regulators, including, for example, the data protection authority, to work together with the Information Technology Subsecretariat and other bodies to produce a more comprehensive set of recommendations that would tackle AI from many different angles, including privacy and data protection as well as intellectual property.
