
Privacy Perspectives | The case of the EU AI Act: Why we need to return to a risk-based approach


The benefits of using artificial intelligence to address a wide range of societal challenges and improve our way of living are bountiful. AI can empower public- and private-sector organizations to deliver services across a multitude of industries, including health care and medical research, automotive, agriculture, financial services, law enforcement, education and marketing. In science, the open-source AI system AlphaFold has solved the complex problem of protein folding and can now predict the structure of proteins from their amino acid sequences, which in turn will aid medical research and the development of drugs. The New York Times recently reported that radiologists in Hungary are achieving promising results when deploying AI to detect breast cancer in X-ray images, with AI matching and, on occasion, outperforming experienced doctors and flagging cases previously missed.

AI helps us find our way to the nearest coffee shop or guides ambulances to the nearest hospital, and it works to detect diseases and prevent accidents. AI is deployed to fight human trafficking or to optimize production in factories to reduce energy consumption and emissions — the potential use cases are endless. But to quote Spider-Man’s Uncle Ben, “with great power comes great responsibility.” The enormous progress in AI technology has also created concerns regarding a variety of potential legal, ethical, and societal risks and challenges.

The European Commission’s proposal for the Artificial Intelligence Act was the first of its kind globally and, as such, will undoubtedly have a signal effect on other global legislators that will follow. The commission approached AI regulation with a welcome risk-based and relatively light-touch approach, essentially creating a risk pyramid with an outright ban for certain AI applications, stringent requirements for AI systems classified as high risk, and a more limited set of (transparency) requirements for AI applications with a lower risk. The commission intended to present “a balanced and proportionate approach limited to the minimum necessary requirements to address the risks linked to AI without unduly constraining technological development.”

Since then, while moving through the complicated EU legislative process, the proposal has been marred by lengthy negotiations in the Council of the European Union, a record number of amendments and ongoing debates in the European Parliament. These have pushed it away from its original objectives.

The focus must be on the use of AI

AI is, by definition, intended to develop and evolve constantly, with new applications (such as generative AI) created almost in real time. For AI regulation to remain effective in protecting fundamental rights while also laying a foundation for innovation, it must remain flexible enough to adapt to new developments and use cases, a constantly changing risk taxonomy, and a seemingly endless range of applications. Approaching AI regulation through rigid categorization according to perceived levels of risk turns the focus away from AI's actual risks and benefits toward an exercise that may quickly become outdated and risks being so overinclusive as to choke future innovation. Conversely, this approach also risks giving a pass to applications that do not (yet) fit the profile, even though they may pose a significant risk on closer inspection.

Regulating AI cannot be a mere question of classifying large parts of it as high risk out of an abundance of caution, which seems to be the approach with the ever-expanding list of high-risk AI systems in Annex III of the proposed AI Act. We are still developing our understanding of how to define AI itself, as well as how to qualify the actual risks and inherent benefits of AI. A future-proof AI regulation should leave room for such evolution of understanding by focusing on the key AI issues, with a risk-based approach combined with organizational accountability requirements. Without case-by-case risk assessments, we will inevitably end up including applications of AI in the high-risk category that, on closer inspection, may not actually pose any significant risk at all, or failing to regulate those that do.
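To make the contrast concrete, a case-by-case assessment can be sketched as a simple scoring exercise over the context of a deployment rather than over the technology itself. The Python sketch below is purely illustrative: the factors, weights and threshold are our own assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Context of a specific AI deployment (illustrative fields only)."""
    domain: str                 # e.g., "medical triage", "route planning"
    affects_fundamental_rights: bool
    human_oversight: bool
    reversible_outcomes: bool   # can an erroneous decision be corrected?

def assess_risk(use_case: UseCase) -> str:
    """Classify a *use*, not a technology, as high or lower risk.

    The scoring is a hypothetical sketch: a real assessment would also
    weigh benefits, likelihood and severity of harm, and mitigations.
    """
    score = 0
    if use_case.affects_fundamental_rights:
        score += 2
    if not use_case.human_oversight:
        score += 1
    if not use_case.reversible_outcomes:
        score += 1
    return "high-risk" if score >= 3 else "lower-risk"

# The same underlying model can land in different categories
# depending on how and where it is deployed.
triage = UseCase("medical triage", True, False, False)
navigation = UseCase("route planning", False, True, True)
print(assess_risk(triage))      # high-risk
print(assess_risk(navigation))  # lower-risk
```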

A recent survey conducted by appliedAI found many European startups are concerned about the current direction of travel of the AI Act, with 33-50% of respondents seeing their technology as potentially falling into the high-risk classification of the AI Act proposal. This would be a significant jump from the 5-15% envisaged in the AI Act's initial impact assessment. Many startups and small- and medium-sized enterprises may find the extensive conformity assessment required for high-risk AI difficult without the benefit of a frame of reference or extensive resources. Demonstrating technical and organizational compliance at the stage of placing the technology on the market may equally pose a serious challenge for smaller players. However, we are also counting on these same startups and SMEs to be the fount of innovation and bring us the next big idea. Where available and accessible to smaller organizations, regulatory sandboxes can be a way to mitigate this dilemma.

The case of general-purpose AI (GPAI) systems further demonstrates the importance of focusing on the application and its associated risk rather than the technology per se. GPAI systems are trained on broad data sets and can be fine-tuned for a wide range of downstream tasks. A single GPAI large language model can act as the foundation for hundreds of other applied models. For example, the large language model that powers ChatGPT, the online conversational and generative chatbot receiving so much attention, also serves as the foundation for hundreds of other applications used for business, communications, financial services and more.

In the health care space, for instance, GPAI systems might be trained to moderate different types of content. A single GPAI model trained on health and medical data from diverse sources and modalities can be adapted for a number of different downstream tasks, such as matching individuals to a suitable clinical trial, answering patient questions or summarizing patient records for providers. The appliedAI survey found 45% of the surveyed startups considered their AI system to be GPAI. As with most AI applications, the seemingly limitless capacity of GPAI may be daunting, but it does not in itself constitute a high risk without the context of where and how it is applied downstream.
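The one-model, many-uses pattern can be shown in a minimal sketch. The `FoundationModel` class and its `fine_tune` method below are hypothetical shorthand for any pre-trained GPAI model and adaptation step; the point is that each downstream adaptation, not the shared base model, is what carries a concrete risk profile.

```python
class FoundationModel:
    """Hypothetical stand-in for a pre-trained general-purpose model."""

    def __init__(self, name: str):
        self.name = name

    def fine_tune(self, task: str) -> "FoundationModel":
        # In practice: further training, adapters or prompting on task data.
        return FoundationModel(f"{self.name}/{task}")

# One base model, several downstream applications with very different
# contexts and therefore very different risk profiles.
base = FoundationModel("health-gpai")
trial_matcher = base.fine_tune("clinical-trial-matching")
faq_bot = base.fine_tune("patient-questions")
summarizer = base.fine_tune("record-summarization")
print(trial_matcher.name, faq_bot.name, summarizer.name)
```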

This also brings into focus the thorny question of who is ultimately liable for GPAI obligations under the AI Act. Most AI systems are not developed as standalone products or services released into the marketplace by a single entity. They are often the result of actors building upon each other's efforts. An AI application that emerges from the open-source community, for instance, might be the result of the efforts of hundreds or thousands of contributors. The users of AI systems ultimately determine how each system is deployed, and it will be up to the user to undertake context-specific risk assessments and mitigation exercises to minimize AI system and application failures. GPAI is purpose-agnostic, and users may apply it in scenarios beyond those its developers envisioned. Apart from thoroughly testing their models prior to release, developers can support risk mitigation by documenting their models' development, intended uses and limitations for those intending to use them downstream, through approaches such as model cards.
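Model cards are one concrete way to carry that documentation downstream. The sketch below shows the kind of fields such a card might record; the structure and example values are illustrative assumptions loosely modeled on published model-card templates, not a format the AI Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative model card: the information a downstream user
    needs to run their own context-specific risk assessment."""
    model_name: str
    training_data: str            # provenance and known gaps
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    known_limitations: list[str]
    evaluation_summary: str = ""  # benchmarks, test populations, caveats

# Hypothetical card for the health GPAI example above.
card = ModelCard(
    model_name="health-gpai-v1",
    training_data="De-identified clinical notes and public medical texts",
    intended_uses=["Summarizing patient records for providers"],
    out_of_scope_uses=["Autonomous diagnosis without clinician review"],
    known_limitations=["Performance untested on pediatric records"],
)
```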

Alignment with existing (and foreseeable) legal frameworks is vital

The success of an EU AI Act will also turn on its alignment with the existing and developing legislative landscape. Several key EU General Data Protection Regulation requirements and principles (purpose limitation, data minimization, storage limitation and the limited legal grounds for processing special category data to train for bias detection) remain in tension with the needs of AI technologies and must be resolved. In addition to data protection provisions, labor laws and sector-specific legislation at the member state or EU level already provide for obligations that may overlap or even conflict with proposed obligations under the AI Act. Duplicative or conflicting requirements lead to legal uncertainty and inconsistent protections for individuals and their rights, and this interplay will have to be carefully analyzed beyond Annex II of the AI Act proposal.

Conclusion

Regulating AI is a delicate exercise in finding balance. It should provide outcome-based rules for evaluating the risks and benefits of AI systems and for adopting measures that mitigate the identified risks and, above all, it should retain enough flexibility to adapt to new technologies, avoiding restrictions that suppress valuable and beneficial innovations and uses of AI.

A risk-based approach must assess the risk of the impact of AI technology in the context of specific uses and applications, rather than the risk of the technology in the abstract. Any risk assessment must account for the benefits of a proposed AI application or the risks of not proceeding with its development or deployment.

Ultimately, trust in AI and confidence in the digital economy have to be supported through robust organizational accountability. Organizations should establish accountability frameworks that operationalize legal requirements through risk-based, verifiable and enforceable controls and practices, continuously monitored and verified, with the necessary degree of transparency to stakeholders.

Most of us do want technological advances. But they must be governed by principles aimed at ensuring the protection of our rights, and accompanied by demonstrable, careful and constant assessment of their risks by experts diverse enough to understand them.

