
What's next for potential global AI regulation, best practices


The rise of artificial intelligence is poised to be the great technological revolution of the early 21st century.

At the IAPP Global Privacy Summit 2023, stakeholders, including practitioners, privacy advocates and government regulators, exchanged ideas to better understand AI's technological potential, best practices for governance, privacy risks and possible iterations of future government regulation.

AI technologies are developing rapidly in a space where existing laws around the world have not always been modernized enough to mitigate the unintended consequences of certain deployments of automated systems. Looming questions remain as to when stricter regulations may come online and how disruptive they will be from a compliance perspective.

Latest on EU AI Act 

The EU may be furthest along of any jurisdiction in the world in developing a comprehensive law to regulate AI development and its commercial applications with its proposed AI Act.

IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP/US, hosted a 4 April breakout session called "The Challenges of Governing Artificial Intelligence: U.S. and EU perspectives," which focused on the current status of both jurisdictions' efforts to respond to the development of AI.

AI Act co-rapporteur and Member of European Parliament Brando Benifei said a complicating factor in the EU's pursuit of comprehensive AI regulation is that machine learning systems have already been deployed for a variety of commercial and governmental applications across the continent.

Any AI already in use "needs to comply with the existing norms" before the AI Act is in force, Benifei said, adding that he hopes the EU does not place "further regulatory burden" on systems that already "respect the existing norms."

Acknowledging the wide spectrum of current uses for AI systems, Benifei said he hopes the AI Act does not stifle the development of machine learning systems, but rather ensures high-risk systems are held in check to prevent discriminatory, undemocratic and dystopian outcomes.

"Based on our legal model, we want to identify (AI systems) that might pose further risks to safety and fundamental rights in our union," Benifei said. "That's also why we are now trying to further regulate those high-risk AI uses that we are identifying. ... For high-risk systems, we will ask for further verifications on data governance, and how they can manage them, in order to enter the EU market."

Within EU member states, there has already been a push to ensure companies releasing AI technology and companies purchasing various automated systems do so under the protections of the EU General Data Protection Regulation.

On 31 March, Italy's data protection authority, the Garante, issued a temporary ban on the popular generative AI application ChatGPT out of concerns it allegedly did not verify users' ages and lacked "any legal basis that justifies the massive collection and storage of personal data" to train its algorithm.

Since the Garante instituted the temporary ban, ChatGPT developer OpenAI has pledged to cooperate and provide clarity with respect to its legal basis for processing, as well as an updated plan to improve its algorithmic transparency. Additionally, the European Data Protection Board created a ChatGPT task force.

Elsewhere in the EU, a potential discrepancy is emerging among data protection authorities. As of 21 April, a German authority is mulling a ban on ChatGPT, while Irish Data Protection Commissioner Helen Dixon is seeking "thoughtful analysis" before considering a ban on generative AI.

Evaluating the U.S. regulatory landscape

In the U.S., despite several states and municipalities pursuing various forms of AI regulation, there has not been as dramatic a push for comprehensive national AI legislation as last year's efforts to pass the American Data Privacy and Protection Act.

Principal Deputy U.S. Chief Technology Officer Alexander Macgillivray said the U.S. and EU are, in general, pursuing similar policies around AI. However, he said the U.S. approach to future AI regulation will consider that a significant amount of innovation in the field is taking place in the U.S. So long as continued development operates within the boundaries of frameworks such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework, he said, future regulations should not attempt to stifle that innovation.

Macgillivray also addressed reactions to the open letter urging a pause on AI research, signed by more than 1,000 technology entrepreneurs in late March. He said the letter hit on several issues outlined in the Biden administration's Blueprint for an AI Bill of Rights, issued in October 2022, and is a good basis for future AI policies that uphold civil rights.

"One of the most gratifying things for me about that letter was that it mirrored a lot of the concerns that we raised in the AI Bill of Rights," Macgillivray said. "When we came out with the blueprint … we really focused on a bunch of principles that were important things for people to be able to expect from these companies as they launched their AI rules and reference materials for people, for companies, for technologists and for governments, and how we think about AI and how we think about regulation."

Other U.S. regulators speaking at GPS 2023 were more reserved in calls for additional AI regulation, believing existing consumer protection laws could stave off some of the dystopian, yet potentially hyperbolic, claims about the rise of AI.

During his keynote address, U.S. Federal Trade Commissioner Alvaro Bedoya said he believes AI is already regulated in many ways, and suggestions to the contrary only help potential bad actors in the industry continue developing harmful products.

"The idea that AI is unregulated helps that small subset of companies who are uninterested in compliance," Bedoya said. "We've heard these lines before: 'We're not a taxi company, we're a tech company. We're not a hotel company, we're a tech company.'"

However, he said technology companies' insistence that their products exist beyond the boundaries of federal and state laws does not hold water in areas such as employment discrimination under Title VII of the U.S. Civil Rights Act of 1964, lending discrimination under the Equal Credit Opportunity Act and housing discrimination under the Fair Housing Act.

"These statements (by technology companies) are usually followed by claims that state or local regulations could not possibly apply to those companies ... there is no AI carveout," Bedoya said. "If a company makes a deceptive claim using or about AI, that company can be held accountable."

Focus on AI governance and ensuring privacy

Whatever may come in terms of future regulation, organizational governance of machine learning systems and the strategies businesses can employ to prepare for forthcoming AI regulations around the world were top of mind at GPS 2023.

During an all-day AI governance workshop facilitated by IAPP Principal Researcher, Technology, Katharina Koerner, CIPP/US, White House Office of Science and Technology Policy Deputy Chief Technology Officer for Policy Deidre Mulligan said the paradigm of requiring user consent in exchange for use of a wide array of digital services is changing, as policymakers push for technology companies to implement design standards that mitigate bias, increase user privacy and improve equity. As such, she said, companies' product development processes should reflect the changing paradigm.

"The evidence of that shifting paradigm in the U.S. is abounding; we've seen design focus popping up in state laws like California," Mulligan said. "Of course, the emphasis isn't just on addressing systemic risk, right? I think all of us are concerned more about design-oriented framework."

IAPP Country Leader, Italy, and Founder and Managing Partner of Panetta and Associates Rocco Panetta, CIPP/E, said as the EU AI Act moves toward the finish line over the next year, companies can take proactive steps to comply with existing drafts of the law, because he does not anticipate its provisions will change dramatically in the final text. Panetta said companies using AI should return to the "tradition" of combining the role of data protection officer with that of a chief ethics officer to ensure both the privacy of personal data processed by the company and the ethical operation of machine learning models in use.

"It means that, by the text compared to the current draft, companies could take advantage of the situation of clarity, because (the drafting of the AI Act has been) transparent so companies can understand where we execute the policy," Panetta said. "If we go back to this tradition, and using the current tools that the legislation is offering to us, I see a big opportunity for each of us to help our scientists, our technologists, because they need our help."

IBM Vice President and Chief Privacy Officer Christina Montgomery said responsible AI governance that ensures customers' privacy involves getting all the stakeholders at a company on the same page and defining what core values will be incorporated into an organization's machine learning product.

"It's important to think about the 'why' behind AI governance. It's really easy to get stuck in the definitions," Montgomery said. "When we think about ethical AI or responsible AI, there can be different connotations of what that means, depending on what part of the world you're operating, or the values of your company that you articulate." At IBM, she said, "we set our program based on a foundation of principles and values … reducing risks and adverse outcomes in a way a that is prioritizing human agency."

'Human alternative' still necessary

Throughout GPS 2023, the underlying consensus was that privacy professionals and technology experts are still wrapping their heads around various aspects of AI's trajectory and potential. As organizations develop compliant governance rules flexible enough to adapt to whatever regulations may come, responsibility for the ethical deployment of machine learning systems will still come down to the human operators overseeing their use.

Macgillivray said rapid advancements of technology have created a situation where privacy pros are forced to take on more responsibility to ensure individuals' civil rights are upheld when customers use their companies' products and services.

Therefore, Macgillivray said, being transparent about when automated systems are in use and the purpose they serve will be pivotal going forward. With so much uncertainty surrounding the future deployment of automated machine learning systems, making sure a human element is monitoring potential issues of algorithmic discrimination and ensuring privacy will be paramount.

"It is amazing to me how many of the systems come out that the public sees as being extremely effective … but they dig into it and they're not so effective," Macgillivray said. "Making sure there is a human alternative when new systems go wrong is critical."
