
The Privacy Advisor | The future of AI regulations and how companies can prepare




The development of machine learning and artificial intelligence has gone into overdrive in recent years.

Now governments around the world at both federal and local levels are attempting to get out ahead of what may come from the exponential growth of AI’s potential to fundamentally change the digital world.

During the IAPP Privacy. Security. Risk 2022 conference, Uber Privacy and Security Public Policy Lead Shani Rosenstock facilitated a panel discussion about the existing and potential regulations affecting AI around the world and how organizations can prepare for future legislation that will govern their machine learning systems. The other panelists were Future of Privacy Forum Senior Counsel Keir Lamont, CIPP/US, U.S. Senate Artificial Intelligence Caucus Director Sam Mulopulos and Cozen O’Connor Public Strategies Principal Madison Smith.

It was noted during the presentation that Canada, China, the EU, the U.K., and the African countries of Egypt, Mauritius and Rwanda have either adopted or are in various stages of adopting regulations for the use of AI. For instance, China has rules that mandate how algorithms are operated, while Canada and the EU have introduced draft comprehensive regulations for AI with their respective C-27 legislation and the EU Artificial Intelligence Act.

Despite the U.S. lacking comprehensive privacy legislation that would govern AI, the panel discussed two pieces of legislation introduced during this session of Congress.

The Algorithmic Accountability Act would require companies to conduct impact assessments on their automated decision-making systems for bias, and the Advancing American AI Act would set U.S. government standards for using AI.

During the panel, Lamont said even though the American Data Privacy and Protection Act has stalled in Congress, passing the Algorithmic Accountability Act would serve as an immediate step to regulate AI as more enterprises begin employing the technology for various uses. He said ADPPA alone may not be enough to prevent states from crafting their own disparate AI regulations without an underlying federal standard.

“The actual proposal still may require businesses to conduct algorithmic impact assessments, as well as algorithmic design evaluation,” Lamont said. “So those are two very progress-oriented interventions to regulate AI.”

For now, Lamont said U.S. states are using “similar language” in their laws regulating AI, which are grounded in concepts developed from the “privacy governance of data protection data for the European Union.”

“We should keep an eye on whether or not the approach that ADPPA takes to regulating AI, whether that may stop ... trickling down to the state level,” he said. “Some of the advancements and discussion in Congress on ADPPA may actually start animating some state and local lawmakers’ problems (with AI).”

The panel examined the “comprehensive privacy frameworks” of consumer privacy laws in states such as Colorado, Connecticut and Virginia, which give consumers the ability to opt out of user profiling conducted “solely” with automated decision-making. They also explored U.S. municipal efforts to regulate AI, including Washington D.C.’s “Stop Discrimination by Algorithms Act.”

Mulopulos said Congress must act during its next session or risk having to play catch-up with states and municipalities that passed various AI regulations, as well as federal agencies attempting to issue rules governing AI's uses under laws that may not be entirely applicable to the evolving technology. He said if Republicans win control of the U.S. House of Representatives in the midterm elections and Democrats maintain control of the Senate, it could force the parties to compromise on federal privacy legislation that gets the ball rolling.

“We do need some (federal) certainty here, we've got states doing all sorts of stuff, we got cities doing all sorts of stuff, and then we got executive agencies indicating a desire to do things,” Mulopulos said. “So we can begin to move in the direction (of passing legislation) today because these issues affect us all. It doesn’t (matter) who you are or where you’re from, so if we have an appetite to do something about it, next session we’ll hopefully talk about how we pull it off.”

AI principles for compliance

The panel then shifted the conversation toward how businesses employing AI technology can best position themselves to adapt its uses to comply with new laws and regulations as they are enacted. The four main components for preparing an organization's AI system for what may come from the regulatory sphere, panelists said, are transparency in disclosing AI uses to customers, fairness in the AI system, accountability through publishing the system’s internal governance policies, and engagement with the regulatory and legislative processes.

Rosenstock said an example of Uber increasing transparency to its customers was when it launched its “Privacy Center” in January, where the company provided further information on its approach to data collection.

“Transparency is disclosing when AI systems are being used,” Rosenstock said. “Transparency further means providing your users or consumers with greater information on the purpose for which AI is being used, how the systems are running, how they operate; it's giving them information so that they can make informed decisions.”

In terms of accountability, Smith said, regardless of how global regulations evolve for AI, companies can take proactive measures today to avoid scrambling to achieve compliance with future laws.

“Putting aside an unknown of when (new laws) may be coming down the pike, external governance isn't the only part of the equation, there's an internal governance component to this too,” Smith said. “And your ability to be nimble, once that external governance actually goes into effect is going to be in large part based on the processes that you've already implemented.”

Top photo, left to right: Uber Privacy and Security Public Policy Lead Shani Rosenstock, Future of Privacy Forum Senior Counsel Keir Lamont, U.S. Senate Artificial Intelligence Caucus Director Sam Mulopulos, and Cozen O’Connor Public Strategies Principal Madison Smith.
