
IAPP GPS 2024: The building blocks for ethical AI programs


As more institutions look to incorporate artificial intelligence into their work, more attention is being paid to how to use the technology ethically.

Such attention is warranted, as the ethical use of AI is tied directly to the trust the public has in the technology. Research shows some people are mistrustful of AI due to its complexity and the possibility of bias. In some industries, such as news publishing, disclosing the use of AI can sow mistrust even if the material is accurate, a joint study from the University of Minnesota and the University of Oxford found.

These concerns are just some of the factors in the overarching AI ethics debate, which touches on everything from data privacy to copyright. Avoiding pitfalls will be key, according to panelists from an IAPP Global Privacy Summit 2024 breakout session, who advised attendees that well-rounded, ethical AI governance programs require creativity, due diligence and leaving no stone unturned during their establishment.

Together, those elements are key to fostering innovation while keeping human rights at the core of any AI use, and to working to "uplift every individual, regardless of borders," said Cognizant Global Practice Lead for Privacy and Responsible AI Tahir Latif, CIPP/A, CIPP/E, CIPP/US, CIPM, FIP, who moderated the panel.

Understanding parameters

A top priority in AI program building is to get a feel for an entity's scope and what kind of rules might apply to the company or industry. U.S. Central Intelligence Agency Deputy Privacy and Civil Liberties Officer Jacqueline Acker, CIPP/US, CIPM, said professionals might be surprised to see what overlap exists and what values are shared around the world. "It is best to think about what applies everywhere," she said.

Such fact-finding can help create an understanding of what the baseline for a company's AI usage is and whether it needs to account for different regulations, added Heather Domin, global leader for Responsible AI Initiatives at IBM. She pointed to the different requirements the U.S. executive order on AI and the EU AI Act impose on general-purpose systems based on their computing power. Those differences can mean hundreds of millions of dollars in operating costs, she said.

"So sometimes we have to be thinking about specific jurisdictional differences," she said. "But overall, we want to think about the baseline approach."

Group-wide buy-in first

After establishing and inventorying AI use cases, companies can then begin the program-building process. That build starts with alignment and buy-in regarding ethical AI use throughout an organization.
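The inventory step itself can be made tangible. As a loose, hypothetical sketch (the record fields below are illustrative and not drawn from any particular framework), each use case might be captured with the attributes a governance review would later need:

```python
# Hypothetical sketch of an AI use-case inventory entry; field names
# are illustrative, not taken from any standard.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    jurisdictions: list[str]             # where the system will be deployed
    personal_data_categories: list[str]  # informs the privacy review
    procured_or_built: str               # "procured" or "built in-house"
    risk_notes: list[str] = field(default_factory=list)

inventory = [
    AIUseCase(
        name="resume screening assistant",
        business_owner="HR",
        jurisdictions=["EU", "US"],
        personal_data_categories=["employment history", "education"],
        procured_or_built="procured",
        risk_notes=["potential hiring bias"],
    ),
]
```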

Such alignment not only helps make development smoother, but prevents corner-cutting or misunderstandings down the road, DoubleVerify Vice President and Chief Privacy Officer Beatrice Botti, CIPP/E, CIPM, said. It's something she came to understand once she began working with an international team.

"I'm saying one thing and five people are understanding five completely different things. And they're going off to their teams and in their minds, they're carrying out the mandate that we have them, but they're perceiving it so fundamentally differently," she said. 

AI teams also should not be afraid to tap people with different perspectives. Acker noted that team members such as those who handle procurement can bring a fresh perspective to issues like privacy concerns, which can be helpful for colleagues used to viewing them through one lens.

"You may need different kinds of subject matter experts," Acker added.

Knowing the regulatory landscape

The jurisdictions where AI technology will be deployed will dictate how the program is built and the ethics behind it. A multinational deployment comes with a wider range of regulatory considerations to factor into an ethical vision.

IBM's Domin said companies of all sizes and areas of focus can benefit from resources that compare regulations and frameworks to get a sense of what works best for a given business or industry.

"For instance, if you're not going to put a product on the market in Europe, you might not have to think that much," Domin said.

Acker added that AI governance professionals may be able to help determine what resources are needed to follow different standards. But she also indicated it is crucial to be proactive about regulatory risks, planning against potential risk factors immediately rather than waiting until enforcement becomes a possibility.

"You need to be able to say, 'Here's the risk the organization has if we don't have these stopgaps in place," Acker said. "'Let's start building towards that.'"

It is also important to futureproof a program beyond a given regulatory regime. Botti said it is good practice to be familiar with different regulations in case expansion happens in the future.

There is also the possibility that some high-level regulations will become an industry standard, as the EU General Data Protection Regulation has in many parts of the world for data privacy.

"You know that if you get a client in Europe, they're going to want to start working tomorrow, they're not going to wait for a year for you to assess against the EU AI Act and tell them where the gaps are," she said. "If you already know where your gaps are, you can start working on making sure that by the time that that product goes live that you've done the work." 

