As companies look to integrate artificial intelligence into their products and workflows, tensions are rising over how those integrations should be governed.
One notable issue focuses on the tweaks companies are making to terms-of-use policies to include clauses allowing for the collection of user data to train AI systems. But these oft-ignored policy documents can be a keystone of an institution's AI governance approach.
By taking the time to build constraints and guardrails into an AI user interface and being more transparent about when the terms of service are updated, companies can avoid regulatory and user backlash.
"When you used to buy software, you would have a nice little box with the shrink wrap, and the idea originally was that by tearing open the shrink wrap, you've agreed to how the software should be used," Stanford Center for Legal Informatics Associate Director Megan Ma said during a panel discussion at IAPP Privacy. Security. Risk. 2024 in Los Angeles, California. "And over time, it's kind of evolved into this mechanism where we just don't even think about it."
How changes to TOS can bring problems
Consumer privacy is a major area of concern when terms of service change to accommodate AI.
Axios reported Zoom users took issue when the company altered its terms of service in a way that seemingly gave it the ability to leverage user content to train its AI systems. The company then clarified it would not do so without user consent.
Adobe also backtracked, clarifying it would not train its AI on user content stored locally or in the cloud, after a terms update stating the company could access content through automated methods caused backlash, The Verge reported.
In some cases, changes to the terms of use have caught the attention of privacy regulators.
Meta said it would halt its plans to train AI on user data after a notification to users of an upcoming change to its privacy policy raised concerns with the U.K. Information Commissioner's Office and Ireland's Data Protection Commission. The social platform X was banned in Brazil after it made similar changes to its privacy policy. The company later complied with the Supreme Federal Court of Brazil's orders and requested to be reinstated.
The U.S. Federal Trade Commission put companies on notice about this practice in February 2024, warning AI companies and others that it could be unlawful to change their terms of service without proper notice.
"These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users' privacy," the agency said. "Companies might be tempted to resolve this conflict by simply changing the terms of their privacy policy so that they are no longer restricted in the ways they can use their customers' data."
It added that companies "should be on notice" that amending privacy commitments "risks running afoul of the law."
Use cases create wider understanding
The integration of AI into products can create a web of terms-of-service complications for deployers to navigate.
Differing product standards and terms between developers and deployers can cause conflicts. Those differences give rise to issues around liability if a product is misused or causes harm.
Ma said entities should have discussions around when AI is added to a service and how best to inform users of those changes so they can decide whether to continue using it. Alerts to changes should also be deployed in such a way that users cannot simply dismiss them.
"I think all these things are kind of open issues, especially in light of the fact that these products and the way in which this technology is being integrated and used down the line is not clear or more clarified," Ma said.
She added, "these updates shouldn't be seen as like 'Oh it's just an alert.' I think what it is, is a continuous defining of your relationship with these tools. And I think part of the user interface should be reflecting that."
Part of the challenge is how quickly people are taking up AI after the generative AI boom, said Renee Shelby, a staff research scientist at Google. She said companies can build in guardrails to prevent certain uses and take precautions like building privacy-protecting measures into the AI product itself.
But planning for all possible misuses is difficult with the technology changing so rapidly. According to Shelby, even developers do not always know exactly how their AI makes decisions, what it is capable of or how people might use it.
"So, I think it's very hard to anticipate, if you're taking a developer's perspective," she said. "People will use things in ways that the developers do not intend, the way that people writing terms of service may not anticipate."
Guardrails.ai co-founder and CEO Shreya Rajpal indicated those unanticipated uses can create hallucinations where they might not have existed otherwise.
She pointed to research from Stanford University, which found some legal AI tools made more errors than the developers reported, though some of those issues arose from the tools being used differently than intended. One tool was meant to answer questions about specific legal topics, but testers could ask it any legal question they wanted, which sometimes led to errors.
Rajpal said she often sees companies that lack a vision for the ideal use of their AI product. But by thinking through those scenarios and the kinds of risks AI can engender, developers can build interventions into the system ahead of time to make sure it does what it purports to do.
"This mismatch in intended use of an application versus enforced use of an application means that your eventual customers are getting less value out of your product, or misunderstanding the capabilities of your product," Rejpal said. "With generative AI, constraints can be simply baked in via design choices, but have to be actively enforced with machine-learning technologies and baked into the models in the system as well."
Caitlin Andrews is a staff writer covering AI for the IAPP.