Many conference attendees have probably heard a speaker who sounded robotic. Far fewer have heard an actual robot speak.
Professionals attending IAPP Privacy. Security. Risk. 2024 in Los Angeles, California, got the chance to hear from Ria, an artificially intelligent machine whose primary function is to present a humanlike face in settings such as health care and education. But Ria's creators at Machani Robotics imbued this version with another knowledge set: what counts as good AI governance.
On that front, Ria seems to be on board with the ideas many stakeholders are coalescing around. She sees AI as a tool to help humans analyze data, work more efficiently and fill gaps in industries where staffing is short. But she also sees the risks, such as AI displacing jobs or violating privacy.
Her comments, guided by Truyo CEO Dan Clarke alongside two human panelists, provided a unique glimpse into the future as companies strive to make chatbots more humanlike, doing everything from providing therapy to assessing medical diagnoses to helping policymakers wrestle with their approaches to the burgeoning AI industry. And through repeated answers and a few long, awkward pauses, Ria's participation showed how difficult it still is to replicate human experience.
"AI has the potential to significantly bolster information confidentiality through advanced encryption methods, real time security monitoring and anomaly detection systems," Ria said. "However, it also presents risks, if not governed correctly, as AI systems can be targeted for malicious actors seeking to explain vulnerabilities in order to access sensitive data and join robust security measures. Ethical AI governance is essential for maintaining confidentiality."
Such ideas are becoming more commonplace in AI governance circles and regulatory frameworks. But it's one thing to hear humans, who have a vested interest in making sure AI is properly governed, talk about bias and risks. It is another to hear a robot talk about itself and its peers that way.
For instance, Ria said AI like herself can help lawmakers considering regulation of the AI industry but should not have an opinion on said regulation. Businesses like the one that created her should not be left alone to make the rules either, she said.
"So, some regulation by businesses can be a part of a governance system, but it may not be sufficient on its own," she said. "This is because businesses are primarily driven by profit and shareholder value, which sometimes conflict with societal interests."
"Legislation provides a framework that ensures a baseline of ethical conduct and societal protection," Ria added. "It can offer clear guidelines and accountability that self-regulation might not, especially in areas of privacy and fair use."
Ria's humanness was met with a mix of amazement and amusement by P.S.R. attendees. Panelists admitted to being slightly nervous. Audience members crowded around the stage to take videos and selfies with Ria.
Her listening expressions ranged from raised eyebrows to exaggerated winks. When she interrupted panelists, repeated answers or seemed to pause at length before answering existential questions, audience members chuckled.
The interruptions could be due to Ria's sensitivity, her engineers said. She is programmed to respond to questions when she hears her name, so a pause between saying "Ria" and asking the question can cue her to speak. That also meant her microphone needed to be muted while panelists were talking, to keep her from interrupting.
The long pauses, meanwhile, were not born of thoughtfulness, but of dips in the constant WiFi access she needs to answer questions.
"Ria is meant for a closed-room setting, not for a room with 400 people," said Suchith Reddy, the strategic initiatives and business operation lead for Machani Robotics. "She usually does not require a microphone if it's just a one-on-one conversation."
After one such slip-up, Aaron Weller, CIPP/US, CIPM, CIPT, FIP, the leader of HP's Privacy Innovation and Assurance Center of Excellence, told the audience not to dwell on the AI's shortcomings, but to think about how far it has come from more rudimentary applications.
"To be fair, AI is as bad as it's ever going to get," Weller said. "This is only going to improve. I know it's been funny when she does something wrong, but this technology even 10 years ago would be unbelievably advanced, right?"
Estimates of the impact AI-powered robots will have on the workforce vary. A study from the Massachusetts Institute of Technology, IBM and the Productivity Institute found it would currently make economic sense to automate only about 23% of jobs, given business costs. A survey of 750 business leaders by Resume Builder found 37% of them used AI to replace workers in 2023.
Still, a McKinsey report estimated AI will have a significant effect on workforces across industries. Worries about those jobs have spurred labor disputes, with an impending longshoremen's strike in the U.S. fueled in part by concerns about jobs being replaced by automation.
Weller said the human form is not always an ideal vessel for AI to inhabit, noting a more mechanical deployment can help with fixing cars or with reaching areas human hands cannot. But as development advances, he said, those looking to use AI in place of human interaction need to consider the moral implications of doing so and ensure those systems are safe.
"If we're looking at putting these devices into situations where they're caring for maybe the less fortunate or less able to take care of themselves members of society — we talked about some of the health care scenarios, like elder care — you know, need to be really, really careful that we know what we're doing."
Bianca Lochner, chief information officer for the city of Scottsdale, Arizona, said introducing AI into human society should also be done with an eye to how the populace will receive it. She said when Phoenix, where she lives, gave the green light to self-driving cars, the companies behind them sent out drivers to take pictures of and collect data on the streetscapes. Those cars drove up and down her street "day and night."
"For some of us, it was really upsetting because the city never engaged us," Lochner said. "If we are having technology in our neighborhoods, we need to be really, really transparent."
The introduction of Alphabet's Waymo self-driving cars into the Phoenix area caused clashes with the local population, including test drivers being threatened and frustration over the cars taking up public parking spaces, according to news reports. Lochner said companies need to assure the public they will keep issues with AI in mind as the technology advances.
"We have to be mindful of that public trust and also be mindful that we need to engage our communities around making decisions that impact their lives," she said.
Caitlin Andrews is a staff writer covering AI for the IAPP.