Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Ultra-low latency, quicker data processing and the ability to operate effectively even without internet connectivity are the hallmarks that make Edge artificial intelligence a compelling value proposition, one that promises to combine AI-driven decision-making with enhanced data privacy.
By processing data closer to the source and minimizing transmission from end devices to a central cloud location, Edge AI can reduce the data's exposure to cyberattacks and lower the risk of mishandling sensitive information.
However, this remarkable advantage can be offset by challenges intrinsic to a distributed computing approach. For example, resource limitations on devices and a heterogeneous network environment can expand the attack surface. At the same time, countermeasures such as strict security policies and privacy techniques may backfire if they end up reducing model performance, increasing latency and compromising the overall user experience.
These and other issues make edge use cases a challenging arena for the deployment of trustworthy AI. Providers need to pay close attention to them, not only because stakeholders' trust, or lack thereof, is a significant determinant of whether users adopt a technology.
For EU-based companies, moreover, deploying trustworthy systems has become an unavoidable business objective under the AI Act.
The rapid proliferation of Edge AI also presents as many opportunities for innovation as it raises questions about how to design systems that are both privacy-preserving and effective problem-solving tools.
Initiatives like the EU-funded Manolo project and an expanding research literature attest to a growing concern among businesses and regulators about how to deliver trustworthy solutions in cloud-edge distributed environments.
The business case and challenges of trustworthy Edge AI
Back in 2018, IBM released AI Fairness 360, a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models. It followed the publication of the company's principles of trust and transparency, which set the groundwork for a formal commitment to advancing trustworthy AI and AI governance.
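To give a sense of what such a toolkit offers, here is a minimal sketch of a dataset-level bias check using the open-source aif360 package; the DataFrame, the "gender" attribute and the group definitions are hypothetical examples for illustration, not a recommended workflow.

```python
# Minimal sketch of a pre-deployment bias check with the open-source
# AI Fairness 360 (aif360) package. The DataFrame, the "gender" attribute
# and the group definitions below are hypothetical examples.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical training data: label 1 = favorable outcome (e.g., loan approved)
df = pd.DataFrame({
    "gender": [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = privileged group (illustrative)
    "income": [52, 48, 61, 40, 75, 33, 58, 45],
    "label":  [1, 0, 1, 0, 1, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0.0
# suggest the favorable outcome is distributed evenly across groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```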
These initiatives captured a broader, industry-wide recognition of the need for responsible technology development and articulated what should be expected of AI developers.
Along with many similar efforts, they also marked a turning point in how AI is perceived and approached today. Trustworthy AI has become a watchword in business and policymaking, one that crucially stretches beyond ethics and moral arguments.
Organizations have become acutely aware that trust deficits can hinder the return on investment of their AI initiatives and put end users off. This is especially true of Edge AI use cases like autonomous vehicles, remote surgery, industrial quality control or secure banking, where inaccurate outputs or unreliable applications can lead to disastrous consequences.
While it is routine today to refer to data privacy as one fundamental principle of trustworthy AI, the notion encompasses several different aspects and enterprise goals. The principles outlined in the EU's Ethics Guidelines for Trustworthy AI provide a good starting point for analyzing trustworthiness requirements in Edge AI applications.
From a user perspective, along with qualities of privacy and security, Edge AI systems must ensure availability, usability and explainability of AI's decisions. In other words, privacy and security must be designed in a way that doesn't compromise the user experience. To complicate matters further, different users may require different kinds of explanations, all of which must be delivered within the constraints of resource-limited devices.
From a technical perspective, trustworthy AI is defined by properties of accuracy, robustness and model transparency. The robustness of the system is especially relevant in a dynamic context such as that of Edge AI. Users need to feel confident that changes occurring in the environment don't affect the outcomes and performance of the model.
Finally, from a wider societal perspective, trustworthy AI at the edge should be law abiding, ethical, fair, accountable and environmentally friendly.
This complexity suggests the need for a more holistic, end-to-end approach to achieve trustworthiness in Edge AI, with a synergistic integration between data privacy, technical approaches, user experience and human-centric considerations.
Towards a tailored approach for trustworthy Edge AI
Currently, in the absence of standardized regulations and guidelines tailored specifically to edge intelligence, developers and organizations are left to rely on broader trustworthy AI frameworks, such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework, the Organisation for Economic Co-operation and Development's AI Principles, and the already mentioned EU ethics guidelines.
However, these frameworks must be adapted to account for the unique characteristics and limitations of the edge domain. Several strategic choices and trade-offs must be factored in.
One of the first choices is model selection. State-of-the-art lightweight models, built with architectures that require less computational power, are compact enough to be deployed efficiently on a device and are more environmentally friendly, another key requirement of trustworthy AI, while still performing well on specialized business-domain tasks.
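As a rough illustration of what "compact enough for a device" can mean in practice, the following is a minimal sketch of post-training dynamic quantization with PyTorch; the tiny network is only a stand-in for a real business-domain model, and quantization is one option among several (pruning, distillation, purpose-built small architectures).

```python
# Minimal sketch of post-training dynamic quantization with PyTorch,
# one common way to shrink a model for resource-constrained edge devices.
# The tiny network below is a placeholder for a real business-domain model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Convert Linear layers to use 8-bit integer weights at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

example_input = torch.randn(1, 128)
with torch.no_grad():
    output = quantized(example_input)
print(output.shape)  # torch.Size([1, 10])
```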
Developers will also have to evaluate how to incorporate an AI governance layer that ensures bias detection and mitigation before deployment, along with explainability and compliance monitoring. Models already built on a foundation of trust and compliance, and that incorporate safety guardrails, should be prioritized for this purpose.
Choosing a fit-for-purpose model and managing it across its lifecycle is a good start, provided an effective data governance framework is in place, along with unified data access and scalable storage, a crucial component for handling the hefty datasets required to train Edge AI models.
A solid data governance and compliance foundation in turn provides the necessary guidance for privacy protection measures that cover multiple touchpoints, from data input to inference and the output of model predictions. It also enables regular review and updating of compliance practices to meet data privacy standards.
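As one illustration of an input-side measure, the sketch below perturbs a bounded sensor reading with Laplace noise before it leaves the device, in the spirit of local differential privacy; the reading, bounds and epsilon value are hypothetical, and a real deployment would need a carefully budgeted mechanism rather than this toy version.

```python
# Minimal sketch of one input-side privacy measure: clamp a sensor reading
# to a known range, then add Laplace noise calibrated to that range before
# it leaves the device. Epsilon and bounds are illustrative values only.
import numpy as np

def privatize(value: float, lower: float, upper: float, epsilon: float) -> float:
    """Return a noisy version of the reading that reveals less about the true value."""
    clamped = min(max(value, lower), upper)
    sensitivity = upper - lower
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clamped + noise

reading = 36.7  # e.g., a temperature measured on-device (hypothetical)
reported = privatize(reading, lower=30.0, upper=45.0, epsilon=1.0)
print(f"true={reading}, reported={reported:.2f}")
```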
At the same time, to preserve privacy, the communication between edge devices and the cloud needs to be secure. Companies will have to codify new approaches that effectively handle the challenges of real-time, distributed and resource-constrained environments, opting for lightweight security architectures, because traditional strategies may be too computationally demanding for edge devices.
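As an example of one lightweight building block, the following minimal sketch encrypts an edge-to-cloud payload with ChaCha20-Poly1305, an authenticated cipher often favored on devices without hardware AES acceleration; the payload and device identifier are hypothetical, and key provisioning, rotation and the transport protocol are out of scope here.

```python
# Minimal sketch of encrypting an edge-to-cloud payload with
# ChaCha20-Poly1305, an authenticated cipher often chosen for devices
# without hardware AES support. Key distribution, rotation and the
# transport protocol are deliberately out of scope for this sketch.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # in practice: provisioned securely, not generated ad hoc
aead = ChaCha20Poly1305(key)

payload = b'{"device_id": "edge-01", "inference": "anomaly", "score": 0.93}'  # hypothetical
nonce = os.urandom(12)                 # must never repeat for the same key
associated_data = b"edge-01"           # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, payload, associated_data)

# The cloud side decrypts with the same key, nonce and associated data;
# any tampering with the ciphertext raises an exception.
plaintext = aead.decrypt(nonce, ciphertext, associated_data)
assert plaintext == payload
```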
Another problematic, widely debated area concerns how to achieve explainability and interpretability without compromising performance.
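One commonly discussed compromise is to distill a compact, human-readable surrogate from the deployed model's predictions; the minimal sketch below does this with scikit-learn, where the random forest stands in for whatever model actually runs on the device and the synthetic data is purely illustrative.

```python
# Minimal sketch of a global surrogate explanation: fit a small, readable
# decision tree to mimic the deployed model's predictions. The random
# forest is only a stand-in for whatever model runs on the edge device.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

edge_model = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the edge model's outputs, not the true labels,
# so its rules approximate how the deployed model behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, edge_model.predict(X))

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```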
As this nonexhaustive list suggests, the challenges of privacy and trust in Edge AI are far from trivial. They demand a strategic, multidisciplinary approach that keeps multiple, equally important objectives in view. This complexity may call for a tailored reworking of general trustworthy AI principles.
Ethical responsibility, privacy compliance and utility of AI systems are not opposing forces but should be intertwined and co-designed for effective and safe deployment of Edge AI.
Silvia Podestà is an advisory innovation designer and business technical leader at IBM.