Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
In mainstream discourse, AI is widely anticipated to displace certain professions, reducing the role of human expertise across a broad spectrum of industries.
This focus, while understandable, addresses only the most immediate and visible potential impact of automation and frames the issue as a zero-sum game between human labor and technology. Extensive research points to a more nuanced reality: AI is more likely to automate specific tasks than to eliminate entire vocations.
For professionals tasked with corporate governance, risk management and legal compliance, a more pressing reality is emerging. The central challenge lies not in AI's ability to replace humans, but in the operational fragility and systemic vulnerabilities introduced when businesses become overly reliant on these systems.
New dependencies: Third-party risk and regulatory gaps
AI systems are a powerful instrument for enhancing efficiency, but this very efficiency is paradoxical. As organizations embed AI into essential workflows, they create new dependencies and reshape the risk landscape. AI-related EU legislation focuses mainly on ethical considerations, bias mitigation and safety compliance, while cybersecurity-related legislation emphasizes digital resilience; neither framework explicitly addresses the business continuity and disaster recovery issues arising from AI dependency or third-party lock-in.
This gap is often exploited in vendor relationships. Many AI solution providers follow a similar business model: they offer services at a lower cost while seeking to leverage clients' data to enhance their own AI models. These providers typically classify themselves as data processors under applicable EU laws, shifting most responsibilities to their clients and often limiting or even excluding their own liability.
This not only poses confidentiality and legal risks to clients but also creates significant compliance challenges, including:
- Ensuring compatibility with the original processing activity — with reference to purpose limitation and further use.
- Safeguarding data subject rights, which requires a practical process for individuals to opt out. This feature is often not built into vendor solutions, resulting in privacy‑by‑design shortcomings.
- Establishing a valid legal basis for transferring client data and allowing AI vendors to use it for service improvement, which is often challenging, especially when considering ePrivacy Directive requirements where consent may be the only option.
The automation paradox: When efficiency creates fragility
One key aspect often overlooked by internal adopters of AI solutions is business continuity.
As repetitive and analytical tasks are handed over to AI, many organizations may consider reducing their staff, gradually eroding their pool of human expertise. This phenomenon, described in the literature as skill decay, forms part of the so-called automation paradox. The more reliable and pervasive an automated system becomes, the less frequently humans are required to intervene, but when they do, it is usually under rare and critical circumstances for which their skills have atrophied.
In effect, reliance on AI without sufficient human practice and oversight can turn such systems into potential single points of failure. Should an AI application supporting a critical business process crash or behave unpredictably, the result may be an immediate operational shortfall with too few skilled staff to bridge the gap. Heavy reliance on AI without well-planned human or system backups can, therefore, leave organizations vulnerable when disruptions occur.
Building resilience: Adapting business continuity for the AI era
In unregulated industries, companies may not be legally required to identify their most critical business processes. However, it remains essential to determine which internal processes and procedures must continue to be supported by sufficient human resources, even after the implementation of AI solutions.
This can be achieved by identifying the organization's "crown jewels," assessing and evaluating internal processes and procedures, revisiting or developing plans for operational resilience, and evaluating vendors from both a business continuity and lock‑in perspective.
Regulated industries — such as banking, insurance, health care, energy and telecommunications — are mandated to ensure the high availability, reliability and integrity of critical services. While these sectors generally maintain robust business continuity and disaster recovery processes, the integration of AI into core workflows demands a renewed assessment.
For instance, a bank using AI for fraud detection or credit scoring must plan for scenarios in which the system fails or produces erroneous results. If the ranks of human fraud analysts or credit officers have been thinned in reliance on AI, a sudden failure could allow fraud to go undetected or bring loan approvals to a standstill, an unacceptable risk both commercially and from a regulatory standpoint.
Similarly, in health care, while AI‑assisted diagnostic imaging can enhance efficiency, hospitals must retain radiologists or alternative diagnostic systems to ensure continuity when AI is unavailable or unreliable. Regulators increasingly expect such safeguards, including human‑in‑the‑loop processes and clear failover procedures, to prevent customer harm and systemic risk.
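For illustration only, the sketch below shows one way such a human-in-the-loop failover could be expressed in code: a hypothetical wrapper that calls an AI scoring service and, when the call fails or returns a low-confidence result, routes the case to a human review queue. All names and thresholds here are assumptions for the sake of the example, not a reference to any specific product or to the safeguards regulators prescribe.

```python
# Illustrative sketch only: a hypothetical human-in-the-loop failover wrapper.
# None of these names refer to a real product or API; the threshold is a placeholder.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Decision:
    outcome: str        # e.g. "approve", "block", "needs_human_review"
    confidence: float   # model confidence between 0.0 and 1.0
    source: str         # "ai" or "human"


@dataclass
class HumanReviewQueue:
    """Stand-in for a case-management system staffed by trained analysts."""
    pending: list = field(default_factory=list)

    def submit(self, case: dict) -> Decision:
        self.pending.append(case)
        return Decision(outcome="needs_human_review", confidence=1.0, source="human")


def decide_with_failover(
    case: dict,
    ai_scorer: Callable[[dict], Decision],
    review_queue: HumanReviewQueue,
    min_confidence: float = 0.85,
) -> Decision:
    """Use the AI scorer when it is available and confident; otherwise escalate to humans."""
    try:
        decision = ai_scorer(case)
    except Exception:
        # AI service outage or unexpected error: fall back to the human process.
        return review_queue.submit(case)

    if decision.confidence < min_confidence:
        # Low-confidence output is treated as a failure mode rather than silently accepted.
        return review_queue.submit(case)
    return decision
```

The point of the pattern is organizational rather than technical: the fallback path only works if the human review queue is actually staffed and practiced, which is precisely the capability that skill decay erodes.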
A new standard for business resilience
The risk of AI failures disrupting critical operations is no longer theoretical; it is a growing concern for business continuity and AI risk professionals. While reinforcing business continuity and disaster recovery processes for AI is not yet a universal standard, it is gaining momentum as organizations recognize that new technologies require new safety nets.
Companies, particularly in regulated sectors, should adapt continuity planning by pinpointing potential AI failure points, preparing alternative systems or human backups, and rehearsing those responses.
Deploying AI is not merely an IT initiative but a strategic decision with business continuity implications. Best practice involves designing resilient AI architecture, maintaining human oversight and regularly testing fallback procedures.
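Rehearsing those fallback procedures can itself be partly automated. As a minimal sketch, and building on the hypothetical wrapper above, a continuity "drill" can be expressed as an automated test that simulates an outage of the AI dependency and verifies that cases reach the human process. The module name and scenario are assumptions for illustration.

```python
# Illustrative sketch only: a continuity drill expressed as an automated test.
# Assumes the earlier hypothetical sketch is saved as ai_failover.py.
from ai_failover import HumanReviewQueue, decide_with_failover


def failing_ai_scorer(case: dict):
    """Simulates a complete outage of the AI scoring service."""
    raise ConnectionError("AI provider unreachable")


def test_fallback_routes_to_human_review():
    queue = HumanReviewQueue()
    decision = decide_with_failover(
        {"transaction_id": "T-001", "amount": 9_800},
        ai_scorer=failing_ai_scorer,
        review_queue=queue,
    )
    # The drill passes only if the case reaches the human process.
    assert decision.source == "human"
    assert len(queue.pending) == 1
```

Automated drills of this kind complement, but do not replace, periodic live exercises in which staff actually work the fallback process.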
Businesses that balance AI‑driven efficiency gains with robust redundancy and resilience measures will be better equipped to manage setbacks.
Conversely, replacing humans entirely without contingency plans risks severe operational disruption. Integrating AI into critical workflows must go hand‑in‑hand with reinforcing continuity strategies — a dual approach that is emerging as the new standard for business resilience.
Ádám Liber, CIPP/E, CIPM, FIP, and Tamás Bereczki, CIPP/E, are partners at BLB Legal.