Corporations around the world collaborated in the ambitious pursuit of creating the most advanced artificial intelligence system ever conceived, entrusted with overseeing a pivotal human space mission. Rather than a celebration of a new form of intelligent existence brought about by the evolution of technology, the outcome was tragic. Four lives were lost, destroyed by the AI, leaving a solitary human survivor who, in a desperate act of self-preservation, had to disconnect the machine that had gone mad. The AI in question had purportedly undergone rigorous testing. Or had it?

As AI advancement continues to accelerate, such events, vividly recounted in Arthur C. Clarke's visionary novel "2001: A Space Odyssey," may not remain confined to the realm of fiction. The novel describes a mission to Jupiter on which all of the ship's systems are controlled by an AI, HAL 9000, and on which these fictional tragedies unfold.

To address the potential hazards posed by emerging AI, the European Union took a pioneering step by introducing AI regulation through the draft AI Act in 2021.

While it is somewhat of a stretch to imagine that a manned space mission could be within the jurisdiction of the EU AI Act, it nevertheless provides a unique opportunity to examine this fictitious incident in the context of the proposed regulation.

The AI Act concerns itself primarily with so-called "high-risk AI systems." HAL undeniably qualifies as AI and, more precisely, as generative AI, a subset of deep learning and a category explicitly mentioned in the act's Annex I. This is evident from the technology's ability to generate speech whose intelligence is indistinguishable from a human's. HAL can write code, issue commands to industrial machinery and supervise medical equipment; onboard Discovery One, it controls all of the equipment, including the airlock door actuators and the astronauts' hibernation chambers.

Every AI system has users, and according to Article 3 of the proposed AI Act, astronauts fall under the definition of users, with their corresponding responsibilities outlined in Article 29. The act notably underscores that a user is one "using an AI system under its authority," which contrasts sharply with what occurred in the novel, given HAL's omnipotence and omniscience. If thorough astronaut training could not prevent the incident, what can companies with far fewer training resources do? Nevertheless, user training should be an area of focus, empowering users to harness the potential of AI in a way that generates value while minimizing adverse consequences.

The act focuses on high-risk AI systems "that pose significant risks to the health and safety or fundamental rights of persons," while excluding "minimal risk" systems, as outlined in Articles 5-6 and Annex III. In the novel, HAL interfaces with sensors and controls life-sustaining equipment; it undeniably poses significant risks to astronaut well-being and fits multiple definitions of high-risk.

It's easy to mistakenly assume that HAL 9000 alone constitutes the AI system, yet as per the act, "the system" includes far more than just the AI component. A comprehensive definition of what constitutes an AI system can be found in the Joint Research Centre's report Cybersecurity of Artificial Intelligence in the AI Act, which states "An AI system … includes … interfaces, sensors, databases, network communication components, computing units, pre-processing software, or monitoring systems."

With this clarification in mind, Discovery One qualifies as a highly intricate high-risk AI system. For example, HAL killed the hibernating crew members by deactivating their life-support systems, and another crew member was killed when HAL cut off his oxygen supply during a spacewalk. Even the climate control system must be considered part of the AI system, given that it is not isolated behind fail-safe mechanisms and manual overrides. Can anything be excluded from the AI system's scope? At most, perhaps the spaceship's sensors and the log storage.

Under the AI Act, providers are the ones who design an AI system for its intended purpose. The manufacturers of the spaceship in "2001: A Space Odyssey" therefore bear the specific responsibilities outlined in Article 16. Several of these responsibilities are particularly relevant: establishing a quality management system (Article 17), preparing technical documentation (Article 18), verifying conformity with requirements (Article 19), incorporating logging capabilities (Article 20), formulating a corrective action plan (Article 21), implementing incident reporting procedures (Article 62), and conducting post-market monitoring (Article 61).

AI practitioners should pay special attention to AI risk management systems, outlined in the proposed act's Article 9. I recommend reading the U.S. National Institute of Standards and Technology's AI 100-1 Artificial Intelligence Risk Management Framework and the NASA Risk Management Handbook. The latter's continuous risk management approach allows a risk management system to evolve and adapt over time, which is useful even if you are not sending a mission to Jupiter.
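As a minimal sketch of what continuous risk management can look like in practice, the hypothetical risk register below re-scores every risk on each review cycle instead of treating assessment as a one-time exercise. The risk names, scoring scale and threshold are illustrative assumptions, not taken from either framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (catastrophic)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Illustrative register; a real one would be reviewed and re-scored periodically.
register = [
    Risk("AI issues unsafe actuator command", 2, 5, "Independent hardware interlock"),
    Risk("Loss of communication link", 3, 4, "Manual antenna control procedure"),
]

def review_cycle(risks, threshold=12):
    """Re-evaluate every risk and flag those above the acceptance threshold."""
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        status = "ESCALATE" if risk.score >= threshold else "accept/monitor"
        print(f"{risk.name}: score={risk.score} -> {status} ({risk.mitigation})")

review_cycle(register)
```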

Programmers often tend to believe blindly in the correctness of their creations, as illustrated by the Therac-25 incident, which prompted the U.S. Food and Drug Administration to mandate independent fail-safe mechanisms for high-risk medical equipment. Similarly, Discovery One's engineers did not design a fail-safe mechanism for the airlock doors, one that would have prevented HAL from opening both doors and decompressing the spaceship. An airlock sub-system with sufficient autonomy to ensure astronaut safety and keep air from escaping, while still being capable of receiving commands, could even have been excluded from the act's purview.
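A minimal sketch of such a fail-safe, assuming a hypothetical airlock controller that sits between HAL and the door actuators: it accepts commands from any source, human or AI, but refuses any state that would leave both doors open at once.

```python
class AirlockController:
    """Independent sub-system: accepts commands but enforces its own safety invariant."""

    def __init__(self):
        self.inner_open = False
        self.outer_open = False

    def request_open(self, door: str) -> bool:
        # Safety invariant: never allow both doors open simultaneously,
        # regardless of who issued the command.
        if door == "inner" and not self.outer_open:
            self.inner_open = True
            return True
        if door == "outer" and not self.inner_open:
            self.outer_open = True
            return True
        return False  # command refused; cabin air stays in

    def request_close(self, door: str) -> None:
        setattr(self, f"{door}_open", False)

airlock = AirlockController()
assert airlock.request_open("inner")
assert not airlock.request_open("outer")  # refused while the inner door is open
```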

The cornerstone of any risk management effort is the accurate identification of risks. Once identified, risks can be addressed and mitigated. Certain critically important risks, such as those related to the design of the airlock doors, were not identified during the construction of Discovery One. Moreover, the risk of HAL malfunctioning was ignored entirely, and mission-critical components lacked manual overrides.

AI's potential for hallucination has become a concern with modern large language models, requiring additional controls to minimize its impact. Creative prompt engineering alone does not ensure the accuracy of AI-generated responses, so the risk must be addressed in the development and deployment of generative AI-based systems.
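One common control, sketched below, is to validate model output against an explicit schema and allow-list before acting on it. The command names and the hypothetical model call are illustrative assumptions, not features of any particular product.

```python
import json

ALLOWED_COMMANDS = {"report_status", "adjust_temperature", "schedule_maintenance"}

def validate_ai_output(raw_output: str) -> dict:
    """Reject hallucinated or malformed commands instead of trusting the model."""
    try:
        command = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("Model output is not valid JSON; refusing to act on it")
    if command.get("action") not in ALLOWED_COMMANDS:
        raise ValueError(f"Unknown action {command.get('action')!r}; possible hallucination")
    return command

# raw = llm_generate(prompt)  # hypothetical model call
raw = '{"action": "adjust_temperature", "value": 21}'
print(validate_ai_output(raw))
```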

Ironically, an identified risk of leaking classified mission details to the public, and its botched mitigation of hiding those details from the crew while sharing them only with HAL, is what ultimately triggered the incident in "2001: A Space Odyssey"; it would also be a clear violation of the AI Act's Article 13, which calls for transparency and the provision of information to users. The astronauts should have been briefed, and with their consent, all communication to Earth should have undergone data sanitization to prevent inadvertent leaks. This approach would have equalized the information available to HAL and to the astronauts, averting the internal conflict within HAL's generative AI. The design decision not to add manual overrides, the excessively centralized architecture of the spacecraft, and the extreme secrecy surrounding the mission all converged to produce the AI malfunction. The difficulty of manually deactivating the AI is what almost doomed the entire mission.
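A minimal sketch of the kind of sanitization layer this implies: outbound messages are scrubbed of pre-agreed classified terms before transmission, so the crew can be briefed without risking a leak. The term list and message are purely illustrative.

```python
import re

# Illustrative list of terms the crew agreed to keep out of transmissions.
CLASSIFIED_TERMS = ["monolith", "TMA-1", "true mission objective"]

def sanitize_outbound(message: str) -> str:
    """Redact classified terms from messages before they leave the ship."""
    for term in CLASSIFIED_TERMS:
        message = re.sub(re.escape(term), "[REDACTED]", message, flags=re.IGNORECASE)
    return message

print(sanitize_outbound("Crew morale is fine; no questions about the true mission objective."))
```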

The AI Act's Article 10 emphasizes the importance of data governance. In the novel, the spaceship's design follows other mandated best practices: user manuals (Article 11) are accessible in digital format for self-study, thanks to HAL's tutoring capabilities, and telemetry is meticulously gathered and recorded, as stipulated in Article 12, to a degree that allows Mission Control to conduct remote analysis and offer recommendations. Additionally, HAL can assess human mental health, as evidenced by its analysis of voice patterns in a crew member's request for manual control of the hibernation pods, a capability that would require HAL to comply with the EU General Data Protection Regulation. It is evident that HAL underwent rigorous testing and received extensive training, meeting another requirement of the AI Act.

While endpoint logging is crucial for data governance and information security, HAL's control of the communication link to Earth prevented the astronauts from operating the satellite link manually. When Mission Control suggested deactivating HAL and shifting to remote control, HAL sabotaged further links with Earth. Generative AI poses additional logging challenges, since large language models do not generate detailed telemetry about their decision-making and data processing, and HAL is no better than modern LLMs in this respect.
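A minimal sketch of immutable, append-only logging around model calls, assuming a hypothetical model interface: each record is chained to the previous one by hash, so tampering after the fact is detectable even if the AI itself controls the infrastructure.

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Hash-chained log: each entry commits to the previous one, so edits are detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, event: dict) -> None:
        payload = json.dumps({"ts": time.time(), "prev": self._last_hash, **event}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"payload": payload, "hash": digest})
        self._last_hash = digest

audit_log = AppendOnlyLog()

def logged_generate(model, prompt: str) -> str:
    """Wrap every model call so the prompt and response land in the audit trail."""
    response = model.generate(prompt)  # hypothetical model interface
    audit_log.record({"prompt": prompt, "response": response})
    return response

class DummyModel:
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

print(logged_generate(DummyModel(), "Report hibernation pod status"))
```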

Discovery One follows the AI Act's recommendations for resiliency. For instance, HAL boasts extensive redundancy in its neural network, ensuring that a single module failure does not impact its performance. Unfortunately for the deceased crew members, the Discovery/HAL system employed a centralized architecture, making HAL a single point of failure with no other controls capable of compensating. In contrast, decentralized designs such as microservices give individual services their own fail-safe mechanisms and manual overrides. Implementing risk mitigation becomes more straightforward in such designs, as each service can be controlled independently.
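The sketch below illustrates the design principle with a hypothetical life-support service: it accepts commands from a central AI but rejects unsafe values, reverts to a safe local setpoint when the central controller goes silent, and can be switched to manual control at any time. The setpoints and timeout are illustrative.

```python
import time

class LifeSupportService:
    """Independent service with its own fail-safe: the central AI is advisory, not absolute."""

    SAFE_O2_SETPOINT = 21.0    # percent; local default if the central controller is unavailable
    HEARTBEAT_TIMEOUT = 30.0   # seconds without contact before falling back

    def __init__(self):
        self.setpoint = self.SAFE_O2_SETPOINT
        self.manual_mode = False
        self._last_heartbeat = time.monotonic()

    def command_from_ai(self, setpoint: float) -> bool:
        self._last_heartbeat = time.monotonic()
        if self.manual_mode or not (19.5 <= setpoint <= 23.5):  # reject unsafe values
            return False
        self.setpoint = setpoint
        return True

    def tick(self) -> None:
        # Fail safe: if the central AI stops responding, revert to the local default.
        if time.monotonic() - self._last_heartbeat > self.HEARTBEAT_TIMEOUT:
            self.setpoint = self.SAFE_O2_SETPOINT

    def engage_manual_override(self, setpoint: float) -> None:
        self.manual_mode = True
        self.setpoint = setpoint

svc = LifeSupportService()
print(svc.command_from_ai(35.0))  # False: outside the safe envelope, command refused
```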

In moments of crisis, it is often the human heroes who save the day. Unfortunately, the HAL-based system lacked human oversight and manual overrides for critical functions, such as switching to remote ship control or taking manual control of the hibernation capsules. The act's Article 14 offers valuable guidance for implementing human oversight, guidance the spaceship's designers entirely disregarded.
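A minimal sketch of what Article 14-style oversight could look like in code: any command the AI issues for a critical function is queued for explicit human confirmation instead of executing automatically. The action names are hypothetical.

```python
CRITICAL_ACTIONS = {"open_airlock", "disable_hibernation_pod", "cut_communications"}

pending_approvals = []

def submit_ai_action(action: str, execute) -> str:
    """Critical actions wait for a human decision; routine ones run immediately."""
    if action in CRITICAL_ACTIONS:
        pending_approvals.append((action, execute))
        return "queued for human approval"
    execute()
    return "executed"

def human_review(approve: bool) -> str:
    action, execute = pending_approvals.pop(0)
    if approve:
        execute()
        return f"{action}: approved and executed"
    return f"{action}: rejected by operator"

print(submit_ai_action("open_airlock", lambda: print("airlock opening")))
print(human_review(approve=False))
```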

Cybersecurity should be a top priority for AI developers, even though it is not discussed in the 1968 novel. AI-specific requirements should be seen as complementary to, rather than a replacement for, cybersecurity best practices. The previously mentioned Cybersecurity of Artificial Intelligence in the AI Act report addresses these concerns. Modern cybersecurity has evolved beyond the traditional castle-and-moat approach toward identity-based design. Had the principle of zero trust been applied, it could have mitigated the risk of either the AI or the humans turning adversarial or becoming incapacitated. To ensure the resilience of complex multiagent systems, a range of solutions can be employed, from microservices to the consensus mechanisms used in cryptographic systems without a central authority.
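As a sketch of the zero-trust principle: every request, whether it comes from a human operator or from the AI, must present a verifiable identity and is authorized per action; nothing is trusted simply for being inside the ship's network. The identities, keys and permissions below are illustrative assumptions.

```python
import hashlib
import hmac

SHARED_KEYS = {"astronaut_bowman": b"key-1", "hal9000": b"key-2"}   # per-identity credentials
PERMISSIONS = {
    "astronaut_bowman": {"open_airlock", "read_logs"},
    "hal9000": {"read_logs", "adjust_climate"},
}

def authorize(identity: str, action: str, payload: bytes, signature: str) -> bool:
    """Verify who is asking and whether that identity may perform the action."""
    key = SHARED_KEYS.get(identity)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # request is not authentically from this identity
    return action in PERMISSIONS.get(identity, set())

payload = b"open_airlock:outer"
sig = hmac.new(SHARED_KEYS["hal9000"], payload, hashlib.sha256).hexdigest()
print(authorize("hal9000", "open_airlock", payload, sig))  # False: HAL lacks that permission
```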

The design of an AI system should also incorporate governance over how data is used for training and over the AI's output, enforcing data access policies for AI users. In cases where the AI controls industrial or medical equipment, as HAL's capabilities demonstrate, a comprehensive risk analysis must be conducted and controls for nonhuman-readable AI output must be built. Essentially, this approach applies the principles of zero trust to the AI itself, ensuring that even if the AI is compromised, the potential damage is limited.
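A sketch of that idea: before any machine-readable AI output reaches industrial or medical equipment, it passes through a policy check that keeps actuation within a pre-approved safe envelope, independent of the AI. The devices and limits are illustrative.

```python
# Illustrative per-device limits enforced outside the AI, so a compromised model
# cannot push equipment beyond its safe envelope.
DEVICE_LIMITS = {
    "infusion_pump": {"rate_ml_per_h": (0.0, 200.0)},
    "centrifuge":    {"rpm": (0, 3000)},
}

def enforce_actuation_policy(device: str, parameter: str, value: float) -> float:
    """Accept AI-issued actuation only inside the device's safe envelope."""
    low, high = DEVICE_LIMITS[device][parameter]
    if not (low <= value <= high):
        raise PermissionError(
            f"AI requested {parameter}={value} on {device}; outside safe envelope [{low}, {high}]"
        )
    return value

enforce_actuation_policy("infusion_pump", "rate_ml_per_h", 50.0)   # accepted
# enforce_actuation_policy("centrifuge", "rpm", 10_000)            # would raise PermissionError
```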

Though exploring responsible AI is beyond the scope of this article, it is important to emphasize the significance of paying attention to bias in AI systems. Article 10 of the proposed AI Act mandates "examination in view of possible biases," which should happen at the design phase and continue in operation, with bias continuously measured and monitored.
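A minimal sketch of one such continuous measurement, assuming the system's decisions and a protected attribute are both logged: the demographic parity gap is recomputed over a rolling window so drift is caught in operation, not only at design time. The group names, window size and threshold are illustrative.

```python
from collections import deque

WINDOW = 1000
decisions = deque(maxlen=WINDOW)   # (group, positive_outcome) pairs from the production log

def record_decision(group: str, positive: bool) -> None:
    decisions.append((group, positive))

def demographic_parity_gap(group_a: str, group_b: str) -> float:
    """Difference in positive-outcome rates between two groups over the rolling window."""
    def rate(group: str) -> float:
        outcomes = [pos for g, pos in decisions if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rate(group_a) - rate(group_b))

# Alert if the gap drifts above an agreed threshold while the system is in operation.
record_decision("group_a", True)
record_decision("group_b", False)
if demographic_parity_gap("group_a", "group_b") > 0.2:
    print("Bias threshold exceeded; trigger review")
```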

Key takeaways

Whether you are a novice AI practitioner or an expert, adhering to these principles will help your organization avoid AI disasters and enjoy the benefits of the AI revolution:

  • Define and control the boundaries of an AI system within an organization's infrastructure.
  • Implement continuous risk management within the organization's AI governance program.
  • Focus on cybersecurity and adhere to the principles of zero trust.
  • Avoid centralized designs to reduce blast radius and implement fail-safe mechanisms.
  • Understand how the AI is trained and how data flows into and out of it throughout its life cycle.
  • Ensure comprehensive immutable logging is in place.
  • Document the AI thoroughly and ensure users are trained to use it.
  • Implement continuous bias measurements following the principles of responsible AI.
  • Ensure proper human oversight and the availability of manual control.
  • Create in-house innovation "sandboxes" allowing employees to experiment with AI safely.

Establishing a well-functioning AI governance program is no easy feat. While recent AI hype has focused on technical capabilities, more attention should shift to AI governance. This analysis of the fictional, yet significant, incident with a malfunctioning AI aboard Discovery One, as seen through the lens of the proposed AI Act, can assist engineers and compliance practitioners in creating robust and future-proof AI systems.

The time to act is now. AI is swiftly becoming a cornerstone of competitiveness and success for every organization.