While artificial intelligence has been around for decades, it has only recently captured the public's imagination and entered the mainstream. The exponential growth of AI tools illustrates the continued spread of AI into all parts of our lives: chatbots such as ChatGPT and Bard, image generators such as Dall-E, Google's Magic Eraser for removing distractions and other people from personal photos, the University of Alberta's work detecting early signs of Alzheimer's dementia through your smartphone, and Amazon Go streamlining in-person grocery shopping. And the opportunities and benefits seem endless.

However, not everyone sees the rapid pace of development through rose-tinted glasses. Some focus on the potential for serious misuse: not sci-fi Terminator-style robots, but privacy-eroding, bias-ridden algorithms that can spread mistruths or cause significant harm like never before. There have been calls for a moratorium on the technology. An open letter published in March 2023 called on "All AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," with eminent signatories such as Elon Musk and Apple co-founder Steve Wozniak, as well as AI experts. There have also been calls for governments and regulators to play catch-up with the tech. Musk was quoted as saying, "Mark my words … AI is far more dangerous than nukes," while OpenAI CEO Sam Altman said, "If this technology goes wrong, it can go quite wrong," when he appeared before a U.S. Senate Judiciary subcommittee 16 May.

Meanwhile, others want to embrace and innovate, recognizing the world of possibilities this new tech brings alongside the economic benefits that have yet to be fully realized. Google Brain Founder and former Baidu Vice President Andrew Ng said, "AI is like electricity. Just as electricity transformed every major industry a century ago, AI is now poised to do the same."

Whichever side of the debate you land on, there seems to be a consensus that AI is a beast that needs taming — in the form of responsible regulation — yet, globally, no one agrees on how. This is perhaps unsurprising given our different cultural, philosophical, governmental, political and regulatory beliefs, but the digital world knows no such boundaries, and AI is therefore a global issue. So where do we go from here?

Differences in approach

Given the varied approaches worldwide, we have focused on the areas closest to home, both geographically and, some might say, politically: the U.K., the EU and the U.S. While these jurisdictions started with markedly different approaches to this issue, we now see some convergence, particularly between the U.K. and the U.S. We will consider each in turn.

UK

The backdrop to the U.K.'s light-touch AI regulatory approach is the government's desire to ensure the continued success of the U.K.'s tech economy, sometimes referred to as the "European Silicon Valley." At the end of 2022, the sector had a combined market value of USD1 trillion, and the U.K. maintained its number one spot in the European tech ecosystem, coming in globally at number three, behind the U.S. and China.

The "forward-thinking approach to regulation encouraging digital innovation and competition," one pillar credited with the success of this market, is in stark contrast to the approach put forward across the English Channel in mainland Europe. What are the U.K. government's plans for regulating AI? The answer can be found in the long-awaited AI white paper, "A pro-innovation approach to AI regulation," published 29 March.

The white paper aims to "drive responsible innovation and maintain public trust in this revolutionary technology" by avoiding "heavy-handed legislation which could stifle innovation," and instead utilizing a more flexible approach. 

It is a high-level, light-touch approach built on a broad characteristic-based definition of AI, with "adaptivity" and "autonomy" being the characteristics in question, and five key principles supported by existing legislation and regulators, as opposed to new legislation or any overarching AI regulator at this stage (although this will be kept under review). The five principles are: 

  1. Safety, security and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

These "values-focussed cross-sectoral" principles build on the Organisation for Economic Co-operation and Development's values-based AI principles, "promoting the ethical use of AI." The government believes this principle-based approach allows for "agile and proportionate" regulation, while maintaining certainty and clarity for business. Given the economic value of the AI market, business-friendly regulation is at the forefront of many minds.

Reliance on existing U.K. regulators and their ability to cooperate in areas with cross-sectoral issues, as proposed in the white paper, is not new. One example is the Digital Regulation Co-operation Forum, established "to ensure a greater level of cooperation, given the unique challenges posed by regulation of online platforms." 

The existing regulators expected to be active in AI regulation include the Information Commissioner's Office, the Competition and Markets Authority, the Health and Safety Executive, and the Equality and Human Rights Commission. Over the next 12 months, these regulators will issue tailored and practical guidance to organizations on appropriate use of AI in their respective sectors. A central risk function will then be proposed to ensure any cross-sectoral issues don't fall between the cracks of each regulator's remit. Regulators will also issue tools and resources, such as risk-assessment templates, detailing how to implement the five key principles in their sectors.

Additionally, the government will establish a regulatory sandbox for AI. This key recommendation, made by Sir Patrick Vallance, will allow regulators to collaborate and directly support innovators by helping get AI products to market. 

While some, including Musk and Altman, may say the U.K.'s approach is light on detail and, as a result, provides no clear guardrails on how to deploy AI solutions, others believe it offers the flexibility to keep up with AI technology by choosing to focus on its features, rather than its methods and techniques. The business-friendly, pro-innovation message is also clear: the U.K. wants to be the leader in responsible AI, while recognizing regulation is necessary to maintain public trust.

Building on the responsible AI theme, the U.K. government announced plans to host the first global summit on AI safety this autumn. The summit "will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action. It will also provide a platform for countries to develop a shared approach to mitigate these risks." The summit was also backed by the U.S., with President Biden committing to "attend at a high level."

The acknowledgment that the borderless digital world needs countries to cooperate to regulate AI is clear, and, much like the calls from both private business and research communities, the global AI safety summit will bring key countries, leading technology companies and researchers together to find a workable solution. Ultimately, and some say inevitably, it looks like more regulation is on the cards for the U.K. too.

EU

Over in mainland Europe, the approach to regulating AI could not be more different. The AI Act and the AI Liability Directive are currently making their way through the EU legislative process. The AI Act will set out obligations before any AI product reaches the market, and the AI Liability Directive will deal with civil claims for damages should any harm arise as a result of the AI.

The AI Act

The AI Act adopts a risk-based approach, differentiating between AI that creates "unacceptable, high, and low or minimal levels of risk." The "unacceptable" uses will be prohibited, whereas "acceptable" uses will be subject to varying obligations based on their level of risk.

Much like the familiar extraterritorial effect of the EU General Data Protection Regulation, the AI Act will affect organizations within and outside the EU. It will apply to providers that offer their AI in the EU, regardless of where they are located, to users of AI within the EU, and even to providers and users of AI outside the EU when the tech's output is used in the EU. This broad scope, accompanied by proposed fines of up to 30 million euros or 6% of worldwide annual turnover, whichever is higher, is certainly getting attention.

While AI Act governance will ultimately fall at the relevant designated regulator's feet, a new supervisory board in the form of the European Artificial Intelligence Board has also been proposed. The EAIB will oversee the implementation of the AI Act and ensure uniform application across the EU. It would also be responsible for releasing opinions and recommendations, as well as providing guidance to national authorities. This sounds very similar to the role of the European Data Protection Board. Let's see how successful that turns out to be.

It remains to be seen if there will be a two-year grace period to implement the AI Act once it completes its legislative passage. While this was originally expected to enable organizations to adapt and accommodate new obligations, public awareness, usage of AI and the tech's rapid pace of development may have a bearing on this. 

The AI Liability Directive

And what of the new AI Liability Directive? On 28 Sept. 2022, the European Commission also published this proposal to adapt the EU noncontractual civil liability rules to ensure those harmed by AI would have access to the same protections as those harmed by other types of technology. The new rules enable victims of damage caused by the fault or omission of providers, developers and users of AI to receive compensation. This legislation is much earlier in the lengthy EU legislative process and will be subject to further negotiation before it reaches the statute books.

The EU approach

This more regimented approach is born from the desire to harmonize AI-related rules across the EU. There have been concerns about the approach, as it looks like there may be a large increase in the number of AI systems that fall within the "high risk" category and are subject to the most onerous obligations. One study by the Initiative for Applied Artificial Intelligence estimates the proportion of AI systems falling into this category could rise from 18% to 40%.

Another recent development is the widening of the AI Act's scope by the Council of the European Union and the European Parliament, which will no doubt create much debate during the trilogue negotiations. The Council adopted its negotiating mandate 6 Dec. 2022 and Parliament adopted its negotiating mandate 14 June. These amendments resulted from the explosion of large language models, such as ChatGPT, Bard and Galactica. The original proposal did not include specific provisions for general purpose AI tech, in other words, AI systems used for many different purposes or general purpose AI integrated into high-risk systems at a later date. The Council of the European Union therefore put forward proposals to legislate "general purpose AI" by introducing various bespoke obligations. Without these general purpose AI provisions, there is a fear the existing European Commission proposals could create a loophole for general purpose AI, where the specific use of the AI would be regulated but not the underlying model.

The European Parliament has finalized its adjustments, proposing to distinguish general purpose AI from foundation models. General purpose AI can be used in, and adapted to, a range of applications for which it was not intentionally or specifically designed. Foundation models are AI models trained on a broad range of data at scale, designed for generality of output and usable across a range of distinct tasks. In particular, Parliament's proposals include obligations for providers of foundation models, guaranteeing "robust protection of fundamental rights, health and safety and the environment, democracy and rule of law." They also require those providing foundation models to "assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database."

The proposals impose more obligations in relation to foundation models that are forms of generative AI, including additional transparency requirements, such as "disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training." While these proposals are undoubtedly important to regulating new and evolving AI, they show the speed at which AI has taken off and how difficult it is to find common ground for regulation.

Looking at the positions of the Council of the European Union, the European Parliament and the European Commission, the trilogue negotiations will be interesting! Views differ in numerous areas, ranging from merely tense on some matters to polar opposite on others. Key issues to look out for include the definition of AI itself, provisions for general purpose AI and foundation models, biometric privacy protections, "high risk" uses of AI and the percentage of the market that will be categorized as such, security versus fundamental rights, enforcement, and, of course, implementation timelines.

While the desire to harmonize AI rules across the EU, and the value of doing so, is recognized, it could be argued that the need to define and categorize the tech, as currently proposed, is folly. The rapid pace of development and unimaginable potential uses make it difficult to legislate in this way. While this approach may seem alien to those in the U.K. and the U.S. who are used to a common law legal system, it is unsurprising across the EU, where civil law legal systems routinely codify laws. These trilogue negotiations will be pivotal for EU legislators working together to find a common approach to AI regulation.

US

The U.S. may have joined the race at the start by passing the bipartisan National AI Initiative Act of 2020 in January 2021, which established an overarching framework to "strengthen and co-ordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies, in cooperation with academia, industry, non-profits, and civil society organizations." Nevertheless, commentators in the U.S. consider the country to be behind the rest when it comes to effective AI regulation.

However, the U.S. is back in the race now that the government has adopted an approach more akin to the U.K.'s risk-based, sectoral one, with responsibility split across multiple federal agencies. There are a number of unique federal initiatives, such as the Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology AI Risk Management Framework, the creation of a roadmap for a National AI Research Resource and the White House Executive Order to root out bias, protect the public from algorithmic discrimination and address key concerns about AI and its uses. Yet there is currently no proposal for federal-level AI legislation like that in the EU. NIST's pragmatic AI Risk Management Framework and accompanying playbook are particularly interesting. These valuable resources "seek to cultivate trust in AI technologies and promote AI innovation while mitigating risk." Developed with input from the public and private sectors, they are intended to be "living documents" where collaboration is encouraged. NIST said, "It is intended to build on, align with, and support AI risk management efforts by others." Adherence to the framework is voluntary, but it will be interesting to see whether this remains the case given recent developments.

The U.S. approach is focused on promoting innovation and recognizing the opportunities of AI tech while also acknowledging and mitigating the risks, but it is thought this piecemeal approach will lead states to take matters into their own hands. A more complex, and perhaps contradictory, patchwork of AI rules at the state level will make it more difficult and cumbersome for agile tech businesses to realize the potential of AI. The tech itself is complicated, can be challenging to understand and moves at such a fast pace that regulating it, let alone complying with such regulation, becomes a problem. Altman said, "For a very new technology we need a new framework," adding that his company and others want to work with the government to create it.

Some lawmakers understand the importance of regulation at the federal level. It is, perhaps, with this in mind that the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law held a hearing 16 May titled "Oversight of A.I.: Rules for Artificial Intelligence." The hearing explored what AI regulation should cover. Altman, IBM Chief Privacy and Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus testified. Altman favored regulatory guardrails that would allow the benefits of AI to be realized while minimizing the harms. Montgomery spoke about "risk-related regulation" with more oversight for high-risk AI than low-risk AI. This sounds similar to the EU approach of categorizing AI by risk, but the key here is ensuring not all AI is assessed as high risk. Marcus said he was there as a "scientist, as someone who has founded AI companies, and as someone who genuinely loves AI — but who is increasingly worried," and went on to talk about the need for big tech, governments and independent scientists to work together to regulate AI, suggesting a "global, international, and neutral" body to ensure AI safety.

On 23 May, shortly after the hearing, the White House released a fact sheet on steps to "Advance Responsible Artificial Intelligence Research, Development, and Deployment." The announcement included:

  • An updated roadmap to focus federal investments in AI research and development.
  • A new request for public input on critical AI issues.
  • A new report on the risks and opportunities related to AI in education.

The White House also held "listening sessions with workers" to hear employees' experiences of AI use for "surveillance, monitoring, evaluation, and management" by their employers.

It appears AI regulation has made it up the presidential (and political) agenda after a rather slow start, and more domestic and international initiatives may be announced in the coming weeks and months. One example is the U.S.-U.K. Atlantic Declaration, announced 8 June. The declaration contains references to AI regulation, as well as to various international initiatives in this area, including support for the U.K.'s proposed global AI safety summit. This bilateral agreement also reinforces the need to "accelerate our cooperation on AI with a focus on ensuring the safe and responsible development of the technology."

Conclusion

Conceptual parallels can be drawn across the U.K., EU and U.S.: the regulation of AI should be risk based, AI needs to be trustworthy, and an international standard would be beneficial. But the commonality ends there. Striking the balance between responsible and innovative use of AI, for the benefit of both private companies and the wider public, while proportionately mitigating the risks of rapidly developing tech whose future uses have not yet been imagined, means a big challenge lies ahead.

The super-charged race to regulate, coupled with the current economic reality, means countries will look to protect their citizens and businesses first and foremost, while trying to secure the coveted global number one spot for AI and its rich rewards.

A complicated picture emerges where over-regulation in one jurisdiction may increase the competitive advantage of a more light-touch regime in another, although whether regulating AI in such a manner is possible or palatable to the global community remains to be seen. A number of recent bilateral and multilateral trade agreements include AI in their terms as the potential benefits become more and more apparent, e.g., the U.S.-U.K. Atlantic Declaration mentioned above. And, on 20 May, the G7 committed to working together to "Advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values." As U.K. Prime Minister Rishi Sunak said, "No one country can do this alone. This is going to take a global effort."

While global standards seem appropriate for a digital world that knows no borders, differing approaches mean agreement at this level will be a challenge, especially with the rapid pace of tech development versus the slower, high-level, committee-style negotiations at the international level.  

At the heart of the differing approaches lies the fundamental ideological difference between an innovative, business-friendly, light-touch regime and the protection of fundamental rights. While the tech is moving at such a fast pace and future uses are yet to be considered, let alone realized, the more flexible and adaptable light-touch approach seems to have the edge. The U.K. is also in the position of being unburdened by the need to seek agreement from multiple countries, like the EU, or multiple states, like the U.S., and therefore appears to be the most able to adapt at speed. However, the U.K. must not be complacent. It must capitalize on its ability to respond quickly where needed while not over-regulating AI by demanding it meet higher standards than other tech or products. This is a delicate balance that will undoubtedly shift over time. Global organizations will be tracking developments closely. Many will be caught by the extraterritorial application of the EU AI Act, so even if the U.S. and U.K. don't adopt a similar regime, those organizations are likely to put in place a global standard that complies with the most onerous regime.

Even with clear rules on regulation, one also needs to be cognizant of the challenge of regulators taking different approaches in interpreting those rules, which makes compliance even more challenging. Just ask any privacy professional working across U.K. and EU jurisdictions!

All in all, an exciting period of challenge and opportunity lies ahead for the full potential of AI to be realized, while keeping privacy pros busy working on compliance with the various facets of global regulation as they emerge. Who will win the race to regulate?