As artificial intelligence transforms our organizations and the way we engage with the world, there is an urgent need to scale and professionalize the workforce that will be tasked with the practical application of AI governance. A professional AI governance workforce can navigate the sociotechnical challenges raised by AI systems to ensure AI is developed, integrated and deployed in line with emerging AI laws and policies.

Beyond mitigating risks, the professionalization of AI governance will build the trust and safety needed to ensure AI fulfills its potential to serve society in positive and productive ways.

The world needs AI governance professionals

Clear rules of the road are important. But without people to put those rules into practice, they are reduced to just words. History has shown that early investment in a professionalized workforce pays dividends later. As lawmakers and society coalesce around the guardrails needed to mitigate AI risks, there is an imperative to train and empower the individuals who will put those guardrails into practice.

Tomorrow's regulations will rely on the professional structures we build today. Waiting until new laws are on the books or in effect to build an AI governance workforce will inexcusably delay the implementation of AI governance protections that are needed now.

Avoiding a workforce gap is paramount

Hundreds of thousands of AI governance professionals will be needed in short order. If they are not available to steer development and deployment in a way we can trust, rebuilding later will be far costlier for organizations, economies and society at large. Even as many thoughtful individuals rise to the task of responsibly governing AI, a significant gap remains between the demand for such experts and the supply of qualified practitioners.

Leveraging existing professionals is only a partial solution

AI governance, with its broad set of risks and equities at stake, demands skilled professionals who understand the scope and implications of the work at hand. Existing parallel professions, such as privacy, cybersecurity, data governance, risk management, compliance and organizational ethics, can be skilled up to rapidly fill the widening gap. Building achievable on-ramps for these individuals, and those from other disciplines, to evolve into AI governance roles will be essential.

Lessons can be learned from history. As the largest global association of privacy professionals — a field that didn’t exist a few decades ago — the IAPP observed firsthand the factors that shaped the growth of this new profession.

Ten years ago, privacy and data protection was already a swiftly growing field due to decades of consensus on common principles, an expanding number of legal requirements and business pressure to demonstrate trustworthy practices. But nothing prepared us for the exponential growth in demand after regulators began requiring organizations to designate qualified professionals as responsible for meeting compliance obligations.

Laws like the EU General Data Protection Regulation recognized the value of the professionalized privacy workforce and, in turn, ensured the profession of privacy would continue to mature. The field of privacy is still scaling to meet the demand fostered by regulatory recognition of the importance of designating a responsible and knowledgeable professional. The workforce gap in privacy, as in cybersecurity, likely numbers in the tens of thousands globally.

Earlier recognition of the need for professionalization, and more care in fostering its growth, would no doubt have narrowed this ongoing workforce gap and avoided the dynamic that played out under the GDPR, in which organizations continued to play catch-up on compliance long after the law took effect.

Harnessing the benefits of professionalization

When professionalization is recognized by lawmakers and international bodies, and embedded within organizations, a professionalized workforce can play a powerful role in developing and implementing innovative governance solutions. Once established, with incentives aligned to spread adoption, the profession itself continues to standardize practices and rapidly responds to new risks. Such a professionalized workforce takes responsible governance from theory to practice more nimbly than can be accomplished by regulation and enforcement alone.

The factors for fostering professionalization are well understood. Both regulatory and soft law measures will play a role in professionalizing AI governance, through the creation of a standardized body of knowledge, a common lexicon, trainings, certifications and a community of practitioners.

  • Specialist knowledge. As compliance obligations mature, so will norms of practice updated through professional associations, government entities and international bodies. As with other multidisciplinary sociotechnical fields, mechanisms for rapidly sharing knowledge are necessary to adapt to evolving AI policy considerations.
  • Credentialing mechanisms. Modern professions rely on recognized certifications and licenses to verify expertise and standardize practices. Often such credentials are recognized by international accreditation bodies, such as the International Organization for Standardization, among others globally. AI governance will need internationally recognized credentials that foster a common parlance and a cross-functional skill set among practitioners.
  • Regulatory recognition. Regulation at all levels plays a role in driving the standardization of practice, bringing legitimacy and operational incentives, which help to close workforce gaps. Policymakers already recognize the value of requiring qualified and responsible officers to implement AI governance standards within organizations. For example, U.S. Executive Order 13960 requires federal agencies to specify responsible officials and provide appropriate training for AI governance. The proposed Canadian code of practice for generative AI systems envisages the designation of responsible staff, while drafts of the EU AI Act would require human oversight of high-risk AI systems with the necessary competence, training and authority.
  • Norms and ethics. Mature professions, such as law and medicine, have adopted professional codes of conduct to govern their members' behavior. Principles for ethical AI are already proliferating and organizations will continue to recognize the value of industry-leading practices as a business differentiator. Remaining accountable to these principles will depend on the work of credentialed professionals with a common understanding of values fostered by consistent communication and ongoing training.

A robust community of qualified AI governance professionals that can scale as quickly as the technology itself is a necessary precondition for effective AI governance. Until professional mechanisms become the foundation of daily practice, outcomes will vary widely and the risks posed by AI will only intensify. Once established, a modern profession spreads trustworthy, standardized practices across industries and borders, while remaining adaptable to swiftly changing technologies and risks.