Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Artificial intelligence is no longer the exclusive domain of large enterprises. From retail and real estate to logistics and law firms, small- and medium-sized businesses are beginning to experiment with AI.

With three out of five SMBs using or planning to use AI in some form within the next two years, competitive edge is a significant catalyst for AI adoption. The appeal of AI to cut costs, increase efficiency, drive creativity and enhance decision-making is amplified by McKinsey's 2024 "The state of AI" report, which found 78% of respondents from large competitors have already adopted AI.

Yet for all the promise AI holds, few SMBs are prepared for the accompanying risks. Quickly adopting AI without a plan, adequate training or even a rudimentary understanding of the ever-changing AI regulation and compliance landscape poses a significant AI management challenge for SMBs.

To further complicate matters for SMBs, the AI governance discourse is not tailored to them. In fact, the discourse originally emerged in the context of AI super adopters: enterprises that can afford to integrate AI across multiple departments, have the ability to train and hire specialized talent, and have the resources to strategically align AI's opportunities with organizational goals while outsourcing tailored frameworks for detecting and mitigating risk.

There is a significant gap between AI-driven enterprises with an AI governance strategy already in place and SMBs still undertaking their first AI journey. For the latter, AI governance often evokes unwanted images of committees, costs, red tape, needless bureaucracy and tailored frameworks.

However, SMBs can consider AI governance in another, more accessible way: managing AI responsibly. For SMBs, this does not require creating new departments or hiring ethicists and lawyers.

Rather, it calls for a practical approach that fits an organization's shape and size. Essentially, it is a way to ask the right questions, put foundational guardrails in place and grow an organization's AI capacity confidently. Because the adoption of AI only continues to accelerate, now is an important time for SMBs to begin the conversation of AI governance.

Delving deeper into AI governance

AI governance involves the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. Its frameworks direct AI research, development and application to help ensure safety, fairness and respect for human rights.

Promoting responsible and ethical practices throughout an AI system's life cycle requires deliberate decisions about development, deployment and monitoring to ensure the system performs as intended.

Moreover, AI governance includes oversight mechanisms that address risks such as bias, privacy infringement and misuse while fostering innovation and building trust. It gives organizations a structured approach to mitigating these potential risks.

One critical tool organizations can employ is creating solid AI policies and procedures. Establishing roles, expectations, rules and responsibilities is a reliable starting point for managing AI because it places important guardrails around how people think about and use AI.

Other tools include a strong data governance plan. AI needs large amounts of data to function properly, and the more organized and clean the data flows are, the better. And of course, we cannot forget the importance of AI and data protection laws and regulations, which differ by jurisdiction. For example, the EU General Data Protection Regulation and AI Act contain data and AI governance requirements that serve as gold standards other jurisdictions look to for guidance when legislating.

In a nutshell, AI governance establishes the necessary oversight to align AI behaviors with ethical standards and societal expectations to safeguard against potential adverse impacts. Not only do organizations achieve higher levels of compliance, but they also benefit from increased efficiency in developing and applying AI technologies.

Why is this important? IBM found "80% of business leaders see AI explainability, ethics, bias, or trust as major roadblocks to generative AI adoption. AI governance tackles these issues." Similarly, "investing in AI ethics has the potential to create quantifiable benefits."

To that end, it is important to embed ethics by design in AI projects from the outset. It may also be advantageous to create an ethics committee within the organization. This means staying involved throughout the AI development life cycle at crucial milestones, providing ongoing assessments and feedback, and ensuring the major components of the ethics strategy are incorporated into the design.

The main approaches to AI governance

There are several approaches to AI governance, including: a rules-based approach, with prescriptive regulations; a risk-based approach, which tiers AI uses from minimal and limited risk up to high and unacceptable risk and prioritizes oversight accordingly; an outcomes-based approach, defining and achieving specific desired results; and a principles-based approach, establishing high-level ethical principles and values to guide AI development, deployment and use.

It is important to note that it is common for organizations to use a combination of these approaches.

Suggested best practices

Leaders of SMBs thinking about how to get started with AI governance should:

  • Identify where AI is currently being used in the organization. Ask the workforce to be honest here, as there are often hidden uses.
  • Create strong AI policies and procedures that set out responsibilities, roles and specific guardrails for the organization.
  • Train the workforce on basic AI literacy and the company's particular AI policies and procedures.
  • Monitor employees' compliance with the AI policy and address any instances of noncompliance promptly.
  • Monitor AI tools and systems and address any issues early.
  • Comply with all applicable legislation pertaining to data protection, AI systems, intellectual property, confidential information and data breach reporting.
  • Ensure proper documentation is completed.

Christina Catenacci is co-founder, vice president, chief privacy officer, chief AI officer, managing editor and chief operating officer of voyAIge strategy.

Tommy Cooke, Ph.D., is co-founder, president and CEO of voyAIge strategy.