
AI governance as an enterprise business initiative


Artificial intelligence governance is gaining momentum. On 30 Oct., U.S. President Joe Biden announced a sweeping executive order that brings more far-reaching federal scrutiny to the use of AI across industries. Earlier this month, at the IAPP's AI Governance Global 2023 in Boston, large companies spoke about how critical it is to build formal governance processes around the development and deployment of AI systems to mitigate risk.

Conversations around the potential risks of launching AI have grown more sophisticated, nuanced and informed. The governance processes that enable evaluation of the growing body of AI use cases, however, are not settled. In short, how do organizations get an AI-governance process off the ground, who is responsible and where does the process sit?

While legal and compliance teams are rightfully focused on mitigating the data privacy, intellectual property and ethical risks associated with AI, governance is as critical to business value and efficiency as it is to limiting enterprise risk.

AI governance can improve organizational access to and adoption of AI systems, driving innovation and process efficiencies. Proper governance can also prevent the duplication of efforts in the development and adoption of AI systems, and force meaningful business evaluations of AI use cases prior to investing in them.

AI governance, therefore, should not be relegated to the compliance and risk function, but should be embraced as an enterprise-wide business initiative established by organizational leadership. This sentiment is echoed not just by leaders in the development of AI technologies, such as Microsoft, IBM and Google. At AIGG, organizations in other sectors, like Pfizer and Mastercard, said they are designing their own enterprise AI-governance processes with cross-functional accountability to address the growing internal use of AI systems for common business functions.

Large organizations with a history of embracing AI often establish AI-governance offices to spearhead the creation of AI policies, guidelines and training. Uber, Microsoft, Google, Mastercard and Pfizer have all referred to these functions. However, even among these organizations, the actual implementation of the governance process falls on the teams accountable for AI deployment.

With this in mind, Collibra established a scalable framework that can work for organizations of any size, starting with the business sponsor proposing their AI use case to analytics, data offices and legal teams. The framework starts with the use case, rather than an evaluation of the data or algorithm alone, because a holistic picture of how AI is leveraged is needed to determine its value and associated risk.

To properly assess an AI use case, a business must evaluate the training and inference data, the algorithm, the purpose of the use case, the business value and the return on investment. Businesses must also evaluate, as a whole, the level of human oversight and the access, usage, performance monitoring, data storage, data retention and security protocols associated with the AI.
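To make the scope of that assessment concrete, here is a minimal sketch of the kind of intake record such an evaluation might populate. It is purely illustrative: the field names and structure are assumptions made for this article, not Collibra's actual questionnaire.

```python
from dataclasses import dataclass, field

# Hypothetical intake record for an AI use-case proposal. Field names and
# structure are illustrative assumptions, not an actual intake questionnaire.
@dataclass
class AIUseCaseIntake:
    use_case_name: str
    business_sponsor: str
    purpose: str                       # what the AI system is meant to do
    business_value: str                # expected value and ROI rationale
    training_data_sources: list[str]   # origin and lineage of training data
    inference_data_sources: list[str]  # data the model will see in production
    algorithm_description: str         # model type, in-house or third-party vendor
    human_oversight: str               # e.g., "human-in-the-loop review of outputs"
    monitoring_plan: str               # access, usage and performance monitoring
    storage_and_retention: str         # data storage and retention protocols
    security_controls: list[str] = field(default_factory=list)
```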

Business sponsors are asked to work in advance with the in-house analytics team or third-party vendors, as applicable, to gather sufficient information to complete a series of intake questionnaires documenting these factors. While this is a heavier lift for the business sponsor at the outset, the review and approval process is relatively painless when all the information is at the reviewers' fingertips.

Enterprise-wide workflows guide the business stakeholder through their AI use-case proposal. Every stakeholder in the review and approval process must have access to all information pertinent to the proposed use case. My team, in charge of the legal, ethics and compliance review and risk assessment of AI use cases at Collibra, can reference the business and technical information needed to complete the evaluation.

The business case, the business value description and information about the training and inference datasets, their categories, sources, quality and lineage, as well as the AI models at the center of the use cases, are all available to the team. Having access to a single source of truth for an entire use case is valuable, as use cases can (and usually do) evolve over time, making it easier to modify these records once and keep all stakeholders informed. Further, a complete and detailed accounting of all AI use cases allows the organization to quickly adapt to the anticipated influx of AI regulations and standards as they arise.
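As a rough illustration of that single-source-of-truth idea, the sketch below shows a use-case record that keeps a revision history and notifies every stakeholder when it changes. The class, fields and notify helper are hypothetical, invented for illustration, not a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def notify(stakeholder: str, message: str) -> None:
    # Placeholder notification; a real workflow would use email or ticketing tools.
    print(f"[to {stakeholder}] {message}")

# Hypothetical single source of truth for one AI use case.
@dataclass
class UseCaseRecord:
    name: str
    details: dict                                     # intake answers, model and data descriptions
    stakeholders: list = field(default_factory=list)  # reviewers to keep informed
    revisions: list = field(default_factory=list)     # audit trail of prior versions

    def update(self, changes: dict) -> None:
        """Modify the record once, keep an audit trail and inform all reviewers."""
        self.revisions.append((datetime.now(timezone.utc), dict(self.details)))
        self.details.update(changes)
        for stakeholder in self.stakeholders:
            notify(stakeholder, f"Use case '{self.name}' was updated.")

# Example: a sponsor revises the model choice; legal and the data office are informed.
record = UseCaseRecord("invoice triage", {"purpose": "route supplier invoices"},
                       stakeholders=["legal", "data office"])
record.update({"model": "third-party LLM, v2"})
```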

As the frenzy to adopt AI within products and internal processes grows, business, data, legal and compliance teams all need to collaborate around each AI use case to promote productivity, efficiency, compliance and responsibility. A streamlined AI-governance process, framed as an enterprise business function where all stakeholders are accountable for their role and have visibility into all aspects of the process, is a recipe for the successful adoption of AI use cases.



1 Comment


  • SAYYAPARAJU PANDURANGA RAJU • Nov 25, 2023
    I agree with the author's meaningful insights and advocacy. I would like to add the following:
    
    The fast-paced adoption of AI in various sectors, particularly consumer-facing ones, warrants that enterprises formulate AI-governance frameworks and policies that address the protection of consumers' personal data.
    
    AI governance should be introduced and implemented so that it becomes a key ingredient of enterprise data protection, information security, compliance and risk-management policies. All of these should complement each other, instead of working in silos, to achieve objectives in a holistic manner.
    
    Enterprises should institute a team for AI governance under either the Chief Risk Officer or the Chief Information Security Officer, with a dotted-line relationship to the DPO, CCO and CTO, to make its functioning more effective.
    
    Regulations around AI should ideally be light for data-light enterprises and heavier for data-heavy enterprises, so that compliance costs remain affordable for small enterprises.