We are seeing accelerating expansion in the range and capabilities of machine aids for human decision making and of products and services embodying artificial intelligence and machine learning. AI/ML is already delivering significant societal benefits, including improvements in convenience and quality of life, productivity, efficiency, environmental monitoring and management, and capacity to develop new and innovative products and services.
The common feature of "automation applications" — the term we’ll use to cover diverse AI-/ML-enabled decision making, as well as products and services — is that they combine data, algorithms and algorithmic methods, code and humans to derive insights and other outputs.
Humans are — or should be — the heart of automation apps, the heart in the machine.
Humans decide which datasets are discoverable, linked and analyzed.
Humans determine whether data, algorithms, algorithmic methods, code and people — as brought together and operating within technical, operational and legal controls, safeguards and guide rails specified by the same or different humans — are reliable and safe to produce appropriate outputs that lead to appropriate outcomes for humans and the environment. Humans, directly or indirectly, consciously or otherwise, determine what is appropriate.
Automation apps will only be safe, reliable and socially beneficial if the decisions that humans make are properly informed and responsible. Good governance should ensure that the right humans with the right skills and experience are brought together and empowered to make or influence the right (properly informed and responsible) decisions at the right time about design, deployment and context of use.
Each individual involved in the governance of automation apps brings a different perspective. Diversity matters. Diverse teams are better placed to discern the evolving and fluid social expectations that should inform decisions about whether and how automation apps are designed, deployed and used.
Humans determine whether, how and in which particular contexts an automation app is used. Automation apps will be safe, reliable and socially beneficial only if used within the range of appropriate contexts for which they were designed. Humans who actually care about ethical, fair and socially responsible outcomes should work out what "safe outputs" and "safe outcomes" are, and ensure that relevant users of automation apps know what is safe and what isn’t. Automation apps don’t need to be safe for all potential uses; there are considerations of cost and specialization. You don’t take a Ferrari bush-bashing, but you may drive a Jeep on a racetrack. We know this, but often we aren’t properly informed and can’t work out the capabilities and limitations of automation apps. Appropriate deployment and use require good governance and anticipation of the range of possible users and uses.
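One practical way to make the designed-for envelope explicit is to ship it with the app as a machine-checkable declaration, so that use outside approved contexts fails loudly rather than silently. The sketch below is illustrative only: the `DeploymentContext` and `UsagePolicy` names and their fields are assumptions, not an established schema.

```python
from dataclasses import dataclass

# Hypothetical declaration of the contexts an automation app was designed for.
# Field names and values are illustrative assumptions, not a standard.
@dataclass(frozen=True)
class DeploymentContext:
    domain: str           # e.g. "consumer_credit", "wildlife_monitoring"
    population: str       # who the outputs affect, e.g. "adults_in_AU"
    decision_weight: str  # "advisory" (a human decides) or "operational" (acts directly)

@dataclass(frozen=True)
class UsagePolicy:
    approved_contexts: frozenset[DeploymentContext]

    def check(self, proposed: DeploymentContext) -> None:
        """Refuse to run outside the contexts the app was designed and tested for."""
        if proposed not in self.approved_contexts:
            raise PermissionError(
                f"Context {proposed} is outside the approved envelope; "
                "escalate to the governance owner before use."
            )

policy = UsagePolicy(frozenset({
    DeploymentContext("consumer_credit", "adults_in_AU", "advisory"),
}))

policy.check(DeploymentContext("consumer_credit", "adults_in_AU", "advisory"))  # passes
# policy.check(DeploymentContext("tenant_screening", "adults_in_AU", "operational"))  # raises
```

The Ferrari/Jeep point then becomes operational: the policy does not make the app safe everywhere; it makes the designed-for envelope explicit and the refusal auditable.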
Over the last few years, many of us have spent a lot of time and expended many words in several languages on ethical principles and frameworks for AI/ML. Principles and frameworks can frame thinking by humans about automation apps. Structured thinking is required — the right humans, empowered by the right entities, to bring the right thinking to the right table at the right time. Principles and frameworks bring structure and reduce the risk that important factors are overlooked. Oversight and review mechanisms can then provide assurance that this thinking reliably happens, or that an alert is raised whenever it does not — and that when (as it often will) something unexpected occurs during testing or deployment, any adverse effects are promptly addressed.
The word "governance" has negative connotations in some businesses. Some executives see governance as a compliance function staffed by people who are a cost center, not revenue enablers, predisposed to say "no," creating friction and delay in time-critical business decisions.
Good data and analytics governance should be none of these things. Governance professionals need to help the executives and other stakeholders who design, develop and deploy automation apps understand what good governance should look like, as implemented across a diversity of data ecosystems and analytics environments, and by diverse business and government entities.
Governance needs to move from a buzzword and tick-the-box compliance checklist to reliable and verifiable everyday practice for product design and development across entities big and small.
Good governance requires consideration not only of who makes the particular decisions that need to be made. It also requires oversight, accountability and allocation of responsibility to individuals to ensure that those decisions are well informed and reliable. It requires plaudits when things are objectively and demonstrably assessed to have gone right, and people who care about consequences when things go wrong. Good governance, therefore, requires consideration of how entities are structured and how humans work together at both the project level and the entity level. It also requires appropriate internal and external monitoring, oversight and review.
Good governance needs to be closely aligned and compatible with existing process and project management methodologies, including the three lines of defense model. If data and analytics governance is not integrated into everyday practice for product design and development, it will not reliably assure good outcomes.
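Integration with the three lines of defense model can be made concrete as a release gate: deployment proceeds only when each line — operational management owning the risk, risk and compliance providing oversight, and internal audit providing independent assurance — has signed off. A minimal sketch, assuming hypothetical sign-off records; a real implementation would attach this to an entity’s existing change-management tooling.

```python
from dataclasses import dataclass

# The three lines of defense: (1) management owning the risk, (2) risk and
# compliance oversight, (3) independent internal audit. Names are illustrative.
LINES_OF_DEFENSE = ("operational_management", "risk_and_compliance", "internal_audit")

@dataclass
class SignOff:
    line: str      # which line of defense is signing
    approver: str  # an accountable individual, not just "the entity"
    approved: bool

def release_gate(sign_offs: list[SignOff]) -> bool:
    """Block deployment unless every line of defense has an approving sign-off."""
    approved_lines = {s.line for s in sign_offs if s.approved}
    missing = [line for line in LINES_OF_DEFENSE if line not in approved_lines]
    if missing:
        print(f"Deployment blocked; missing sign-off from: {missing}")
        return False
    return True

release_gate([
    SignOff("operational_management", "j.smith", True),
    SignOff("risk_and_compliance", "a.lee", True),
])  # blocked: no internal_audit sign-off yet
```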
Assessment of the benefits and risks of a particular automation app needs to take account of the range and possible magnitude of harms to humans, entities or the environment that might be occasioned by inappropriate use. An outcome of good governance should be that reasonably anticipatable risks and harms are minimized. This assessment is highly specific to the context of deployment and use, which is one reason why good governance of automation apps remains a nascent field, where good practice is not yet settled or subject to broadly accepted industry standards.
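In practice, that assessment is often structured as a harm register scoring each plausible harm by likelihood and magnitude in the specific context of use. The sketch below uses an ordinary likelihood-times-magnitude scoring; the example harms, scales and escalation threshold are assumptions for illustration, not a standard.

```python
# Illustrative harm register: score = likelihood x magnitude, each on a 1-5 scale.
# The harms, scores and escalation threshold are assumptions for illustration.
harms = [
    {"harm": "discriminatory credit refusals",        "likelihood": 2, "magnitude": 5},
    {"harm": "incorrect but reversible ranking",      "likelihood": 4, "magnitude": 2},
    {"harm": "privacy breach via re-identification",  "likelihood": 1, "magnitude": 5},
]

ESCALATE_AT = 10  # context-specific threshold set by the governance owner

for h in harms:
    score = h["likelihood"] * h["magnitude"]
    action = "escalate for mitigation" if score >= ESCALATE_AT else "monitor"
    print(f'{h["harm"]}: score {score} -> {action}')
```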
Good governance is required regardless of whether the input is personal data about identifiable individuals, proprietary or business data or data about the living or physical environment.
Good governance is required regardless of whether the outputs or outcomes are controlled by businesses, government agencies, nonprofits and other civil society organizations, law enforcement agencies or intelligence organizations.
It is a truth universally acknowledged that poor data undermines the reliability of automation apps. It is increasingly recognized that code embodying algorithms may also be discriminatory or produce illegal or unreliable outputs. Much has been written in recent years about each of these problems and how to assess whether they will arise. Surprisingly little has been written about how to embed governance within data analytics environments and the entities that operate them, ensure that the full range of concerns is anticipated and addressed, and responsibly manage any residual risks that remain after appropriate mitigation.
Attention has been focused upon the what and the when, but not the how.
Good governance of automation apps should become an enduring source of differentiation and competitive advantage for entities ready to demonstrate that it has been embedded and reliably adopted. Viewed using the long-term lens of sustainable business value, good governance usually makes good business sense. Many boardrooms and C-suites understand this already. However, many businesses are not yet sure how to make good data and analytics governance real.
How do we ensure good governance of automation apps?
Data standardization does not itself assure the provenance, reliability and quality of data as an input for automation apps. Processes for assessment of "data quality" are more mature than processes for assessment of data analytics environments. However, good practice for the assurance of data quality remains largely focused upon data discoverability and readiness for ingestion, and not upon the suitability of data for creating outputs that affect particular outcomes.
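The gap is easy to see in code: typical ingestion checks stop at schema and completeness, while suitability for a particular output requires context-specific checks. A minimal sketch, with column names and thresholds that are assumptions for illustration:

```python
import pandas as pd

def ready_for_ingestion(df: pd.DataFrame, required_cols: list[str]) -> bool:
    """Typical 'data quality' check: expected columns present, few missing values."""
    return all(c in df.columns for c in required_cols) and df.isna().mean().max() < 0.05

def suitable_for_use(df: pd.DataFrame, group_col: str, min_per_group: int) -> bool:
    """Context-specific suitability check: does the data actually support the
    intended output? Here: enough records per affected group to avoid unreliable
    (and potentially discriminatory) estimates. The column name and threshold
    are illustrative assumptions."""
    return int(df[group_col].value_counts().min()) >= min_per_group

df = pd.DataFrame({"income": [50, 60, 55, 70], "region": ["A", "A", "A", "B"]})
print(ready_for_ingestion(df, ["income", "region"]))    # True: clean and complete
print(suitable_for_use(df, "region", min_per_group=3))  # False: region B is too thin
```

The same dataset passes the ingestion check and fails the suitability check — which is exactly the distinction current good practice tends to miss.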
It is often hard to know what is going on. Volumes have been written about problems with the explainability of "black box" ML apps. However, there is a bigger problem in trying to determine industry best practice. Many data analytics methods and processes are in and of themselves sensitive business information that can only be legally protected by retaining their character as trade secrets (confidential information). Businesses quite reasonably do not want to expose their proprietary processes and algorithms to competitors. Exposure may also enable algorithms to be gamed. As a result, there will likely continue to be limited transparency for developing good industry practice in operational data science.
As to outputs and outcomes, there is continuing debate about the appropriate range of practical, operational controls, safeguards and guide rails to ensure that outputs from automation apps will be safe, reliable and socially beneficial for use in particular contexts.
Whenever entities are expected to meet legal obligations or evolving standards of good behavior, four organizational requirements need to be addressed.
First, senior management and boards should understand what is expected of the entity and accordingly of their management and oversight, particularly around strategy and risks.
Second, there must be designation and empowerment of appropriately skilled individuals who can give effect to those obligations and standards. Responsibility, skills, incentives, sanctions, escalation criteria and reporting lines must be properly aligned. Controls, safeguards, guide rails and properly approved processes and procedures must be reliably embedded within an entity. Professed accountability of an entity, without corresponding allocations of responsibility to specified individuals, often will not lead to real accountability at all. If consequences are assessed by decision-makers as an externality for themselves or for the entity, the likelihood of an irresponsible decision substantially increases. (A sketch of how such allocations might be recorded follows the fourth requirement below.)
Third, those individuals who are given responsibilities for identifying and mitigating AI/ML risks should be empowered with methodologies and tools that enable them to reliably and verifiably fulfill those responsibilities.
Fourth, data and analytics governance must be properly integrated into everyday business practices and project management methodologies.
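One hedged way to make the second and third requirements concrete is to record responsibility allocations as structured data that tooling can verify, so that "accountability of the entity" always resolves to a named individual with a reporting line and an escalation trigger. The role names and fields below are assumptions for illustration, not a prescribed model.

```python
from dataclasses import dataclass

# Illustrative responsibility record: every governance obligation resolves to a
# named individual, with an explicit reporting line and escalation trigger.
@dataclass
class Responsibility:
    obligation: str          # what must be done
    owner: str               # a named individual, not "the entity"
    reports_to: str          # reporting line
    escalation_trigger: str  # when the issue must go up the line

REGISTER = [
    Responsibility("approve model for new context", "head_of_product", "coo",
                   "any proposed use outside approved contexts"),
    Responsibility("review fairness metrics pre-release", "ml_risk_lead", "cro",
                   "disparity above agreed threshold"),
]

def owner_of(obligation: str) -> Responsibility:
    """Fail loudly if an obligation has no accountable individual."""
    for r in REGISTER:
        if r.obligation == obligation:
            return r
    raise LookupError(f"No individual is responsible for: {obligation!r}")

print(owner_of("approve model for new context").owner)  # head_of_product
```

The design choice here is the `LookupError`: an obligation with no named owner is surfaced as a failure, rather than quietly remaining an externality.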
These organizational requirements apply to all entities, regardless of industry sector, size or level of maturity in information technology. The ways of giving effect to these organizational requirements must, however, differ entity by entity.
Entities electing to develop or use automation apps have a wide range of capabilities and available resources. A startup will need to implement these requirements in a different way than a large organization that has highly systematized business and project processes. Even large organizations will need to evolve. Many do not have high levels of maturity in functions and processes for data and analytics, particularly (but not only) in relation to non-conventional sources of data, and data that is not clearly regulated personal information.
Implementation of good governance of automation apps often will require up-skilling of particular individuals within the entity, new allocations of responsibilities and reporting lines, and other changes to technology and entity governance. Just as evolving standards for privacy by design and security by design have changed requirements for information governance, implementation of good governance of automation apps requires an understanding by entities of the need for effective change management.
Good governance of automation apps should seldom be a matter for specialists alone. Often it will be practicable only if the organizational requirements are addressed through multiple levels, roles and functions within an entity.
The pace of development, implementation and modification of automation apps is fast. Governance frameworks to ensure reliably good automation apps, and the tools and methodologies for assessing and mitigating risks of harm, are still developing. Each entity needs to ensure that its organizational arrangements catch up and keep pace with the entity’s AI/ML capabilities and its changing risk profile.
There are emerging opportunities for entities to differentiate themselves from competitors through good governance. Observe how some digital platforms now advertise their commitment to privacy-protective control of the ecosystems of consumer data that they curate or enable.
With new legal requirements and rising stakeholder expectations about transparency in the design and use of automation apps, it will become easier to identify the laggards: organizations recalcitrant in their governance practices. Some organizations may only react to public relations crises, merely address existing legal obligations or otherwise resist change. These organizations should and will be left behind.
Which entities will shine? And how quickly will there be naming and shaming of those entities that remain parked under a cloud?