Privacy Perspectives | Why we do not need to reinvent the wheel for an ethical approach to AI

Artificial intelligence is expected to increase global gross domestic product by 14% by 2030. Total AI investment surged to a record high of USD77.5 billion in 2021, up from USD36 billion the year before.

However, harms associated with AI result from a combination of human and machine decisions, human training of machine learning and observational training of machine learning from the external world. This is not to say all AI has associated harms, but enough AI applications have the potential for harm that an ethical, trustworthy approach is needed.

Several experts in the AI world have called for algorithmic regulation and AI governance. Cathy O'Neil raised these issues in her book, Weapons of Math Destruction. She noted that algorithms perpetuate human bias and allow society to hide issues it wants to avoid in black boxes, resulting in the automation of the status quo in areas such as insurance, employment and more. O'Neil advocates that organizations provide more proof of their claims, especially regarding potential harms.

More recently, Timnit Gebru, founder and executive director of the Distributed AI Research Institute, added her voice to this discussion by suggesting the burden should be on organizations to prove the benefit and absence of harm at the front end, rather than on the consumer to show harm at the back end. As an example, she highlighted the wrongful arrests of Black men through facial recognition.

Governance can be one way to make these requests a reality. The need for governance has slowly gained a foothold, and governments and other organizations are starting to create AI governance structures.

Multilateral organizations like UNESCO and the Organization for Economic Co-operation and Development have published recommendations on AI ethics and principles for data governance. Countries across the world are considering, or have implemented, laws around AI risk management and data sharing. The EU has taken a strong stance on the importance of AI governance with its work crafting the EU AI Act. The U.S. recently published a Blueprint for an AI Bill of Rights. Organizations like Deloitte, IBM, Microsoft and academic institutions like Stanford and Duke University have all written about the need for AI governance.

The common theme across all these principles, proposals and policies is a focus on "trustworthy AI," which encompasses the entire AI development lifecycle: system design, data collection, testing and implementation of AI-enabled tools, and the post-release assessment of the effectiveness of AI in the real world.

A broad understanding of trustworthy AI includes expectations of human agency, privacy, technical quality and robustness, transparency and accountability, as well as considerations of fairness and nondiscrimination. The goal of AI governance is to protect humans who will use or may be impacted by AI systems from unnecessary risks, minimize negative outcomes and identify opportunities to maximize positive impacts. The EU summarizes this approach succinctly as "AI for the common good."

While calls for governance are clear, there is a lack of standardization in its operationalization. Many proposals and principles are nonbinding and nonspecific, which creates ambiguity for guaranteeing the "trustworthiness" of AI. Private corporations create internal ethics review boards, which introduce even more inconsistency. There is a glaring need to provide not only oversight but consistency in the mechanisms of AI governance.

In response, some organizations have proposed standardized oversight frameworks.

Researchers at the Stanford University Human-Centered AI lab wrote an excellent piece earlier this year proposing an independent review board to develop community norms for releasing AI foundational models. Researchers at Cornell University have developed an Ethics & Society Review Board focused on mitigating negative ethical and societal aspects of AI research as a mandatory step for receiving grant funding — although some argue AI governance should not be limited to organizations that receive grant funding and should instead extend to private corporations as well.

The most compelling proposals for oversight are often simple. Author and computer scientist Ben Shneiderman suggests the "AI pragmatists" approach of finding oversight parallels in other industries and using them as blueprints to create AI governance. We tend to agree.

We propose that, in the context of AI oversight, these blueprints can be found in health care research and data protection, specifically within an institutional review board and a disclosure review board.

An IRB is a committee that oversees and protects the rights of human research participants. IRBs are typical in academic settings; however, nonaffiliated private IRBs also exist. These private IRBs often provide quicker turnaround times but charge for reviews, which raises ethical concerns about the fairness and robustness of their oversight.

The specific makeup of an IRB varies between organizations and institutions. However, per federal regulations, each IRB is required to have at least five members with varying backgrounds, including scientific and nonscientific backgrounds, as well as an individual not affiliated with the research institution. This diversity of experience makes IRBs excellent at guiding research through risk management calculations and identifying potential issues within proposed research.

A DRB serves a purpose similar to an IRB. However, the primary focus of a DRB is to protect the privacy of individuals and groups contributing to the data set, while ensuring transparency of data. DRBs are created to balance the need for privacy with the call for transparency behind data and data-driven decisions. Many federal agencies establish DRBs, with the U.S. Census DRB acting as a template.

A typical DRB consists of diverse stakeholders, including representatives of potential users and groups represented within the data set, representatives of the organization's leadership team, a senior privacy official from the organization, outside experts and other subject matter experts. DRB committees have expertise in privacy-enhancing methods, such as deidentification and anonymization, and the ability to assess outstanding privacy risks. Importantly, DRBs are not federally required committees, nor do they typically hold veto power or final say in the implementation of a project — unlike IRBs.

The IRB/DRB model is a wonderful template for creating what we call an artificial intelligence review board — a distinct framework that satisfies the needs and requirements of robust AI oversight in both the private and public sectors.

The goals of an AIRB and an IRB are aligned: to protect individuals and broader groups from undue harm while ensuring innovation proceeds transparently and ethically. However, the questions and challenges addressed by each board will differ.

Traditional IRB members tend to be experts in bioethics, medical ethics and social studies design, and they consider the impact of a study on the subject population. It may be appropriate for an AIRB to require representation from academics or professional researchers, similar to a traditional IRB/DRB. AIRB members need a robust understanding of various AI technologies, such as natural language processing, machine learning and computer vision. They need to consider the broad societal and environmental implications of not only the specific research but also the potential applications of the findings.

An AIRB can also use DRBs as an effective model for balancing transparency and privacy.

Many of the calls for trustworthy AI emphasize the importance of transparency and accountability — both of which require a certain amount of disclosure from the models and data sets. However, complete or partial disclosure of data sets carries a significant privacy risk, as represented individuals and groups may be unduly exposed.

An effective AIRB has the skills to ensure attempts at transparency do not carry a negative privacy impact. This is especially important for large organizations training multiple AI models on the same, or similar, data sets and releasing transparency documentation that includes information on the data. These organizations need to be extra cautious to ensure separate snippets of data sets do not increase the risk of subject reidentification when combined.
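To make that reidentification concern concrete, consider a minimal, purely hypothetical sketch in Python. The field names and records below are invented for illustration and are not drawn from any real release; the point is only that two separately published excerpts, each apparently harmless, can be linked on shared quasi-identifiers to attach a sensitive attribute to a single individual.

# Hypothetical illustration of a linkage ("mosaic") reidentification risk.
# Neither excerpt names anyone, but each was released as part of separate
# transparency documentation for two models trained on overlapping data.

# Excerpt A: demographic slice published with model 1's datasheet.
excerpt_a = [
    {"record_id": "a1", "zip3": "021", "birth_year": 1984, "sex": "F"},
    {"record_id": "a2", "zip3": "021", "birth_year": 1990, "sex": "M"},
    {"record_id": "a3", "zip3": "606", "birth_year": 1984, "sex": "F"},
]

# Excerpt B: outcome slice published with model 2's datasheet.
excerpt_b = [
    {"zip3": "021", "birth_year": 1984, "sex": "F", "diagnosis": "condition X"},
    {"zip3": "606", "birth_year": 1990, "sex": "M", "diagnosis": "condition Y"},
]

QUASI_IDENTIFIERS = ("zip3", "birth_year", "sex")

def link(records_a, records_b, keys):
    """Join two releases on shared quasi-identifiers."""
    matches = []
    for a in records_a:
        for b in records_b:
            if all(a[k] == b[k] for k in keys):
                matches.append({**a, **b})
    return matches

# Record "a1" now carries a sensitive diagnosis: the combination of the two
# excerpts is more identifying than either release on its own.
for row in link(excerpt_a, excerpt_b, QUASI_IDENTIFIERS):
    print(row)

A DRB-style review would flag that the two excerpts share enough quasi-identifiers to be joined, and would require coarsening or suppressing those fields before publication.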

AIRBs can create best practices by looking to these two models for guidance on implementing ethics reviews and checkpoints throughout the AI development lifecycle. As an example, an AIRB could set an expectation that certain transparency metrics and reports be embedded into the strategic product plan, then provide check-ins to ensure those metrics and reports were prioritized, delivered and up to standard. This flexibility of involvement mirrors a DRB, which can be involved throughout the development, testing and implementation process, rather than only during initial review and termination, as with an IRB. Adding an AIRB review to the security and compliance flow is one way to operationalize this, as sketched below.
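As a rough illustration of what such a check-in could look like in practice, the following Python sketch models an AIRB release gate inserted into a compliance flow. The checkpoint names, evidence format and gate logic are assumptions made for this example, not an established standard.

# Illustrative sketch of an AIRB release gate: a release candidate must show
# evidence for each governance checkpoint agreed with the review board before
# it proceeds through the security and compliance flow.
# The checkpoint names and structure here are hypothetical examples.

from dataclasses import dataclass, field

AIRB_CHECKPOINTS = (
    "model_card_published",         # transparency documentation exists
    "training_data_reviewed",       # DRB-style privacy review of the data set
    "disparate_impact_assessed",    # fairness / nondiscrimination testing done
    "post_release_monitoring_plan", # plan for assessing real-world effectiveness
)

@dataclass
class ReleaseCandidate:
    name: str
    evidence: dict = field(default_factory=dict)  # checkpoint name to document reference

def airb_gate(candidate: ReleaseCandidate) -> bool:
    """Return True only if every AIRB checkpoint has supporting evidence."""
    missing = [c for c in AIRB_CHECKPOINTS if not candidate.evidence.get(c)]
    if missing:
        print(f"{candidate.name}: blocked, missing evidence for {missing}")
        return False
    print(f"{candidate.name}: cleared for AIRB sign-off")
    return True

# Example usage with a hypothetical release candidate.
rc = ReleaseCandidate(
    name="triage-model-v2",
    evidence={
        "model_card_published": "https://example.org/model-card",  # placeholder link
        "training_data_reviewed": "DRB-2023-014",
        "disparate_impact_assessed": "fairness-report.pdf",
    },
)
airb_gate(rc)  # blocked: no post-release monitoring plan yet

The design choice mirrors the DRB's ongoing involvement: the gate can be run at planning, pre-release and post-release checkpoints rather than only once at the start of a project.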

Additionally, while an AIRB would be specific to an organization, an industry-wide model should be promulgated through the interaction of trade groups and the federal government, e.g., the U.S. National Institute of Standards and Technology.

This ensures the interoperability of AI systems and fosters public trust in AI development, while addressing industry and regulatory issues. It would parallel the frameworks of industry-wide standards and governance, such as in cybersecurity, banking and education.

Creating an industry-wide AIRB model would help ensure members of a particular AIRB are experts in the needs and risks of a specific industry. Traditional IRBs are known to slow down the research process by weeks or months. That cadence is not realistic in the agile world of AI development. Dedicated AIRBs based on an industry standard model will ensure companies creating AI can move at an appropriate speed while still receiving oversight.

We do not need to reinvent the wheel with AI governance.

As we have argued, the IRB/DRB model is a strong template for creating an AIRB — a distinct framework that, with only some necessary tweaks, satisfies the needs and requirements of robust AI oversight in both the private and public sectors. Drawing on both models allows AI governance to address human and technical harms, separately or in combination.

The efforts behind AI governance should be viewed as evolutionary rather than revolutionary, fitting within the "pragmatist" approach. There is no need to overthink novel structures when proven solutions provide guidance. Taking the best from the IRB/DRB template and customizing it to fit AI provides a solution that can be implemented quickly while improving on historical weaknesses.

AI governance can seem overwhelming, but it shares many of the issues associated with IRBs/DRBs and can use them as a way forward.

The views expressed in this article are the authors' own and are not representative of SAS Institute or Truveta, respectively.

