Responsible AI Management: Evolving Practice, Growing Value

This report was published by The Ohio State University in collaboration with the IAPP.

Published: June 2024

Note: This report also has a landing page on Ohio State University's website.

People are increasingly coming to see artificial intelligence as a double-edged sword. On one hand, it promises to enhance individual and business performance, cure diseases, solve environmental problems, and otherwise benefit humanity. On the other, it threatens to perpetuate bias, invade privacy, spread misinformation and, according to some, threaten humanity itself.

Faced with growing public and legal pressure, some businesses are taking steps to use AI in a more socially responsible way. They refer to these efforts as "responsible AI management," or RAIM. This report conveys the results of a survey-based study, conducted in early 2023, of RAIM practices at businesses that develop and use AI.

The study sought to answer three research questions.

Key findings

Large companies provided most of the responses.

Most respondents came from large companies, meaning those with 1,000 or more employees and/or USD10 million or more in annual revenue. This could mean larger companies are doing more with respect to RAIM and so were more likely to respond to the survey.

Respondents came from nine different industry sectors. This suggests the use of AI and the resulting need for AI governance is present in many sectors.

RAIM includes at least 14 actions.

The most commonly reported RAIM activities fell into three groups: risk assessment, including evaluating regulatory risk and identifying harms to stakeholders; building an RAIM management structure, including appointing a responsible official and an RAIM committee; and adopting substantive RAIM standards, including AI ethics principles and RAIM policies. Respondents were much less likely to train employees in RAIM or to evaluate their employees' or their organization's RAIM performance.

Privacy experts are most likely to be responsible for RAIM, with others involved as well.

Of respondents, 60% said their organization had assigned the RAIM function to a specific person or people. The people performing this function held a variety of titles, ranging from privacy manager to data scientist to responsible AI officer. Companies were most likely, at 59.5%, to assign the RAIM function to individuals with expertise in privacy. The number of companies that identified more than one person involved in RAIM, and the wide variety of titles those individuals hold, suggest a cross-functional approach to RAIM may be useful.

Businesses believe RAIM is important to their company.

A 68% majority of survey respondents said RAIM was either important or extremely important to their company. Nearly 90% either agreed or strongly agreed that companies should make a "meaningful investment" in RAIM.

RAIM creates substantial value for companies that invest in it.

All respondents reported their company gets at least some value from their RAIM programs, with almost 40% reporting they get a lot or a great deal of value. Companies with more developed RAIM programs reported gaining greater value, on average.

RAIM provides strategic value.

RAIM serves a strategic function, in addition to helping a company better attain its corporate values. It improves product quality, trust and employee relations, and it reduces risk. The data suggests RAIM may produce the most value by improving product quality, specifically by promoting AI product innovation and better meeting customer expectations, rather than only by reducing negative impacts.

Specific RAIM activities may produce particular types of business value.

The survey data begins to tease out relationships between particular RAIM activities and specific types of value created. It suggests companies that adopt RAIM policies experience greater increases in trust, and companies that both adopt RAIM policies and require their suppliers to follow them report increased trust as well as competitive advantage. In addition, businesses that attempt to identify whether their AI products and processes may cause harm to others or violate the law report greater increases in product quality. The small number of respondents precludes any definitive conclusions about these relationships; additional research will be required to substantiate and explain them.

Implementation lags enthusiasm.

Most respondents said their RAIM programs were at an early stage of implementation, and a majority described their company's process for making responsible AI judgment calls as ad hoc rather than systematic. This halting implementation stands in sharp contrast to respondents' belief in RAIM's importance and the value they perceive in it, signaling that further RAIM development may offer an opportunity to capture more of that value.