Artificial intelligence is all around us. At this very moment, automated recommendation and prediction tools are at play when we deposit money in the bank, receive health care services or even buy groceries. Despite efforts by some to dismiss predictions about the proliferation of AI as mere hype, the reality is that this future has already arrived.
Virtually every sector and industry has realized the utility of automation as a general-purpose tool — think facial recognition admission systems, chatbots, virtual assistants and image generators. Forbes Advisor estimates the AI market will see an annual growth rate of 37.3% through 2030, and AI market size is projected to reach USD407 billion by 2027, a more than four-fold increase from USD86.9 billion in 2022.
AI systems are becoming common tools employed across sectors, as millions of platform and application developers build their own automated systems and/or integrate tools like ChatGPT and Gemini into their products. As a result, AI is no longer the province of a single technology industry. It is becoming a foundational infrastructure of 21st century society with digital privacy and civil rights implications.
The new AI paradigm is the often invisible underpinning of the "smart" technologies we rely on for work, transportation, fitness, entertainment and more — not unlike the digital transformation that stemmed from the global adoption of internet technologies. The widespread adoption of AI tools has resulted in a recent flurry of policymaking activity worldwide — even as various jurisdictions, particularly the U.S., continue to grapple with their privacy frameworks decades into the digital era. But developing effective AI policy requires seeing the technology for what it has become: a ubiquitous infrastructure touching virtually every sector and consumer market.
Creatively co-designing AI policy with the U.S. public
As AI tools become embedded into the functioning of society, everyone faces exposure to their risks, from privacy to equity to accessibility. Yet AI literacy remains extremely low in the U.S. According to the National Artificial Intelligence Advisory Committee's recommendations on Enhancing AI Literacy for the United States of America, 20% of Americans still lack access to the internet and more than half of Americans are "more concerned than excited about AI."
To craft AI policies that allow society to harness the potential of AI while avoiding its pitfalls, we must ensure the voices shaping policies are representative of society at large, not limited to insular groups of domain experts, and that policies are designed through a holistic lens rather than one focused on individual sectors. This is our best shot at ensuring the new AI paradigm's benefits outweigh its risks — for everyone.
The traditional policymaking approach in the U.S. prioritizes industry and government experts leading the drafting of specific policies per sector and then offering periods of public feedback. AI policy development would benefit from a different approach to public engagement, especially because the industries creating these technologies are still uncovering their capabilities — and risks — at the same time they are introduced to consumers.
Multifaceted programming that engages the public as co-participants in the learning process, helping to directly shape the functional design of AI policy, would help keep people and positive social impacts at the center of policymaking. Engaging the public through arts and culture could ensure policy more effectively protects consumers because it would be directly informed by the lived experiences, identities and perspectives of average American consumers across varying demographics, regions and more.
There is proof of the efficacy of this type of engagement throughout U.S. history. In the 1930s, the Works Progress Administration projects helped foster national identity through public artworks, provided a visual vocabulary for social issues like workers' rights, and preserved local customs and culture during the Great Depression. Today, the National Endowment for the Arts works with artists to bridge the digital divide by helping underrepresented communities build capacity to generate their own digital content and goods, and encourages future opportunities and forward thinking through programs like the STEMarts Lab and "tech-infused creative expression." Policymakers should proactively engage the arts and humanities in shaping the future of AI policy, too.
Arts, cultural programming for AI policymaking
In October 2023, U.S. President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to coordinate federal action on standards-setting around AI safety and security. The executive order asserts the U.S. as a global leader on AI policy and advancement. It directs several federal agencies to promulgate safety and testing standards for AI while adhering to eight principles and priorities. These include protecting consumer interests, promoting privacy and ensuring AI policies are consistent with advancing equity and civil rights.
These lofty goals cannot be fulfilled by relying solely, or even primarily, on technical expertise funneled through a sectoral lens. Rather, a multidisciplinary approach is required that meaningfully includes underrepresented and marginalized groups as co-producers of policy and tackles the challenge from a holistic viewpoint. As the executive order acknowledges, ensuring the responsible development, application and use of AI in the U.S. "demands a society-wide effort that includes government, the private sector, academia, and civil society."
The National Endowment for the Humanities responded to the executive order by establishing a research initiative, Humanities Perspectives on Artificial Intelligence. The NEH recognizes "questions about the ethical, legal, and societal implications of AI are fundamentally rooted in the humanities, which include ethics, law, history, philosophy, anthropology, sociology, media studies, and cultural studies." The departments and institutions charged with carrying out the executive order's directives — especially the Department of Commerce, the National Science Foundation, and the National Institute of Standards and Technology — should proactively leverage humanities research, workshops and digital humanities programs as a means for broader, accessible public engagement that does not require policy or technical expertise for participation.
Navigating this transformative AI landscape necessitates a commitment to incorporating the perspectives of individuals with disabilities, children, minorities and immigrants into the decision-making process. Arts- and humanities-based organizations offer access to these groups through cultural programs that do not require subject matter expertise to engage in conversations on AI. Incorporating perspectives from these communities is essential for crafting policies capable of addressing the unique needs and challenges faced by various social groups.
Leveraging arts, cultural institutions
To bring the executive order's call for a society-wide effort on building effective AI policy to fruition, U.S. policymakers should consider engaging the public through arts and humanities programming in addition to more established channels of engagement for policy, such as feedback periods.
Principles from the Blueprint for an AI Bill of Rights can be leveraged to examine broader social impacts of AI and craft policy from a holistic lens, rather than a sectoral one. To ensure equity and ethics are prioritized, collaboration should occur among stakeholders who are best situated and qualified to examine the social impacts of AI from plural perspectives. This includes working not just with the departments named specifically in the executive order, but also tapping into existing initiatives like the NEH's Humanities Perspectives, the Census Bureau's Advancing Equity with Data tools and the National Telecommunications and Information Administration's Digital Equity Act Programs to deliver on the promises of bridging the digital divide.
Definitions of "AI experts" should be expanded to include artists, humanists and sociologists, as well as individuals from vulnerable and underrepresented groups. Currently, the tech industry and individuals with extensive education and involvement in white-collar occupations dominate discussions on AI. One area of opportunity in bringing the public into policy design is more direct collaboration between the Department of Commerce, NIST and the NSF — among the leads named in the executive order for AI policymaking — and their arts and humanities counterparts, for example, the National Education Association, the NEH, the National Council on the Humanities, national museum networks and public libraries that regularly reach diverse American communities through public programming.
Members of the public should be proactively invited to help inform and co-produce policy proposals. Effectively and inclusively co-designing AI policy can be achieved through public programming with stakeholders who have demonstrated expertise in promoting equity, rights to privacy and consumer protection. Examples include the NEA, NEH and public education institutions with mandates that include public outreach and increasing access to technology to bridge the digital divide. Policymakers need to engage the public in the development of policy proposals, rather than waiting to solicit feedback during the public comment period once policies are already drafted. This will help ensure inclusive perspectives are built into the foundational policies for AI.
To quote former White House Office of Science and Technology Policy Director Alondra Nelson, we need to "devise inventive policy approaches that do not merely react to present challenges but anticipate future ones." Designing AI policy that anticipates current and future challenges requires us to invest in imagining the full range of possibilities AI technologies could enable, as well as the possible harms.
Arts, literature, philosophy and social science offer different perspectives on AI, as do the voices of the underrepresented in our society. These groups must have a seat at the table from the start if we are to effectively mitigate risks and build "safe, responsible, fair, privacy-protecting, and trustworthy AI systems."
The new AI paradigm is here. Let's make sure it works for the benefit of everyone.