
Lawmakers introduce House resolution on ethical AI


There are few signs that U.S. regulation of artificial intelligence will arrive anytime soon. But two Congressional Democrats representing Silicon Valley and Detroit want ethical guidelines for AI, and they have proposed a House resolution to get the ball rolling. Their ideas, which aim for AI system accountability and oversight, personal data privacy and AI safety, were endorsed by some of the tech giants developing AI, including Facebook and IBM.

In late February, Reps. Brenda Lawrence, D-Mich., and Ro Khanna, D-Calif., both members of the Congressional Artificial Intelligence Caucus, unveiled 10 proposed guidelines for ethical AI in their House Resolution 153. For Lawrence, there was one key reason she wanted to make sure AI goes down the right path, and that’s the impact of AI on the future of work.

“Representing the city of Detroit, which has a workforce participation rate of 53.4 percent and one of the lowest rates of internet connection in the country, I realized that advances in technology are helping the few and not the many,” Lawrence told Privacy Tech. “I introduced HRES 153 so we can help shape the dialogue and development of AI to work for all Americans.”

The future of work
Much of the media discussion around work as it relates to AI emphasizes potential job losses as automated systems replace human labor. The resolution reflects this, calling for ethical AI guidelines that address “Career opportunity to find meaningful work and maintain a livelihood.”

In AI ethics circles, however, researchers and academics exploring the topic have analyzed the nearer-term impacts of automated systems on human labor and power dynamics within the workplace. For instance, a January report from Data & Society on “The Labor of Integrating New Technologies” suggested “that policymakers and advocates should keep watch of the unevenly distributed costs of experimenting and implementing new technologies,” noting, “Given that the labor of AI integration is often invisible or under-acknowledged, it is all the more important to ensure that those who work with and alongside AI systems are adequately protected, supported, and compensated.”

The resolution recommends a total of 10 issues be addressed in potential guidelines for ethical AI, including the need for transparent and explainable AI systems, information privacy and personal data protection, access and fairness in technological services and benefits, as well as accountability and oversight for automated decision-making.

The list also mentions “Engagement among industry, government, academia, and civil society.” Indeed, there is a growing movement among academics and tech rights advocates pushing for governments in particular to engage citizens and other stakeholders when considering the implementation of emerging technologies. 

Support from big tech
Tech firms and allies, some based in Khanna’s Silicon Valley district, supported the resolution. Facebook, a subject of near-constant scrutiny in relation to its data privacy practices, was one. “[A]s AI technology increasingly impacts people and society, the academics, industry stakeholders and developers driving these advances need to do so responsibly and ensure AI treats people fairly, protects their safety, respects their privacy, and works for them,” stated Kevin Martin, the company’s vice president of U.S. public policy, in a news release about the resolution.

IBM also supported the resolution, stressing the need for transparent and explainable AI to ensure public trust in AI systems. The company has hinged recent advertising efforts on the tech ethics cause, even mentioning bias in AI in a recent TV spot. Some tech ethics advocates questioned IBM’s ad message in a parody video, suggesting tech firms must back up public statements about ethics commitments with ethical practices. 

The Software Alliance, which works on behalf of members, including Microsoft, Oracle and Salesforce, also backed the resolution.

The global government AI ethics trend
National governments across the globe from Dubai to Finland to Singapore have addressed some of the issues mentioned in the House resolution in their own nonbinding directives aimed at guiding ethical approaches to development and implementation of AI. Yet, while a handful of individual lawmakers in the U.S. have highlighted AI ethics, the federal government has done little to address the need for fair, transparent and accountable AI development.

An Executive Order on AI strategy unveiled by the White House last month made no mention of ethical considerations. A Pentagon AI strategy buried discussion of ethical goals beneath promises of rapid AI adoption and experimentation.

Meanwhile, it seems unlikely at this early stage that the House resolution will lead to legislation. “This resolution is a necessary first step,” Lawrence said. “We hope that these guidelines can serve as a standard for ethical practices and we hope that AI developers will work towards these goals. In the event that these measures are not being adhered to, we will look into best next steps to ensure transparency and accountability.”

photo credit: National Institutes of Health (NIH) Neurological connections via photopin (license)

Issues addressed in HRES 153
  1. Engagement among industry, government, academia, and civil society. 
  2. Transparency and explainability of AI systems, processes, and implications. 
  3. Helping to empower women and underrepresented or marginalized populations. 
  4. Information privacy and the protection of one’s personal data.
  5. Career opportunity to find meaningful work and maintain a livelihood. 
  6. Accountability and oversight for all automated decision-making. 
  7. Lifelong learning in STEM, social sciences, and humanities. 
  8. Access and fairness regarding technological services and benefits. 
  9. Interdisciplinary research about AI that is safe and beneficial. 
  10. Safety, security, and control of AI systems now and in the future. 