Artificial intelligence's capacity to upend employees' relationships with the workplace is on its way to becoming a reality. The real questions are how soon and at what scale it will occur.
A 31 Oct. hearing held by the U.S. Senate Committee on Health, Education, Labor and Pensions' Subcommittee on Employment and Workplace Safety explored multiple angles of the potential conundrums AI could raise for employees' workloads and for hiring and employment processes generally.
U.S. Sen. John Hickenlooper, D-Colo., said that, for employees, "there's nothing to be afraid of" as AI models see more widespread deployment across a range of workplace functions. Senators' questions generally focused on how the technology can enhance workers' productivity rather than replace them.
"We want to know how AI is being adopted, and how AI is being used in the workplace," Hickenlooper said. "That's going to help us understand how we can ensure that workers have sufficient training (and have) power to maximize the potential of this rapidly evolving technology."
Accenture Managing Director of Talent and Organization Mary Kate Morley Ryan shared results of a joint Accenture and World Economic Forum report analyzing the impact of large language models on jobs. The report examined more than 90,000 "individual tasks" across 867 occupations, sorting the jobs into three categories: those with "potential for automation," those likely to be augmented by LLMs and those unaffected by LLMs. Overall, the report projected 40% of all working hours in the reviewed jobs would be impacted by LLMs.
Morley Ryan said companies need to consider how generative AI will affect their work functions in three key ways: its impact on existing jobs, how to build a talent pipeline for developing future AI-powered solutions, and what skills human operators will need to work with deployed AI models.
"The reality is we don't currently have the workforce we need to fill the jobs of the future," Morley Ryan said. "That's why we advise our clients to establish a skills foundation tailored to their organization to deconstruct the work to support human and machine collaboration, and re-architect strategic and operational talent practices."
Black Tech Street founder and Executive Director Tyrance Billingsley testified his organization is taking a "community-first approach" to develop and deploy AI models to support Black tech entrepreneurs and up-and-coming minority tech workers in Tulsa, Oklahoma. The initiative focuses on avoiding a reliance on Big Tech to develop AI systems that may produce biases against historically marginalized communities.
Billingsley said any future federal law regulating AI in a workforce context should acknowledge the mass deployment of AI technology as a "complex, socio-technical issue" while establishing a "worker-centered AI social contract." A framework should also incentivize training and education programs in marginalized communities to avoid creating a repeat of the so-called "digital divide" from the internet age.
"The workforce will be the first area where we truly see the transformative power of AI at scale, whether this be the innovation economy, the creative economy, or one of the many other facets," Billingsley said. "If the systems for AI in the workforce are designed in a human-centered way, AI can be a tool to fundamentally alter the socioeconomic position of marginalized communities in this country, or it could exacerbate preexisting inequities in a way that is almost irreparable."
To accomplish the goals Billingsley laid out, Baker McKenzie Partner and leader of its AI practice Bradford Newman said Congress must develop a "rational, risk-based" regulatory framework that is designed "cautiously, prudently" and in an employment-focused context.
Potential laws aiming to regulate AI technology holistically risk creating "onerous, vague and expensive compliance obligations," according to Newman. He warned that if the federal government fails to act, regulating AI in any number of contexts will be left to states and localities, such as New York City, an outcome he called a "misguided, one-size-fits-all approach to AI regulation."
"Without additional funding and training, existing (government) agencies are not fully prepared to oversee and regulate this complex technology," Newman said. "Because of the significant civil rights implication of workplace technologies, AI legislation in the employment context is a prudent place to begin this journey."
Newman added that any onerous regulation must not be tailored so that "only the largest developers and users" can bear the compliance costs, which would become a limiting factor for smaller entrants. Such a scenario may create "a de-facto monopoly for the largest industry players."