Before U.S. Federal Trade Commission Chair Lina Khan detailed the agency's Section 6(b) investigation into five companies' artificial intelligence investments and partnerships with cloud providers during the commission's 25 Jan. tech summit on AI, she told a story about regulation.

Khan discussed airplane manufacturer Boeing and how the air travel industry's recent problems can be traced to the company's 1997 merger with rival McDonnell Douglas. Officials in the European Union and elsewhere had raised concerns the merger would stifle competition, concerns U.S. regulators dismissed on the grounds that McDonnell Douglas was no longer a viable competitor. U.S. officials even threatened possible trade sanctions in response, The New York Times reported. The merger has since been linked to cost-cutting and quality issues at Boeing.

Khan said regulators made the same error when they did not prevent a few technology companies from securing market dominance in the internet sector in the mid-2000s. That dominance led to the rise of business practices such as behavioral advertising, which many regulators and privacy advocates say harms personal security and market competition.

"The difference between Boeing and many of these companies," Khan said, referring to the technology industry, "is that there is simply no masking airplanes falling apart in the sky."

But Khan and other regulators taking part in the summit stressed they aim to learn from the lack of regulation around privacy, data collection and social media to inform how AI issues are tackled in the future. Their comments come as regulators and human rights advocates repeatedly sound the alarm over the lack of guardrails around how generative AI is used and trained, and signal a desire to prevent ethical issues from becoming entrenched in the AI industry while it is still nascent.

The FTC has taken significant actions around AI and privacy issues in recent months. It cracked down on Rite Aid for using facial recognition technology without safeguards and banned data aggregator InMarket from selling precise consumer location data. The agency characterized its investigation into AI startup partnerships and investments, such as the one between Microsoft and OpenAI, as a first step toward determining whether those deals consolidate markets and undermine competition.

FTC Commissioner Rebecca Kelly Slaughter said regulators have "played catch up" in recent years as the extent of commercial surveillance within the social media and ad technology world has become more apparent. The negative effects of that era's unfettered activities are numerous, she said, including social media business models' reliance on selling private data and the damage social platforms can do to teenagers' mental health.

Slaughter said a similar situation is beginning to take shape with AI. It starts with a few companies, already well-stocked with collected data they can lean on to train models, achieving market dominance.

"I'd like to see us write a different play than the one we saw unfold in the commercial surveillance era," she said. "Nothing about pervasive data collection, tracking the shape of social media, or the dominance of a few tech firms was inevitable." 

The U.S. has yet to enact a comprehensive privacy or AI law, although the latter was the subject of a wide-ranging executive order last fall and several congressional hearings. But Slaughter said the agency has plenty of tools, such as existing copyright, antitrust and consumer data protection laws, to make sure access to competitive inputs like data, chips and processing power is fair. She pointed to the agency's power under FTC Act Section 6(b), which allows it to require companies to provide details that shed light on market trends and business practices, and which is the fulcrum of its AI startup investigation.

"The way to stay on top of this quick moving market and avoid repeating the mistakes of the past is by using the full panoply of our statutory tools," she said. "There's no AI exception to the law."

Atur Desai, a deputy chief technologist at the U.S. Consumer Financial Protection Bureau, said his agency is exploring its own powers to regulate AI. It is studying how companies obtain and move the data they use to train AI models, and it has launched rulemaking under the Fair Credit Reporting Act that could extend the law's scope to data brokers.

Desai also pointed to two decisions within the last year in which creditors using algorithms ran up against adverse action notice requirements under the Equal Credit Opportunity Act. In both cases, the bureau found creditors must provide specific reasons for credit denials rather than rely on a so-called "black box" model, a term for algorithms whose decision-making reasoning is not known.

"If using a technologically-complex model means that a company cannot comply with its obligations under federal consumer financial laws, they really shouldn't be using that model," he said.