As part of a 10-person panel here at the International Conference of Data Protection and Privacy Commissioners in Brussels, FTC Commissioner Noah Phillips was given little time to tackle a big question: “How should regulators supervise innovative technology?”

“My general answer,” he said, “in regulating the private sector's use of technology and AI is to do it only if necessary and then very carefully.”

While much of the discussion at the ICDPPC, themed “Debating Ethics,” focused on potential harms and the worries created by developments in data analytics and artificial intelligence, Phillips made a point of raising the risk of focusing on the risks.

“Technology continues to hold promise for us,” he said, “and innovation is something that we want to foster. Technology is a tool. It can be used for good or ill, and in the main, I believe, it is mainly used for good. We have to keep that very much top of mind. When we think of what technology is used for, and whether for good or ill, we have to keep in mind that for hundreds of years of history, the profit motive has served the common good. It can better the lives of people.

“Of course,” he said, “technology is not without fault, and that’s why we have laws.”

Phillips outlined three main points that guide his philosophy on regulation.

First, he said, there has to be recognition of democratic values. “We as implementers of the law must understand and respect those who write the laws,” he said. “They’re the closest to the people and have the strongest mandate to make fundamental value judgments.”

Second, he said it’s vital that regulators “understand what we purport to regulate.” Regulators, he said, not only need to understand the harms that they’re targeting and trying to spare consumers from, but they also need to recognize the limits of their technological expertise. “There is a lot to know when it comes to technology.”

Finally, he said, “We must consider the costs and benefits of the regulation. … It's important to recognize that unduly prescriptive rules can have outcomes that we can’t predict, and entrench incumbents, which can hurt competition.”

Prescriptive rules have their place, he said. They can “not only guide the market, but achieve really good outcomes for consumers. You want to know what your interest rate is. You want to make sure that information about you is accurate. These are important things.”

However, he said, in the context of something like artificial intelligence, “which is rapidly developing … flexible standards can allow us to propose common values, but evolve with the technology, and I think that combination is critical.”

Taken together with remarks from fellow Commissioner Rohit Chopra at last week’s Privacy. Security. Risk. conference in Austin, Texas, Phillips’ comments make clear the FTC is beginning to think about privacy enforcement alongside its competition remit.

Just as Phillips noted regulation’s potential for entrenching incumbents, Chopra repeated more than once the idea that the FTC must protect an America “where you can start a company in a garage and become big. … I want to make sure we’re living in an America that’s not isolated from the rest of the world.”

Along those lines of global cooperation, Phillips emphasized the potential for the ICDPPC’s ethics discussion to help bind the global digital community. “It’s a common moral language,” he said. “It’s about how businesses do what they do and how citizens interact with the world. Very little regulation will be effective if you don’t have people thinking about digital ethics at all of those levels.”