Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Insurers are, in the neutral sense of the word, always discriminating between groups of people. Their core business is underwriting risks: estimating expected claims costs through risk assessments.
Many insurers use artificial intelligence to underwrite risks and analyze past data. While such systems can make risk predictions more accurate, they may also have discriminatory effects on society.
In "AI, insurance, discrimination and unfair differentiation," we investigate how insurers apply AI, identifying two related trends — data-intensive underwriting and behavior-based insurance — and possible effects related to discrimination and unfair differentiation.
Data-intensive underwriting
With data-intensive underwriting, insurers collect and analyze more data than before. Using machine learning, they can find new correlations in that data and use them to predict the expected claims costs of a given consumer group more accurately.
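As a rough illustration of how this works, the sketch below (in Python) fits a model to synthetic data and uses it to predict the expected claims cost of a new applicant. The features, figures and model choice are invented for the example and do not describe any actual insurer's system.

```python
# Illustrative sketch only: a toy "data-intensive underwriting" model.
# All features, numbers and the model choice are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical consumer characteristics an insurer might feed into a model.
features = np.column_stack([
    rng.integers(18, 80, n),   # age of the policyholder
    rng.integers(1, 300, n),   # house number of the address
    rng.integers(0, 2, n),     # drives a high-powered vehicle (0/1)
])

# Synthetic claims costs with some dependence on the features.
claim_cost = (
    200
    + 8 * (70 - features[:, 0]).clip(min=0)  # younger drivers cost more in this toy data
    + 50 * features[:, 2]                     # powerful cars cost more in this toy data
    + rng.normal(0, 60, n)
)

# A machine-learning model picks up whatever correlations exist in the data,
# including ones a human actuary might never have thought to test.
model = GradientBoostingRegressor().fit(features, claim_cost)

# Expected claims cost (and hence a price) for a new applicant.
applicant = np.array([[23, 101, 1]])
print(f"Predicted annual claims cost: {model.predict(applicant)[0]:.0f}")
```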
This form of underwriting comes with discrimination-related risks. One legal risk is that insurers could inadvertently discriminate indirectly, producing a disparate impact. For example, if an insurer uses a newly found correlation to set prices, it could unintentionally disadvantage certain ethnic groups, or other groups with legally protected characteristics.
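To make the disparate impact risk more concrete, the hypothetical check below compares average prices between a protected group and the rest of the population when prices are set only on a seemingly neutral feature that correlates with group membership. The data and the threshold are invented for illustration; they do not reflect any legal test.

```python
# Hypothetical check for indirect discrimination (disparate impact): the
# protected attribute is never used as a pricing feature, but prices can
# still differ between groups because another feature correlates with it.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

protected_group = rng.integers(0, 2, n)  # 1 = member of a protected group
# A seemingly neutral feature (e.g., postcode area) that correlates with group membership.
postcode_risk = rng.normal(1.0, 0.1, n) + 0.15 * protected_group

premium = 300 * postcode_risk            # price set only on the "neutral" feature

avg_in_group = premium[protected_group == 1].mean()
avg_outside = premium[protected_group == 0].mean()
ratio = avg_in_group / avg_outside

print(f"Average premium ratio (group / rest): {ratio:.2f}")
if ratio > 1.05:                         # illustrative threshold, not a legal standard
    print("Prices differ between groups despite never using the protected attribute.")
```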
There are other possibly unfair, or at least controversial, effects of data-intensive underwriting. First, consumers may be confronted with prices based on characteristics they can hardly influence. In the Netherlands, for example, some insurers charged different car insurance prices depending on the consumer's house number. Other characteristics, such as the type of vehicle a consumer buys, are easier to influence.
Second, insurers could set prices based on consumer characteristics whose relevance the consumer cannot see. AI systems are good at finding correlations in large datasets. In theory, an insurer could find a correlation between the chance that a consumer files a claim and whether that consumer lives at an even house number, was born in a certain month, or spends more than 50% of their days on streets whose names start with the letter J.
Third, insurers could introduce products that are intended only for specific groups. For example, the Dutch insurer Promovendum markets its insurance as being only for the highly educated. Such practices can be controversial because the insurer targets specific groups of the population, thereby excluding the rest.
Fourth, insurance practices could reinforce inequalities that already exist in society. This could happen if an insurer, intentionally or not, charges higher prices to lower-income individuals.
Behavior-based insurance
With behavior-based insurance, insurers adapt the price for individual consumers in real time, based on each consumer's behavior. Some car insurers offer a discount if the consumer agrees to be tracked with a device in the car and drives safely. A health or life insurer could offer a discount to consumers who, according to their health tracker, walk a certain number of steps. In the U.S., some car insurers track consumers with an app.
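The following simplified sketch shows how such a behavior-based premium might be computed from tracked driving behavior. The scoring formula, weights and discount range are our own illustrative assumptions, not any insurer's actual pricing rules.

```python
# A minimal sketch of behavior-based pricing: the premium is adjusted per
# policyholder from tracked driving behavior. All weights are invented.
def behavior_based_premium(base_premium: float,
                           harsh_brakes_per_100km: float,
                           night_driving_share: float,
                           avg_speeding_km_h: float) -> float:
    """Return a premium adjusted by a simple telematics 'driving score'."""
    # Score from 0 (risky) to 1 (safe), built from tracked behavior.
    score = 1.0
    score -= min(harsh_brakes_per_100km * 0.05, 0.4)
    score -= min(night_driving_share * 0.5, 0.2)
    score -= min(avg_speeding_km_h * 0.02, 0.3)
    score = max(score, 0.0)

    # Safe drivers get up to a 30% discount; risky drivers pay up to 20% more.
    adjustment = 1.2 - 0.5 * score
    return base_premium * adjustment

# A careful driver versus a risky one, starting from the same base premium.
print(behavior_based_premium(600, harsh_brakes_per_100km=0.5,
                             night_driving_share=0.05, avg_speeding_km_h=1))
print(behavior_based_premium(600, harsh_brakes_per_100km=6.0,
                             night_driving_share=0.4, avg_speeding_km_h=12))
```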
An important difference between data-intensive underwriting and behavior-based insurance is that behavior-based insurance bases the price on an individual consumer's behavior rather than on the characteristics of a group. Behavior-based insurance also focuses more on preventing risks than on compensating the consumer after a risk has materialized. Insurers could, at least in theory, prevent accidents by nudging risky drivers to drive more safely.
The risks of behavior-based insurance are somewhat similar to those of data-intensive underwriting. For example, some people may be excluded from behavior-based insurance: for bad drivers, behavior-based car insurance might be so expensive that it is out of reach. In a hypothetical market in which all car insurers offered behavior-based insurance, bad drivers might not be able to insure themselves affordably.
Behavior-based insurance could also reinforce financial inequality. If wealthier people receive more discounts, for example based on their health trackers, they pay less and the costs shift toward those who are already less well off.
Both data-intensive underwriting and behavior-based insurance can have negative effects, and neither can simply be called fairer than the other. Many aspects of both trends remain unclear and deserve more research. For example, it is unclear whether people find behavior-based insurance less attractive than data-intensive underwriting because of the privacy interference it entails.
One thing is clear, however: public debate is needed to decide which AI-driven insurance practices should be accepted in our societies.
Marvin van Bekkum is a Ph.D. candidate, Frederik Zuiderveen Borgesius is professor of ICT and Law, and Tom Heskes is professor of data science at Radboud University.