The FTC is hosting a workshop on “informational injury,” asking about the “qualitatively different types of injuries” consumers experience “from privacy and data security incidents.”

Last September, the FTC received one court’s views on this issue when that court dismissed an FTC claim against D-Link, a maker of internet-connected cameras and routers for use in the home. The FTC claimed that D-Link had not taken reasonable steps to secure the devices, which allegedly harmed consumers. Rejecting this claim, the court noted that the FTC did not allege “any actual consumer injury in the form of a monetary loss or an actual incident where sensitive personal data was accessed or exposed. Instead, the FTC relies solely on the likelihood that [D-Link] put consumers at ‘risk’” arising from the ease with which a remote attacker could allegedly compromise D-Link’s devices. Risk, to the court, did not constitute legally cognizable harm. (The court gave the FTC leave to re-plead, so it may yet try to bring D-Link’s allegedly insecure internet-of-things devices under the “unfair practices” rubric.)

Even if the court is right about what’s needed to state an unfairness claim under the FTC Act, it is obviously quite wrong about good data security practice. Good data security practice requires assessing threats and taking reasonable steps to mitigate them before a problem occurs. It would be absurd for any practicing data security professional to advise a client that “mere” risks aren’t problematic, or that it’s okay to wait until harm has occurred to address them.

This disconnect between the law applicable to the security of consumer IoT devices and good security practice illustrates why IoT security will be a public policy challenge going forward, especially as IoT devices proliferate.

One issue is what the court focused on: Bad security practices create the risk of harm before they cause actual problems.

Both the legal system and people in general fail to deal well with risk. People have a hard time deciding rationally what to do in the face of relatively remote, technically complicated risk scenarios. We’re all victims of survivor bias: “I’ve done this for a while and nothing bad has happened, so it’s okay to keep doing it.” As people buy more and more internet-connected gadgets that seem to work fine and that don’t cause immediate, obvious problems, it will be hard to convince people that data security problems matter.

From a policy perspective, even if people could process IoT security risks in a rational manner, the issue still presents a classic collective-action problem. Barring a consumer’s worst-case scenario, like ransomware locking down a computer, the problem of poor IoT security will seem insignificant compared to more salient public policy issues like guns or health care, and to more immediate problems of day-to-day life, like paying bills and getting kids’ homework done.

Individual consumers have scant incentive to effectively insist on better data security.

Moreover, one of the most serious problems with poor IoT security isn’t harm done to consumers themselves, but rather damage done to third parties when, for example, insecure IoT devices are recruited into botnets used for DDoS attacks. We can bemoan consumers’ lack of appreciation of their civic data security duties when they passively allow their devices to participate in these attacks, but the fact remains that acquiescing does not directly harm them.

The upshot is that we cannot reasonably expect consumers to take the lead on IoT security.

This is why the approach embodied in the bipartisan “Internet of Things Cybersecurity Improvement Act of 2017” is so interesting. The bill wouldn’t impose regulations on IoT providers or set any IoT security standards. Instead, it would forbid the government from buying internet-connected devices that don’t implement industry-standard security protocols, that contain known security defects, or that are not patchable when defects are found later on.

The bill may well never become law. But considering the political economy of IoT security, it’s telling that the factors that make individual consumers unlikely to demand IoT security – and that, as the D-Link case shows, make it challenging for the FTC to demand it on their behalf – are reversed in the case of the federal government in its role as a consumer/purchaser of IoT devices. Specifically:

  • The government’s core missions – such as national security or financial regulation – require a focus on data security risks, not just on after-the-fact harm.
  • The government has enormous troves of information whose exfiltration would compromise its core missions, making it a target of opponents who will exploit any vulnerability. While individual consumers might rely on “security through obscurity,” the government can’t.
  • The government has experienced several large and embarrassing data breaches, so there is nothing hypothetical about its need for tighter information security.
  • The government has been the target of botnet-driven DDoS attacks and so is highly motivated to prevent them in general and to avoid being a source of them.

Because the government’s incentives as a buyer of IoT devices differ from those of individual consumers, it can lead by example, showing state governments, corporate entities – and even individuals – what good IoT security practices look like: everyone should want devices that meet industry standards, are free from known defects, and are patchable. Not just individuals, but larger institutions (governmental or private sector) can be expected to ask the same questions as they purchase commercial IoT devices, to the extent they aren’t doing so already.

Once the approach is laid out in black and white, as it is in the bill, it just makes common sense.
