
Encryption isn’t enough: Why conversational AI requires more


""

Expectations of privacy have never been higher among consumers, which means privacy has never been a greater imperative for businesses and their executive teams. At the same time, companies are increasingly under pressure to improve overall customer experiences and customer service via advanced technologies and automation, including conversational artificial intelligence interfaces.

These solutions can deliver responsive, on-demand experiences to customers, as well as tremendous cost efficiencies for the enterprise. In opening up a new digital touchpoint with consumers, however, companies also introduce a new point of vulnerability to the enterprise with regard to data breaches and leakage.

During a typical customer interaction, users must often provide personal, sensitive data to an organization to resolve their support needs. Likewise, to deliver the most value, conversational AI interfaces should be able to provide detailed account information back to the user.

To enable these exchanges, it is vital for conversational AI solutions to have a comprehensive security program in place to uphold customer privacy, maintain regulatory compliance, and protect the enterprise’s data. When executives select a solution for their enterprise, they must ensure the security program of a given provider truly checks these boxes rather than merely paying lip service to privacy.

The term “encryption” is closely tied to data privacy, and for good reason. Encryption means information is converted to code when transmitted, then decrypted on the other end, so that third parties that might intercept the information in transit are unable to read it. Encryption is important, yes, but by itself, it does not ensure consumer privacy.

Personal data is not at its most vulnerable when in transit but rather when it is stored or displayed at one end or the other. Thus, for a conversational AI solution to truly protect a customer’s privacy, its security functionality must go beyond encryption.
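A minimal sketch makes this concrete. The snippet below uses Python’s third-party cryptography package and a shared symmetric key purely as an illustrative assumption, not as a description of any particular product’s transport security: the ciphertext is unreadable if intercepted, but the endpoint recovers the full plaintext, which encryption then does nothing to protect.

```python
# Minimal sketch of why encryption alone is not enough.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret held by both endpoints
channel = Fernet(key)

message = "My account number is 4111-1111-1111-1111"
token = channel.encrypt(message.encode())

# In transit, an eavesdropper sees only unreadable ciphertext...
print(token[:20], b"...")

# ...but at the receiving end the plaintext is fully recovered, and from
# this point on (storage, logs, display) encryption no longer protects it.
assert channel.decrypt(token).decode() == message
```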

First off, let’s get one fundamental principle out of the way. At no time should a conversational AI provider sell or use its clients’ data or clients’ customers’ data for marketing purposes. The only way a conversational AI provider should use the data exchanged between a company and its customers is to develop service enhancements that will benefit the clients and end users.

This is an important point to understand in an online climate where customers have become increasingly wary of how and when their data is used for targeted advertising. For better or worse, most people have accepted that in order to use platforms like Google and Facebook, they must allow their data to be used for ad targeting. This is not a scenario customers expect to happen when they engage with a company through its conversational AI interface. Executives must be sure the solutions they employ do not allow for such private data harvesting and use within their fine print. 

Even when data is encrypted, compliance with regulations or company policies often requires that personally identifiable information or other sensitive data be redacted before it is stored. A conversational AI solution must therefore provide redaction controls that prevent any data deemed sensitive from being stored in logs or any other records.

One important note here: There are points within a conversation where the customer is expected to enter sensitive data, and a solution’s redaction controls should act on this data. But the controls should also extend to any inadvertent entry of sensitive data at any point in a conversation. For example, a customer might enter his or her Social Security number without any prompt for it. The system should be able to identify such information and redact it accordingly.
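A simplified illustration of what such a control might look like appears below. The two patterns are hypothetical and cover only the most recognizable formats; real deployments pair far broader pattern sets with machine-learned entity detection. The key point is that the filter runs on every message before it can reach a log, not only at prompts where sensitive input is expected.

```python
import re

# Hypothetical patterns for two common kinds of sensitive data; real
# redaction engines cover many more formats and add ML-based detection.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Strip sensitive values from a message before it reaches a log,
    wherever in the conversation they appear."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# The customer volunteers an SSN with no prompt for it:
print(redact("Sure, my social is 123-45-6789, does that help?"))
# -> Sure, my social is [REDACTED SSN], does that help?
```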

Robust network security is a base requirement for a conversational AI solution; however, there should be specific controls in place to protect against likely attack vectors. One such vector targets the solution’s client-side code, which is downloaded to the user’s browser: attackers who can access and modify that code can secretly gather information directly from the person’s browser. To protect against such attacks, a conversational AI solution must have specific and continuous file integrity monitoring controls in place to protect this code.
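One sketch of what continuous monitoring could involve is below; the file name and baseline digest are assumptions for illustration. The serving infrastructure periodically rehashes the widget’s browser-delivered files and compares them against known-good values captured at release time. Browser-side measures such as Subresource Integrity attributes can complement server-side checks like this one.

```python
import hashlib
from pathlib import Path

# Hypothetical baseline: SHA-256 digests of the widget's browser-delivered
# files, recorded at release time.
BASELINE = {
    "widget.js": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def find_tampered(directory: str) -> list[str]:
    """Return served files whose current digest no longer matches the
    release baseline, a signal of possible tampering."""
    tampered = []
    for name, expected in BASELINE.items():
        digest = hashlib.sha256(Path(directory, name).read_bytes()).hexdigest()
        if digest != expected:
            tampered.append(name)
    return tampered

# Run find_tampered() on a schedule; any mismatch should alert security.
```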

Finally, let’s not forget that companies today are governed by more than customer expectations of privacy. In fact, the list of consumer data legislation continues to grow, from the EU General Data Protection Regulation, which took effect in 2018, to the California Consumer Privacy Act, which takes effect in 2020 in the U.S. Today’s conversational AI solutions must demonstrate the ability to comply with these sweeping privacy regulations, not to mention more specific standards such as the Health Insurance Portability and Accountability Act and the Payment Card Industry Data Security Standard.

Improved automation of customer service and customer experiences is imperative in our age of digital disruption, but it cannot come at the expense of consumer privacy and data security. Executives must ensure the solutions they employ are up to the formidable tasks outlined above. Their customer relationships, not to mention their regulatory compliance, depend on it.

