The digital economy is at the center of a seismic shift driven by the convergence of big data and artificial intelligence. Oceans of digital information and low-cost computing power are providing endless marketing opportunities, and the rapid pace of innovation is proving to be one of the most transformative forces of our time. At the same time, we are faced with ethical dilemmas that challenge users’ digital dignity and redefine privacy norms.

The past year will likely go down as the year of dubious ethics. From the data-sharing and mining practices of social networks to the questionable disclosures of mobile apps and platforms selling location data, everyone has reason to be concerned. All too often, the entities that were purportedly “stewards of our data” have acted unethically.

Moving past compliance

Unfortunately, marketing and privacy professionals have all too often focused on compliance, obeying laws that do not necessarily address right versus wrong. As we find with emerging technologies such as facial recognition, location tracking and AI, laws are reactive while society pays the price. Conversely, ethics are the moral principles that govern one’s behavior, built upon societal values and norms. Ethics need to be firmly rooted in and aligned with societal expectations, or, as some say, how one acts when no one is watching. If recent privacy practices are an indicator of what lies ahead, we may be heading toward a “tragedy of the trust commons.”

Historical lessons

Regrettably, we continue to find cases where ethics and profitability are diametrically opposed. Others justify their practices by what they perceive as the norm, without necessarily questioning the morality, societal impact or long-term consequences. Ethical lapses have been observed in nearly every industry, from the accounting practices of Enron and the manipulation of emissions testing by auto manufacturers to bankers opening more than 1.5 million accounts without customers’ knowledge. Perhaps the most troubling example is the widespread deception recently exposed at Theranos, which placed the health of patients at risk. While every startup has to sell on future capabilities, Theranos founder Elizabeth Holmes blatantly crossed the line between optimism and deception.

Over the past twenty-four months, data analytics has advanced significantly. It has redefined predictive behavioral analytics, extracting detailed inferences and correlations about users’ lifestyles. These technologies show great promise, but only if they are used within societal values and norms and developed with an “ethical purpose.” They need to be grounded in and reflective of the ethical principles of beneficence and non-maleficence (do good and do no harm).

Two questions are at hand. First, can industry and governments be trusted to regulate AI responsibly? Second, how can ethical guardrails be developed to help prevent abuse? If AI follows trade groups’ past arguments that their privacy practices do no harm, society may be in for a rude awakening. AI can be used to empower invasive surveillance, marginalize segments of society and erode one’s digital dignity.

There is no question that AI will have a profound effect on how marketers engage consumers. With thoughtful planning and execution, consumers will get better, more relevant ads, content and services. This can be a win-win, but only if we get it right and address the unintended ethical consequences in advance. To address the long-term impact and keep regulations from stifling innovation, we need to re-evaluate our practices today. While the executives of these companies need to be held accountable, every employee in the know has a responsibility to come forward and follow their own moral compass.

Taking action – Questions to ask yourself and your management

  1. Is your AI application designed to respect the autonomy and digital dignity of subjects?
  2. Do consumers realize a “real” benefit or are you creating asymmetries of power?
  3. Are your practices transparent and accountable with the ability to undo unintended harm?
  4. Have you designed your technologies to guard against unintended biases and unforeseen uses? (One simple check is sketched after this list.)
  5. What circuit breakers are in place to prevent and detect adversaries (and rogue employees) from accessing or manipulating data and algorithms?
  6. Would you want your child or parents to be subjected to these practices?
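
Question 4 can be turned into a concrete pre-deployment audit step. The following is a minimal sketch, assuming a hypothetical decision log with made-up column names (age_band, offer_shown); it applies the common “four-fifths” disparate-impact rule of thumb and is only a starting point, not a complete fairness review.

```python
# Illustrative bias check (hypothetical data and column names).
# Compares positive-outcome rates across groups and applies the
# "four-fifths" disparate-impact rule of thumb.
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical audit log: one row per automated decision.
audit = pd.DataFrame({
    "age_band":    ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "offer_shown": [1, 1, 1, 0, 0, 0],
})

ratio = disparate_impact_ratio(audit, group_col="age_band", outcome_col="offer_shown")
if ratio < 0.8:  # common rule-of-thumb threshold; your own policy may differ
    print(f"Potential disparate impact (ratio={ratio:.2f}) - escalate for human review")
else:
    print(f"No disparate impact flagged (ratio={ratio:.2f})")
```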

Working together, we can shape our digital future and preserve ethics. As you plan for 2019, remain focused on the user and the societal impact of your product or service. Recognize that it is your duty to educate and inform users of their rights. Not unlike the medical profession, marketers and data scientists need to take a “Hippocratic oath” to do no harm!
