The introduction of algorithmic accountability into the U.S. data protection landscape is a timely and farsighted development. If passed into law as part of the Data Accountability and Transparency Act (DATA), it will operate as a defensive shield for protected classes and bring more transparency, accountability and fairness to artificial intelligence–powered decision making in the U.S.

Simply defined, algorithmic accountability, or algorithmic transparency, is a policy measure aimed at holding the developers and operators of automated decision systems (ADS) responsible for the results their preset decision-making systems produce. Developing an algorithm involves a human specifying a set of instructions to be executed in a particular order to produce a certain outcome.
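To make the idea concrete, the minimal sketch below shows what such a preset set of instructions can look like in practice. It is purely illustrative: the inputs, thresholds and decision labels are assumptions, not drawn from any actual ADS or from DATA itself.

```python
# A minimal, hypothetical preset decision rule: a human author chooses the
# inputs, the thresholds and the order in which they are checked, and the
# system simply executes those instructions. All names and cutoffs are
# illustrative assumptions.

def prescreen_loan_applicant(credit_score: int, annual_income: float,
                             requested_amount: float) -> str:
    """Return 'approve', 'review' or 'decline' based on preset rules."""
    if credit_score < 600:                    # rule 1: hard credit floor
        return "decline"
    if requested_amount > 5 * annual_income:  # rule 2: affordability check
        return "review"
    return "approve"                          # rule 3: default outcome


print(prescreen_loan_applicant(credit_score=640,
                               annual_income=50_000,
                               requested_amount=200_000))  # -> approve
```

The point of the example is not the rules themselves but their authorship: every threshold and every ordering choice was made by a person, which is where algorithmic accountability attaches responsibility.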

Algorithmic accountability assigns responsibility to a human for any inequitable outcomes produced by the ADS. When individuals provide their class-identifying information to data processors (or data aggregators under DATA), they expect, at a minimum, fair, nondiscriminatory treatment when the ADS processes that information. DATA, through the enforcement of algorithmic accountability compliance steps, will provide such protections.

Practical applications of ADS are increasing. Whether to approve a borrower for a home loan, consider someone for a job, or serve a consumer an online advertisement for a specific service may now be decided by AI.

Delegating decision making to the ADS certainly has its benefits. It makes the process faster and more efficient and reduces the costs associated with administering decisions.

Conversely, it may also create an environment where bias and discrimination are fostered. Without well-defined rules of accountability, businesses may engage in digital redlining, a form of technological discrimination, advancing intentionally or inadvertently prejudiced agendas hidden in their algorithms.
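The hypothetical sketch below illustrates how digital redlining can hide inside a facially neutral rule: the code never references a protected class, yet a proxy such as geography can produce the same discriminatory effect. The ZIP codes, scores and threshold are invented for illustration.

```python
# A hypothetical illustration of digital redlining: the rule never mentions a
# protected class, yet a facially neutral proxy (here, ZIP code) can stand in
# for one in practice. The ZIP codes, scores and threshold are invented.

EXCLUDED_ZIP_CODES = {"60612", "60621", "60636"}  # assumed "high-risk" areas


def should_show_premium_offer(zip_code: str, spending_score: float) -> bool:
    """Decide whether a consumer is shown a premium advertisement."""
    if zip_code in EXCLUDED_ZIP_CODES:  # proxy rule: geography as a filter
        return False
    return spending_score >= 0.5


# Two consumers with identical behavior are treated differently solely
# because of where they live.
print(should_show_premium_offer("60614", 0.8))  # True
print(should_show_premium_offer("60621", 0.8))  # False
```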

Decisions made by the ADS may start affecting individuals even before they are born. For instance, information that someone is pregnant and about to give birth may prompt an online marketing company to push online advertisements for specific health care providers.

If the household’s profile supplied to the marketing firm’s ADS includes information identifying members of a protected class in the household, the ADS may be programmed to select advertisements based on which patients the marketing company’s client, the health care facility, would prefer to serve.

Under DATA, a protected class is defined as “the actual or perceived race, color, ethnicity, national origin, religion, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability of individual or a group of individuals.” A central objective of DATA is to prevent ADS from using protected-class identifiers in ways that produce discriminatory outcomes.

The DATA proposal defines a data aggregator as “any person that collects, uses, or shares an amount of data that is not de minimis” and “does not include individuals who process data solely for personal reasons.” Data aggregators utilizing ADS in their operations will be required to complete specific time-sensitive compliance steps, e.g., a risk assessment, a disparate impact evaluation, and the development of alternative, less discriminatory methodologies.

Resulting reports will be submitted to a newly established Data Accountability and Transparency Agency, which will be charged with developing detailed administrative and technical requirements for the implementation of the new rules. The effectiveness of algorithmic accountability in reducing unfair preferential treatment remains to be seen and will likely depend on the agency’s resources to enforce compliance and its capacity to provide a well-defined implementation guide to the users of ADS.

Adverse decisions made by the ADS based on a protected-class identifier, e.g., denial of a loan or a job opportunity, may cause the affected individual privacy harm. DATA broadly defines such harm as an actual or potential adverse consequence that causes an unfavorable outcome to an individual, group of individuals or society relating to, e.g., eligibility for rights, benefits or privileges; direct or indirect financial or economic loss; physical harm to property; unfair or unethical differential treatment; or psychological or reputational harm.

Data aggregators will be required to ensure that risks that could result in any type of privacy harm are avoided. Noncomplying users of the ADS may attract the agency’s attention and risk an investigation and/or a costly enforcement action.

Algorithmic accountability will require users of ADS to complete two specific compliance steps: a risk assessment (RA) and an impact evaluation (IE) covering bias and privacy. Businesses currently operating ADS will be required to complete an RA and submit a report to the agency within 90 days after DATA comes into effect. Organizations considering new ADS will be obligated to conduct an RA before deploying such systems. Periodic IEs will be submitted on a schedule the agency will later determine, but no less than once a year.

Compliance with the RA mandate involves conducting a study and producing a report outlining the ADS development process. The report will include information on the ADS design, as well as training data and identification of potential risks to accuracy, bias, and privacy for individuals or groups of individuals. The primary purpose of the RA is to identify potential vulnerabilities, determine the likelihood of adverse consequences, and weigh the best ways to address issues without delay to prevent privacy harm.
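As a rough illustration of what an RA report might capture, the sketch below organizes the elements described above, e.g., system design, training data sources and identified risks, into a simple record. The field names and categories are assumptions for illustration; the agency’s implementing rules, not this sketch, will define the required content and format.

```python
# One possible way to structure the information an RA report calls for
# (system design, training data, and identified risks to accuracy, bias and
# privacy). Field names and categories are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class IdentifiedRisk:
    category: str     # e.g., "accuracy", "bias", "privacy"
    description: str  # what could go wrong and for whom
    likelihood: str   # e.g., "low", "medium", "high"
    mitigation: str   # planned remediation before deployment


@dataclass
class RiskAssessmentReport:
    system_name: str
    design_summary: str               # how the ADS makes its decisions
    training_data_sources: List[str]  # provenance of the training data
    protected_class_fields: List[str] # identifiers the system may touch
    risks: List[IdentifiedRisk] = field(default_factory=list)


report = RiskAssessmentReport(
    system_name="ad-targeting-ads-v2",
    design_summary="Model ranking offers by predicted response rate.",
    training_data_sources=["purchased household profiles", "site analytics"],
    protected_class_fields=["familial_status", "sex"],
    risks=[IdentifiedRisk(
        category="bias",
        description="Offers may be withheld from households with children.",
        likelihood="medium",
        mitigation="Drop familial_status and audit proxies before deployment.",
    )],
)
print(report.system_name, len(report.risks))
```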

Safeguarding the fair use of information requires ongoing oversight of ADS. Typically, the RA will identify data protection vulnerabilities and require users of ADS to remediate potential issues before the systems are deployed. The main goal of the IE is to ensure that new issues discovered after ADS deployment are addressed in a timely manner. Specifically, the IE will consist of the following four steps: (1) evaluating the ADS for accuracy and bias affecting the protected classes; (2) evaluating the ADS privacy impact on the protected classes; (3) analyzing the effectiveness of any measures taken to remediate issues identified in previously conducted RAs; and (4) identifying measures to improve the ADS to minimize risks to accuracy, bias based on protected class, and the ADS privacy impact.
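As an example of what the first IE step could look like in practice, the sketch below screens decision outcomes for disparate impact by comparing favorable-outcome rates across groups. DATA does not prescribe a particular metric; the 0.8 “four-fifths” threshold used here is a common rule of thumb borrowed from U.S. employment-selection guidance and is an assumption, not a requirement of the bill.

```python
# A hedged sketch of one common disparate-impact screen: compare the rate of
# favorable outcomes between a protected group and a reference group. The
# 0.8 threshold and all decision data are illustrative assumptions.

from collections import Counter
from typing import List, Tuple


def favorable_rate(outcomes: List[Tuple[str, bool]], group: str) -> float:
    """Share of decisions recorded for `group` that were favorable (True)."""
    totals, favorable = Counter(), Counter()
    for g, ok in outcomes:
        totals[g] += 1
        favorable[g] += int(ok)
    return favorable[group] / totals[group] if totals[group] else 0.0


def disparate_impact_ratio(outcomes: List[Tuple[str, bool]],
                           group_a: str, group_b: str) -> float:
    """Ratio of group_a's favorable rate to the reference group_b's rate."""
    return favorable_rate(outcomes, group_a) / favorable_rate(outcomes, group_b)


# Invented decisions for illustration: (group label, favorable outcome?)
decisions = ([("group_a", True)] * 30 + [("group_a", False)] * 70
             + [("group_b", True)] * 60 + [("group_b", False)] * 40)

ratio = disparate_impact_ratio(decisions, "group_a", "group_b")
print(f"ratio = {ratio:.2f}, flag for review: {ratio < 0.8}")  # 0.50, True
```

A ratio well below the chosen threshold would not by itself prove discrimination, but it is the kind of finding an IE would surface and a data aggregator would be expected to investigate and remediate.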

Algorithmic accountability is not meant to discourage or slow the use of AI in optimizing business processes and advancing digital innovation. Its main objectives are to bring more transparency to automated decision making, promote anti-bias awareness, and introduce reasonable controls over ADS data-processing practices. Members of protected classes may not always be able to defend their interests when it comes to how information about them is processed. DATA places the burden on users of ADS, through the specific compliance steps discussed above, to ensure that information identifying individuals as members of protected classes is not misused.
