Top issues to address when using automated employment decision-making tools

As we wait for the EU Artificial Intelligence Act to pass, AI enforcement is imminent in the U.S. On the federal level, we have both joint and individual statements from the U.S. Federal Trade Commission, Justice Department, Consumer Financial Protection Bureau and Equal Employment Opportunity Commission, as well as the White House's Blueprint for an AI Bill of Rights and follow-up Fact Sheet on new actions to Promote Responsible AI Innovation that Protects Americans' Rights and Safety.

At the state and local level, New York City's automated employment decision tool law, known as Local Law 144, specifically addresses the use of automated tools in decision-making.

If you are utilizing employment decision-making tools, you have considerable obligations. But what if you are just the technology provider? You still have some obligations but, at a minimum, there are things you should be doing so your business-to-business clients can meet their own obligations and choose to use your product.

Here are the top questions to ask yourself.

Have you conducted and publicly disclosed a third-party bias audit of your tool prior to its use?

Under New York City's Automated Employment Decision Tool Law, if the tool "substantially assists or replaces discretionary decision making," a third-party bias audit needs to be conducted. This must take place once every 12 months and cover specific items as further defined by the law.

Per the EEOC's guidance, an AEDT vendor should be able to answer employers when asked if steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a protected characteristic.
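
To make the selection-rate comparison concrete, the sketch below shows the basic arithmetic behind it in Python: per-group selection rates and the impact ratio against the highest-rated group, flagged against the EEOC's four-fifths rule of thumb. The group names and counts are hypothetical, and an actual NYC bias audit must follow the categories and methodology set out in the law and its implementing rules and be performed by an independent auditor.

```python
# Minimal sketch: selection rates and impact ratios from hypothetical
# applicant counts. Group names and numbers are illustrative only, not
# drawn from any real audit.
selected = {"group_a": 120, "group_b": 45}      # candidates advanced by the tool
applicants = {"group_a": 300, "group_b": 150}   # candidates assessed by the tool

# Selection rate = selected / assessed, per group.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio = each group's rate divided by the highest group's rate.
top_rate = max(rates.values())
impact_ratios = {g: rate / top_rate for g, rate in rates.items()}

for g in rates:
    flag = " (below the EEOC's four-fifths benchmark)" if impact_ratios[g] < 0.8 else ""
    print(f"{g}: selection rate {rates[g]:.2%}, impact ratio {impact_ratios[g]:.2f}{flag}")
```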

According to the White House AI Bill of Rights, systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrates they are safe and effective based on their intended uses. This includes proactive equity assessments, use of representative data, protection against proxies for demographic features, accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.

Organizations should perform independent evaluations and provide plain-language reporting in the form of algorithmic impact assessments, including disparity testing results and mitigation information. These should be made public whenever possible. The concept was also reiterated in the White House Fact Sheet on responsible AI innovation.

You will likely need this for the EU under the AI Act as well.

Beyond these assessments, organizations must also ensure a policy for ongoing monitoring of these risks and impacts is in place.

Have you conducted a data protection impact assessment?

The California Privacy Rights Act, which applies to employees, does not yet have regulations on this, but they are coming. Colorado's privacy law rules speak to this and can be used as the model for the assessment, for now.

Per Colorado's privacy law, controllers must conduct and document DPIAs where processing activities present heightened risks of harm to consumers. Profiling that presents a reasonably foreseeable risk of unlawful, disparate impact on consumers also presents a heightened risk of harm. The DPIAs must identify and weigh the benefits from the processing against the potential risks to the rights of the consumer associated with the processing, as mitigated by the safeguards the controller can employ to reduce risks.

It is also helpful to address in your documentation whether the automated processing involves, or is reviewed by, humans; this analysis should be tailored with the Court of Justice of the EU's SCHUFA decision in mind.

A corollary to conducting DPIAs is building in data protection by design and by default and refraining from a dark-patterns approach, which is also suggested by the White House AI Bill of Rights.

Do you have a detailed privacy disclosure that meets the requirements of the laws?

We already know the CPRA, like its predecessor the California Consumer Privacy Act, requires a full privacy disclosure for candidates and employees, which should also cover processing that utilizes automated decision-making.

According to NYC's AEDT Law, employers need to provide the disclosure at least 10 business days in advance of AEDT use.

Per Colorado's rules and recent FTC enforcement actions, a disclosure is also required in advance of the collection of the relevant data. As the vendor, you need to provide, at a minimum, the disclosure to your employer client to facilitate their compliance.

Under NYC's AEDT Law, the notice must state that an AEDT will be used in connection with the evaluation, provide instructions for how an individual can request an alternative selection process or a reasonable accommodation under other laws, if available, and identify the job qualifications and characteristics the AEDT will use to assess the candidate. If requested, notice of the type of data collected for the AEDT, the source of such data, and the employer's or employment agency's data retention policy must be provided to the candidate or employee within 30 days of the request.

The White House AI Bill of Rights states organizations need to provide a meaningful understanding of how the system works and explain how and why an outcome impacting the individual was determined by an automated system, including when the automated system is not the sole input determining the outcome.

Do you have a data retention policy?

NYC's AEDT Law states, if requested, the employer must provide a candidate or employee with data retention information related to the tool within 30 days of the request.

Where the CPRA applies, this needs to be included in the notice at collection and be granular, as per the regulations and the recent Q&A document.

A data retention policy is also required by the FTC, including recently in the Easy Healthcare and Drizly settlements. Organizations need to specify which information is retained, for how long, and why it is retained for that long. Also, pay special attention to any "indefinite" retention term.
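
As a purely illustrative sketch, a vendor could keep its retention schedule as structured data so that categories, periods and purposes sit in one reviewable place and any open-ended term is easy to flag. The data categories, periods and purposes below are hypothetical, not recommendations.

```python
# Illustrative retention schedule; categories, periods and purposes are
# hypothetical and would need to reflect your own legal analysis.
RETENTION_SCHEDULE = [
    {"data": "candidate assessment results", "months": 24, "purpose": "bias audit and dispute handling"},
    {"data": "raw video interview recordings", "months": 6, "purpose": "scoring and quality review"},
    {"data": "aggregate audit statistics", "months": None, "purpose": "historical trend reporting"},
]

# Flag any entry with no defined retention period ("indefinite" retention).
for entry in RETENTION_SCHEDULE:
    if entry["months"] is None:
        print(f"Review needed: '{entry['data']}' has no retention limit.")
    else:
        print(f"{entry['data']}: retain {entry['months']} months for {entry['purpose']}.")
```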

Have you provided a way to opt out of automated decision-making/involve human intervention?

Per the White House AI Bill of Rights, individuals should be able to opt out, where appropriate, from automated systems in favor of a human alternative. Individuals should have access to a person who can quickly consider and remedy problems with automated systems.

According to Colorado's privacy law, consumers have the right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer, including employment decisions.

If you are a company subject to these laws, you already know regulators are watching. However, even if the laws or guidance do not apply to you directly, as a vendor you should take a page from the early days of the EU General Data Protection Regulation and the CCPA, embrace the "GDPR/CCPA ready" slogans of old, and show your business clients that using your product will put them in a position to use AI in a compliant manner.

Thank you to Fox Rothschild Associate Melanie Notari, CIPP/US, for help putting the materials for this article together.

