
Privacy Perspectives | When is AI PI? How current and future privacy laws implicate AI and machine learning


From health care to energy to emerging technologies such as autonomous vehicles, the use of artificial intelligence has proliferated across almost every industry. Startups and longstanding businesses alike increasingly rely on AI and machine learning to maximize efficiency and achieve better outcomes. For example, AI is helping the health care industry diagnose and treat cancer, while also helping the financial industry update its credit-scoring models. Autonomous vehicle manufacturers rely on AI to develop the cars of the future, entertainment services use AI to tailor music and video recommendations to consumer preferences, and machine learning is at the core of targeted advertising. Potential use cases for AI will only increase as the technology becomes more sophisticated and more mainstream among businesses.

Parallel to the rise in the use of AI has been regulators' increased focus on protecting data privacy. The EU General Data Protection Regulation and California Consumer Privacy Act have been at the forefront of regulatory changes aimed at protecting consumer privacy, but regulators worldwide, including in countries like Brazil and Thailand, have passed legislation in recent years aimed at providing consumers with more knowledge of and protection for their data.

A common theme among comprehensive privacy laws, such as the GDPR and CCPA, is that, in their quest to provide consumers with overarching privacy protections, they incorporate broad definitions of what information is protected under the law. The GDPR, for example, protects “personal data,” which it defines as “any information relating to an identified or identifiable natural person.” The CCPA, meanwhile, covers “personal information,” which is defined as “information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household” and includes categories of information, such as unique identifiers and inferences, used to create a profile about a consumer.

These broad definitions of protected information under the GDPR, the CCPA and, likely, future comprehensive privacy laws create potential legal obligations for businesses using AI technology. Machine learning, in particular, depends on large amounts of data being fed into a system so that the machine can accurately predict future outputs. While these inputs do not necessarily have to be personally identifiable information to be useful (it does not matter whose data is input, so long as the input properly reflects the measurement it is supposed to), legal obligations under comprehensive privacy laws, like the CCPA and GDPR, can attach as soon as “personal information” or “personal data” is collected or “processed,” even if the information is ultimately used for a non-identifiable purpose. This means that, in addition to accuracy and discrimination concerns, businesses using AI must also take potential privacy law concerns into consideration.

To ensure compliance with privacy regulations, businesses relying on AI should be proactive in understanding what data they collect and what their potential legal obligations are at the point of collection or when personal data is processed. In particular, businesses should assess whether the information they collect is regulated under the various privacy laws in the first place (based on the definitions of what information is covered) or whether it falls outside the parameters of applicable regulations, bearing in mind that these definitions are also rapidly changing. Analyzing whether data used for AI development meets the definition of “deidentified” information under the CCPA or “anonymous” information under the GDPR, for example, can help a business understand whether it has privacy law obligations with regard to its AI activities. This analysis can apply both when information is initially collected and afterward, to the extent a business is able to convert personal information to a deidentified or anonymized form without running into technical, contractual or other legal hurdles. Companies should also note that applicability thresholds vary across the different privacy laws: falling outside the purview of one regulation does not guarantee that a business is in the clear under all of them.

If a business instead concludes that its AI activities are regulated by one or more privacy laws, it should take steps to understand what exactly this means in the context of those activities. For example, there is a trend toward providing consumers with individual rights over their data, including the right to delete their personal information. With regard to machine learning inputs, providing such a right to consumers may be practically impossible or may significantly limit AI development. Businesses facing this dilemma should understand which rules apply to them, whether they can or do process the information in a way that takes it outside the privacy laws (as discussed earlier), and whether some other exception applies.

Additionally, businesses should be aware of legal obligations attaching to the use of AI itself. For instance, the GDPR gives EU data subjects the right not to be subject to a decision that produces “legal effects” or “similarly significant” effects based solely on automated processing. This may affect, for example, a business that uses AI technology to determine loan or credit card eligibility. Not only does such a business have to think about its GDPR obligations in relation to the information it uses to develop its algorithm, but the GDPR also requires it to consider the effect its use of AI technology has on consumers. Understanding these obligations and proactively addressing them can help businesses avoid the penalties associated with noncompliance.

The intersection of AI and data privacy regulation is not limited to comprehensive privacy laws like the GDPR and CCPA. In the U.S., industries that have historically been regulated from a privacy perspective (and may be exempt from the CCPA in some capacity) may find themselves in violation of more established privacy laws. For example, financial services businesses relying on algorithms developed by third parties to underwrite insurance or make other credit-based decisions could find that they have obligations under the Fair Credit Reporting Act. These businesses could be considered “users” of “consumer reports” and thus subject to certain obligations under the FCRA, such as providing consumers with adverse action notices. The U.S. Federal Trade Commission highlighted the applicability of the FCRA in the AI context in an April 2020 blog post on best practices for businesses using AI and algorithms.

Additionally, businesses looking to revolutionize the health care space through the incorporation of AI may find themselves grappling with the U.S. Health Insurance Portability and Accountability Act. Health care mobile app developers looking to improve their algorithms using health information obtained from hospitals or health insurance companies may have obligations as “Business Associates” under HIPAA, including signing a “Business Associate Agreement” that may limit how the data can be used. To the extent an app developer uses health information that falls outside of HIPAA (because it is not obtained from a “Covered Entity”), the developer may still have obligations under the CCPA, depending on the business’s size and presence in California. Thus, getting out of one set of privacy law obligations may lead a business into another.

The proliferation of AI, regulators’ increased focus on data privacy and the trend toward more comprehensive privacy laws have set the stage: going forward, every business utilizing AI will have to ask itself whether, and which, privacy laws apply. This does not necessarily mean that AI development will be hampered. Instead, businesses will have to ensure they are checking the right boxes when developing or relying on AI technology.

Photo by Kevin Ku on Unsplash

 

