Artificial intelligence may be a useful tool for increasing productivity, but it can also create privacy risks. In the 2019 IAPP-EY Governance Report, 41% of respondents said their organization uses its standard privacy risk analysis to assess AI, while only 6% have AI-specific privacy safeguards in place. “These unique risks may not be properly assessed using a standard privacy risk analysis, and many privacy experts believe that responsible use of AI means having specific safeguards that extend beyond a standard privacy risk analysis,” former IAPP Legal Extern Chelsea Broomhall, CIPP/US, writes in this piece for Privacy Perspectives. Broomhall also touches on the safeguards and guidelines organizations can adopt to analyze AI risks.