Dear privacy pros,
The topic du jour in the privacy world is the recent leak of the largest cache of data concerning offshore accounts and assets of the rich and powerful.
Aptly dubbed the Pandora Papers, the 2.94 terabytes of information and the resulting report from the International Consortium of Investigative Journalists cover a wider array of offshore service providers and jurisdictions than previous exposés, including the 2016 Panama Papers and the 2017 Paradise Papers. These revelations open up a Pandora's box of financial secrets and will likely lead not only to direct repercussions for the elites involved, but also to second-order effects on the regulatory environment for financial reporting and tax regimes around the world.
From a privacy perspective, piercing the shield of confidentiality around these shady dealings should naturally raise some alarm bells. However, there are clearly sound countervailing arguments based on transparency and national interest that warrant the shining of much-needed light into the shadow financial system, which exacerbates wealth inequality and, in some cases, enables the individuals concerned to hide ill-gotten gains stemming from human rights abuses. The searing indictment from the Washington Post in the Dashboard below helps to frame the ongoing investigation in context by covering the human impact that some of these acts have had.
Another article in this week’s Dashboard that caught my eye is the Wired piece on Clearview AI’s enhanced facial recognition capabilities and ambitions. I have touched on the topic of biometric surveillance in previous introductions, so suffice it to say that while there are certainly arguments one can make to support the use of such technology, a careful assessment of the privacy impact against potential benefits needs to be undertaken before any deployment.
In this regard, this article on the concerns arising from the implementation of trials for a new face biometrics system at Wellington Airport is instructive. Readers may also wish to refer to the SCMP article about the ethical guidelines on artificial intelligence governance recently released by China’s Ministry of Science and Technology for further guidance.
As a new study from Cisco shows, consumers (unsurprisingly) want more privacy and control over their data, and more consumers are now willing to act to protect their own interests online. These findings have led Cisco to develop their New Trust Standard, a benchmark that assesses how trustworthy businesses are as they embark on digital transformation initiatives.
In Singapore, a consortium formed the Credence Committee with the objective of developing the Credence Data Trust Rating System, formally launched 27 Sept. The consortium is led by Credence Labs, a tech start-up with extensive experience in government IT development and global consulting, and its members include testing and certification company TÜV SÜD, global professional services firm KPMG, and law firm Drew & Napier.
The Credence DTRS is an open and decentralized data trust rating system that measures compliance end-to-end, from adherence to relevant legislation and data policies to implementation of relevant processes and procedures. It incorporates and complements the Personal Data Protection Commission’s Data Protection Trustmark by extending beyond compliance with the Singapore Personal Data Protection Act and covering implementation details based on an organization's line of business, corporate governance, risk profile and business needs.
I hope this introduction has given you some food for thought and pointed out some rabbit holes for you to jump into. Thanks for reading and happy exploring!