
Europe Data Protection Digest | Notes from the IAPP Europe Managing Director, 6 March 2020


Greetings from Brussels!

It seems that all anyone can talk about lately is COVID-19. Governments and medical authorities around the world are grappling with their response planning and advisory positions, and those responses are varied and diverse. What I found of particular interest is how consumer technology platforms have sought to contain the spread of information, and more importantly disinformation, in light of the epidemic. What has been alarming in listening to news commentary over the last few days is how reliant we are on social media for our updates and information, as opposed to reliable and objective sources, such as national or international health authorities and government bodies.

Evidence suggests tech platforms have adopted divergent policies on COVID-19 misinformation. I read in the U.K. press that Twitter has become a hotbed of “inaccurate and dangerous advice,” while others, such as WeChat, have restricted users’ ability to communicate about the outbreak at all. The epidemic is a relatively new test of how tech operators apply their misinformation policies. In Twitter’s case, for example, those policies only kick in once there is deliberate “platform manipulation,” in other words, coordinated efforts to spread malicious content or misinformation, typically in cases involving “rogue” states. Absent such manipulation, Twitter’s default policy is essentially one of minimal or no intervention. However, in light of COVID-19, it has used the platform’s search functionality to introduce a pop-up message directing users to credible national medical sources, such as the National Health Service in the U.K. or the Federal Public Services in Belgium. Moreover, the company announced this week that it would remove inappropriate and opportunistic advertisements centered on the outbreak. Such measures should help moderate content and steer users toward credible information.

Facebook adopted a similar position this week: if you search "coronavirus," you will be directed to comparable trusted sources for updates on the situation in your country. Last week, the tech giant also took a strong stance by banning advertisements for products claiming to cure or prevent COVID-19. In addition, Facebook committed to giving the WHO and other reputable organizations free advertising to address the epidemic.

In other relevant news, The New York Times reported on the rollout last month of a mobile app that allows the Chinese state to monitor citizens’ movements and the evolution of COVID-19. China has begun what the Times refers to as a “bold mass experiment in using data to regulate citizens’ lives.” The close-contact detector algorithm somehow determines, through a system of “coded labels” (think traffic lights), whether you should be quarantined or allowed into subways, malls and other public spaces. The app, developed by the government with the China Electronics Technology Group Corporation, relies on data from the transport and health authorities. It all sounds very dystopian, and it clearly raises questions over privacy and the extent of the intrusive surveillance practices in force. That said, given the impact of the epidemic in China and the undoubtedly continued level of citizen concern, the new app might not appear all that controversial to citizens. However, on closer inspection of the app’s software, it appears that once a user grants access to personal data, citizens’ location and identifying data are also accessible to Chinese law enforcement, which is somewhat more controversial than at first glance.
