Funambulism is the art of walking a tightrope or wire and staying on despite a precarious balance challenged by external factors, like wind, and internal influences, like, in my case, a fear of heights. I noticed two interesting examples of funambulism in our privacy and artificial intelligence governance world this week.
The first concerns Meta and came as a multipronged thread combining advertising technology and consent as a legal basis. As my colleague Joe Duball reported, the European Data Protection Board issued an urgent binding decision to ban Meta's data processing for behavioral advertising. Last December, the EDPB "clarified that contract is not a suitable legal basis for the processing of personal data carried out by Meta for behavioural advertising," EDPB Chair Anu Talus said. Meanwhile, Meta announced it is rolling out a subscription model for ad-free Facebook and Instagram services in the EU, on the understanding that a subscription model can be a valid form of consent for an ads-funded service based on prior Court of Justice of the European Union jurisprudence.
Sure enough, criticism began to mount once the change was announced, particularly against the cost of the monthly subscription itself, ranging from 9.99 to 12.99 euros in the EU, which some regulators deemed too expensive. Civil society groups like privacy advocacy organization NOYB also commented that moving to a "pay-for-your-rights system" will de facto put a price on an individual's right to privacy and on whether they can (or choose to) afford it. Feels a bit like doomed if you do, doomed if you don't. How long and sturdy is that rope that Meta, and regulators alike, are walking on?
The second example takes us to London. In the run-up to the U.K. AI Safety Summit this week, there was some speculation about who was invited and who wasn't, who would attend, and whether it was even a good sign to be invited. At the end of the day, the hype around the gathering culminated in the Bletchley Declaration on Artificial Intelligence Safety, signed by 28 countries and the EU. Signatories range from the U.S. and Australia to China and the Kingdom of Saudi Arabia, as well as France, Germany, Ireland, Italy, the Netherlands and Spain, as far as EU member states go.
The agreement is an interesting exercise, as international declarations often tend to be an experience in tightrope walking, especially with such a wide spectrum of interests around the table. There are relatively consensual statements within the declaration, like this being a "unique moment to act and affirm the need for the safe development of AI," including in the public sector. Signatories recognize the need to address "the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection," though they merely note the potential for "unforeseen risks stemming from the capability to manipulate content or generate deceptive content."
The declaration also underlines the importance of international cooperation to address risks arising from AI, noting that common principles and codes of conduct should be considered. However, it clearly keeps the door open for regulatory divergence, stating a "pro-innovation and proportionate governance and regulatory approach" could include making, where appropriate, classifications and categorizations of risk "based on national circumstances and applicable legal frameworks."
The declaration suggests there will be a follow-up of some sort in 2024.