Funambulism is the art of walking a tightrope or wire and staying on despite a precarious balance challenged by external factors, like wind, and internal influences, like, in my case, a fear of heights. I noticed two interesting examples of funambulism in our privacy and artificial intelligence governance world this week.
The first concerns Meta and came as a multipronged thread combining advertising technology and consent as a legal basis. As my colleague Joe Duball reported, Meta announced it is rolling out a
The second example takes us to London. In the run-up to the U.K. AI Safety Summit this week, there was speculation about who was and was not invited, who would attend, and whether being invited was even a good sign. At the end of the day, the hype around the gathering culminated in the Bletchley Declaration on Artificial Intelligence Safety, signed by 28 countries and the EU. Signatories range from the U.S. and Australia to China and the Kingdom of Saudi Arabia, and, as far as EU member states go, include France, Germany, Ireland, Italy, the Netherlands and Spain.
The agreement is an interesting exercise, as international declarations often tend to be an experience in tightrope walking, especially with such a wide spectrum of interests around the table. The declaration contains relatively consensual statements, such as that this is a "unique moment to act and affirm the need for the safe development of AI," including in the public sector. Signatories recognize the need to address "the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection," although they merely note the potential for "unforeseen risks stemming from the capability to manipulate content or generate deceptive content."
The declaration also underlines the importance of international cooperation to address risks arising from AI, noting that common principles and codes of conduct should be considered. However, it clearly leaves the door open for regulatory divergence, stating a "pro-innovation and proportionate governance and regulatory approach" could include making, where appropriate, classifications and categorizations of risk "based on national circumstances and applicable legal frameworks."
The declaration suggests there will be a follow-up of some sort in 2024.