Greetings from Brussels!
Privacy-related issues are rarely out of the media eye these days, which underscores how fundamental a concern privacy has become globally, particularly in the artificial intelligence field. Recently, as reported in the Financial Times, the European Commission has unequivocally joined the facial-recognition debate, turning its attention to the possibility of drafting new rules that would extend consumer rights to cover facial-recognition technologies. The move is part of a broader effort to address the ethical and responsible use of AI as it becomes more pervasive in today’s digital society.
A European Commission official told the FT the aim would be to limit “the indiscriminate use of facial recognition” by private companies and public authorities alike. Any exceptions would be tightly controlled to ensure that citizens’ rights to privacy are reinforced and respected. Following on from last year’s EU General Data Protection Regulation, facial recognition and AI more generally seem poised to become key battlegrounds where privacy is concerned.
In an ambitious statement, the incoming president of the European Commission, Ursula von der Leyen, has said she will unveil draft legislation within her first 100 days in office that will provide a “coordinated European approach on the human and ethical implications of artificial intelligence.” AI is clearly one of her priorities, as mentioned in her “Agenda for Europe,” which articulates the political guidelines for her forthcoming presidency. She has reasoned that “data and AI are the ingredients for innovation that can help us to find solutions to societal challenges, from health to farming, from security to manufacturing.” I would like to think that AI could also feed innovative solutions for mitigating climate change — another pressing priority for our planet and future generations.
For some context: a new tech-investment fund, a regulation for AI and a common European “space” for environmental data are among several ideas being discussed inside the European Commission as its staff prepares for a new slate of commissioners taking office in November. In June, a high-level EU expert group on AI working for the commission said new rules had to make clear when technologies were tracking targeted individuals or carrying out mass surveillance. More specifically, the group stated that practical means must be developed to allow meaningful and verified consent to being automatically identified by AI or equivalent technology.
In coordination with EU efforts, at the national member state level, only France and Germany have adopted comprehensive national AI strategies to date (both in 2018). A number of other countries, including Spain, the Netherlands and Ireland, have said they will publish final strategies by the end of 2019, though several EU member states have stated they will need more time. The U.K., which is due to leave the EU 31 Oct., published an AI sector deal in April 2018.
A coordinated plan at the EU level depends on national AI strategies being adopted and shared within the Union. Without these plans, it will prove difficult to maximize investment and build a common approach. Moreover, these plans are necessary to continue to build and reinforce the regulatory environment within the EU. Over the past few years, concern has been rising in Brussels and the member states about the potential impact of unregulated AI tools, which could threaten privacy, security and democracy. These concerns are shared by EU citizens. Trust and transparency will be key if AI is to deliver on its enormous promise.