This probably has something to do with survival instincts and evolution, but humans really love to focus on risks.

In today’s world, when so much attention is given to artificial intelligence, the risks associated with this technology seem especially hyped. Sure, we humans have burned our fingers many times with new technologies, but the anxiety around the risks of AI is reaching science fiction movie levels.

Some recent policy and regulatory reactions to AI have gone as far as calling for outright bans on the development of AI to eliminate its potential risks. Never mind the fact that AI is already embedded in our lives and its true potential is yet to be discovered. In an area of technological development with such complexity, nuance and breadth of application, a wholesale ban is likely to prove as ineffective as it is counterproductive.

It is obvious that ensuring the responsible development and use of AI calls for a degree of regulation, but what if we regulated AI to help it achieve its potential rather than to stop it in its tracks? And what is AI’s potential other than powering killer robots, some of you may be thinking!

The potential of AI is actually to help humanity be more prosperous. Prosperity is about a lot more than economic growth, by the way. It is about widening opportunities for everyone and making the most of people’s talents, wherever they are. So the trick is to devise the means to achieve the best of all possible worlds: AI that helps us address the biggest challenges of our time — from improving health care practices and saving the planet from environmental threats to assisting with education and productivity — without contributing to inequality, discrimination and intolerable privacy intrusions.

As it happens, appropriate AI regulation has a lot to learn from current privacy regulation.

Data protection law has changed far less radically than the world around it since its inception more than four decades ago, because the principles on which that law is based continue to be relevant today. Similarly, AI, and particularly its reliance on personal information for machine learning purposes, can be regulated effectively through the thoughtful application of long-standing principles. Well-known building blocks of sophisticated privacy frameworks around the world, not least the EU General Data Protection Regulation, are applicable to and compatible with the responsible development and use of AI.

Even the most advanced of those principles, such as data protection by design and by default, can be embedded in practices like algorithm training as part of the machine learning process. Other, more traditional principles of privacy regulation, like transparency, fairness, proportionality, accuracy and data security, are also highly relevant to AI regulation and entirely compatible with it, as they have been with other equally transformative technologies in the past.

What about individuals’ rights? Again, despite the power and autonomy that define AI technology, there are practical ways to allow the exercise of rights such as access, deletion and human intervention. It’s all a matter of deploying the type of can-do attitude and pragmatism privacy professionals know so well.

In order to apply these principles and rights to an emerging field like today’s AI, it is crucial to understand — at least at a basic level — how this technology is being developed.

Take generative AI, for example, which is attracting so much attention at the moment. Those responsible for devising new regulatory frameworks or applying existing ones need the opportunity to see beyond the hype and learn what is involved in training machine learning language models, so they understand what data is used and how. This kind of human learning will be extremely valuable to ensure this area of technological development is not shrouded in misunderstanding and fear, but managed in a realistic and beneficial way.

For this reason, it is also essential to approach AI regulation as a collaborative effort in which industry, society, policymakers and regulators listen to each other and join forces to achieve the best and most sensible outcomes. Now is the time to truly educate ourselves on how AI is developed and used, come up with effective and practical tools to protect ourselves, and ensure responsible development and use is the overarching principle.