The Senate vote on 17 September concluded a process that began on 23 April 2024 with the presentation of a bill that subsequently passed between the two houses of Parliament on several occasions. The law now consists of 28 articles that address, from a strategic and operational standpoint, some of the most critical issues facing the country amid the unstoppable rise of AI technologies.
Having defined the framework of general principles, the law devotes specific attention to crucial sectors such as work, health and justice. It also regulates the use of AI by minors and addresses sanctions by introducing criminal law protections. The ball is now in the court of the companies and public bodies that must comply with the law's provisions.
It is too early to fully assess the effects and scope of this law, especially considering that several implementing measures are expected in the coming months. However, it is possible to reflect on a number of points. Internationally, Italy is in a privileged position as the first EU member state to adopt a national regulatory framework for the development, adoption and governance of AI systems. The new law aligns with and implements provisions of the AI Act, and it interacts with the EU General Data Protection Regulation and national rules on the protection of personal data.
The Italian law also attempts to introduce forms of simplification and innovation. This is the case, for example, in the health care sector, with the provisions on scientific research and experimentation in the development of AI systems. This further demonstrates that markets can be supported through regulation, contrary to the view of those, including figures at the highest levels of government such as former Prime Minister Mario Draghi, who continue to argue that deregulation or even laissez-faire is the only possible way to ensure the development and competitiveness of the EU and of the country.
Data economy regulations exist not only to protect the fundamental rights and freedoms of citizens but also to support innovative companies and processes. Of course, this requires the accurate and consistent application of the rules. It also demands a decisive investment strategy, directed not only at the companies that must comply with the law but also at the authorities charged with enforcing it.
These authorities must now, more than ever, be supported and strengthened, starting with the national authorities designated for AI under the new law, namely the Agency for Digital Italy and the National Cybersecurity Agency, but also including the independent authorities, in particular the data protection authority, the Garante, which will continue to deal with AI within its remit.
This is because Italy's system will be able to reap AI's greatest human and economic benefits through constructive and collaborative dialogue between businesses and authorities. This dialogue must be built and approached openly, without fear and with courage. This law is not an end point but a starting point: whether this technology becomes a catalyst for collective well-being will now depend on the capacity to strike and sustain a balance between rules and innovation.
National governance system
The AI law, which is pending formal publication in the Italian Official Gazette, outlines a composite national governance system, introducing new bodies and assigning new powers to certain national authorities.
As in previous versions of the text, the Agency for Digital Italy, or AgID, and the National Cybersecurity Agency, or ACN, were designated as the national authorities for AI. The ACN will monitor the adequacy and security of AI systems and hold inspection powers, while the AgID will manage notifications and promote safe use cases for citizens and businesses.
The Interministerial Committee for Digital Transition has been tasked with approving the national strategy for AI every two years. The strategy is prepared and updated by the structure of the Presidency of the Council of Ministers responsible for technological innovation and the digital transition, in agreement with the national authorities, the Minister for Enterprises and Made in Italy, the Minister of Universities and Research, and the Minister for Defence.
The Observatory on the Adoption of Artificial Intelligence Systems in the World of Work has also been set up at the Ministry of Labour. It is tasked with monitoring the impact of AI on the labor market.
In the health sector, the National Agency for Regional Health Services has been given the power to establish and update guidelines for anonymization procedures and the creation of synthetic data, subject to the opinion of the Garante.
The Garante retains all of its powers relating to the processing of personal data under the GDPR and national law, processing that, as is well known, underpins all AI activities.
Server location
The final text of the AI law confirms the possibility of installing AI systems on servers located outside the EU, for both public and private use. This ensures greater flexibility and continuity in the use of cloud infrastructure while upholding the highest standards of security and data protection.
However, the provision remains in effect that directs public administrations using e-procurement platforms to give preference to suppliers of AI systems and models that guarantee the localization and processing of strategic data in data centers located in Italy. This also applies to suppliers whose disaster recovery and business continuity procedures are implemented within data centers located in Italy.
Workplace
When using AI to support production, organizational, and management processes, the AI law requires employers and clients to ensure that such technologies are used in a manner that respects the confidentiality and physical and mental integrity of workers and the security of personal data. The AI law also requires that AI systems respect the principle of equal treatment and avoid all forms of discrimination. It further refers to the requirement to inform workers of the use of AI in the cases and in the manner referred to in Article 1-bis of Legislative Decree No. 152 of 26 May 1997.
Health care sector
With regard to scientific research and experimentation in the development of AI systems in the health care sector, the AI law declares that the processing of personal data, including special categories of data, by various public and private entities is of significant public interest. The obligations to inform patients have also been simplified, allowing the secondary use of data, including health data, for scientific research purposes without the need to obtain a new consent, provided that de-identification measures are applied.
Furthermore, without prejudice to the obligation to provide information, the AI law legitimizes the reuse of personal and health data, subject to the application of anonymization, pseudonymization or data-synthesis mechanisms. This is permitted provided the processing is carried out for the aforementioned purposes of scientific research or for the planning, management, control and evaluation of health care.
AI and minors
The AI law introduces specific protections for minors under the age of 14, for whom access to AI systems and the processing of related personal data may only take place with the prior consent of those exercising parental responsibility. In this way, protection is not limited to access to systems but explicitly extends to the processing of personal data associated with the use of AI, requiring companies to obtain formal authorization for both activities.
Criminal provisions
Finally, this law introduces new offenses and aggravating circumstances. In particular, it focuses on transparency obligations for artificially generated content and introduces a prison sentence “for anyone who causes unjust damage to others by sending, delivering, transferring, publishing or disseminating images or videos of people or things generated or altered using artificial intelligence systems, designed to mislead as to their authenticity.”
Next steps
The approval of this important law represents a new dimension that organizations should consider in their AI governance and compliance programs.
It is therefore essential to plan and implement certain strategic measures, particularly in light of the upcoming AI Act deadlines. In this regard, organizations should consider proceeding with the following activities:
- Launch a multi-level training program to fulfill the AI literacy requirement for employees and collaborators, applicable since 2 February 2025, in line with the European Commission’s recent guidance.
- Map all AI systems in use and their use cases, including references to relationships with suppliers and any associated risks.
- Conduct an assessment of each system/use case's compliance with the AI Act and implement the related requirements in line with the deadlines of the European law, taking into account the national regulatory framework.
- Establish and define the internal AI governance structures through a specific policy.
- Implement an AI procurement toolkit to regulate relations with AI system suppliers, incorporating, particularly in the case of general-purpose AI, the recent measures adopted by the European Commission.
- Develop an AI-use policy governing the use of AI tools by employees and collaborators.
Rocco Panetta, CIPP/E, is Country Leader, Italy, and managing partner at Panetta Law Firm.