
Global AI Governance Law and Policy: Australia

This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in Australia. The full series can be accessed here.


Published: November 2025



Australia's artificial intelligence regulatory journey has shifted from an early plan to introduce an EU-style, risk-based regime toward a more flexible, standards-led approach. What began as a move toward prescriptive guardrails and potential legislation has seemingly been overtaken by a focus on productivity, innovation and the use of existing legal frameworks. Yet this recalibration comes amid persistently low public trust in AI, creating a complex policy challenge: how to build accountability, safety and transparency without constraining the very innovation needed to realize AI's economic and social potential.


History and context

Australia's engagement with artificial intelligence builds on more than half a century of deep technology research. Australian universities and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) have long contributed to global advances in computer science, robotics, quantum computing, photonics, biotechnology and materials science. Despite strong research capability, commercialization has lagged. Limited venture capital, fragmented university-industry links and a relatively small domestic market have meant that many innovations were scaled offshore.

These structural realities have shaped Australia's broader economic profile: a nation whose prosperity rests on resource exports and advanced service sectors and a country whose “knowledge economy” has focused less on producing deep tech and more on adapting innovation to strengthen established industries. Artificial intelligence follows that pattern. Australia excels in applied domains such as mining, agriculture, health care and defense but remains a net importer of foundational AI systems and platforms.

As a result, Australia is unlikely to become a global powerhouse in AI model development. On the other hand, the country holds clear advantages in applied domains like mining automation, precision agriculture, medical diagnostics, climate science, defense and public-sector service delivery. The greatest economic benefits are expected from productivity gains and efficiency improvements rather than from AI exports.

Recognizing both opportunity and risk early, Australia was among the first countries to articulate principles for responsible AI. In November 2019, the federal government released Australia's Artificial Intelligence Ethics Principles, a voluntary framework covering fairness, transparency, privacy, accountability and human wellbeing. These principles laid the groundwork for subsequent policy, research and procurement guidance; they also signaled that AI should be pursued in line with public trust.

Institutionally, a key milestone came in 2021 with the creation of the National AI Centre (NAIC) under CSIRO's Data61 division to strengthen national capability and promote responsible adoption. The NAIC moved into the Department of Industry, Science and Resources (DISR) in 2024, reflecting the growing alignment between AI governance and economic strategy.

The DISR now leads AI policy as part of the broader industry and innovation portfolio. The department's posture has traditionally been risk-based, focused on managing harms such as bias and misinformation while encouraging safe innovation. This was evident in the "Safe and Responsible AI in Australia" discussion paper published in June 2023 and the government's interim response of January 2024, which proposed a risk-proportionate framework featuring mandatory safeguards for high-risk AI and voluntary guidance for lower-risk systems.

Australia also hosts a growing network of research and policy centers, including the Australian Institute for Machine Learning, the Responsible AI Research Centre (a partnership between CSIRO, the South Australian Government and the University of Adelaide) and the Human Technology Institute at the University of Technology Sydney, each contributing to responsible-AI design and governance. States have also played a role, with New South Wales introducing one of the first frameworks guiding the ethical use of AI in government.

Importantly, Australia faces a pronounced trust deficit in AI adoption. According to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks; just 36% of citizens trust AI systems more broadly. Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate. This trust gap remains a central challenge for policymakers seeking to balance innovation with public confidence and adoption.


Approach to regulation

Australia does not have dedicated or overarching AI legislation. Instead, its regulatory approach relies on a combination of voluntary frameworks and existing, non-AI-specific laws. The government's position has evolved from a primarily risk-based lens toward one that increasingly seeks to harness AI's productivity and innovation benefits without stifling development.

Following the "Safe and Responsible AI in Australia" discussion paper, the government moved into a more specific phase of policy design.

In September 2024, the government released "Introducing Mandatory Guardrails for AI in High-Risk Settings," a proposals paper exploring possible ex ante obligations for high-risk AI applications. It asked stakeholders to consider what constitutes "high-risk AI," whether the proposed guardrails were fit for purpose, and how they should be implemented.

At a high level, the proposed guardrails focused on:

  • Accountability and governance across the AI lifecycle.
  • Privacy, data quality and data management.
  • Testing, assurance and ongoing monitoring of performance and safety.
  • Transparency, explainability and traceability.
  • Human oversight and contestability.
  • Security, integrity and record-keeping.

That same month, the government released the Voluntary AI Safety Standard to provide immediate, non-binding guidance for organizations. Closely mirroring the 10 guardrails proposed in the "Introducing Mandatory Guardrails for AI in High-Risk Settings" paper, the VAISS offered a practical preview of what future enforceable requirements might look like. These drew heavily on international benchmarks such as the EU AI Act, Canada's now-defunct Artificial Intelligence and Data Act, ISO/IEC 42001, the U.S. National Institute of Standards and Technology AI Risk Management Framework and the OECD AI Principles.


From regulation to recalibration

Based on the above, Australia appeared poised to move quickly toward a dedicated statutory regime, a lighter-touch analogue of the EU AI Act.

By mid-2025, however, priorities shifted toward economic growth and productivity. Domestic productivity challenges and global developments prompted Australia to reassess its posture. Amid the rapid rise of generative AI, early enthusiasm for EU-style regulation has given way to a more innovation-focused outlook. Moves by allies such as the U.K. to forgo prescriptive ex ante laws and the Trump administration's explicit rejection of comprehensive AI regulation further influenced this pivot. Meanwhile, the EU AI Act's limited global influence and mounting criticism of its complexity and compliance burden no doubt also played a part.

As a result, the regulatory program was effectively paused and replaced with a broad review of existing laws and regulators. The emerging direction favors harmonization of current frameworks over the creation of a new, centralized regime. The results of this review, alongside the forthcoming National AI Strategy due at the end of 2025, will determine whether further targeted reforms or coordination mechanisms are introduced. In the meantime, the National AI Centre published its Guidance for AI Adoption, a new framework replacing the VAISS, in October 2025. While it remains too early to confirm whether this will define Australia's long-term approach, it may signal a broader shift toward standards-led rather than legislative regulation.


Public-sector governance

Progress has been somewhat more tangible within the public sector, where several frameworks already apply.

  • The Australian Government Responsible AI Policy sets minimum requirements for all Australian Public Service entities, such as mandating transparency statements and the appointment of accountability officers.
  • The National Framework for the Assurance of Artificial Intelligence in Government provides agencies with structured methods for AI assurance, testing and implementation to put the national AI Ethics Principles into practice.
  • The Australian Government AI Technical Standard specifies design, testing and documentation requirements for AI use in government systems.
  • The AI Data Security Guidance addresses provenance, supply-chain integrity, data poisoning and model-manipulation risks.

Together, these and other instruments form a quasi-regulatory baseline that operationalizes the AI ethics principles within government practice.


Balancing innovation and protection

Australia's evolving approach seeks to balance risk management with innovation enablement. Having seemingly stepped back from a single overarching AI statute, the government appears intent on embedding AI oversight within existing legal and regulatory systems, a hybrid model designed for agility, coherence and international compatibility. This approach, however, faces the critical challenge highlighted above: low public trust in AI.

For AI to deliver productivity, innovation and economy-wide gains at scale, uptake is essential; uptake depends on confidence that AI will operate fairly, transparently, safely and accountably. Without that trust, citizens may resist AI-mediated decisions or services, undermining both investment and adoption. This trust gap lies at the heart of Australia’s regulatory tug-of-war. Policymakers must build frameworks that reassure the public without constraining innovation, making trust both a constraint and an objective of AI regulation.


Wider regulatory environment

While Australia has paused work on a stand-alone AI law, a wide range of existing legal frameworks already apply to the development and use of AI, including those governing privacy, consumer protection and product safety, discrimination and employment, intellectual property, and online safety, together with certain sector-specific laws.



Latest developments

Australia's AI regulation journey has entered a recalibration phase rather than a standstill. While dedicated legislation remains on hold, the government and regulators are actively refining existing laws, standards and strategies, laying the groundwork for a more integrated national approach to AI.



Recommendations on next steps

Adopt OECD guidance to strengthen AI regulation and adoption

As an adherent to the OECD AI Principles and a nation striving for international compatibility in its standards-led approach, Australia can strengthen its current regulatory framework by formally adopting specific tools and policy recommendations derived from the OECD's guidance. These additions would help address the nation's persistent public trust deficit while advancing its goal of building an agile and effective governance environment.

Strengthen transparency and accountability through tools

Australia's current framework relies on existing legal systems and voluntary guidance. The OECD offers concrete tools that Australia can incorporate to enhance accountability and public transparency, moving beyond general principles.

Australia can establish a formal mechanism, or participate in existing frameworks, to monitor and understand AI incidents. The OECD.AI platform hosts the AI Incidents and Hazards Monitor and supports broader work on AI risk and accountability. Integrating a mandatory incident-reporting framework, especially for high-risk systems, would allow the government to track real-world harms, a necessary step for building public trust given that 78% of Australians are concerned about negative outcomes from AI.

Australia can require organizations developing advanced AI systems to participate in a structure similar to the Hiroshima AI Process Transparency Reporting Framework. This type of reporting facilitates transparency and comparability of risk mitigation measures across the industry, directly supporting the OECD value of transparency and explainability.

Formalize definitions for interoperability

Australia’s current approach involves reviewing and refining existing laws, such as consumer law and medical device regulation, rather than creating new legislation. To ensure its regulatory environment is interoperable and coherent, Australia can formally adopt the foundational OECD definitions.

Policymakers should explicitly leverage the OECD's definitions of an AI system and the AI system lifecycle in revised or new regulatory guidance. Jurisdictions including the EU and U.S. already use these definitions in their frameworks, making their adoption by Australia crucial for ensuring global interoperability. This would provide clear, internationally recognized terminology for industry and regulators, streamlining Australia's hybrid regulatory model.

Expand focus to key policy areas

With Australia's National AI Strategy due at the end of 2025, the OECD's detailed policy focus areas can guide the strategy's content, ensuring all critical aspects of AI governance are addressed.

The OECD specifically tracks AI compute and the environment. Australia can integrate policies to manage the environmental impact of large-scale AI computing, an area currently overlooked in the legal review, which focuses primarily on consumer protection, privacy and intellectual property laws.

Australia's focus on productivity gains aligns with the OECD's work on the future of work and its Work, Innovation, Productivity and Skills in AI program. The National AI Strategy should incorporate the OECD recommendation to build human capacity and prepare for labor market transition, ensuring the workforce is equipped for the changes AI will bring.

Given the rapid rise of generative AI, Australia should dedicate targeted policy guidance to it, drawing on the OECD's focus area for managing the risks and benefits of generative AI. This would complement the Office of the Australian Information Commissioner's existing guidance on training generative models under the Privacy Act.

Reinforce value-based principles

Australia's earlier proposals included mandatory guardrails focusing on elements like security, integrity and testing. Even with the shift toward standards-led governance, the OECD AI Principles provide a robust foundation for reinforcing core values.

Australia can explicitly structure its Guidance for AI Adoption and existing sector-specific regulations, like those governing critical infrastructure, to more strongly reflect the OECD's value of robustness, security and safety. This is critical for ensuring that AI systems are reliable and resilient, particularly in high-risk applications like medical devices and financial services.

While Australia focuses on productivity, the OECD framework promotes inclusive growth, sustainable development and well-being. Australia could embed a framework requiring AI actors to demonstrate how their systems contribute to broad societal well-being and adhere to human rights and democratic values, including fairness. This approach would help counteract known risks, like algorithmic bias, that are currently regulated primarily through existing anti-discrimination laws.


Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance and regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.




