Global AI Governance Law and Policy: Australia
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in Australia. The full series can be accessed here.
Published: November 2025
Australia's artificial intelligence regulatory journey has shifted from an early plan to introduce an EU-style, risk-based regime toward a more flexible, standards-led approach. What began as a move toward prescriptive guardrails and potential legislation has seemingly been overtaken by a focus on productivity, innovation and the use of existing legal frameworks. Yet this recalibration comes amid persistently low public trust in AI, creating a complex policy challenge: how to build accountability, safety and transparency without constraining the very innovation needed to realize AI's economic and social potential.
Australia's engagement with artificial intelligence builds on more than half a century of deep technology research. Australian universities and the Commonwealth Scientific and Industrial Research Organisation have long contributed to global advances in computer science, robotics, quantum computing, photonics, biotechnology and materials science. Despite strong research capability, commercialization has lagged. Limited venture capital, fragmented university-industry links and a relatively small domestic market have meant that many innovations were scaled offshore.
These structural realities have shaped Australia's broader economic profile: a nation whose prosperity rests on resource exports and advanced service sectors and a country whose “knowledge economy” has focused less on producing deep tech and more on adapting innovation to strengthen established industries. Artificial intelligence follows that pattern. Australia excels in applied domains such as mining, agriculture, health care and defense but remains a net importer of foundational AI systems and platforms.
As a result, Australia is unlikely to become a global powerhouse in AI model development. On the other hand, the country holds clear advantages in applied domains like mining automation, precision agriculture, medical diagnostics, climate science, defense and public-sector service delivery. The greatest economic benefits are expected from productivity gains and efficiency improvements rather than from AI exports.
Recognizing both opportunity and risk early, Australia was among the first countries to articulate principles for responsible AI. In November 2019, the federal government released Australia's Artificial Intelligence Ethics Principles, a voluntary framework covering fairness, transparency, privacy, accountability and human wellbeing. These principles laid the groundwork for subsequent policy, research and procurement guidance; they also signaled that AI should be pursued in line with public trust.
Institutionally, a key milestone came in 2021 with the creation of the National AI Centre under CSIRO's Data61 division to strengthen national capability and promote responsible adoption. The NAIC later moved into the Department of Industry, Science and Resources in 2024, reflecting the growing alignment between AI governance and economic strategy.
The DISR now leads AI policy as part of the broader industry and innovation portfolio. The department's posture has traditionally been risk-based, focusing on managing harms such as bias and misinformation while encouraging safe innovation. This was evident in the "Safe and Responsible AI in Australia" discussion paper published in June 2023 and its interim response that followed in January 2024, which proposed a risk-proportionate framework featuring mandatory safeguards for high-risk AI and voluntary guidance for lower-risk systems.
Australia also hosts a growing network of research and policy centers, including the Australian Institute for Machine Learning, the Responsible AI Research Centre (CSIRO, South Australian Government and University of Adelaide) and the Human Technology Institute at the University of Technology Sydney, each contributing to responsible-AI design and governance. States have also played a role, with New South Wales introducing one of the first frameworks guiding the ethical use of AI in government.
Importantly, Australia faces a pronounced trust deficit in AI adoption. According to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks, and just 36% report that they trust AI systems. Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate. This trust gap remains a central challenge for policymakers seeking to balance innovation with public confidence and adoption.
Australia does not have dedicated or overarching AI legislation. Instead, its regulatory approach relies on a combination of voluntary frameworks and existing non-AI specific laws. The government's position has evolved from a primarily risk-based lens toward one that increasingly seeks to harness AI's productivity and innovation benefits without stifling development.
Following the "Safe and Responsible AI in Australia" discussion paper, the government moved into a more specific phase of policy design.
In September 2024, the government released "Introducing Mandatory Guardrails for AI in High-Risk Settings," a proposals paper exploring possible ex ante obligations for high-risk AI applications. It asked stakeholders to consider what constitutes "high-risk AI," whether the proposed guardrails were fit for purpose, and how they should be implemented.
At a high level, the proposed guardrails focused on:
- Accountability and governance across the AI lifecycle.
- Privacy, data quality and data management.
- Testing, assurance and ongoing monitoring of performance and safety.
- Transparency, explainability and traceability.
- Human oversight and contestability.
- Security, integrity and record-keeping.
That same month, the government released the Voluntary AI Safety Standard to provide immediate, non-binding guidance for organizations. Closely mirroring the 10 guardrails proposed in the "Mandatory Guardrails for AI in High-Risk Settings" paper, the VAISS offered a practical preview of what future enforceable requirements might look like. The guardrails drew heavily on international benchmarks such as the EU AI Act, Canada's now-defunct Artificial Intelligence and Data Act, ISO/IEC 42001 and the U.S. National Institute of Standards and Technology AI Risk Management Framework.
From regulation to recalibration
On this trajectory, Australia appeared poised to move quickly toward a dedicated statutory regime, a lighter-touch analogue to the EU AI Act.
By mid-2025, however, priorities shifted toward economic growth and productivity. Domestic productivity challenges and global developments prompted Australia to reassess its posture. Amid the rapid rise of generative AI, early enthusiasm for EU-style regulation gave way to a more innovation-focused outlook. Moves by allies such as the U.K. to regulate without prescriptive ex ante laws, along with the Trump administration's explicit rejection of comprehensive AI regulation, further influenced this pivot. The EU AI Act's failure to gain traction as a global template, and mounting criticism of its complexity and compliance burden, no doubt also played a part.
As a result, the regulatory program was effectively paused and replaced with a broad review of existing laws and regulators. The emerging direction favors harmonization of current frameworks over creating a new, centralized regime. The results of this review, alongside the forthcoming National AI Strategy due at the end of 2025, will determine whether further targeted reforms or coordination mechanisms are introduced. In the meantime, the National AI Centre published its Guidance for AI Adoption, a new framework replacing the VAISS, in October 2025. While it remains too early to confirm whether this will define Australia's long-term approach, it may signal a broader shift toward standards-led rather than legislative regulation.
Progress has been somewhat more tangible within the public sector, where several frameworks already apply.
The Australian Government Responsible AI Policy sets minimum requirements for all Australian Public Service entities, such as mandating transparency statements and the appointment of accountability officers. The National Framework for Assurance of Artificial Intelligence in Government provides agencies with structured methods for AI assurance, testing and implementation to put the national AI Ethics Principles into practice. The Australian Government AI Technical Standard specifies design, testing and documentation requirements for AI use in government systems. The AI Data-Security Guidance was issued to address provenance, supply-chain integrity, data poisoning and model-manipulation risks.
Together, these and other instruments form a quasi-regulatory baseline that operationalizes the AI ethics principles within government practice.
Balancing innovation and protection
Australia's evolving approach seeks to balance risk management with innovation enablement. Having seemingly stepped back from a single overarching AI statute, the government appears to be intent on embedding AI oversight within existing legal and regulatory systems — a hybrid model designed for agility, coherence and international compatibility. This approach, however, faces a critical challenge highlighted above: low public trust in AI.
For AI to deliver productivity, innovation and economy-wide gains at scale, uptake is essential; uptake depends on confidence that AI will operate fairly, transparently, safely and accountably. Without that trust, citizens may resist AI-mediated decisions or services, undermining both investment and adoption. This trust gap lies at the heart of Australia’s regulatory tug-of-war. Policymakers must build frameworks that reassure the public without constraining innovation, making trust both a constraint and an objective of AI regulation.
While Australia has paused work on a stand-alone AI law, a wide range of existing legal frameworks already apply to the development and use of AI, including those governing privacy, consumer protection and product safety, discrimination and employment, intellectual property, and online safety, together with certain sector-specific laws.
Privacy
The Privacy Act 1988 (Cth) remains the primary law regulating the handling of personal information in Australia. The act is principles-based and is currently undergoing significant reform following the government's multiyear review, which commenced before the rise of generative AI. While no AI-specific amendments are currently proposed, many of the forthcoming changes will nonetheless affect AI use and development. The first tranche of reforms, passed in 2024, introduced new transparency obligations around automated decision-making that will take effect in December 2026.
Australia's privacy regulator, the Office of the Australian Information Commissioner, has been proactive in interpreting the act in AI contexts and is actively regulating AI through interpretation and enforcement rather than waiting for dedicated legislation.
In October 2024, the OAIC released two companion guidance pieces clarifying how the act applies to AI. The first, Guidance on privacy and the use of commercially available AI products, is directed at organizations deploying AI tools. It emphasizes due diligence when selecting vendors and outlines expectations for privacy-by-design, transparency and accountability.
The second, Guidance on privacy and developing and training generative AI models, is directed at developers and researchers. It emphasizes that even publicly available data may contain personal information and must be handled in accordance with consent, purpose-limitation and data-minimization principles. It also cautions that AI hallucinations, or outputs that infer personal details, can themselves constitute the collection of personal information, underscoring obligations around accuracy, security and deletion of data no longer required.
The OAIC has also issued several landmark determinations relevant to AI-powered facial-recognition technology, including Clearview AI in 2021, in which scraping online images to build an FRT database was found to breach Australian privacy law; 7-Eleven Stores in 2021; Bunnings Group in 2024; and Kmart Australia in 2025. Each of these cases involved the unlawful collection of customers' biometric information through FRT.
Additionally, the OAIC examined the use of deidentified medical imaging data from the I-MED Radiology Network shared with Annalise.ai for AI-training purposes. Following preliminary inquiries, the commissioner found that I-MED's deidentification methods and governance controls were sufficient for the data to fall outside the definition of personal information under the Privacy Act.
Consumer and product liability
Artificial intelligence-enabled products and services are governed by Australia's consumer-protection and product-liability frameworks, principally the Australian Consumer Law under the Competition and Consumer Act 2010 (Cth). The ACL is principles-based and technology-agnostic, applying broadly to emerging products and services, including those incorporating AI.
At its core, the ACL prohibits misleading or deceptive conduct, unconscionable practices and false or misleading representations. It also provides consumer guarantees requiring goods and services, including those powered by AI, to be of acceptable quality, fit for purpose and accurately described. These duties extend to AI systems that make representations, recommendations or automated decisions. A business may contravene the ACL if an AI tool exaggerates its capabilities, obscures human oversight, or produces outcomes likely to mislead consumers.
Under the ACL's product-liability provisions, suppliers and manufacturers must ensure that goods, including AI-embedded software, are safe and free from defects. Where AI contributes to physical injury, property damage or financial loss, liability may arise under negligence, statutory guarantees or the product-safety regime. In practice, this can create shared accountability across the AI supply chain, from model developers and integrators to deployers and end users.
Intellectual property
Australia's intellectual property framework presents several uncertainties for AI development and use. In 2022 and 2023, the Federal Court and Full Court confirmed that AI systems cannot be named as inventors under the Patents Act 1990 (Cth), finding that inventorship is limited to natural persons. This aligns with most common-law jurisdictions and means that patent protection for AI-generated inventions must currently be sought through a human intermediary.
In the copyright domain, governed by the Copyright Act 1968 (Cth), Australia follows a "fair-dealing" model rather than a broad "fair-use" regime. Without permission, lawful use of copyright material is limited to specific purposes such as research, criticism, news reporting and parody. None clearly extend to large-scale data scraping or model training, creating uncertainty for developers, publishers and rights holders. Questions also persist around authorship, ownership and liability for AI-generated works, particularly where human involvement is minimal or diffuse.
While these laws remain in force, their interaction with data-driven and generative AI systems is still evolving. As debates around copyright, data mining and licensing intensify, Australia's IP regime is emerging as a critical frontier for future AI regulatory reform.
Online safety and deepfakes
Australia's regulatory response to AI-generated deepfakes spans both criminal and civil regimes. Offences for creating or distributing non-consensual intimate images, including synthetic or AI-generated content, are primarily contained in the Criminal Code Act 1995 (Cth). In parallel, the Online Safety Act 2021 (Cth) empowers the regulator, the eSafety Commissioner, to order the removal of intimate images or synthetic media that depict or simulate a person without consent. Collectively, these mechanisms position Australia among the earliest jurisdictions to explicitly regulate harmful or non-consensual AI-generated material.
Discrimination and employment
Artificial intelligence-assisted decision making is also regulated under antidiscrimination and employment laws. Employers and service providers remain liable for algorithmic bias or discriminatory outcomes in hiring, promotion, credit, insurance or service delivery.
Industry-specific regulation
Certain sectors are already governed by frameworks that intersect with AI deployment.
Artificial intelligence-powered medical devices are regulated under the Therapeutic Goods Act 1989 (Cth), which establishes the framework for assessing and approving software-as-a-medical-device systems. The legislation requires such systems to demonstrate safety, efficacy and performance consistent with their intended clinical purpose.
The Australian Prudential Regulation Authority integrates AI into its prudential risk management standards, requiring financial institutions to manage operational, data and cyber risks linked to AI use.
The Security of Critical Infrastructure Act 2018 (Cth) imposes an "all-hazards" risk management framework across a wide range of critical sectors, including risks arising from AI systems.
The Australian Securities and Investments Commission requires financial services licence holders to implement and maintain adequate risk management processes and procedures, an obligation that extends to AI.
Artificial intelligence's emergence has also prompted responses from legal regulators and courts. Several professional regulators have issued guidance on responsible AI use in legal practice, emphasizing duties of competence, confidentiality and supervision. In one notable case in Victoria, a practitioner lost their principal practicing certificate after submitting AI-generated material containing false citations. Australian courts have likewise begun publishing practice directions on the use of AI by litigants.
Australia's AI regulation journey has entered a recalibration phase rather than a standstill. While dedicated legislation remains on hold, the government and regulators are actively refining existing laws, standards, and strategies, laying the groundwork for a more integrated national approach to AI.
Review of existing laws
The government is currently reviewing whether existing laws can accommodate emerging AI risks. That review covers privacy, consumer protection, product safety, antidiscrimination and product liability laws, as well as regulators' capacity to supervise AI-enabled systems. If these frameworks prove adequate, the original mandatory guardrails proposal may be abandoned in favor of a multi-regulator approach.
Guidance for AI Adoption
In October 2025, the NAIC released the Guidance for AI Adoption, formally updating and replacing the VAISS. The new framework provides a practical, nationally consistent blueprint for organizations seeking to govern AI responsibly. It consolidates the VAISS's 10 guardrails into six responsible AI practices covering governance and accountability, impact assessment, risk management, transparency, testing and monitoring, and human oversight.
The first part of the Guidance for AI Adoption, "Foundations," is aimed at small and medium-sized enterprises; the second, "Implementation Practices," is intended for larger or more mature organizations. The guidance translates high-level principles into actionable steps. It is supported by a suite of resources, including an AI screening tool, policy and register templates, and reference materials outlining key definitions and risk mitigation measures.
At its core, the guidance encourages proportionate governance based on risk and fosters transparency and human accountability. While it is too early to say whether this guidance will define Australia's long-term regulatory approach, it signals a strong direction toward practical, standards-based governance as the government continues to shape its broader AI strategy.
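To make the screening idea concrete, the sketch below shows how a proportionate, risk-based screening step might work in practice. It is purely illustrative: the questions, field names and tier thresholds are assumptions for demonstration, not the NAIC's actual screening tool or any thresholds set out in the Guidance for AI Adoption.

```python
from dataclasses import dataclass

# Hypothetical screening questions loosely inspired by the guidance's
# risk-based approach; the factors and thresholds below are illustrative
# assumptions only.
@dataclass
class AIUseCase:
    affects_individuals: bool      # outputs influence decisions about people
    uses_personal_info: bool       # personal information in training or inputs
    operates_autonomously: bool    # acts without human review of each output
    safety_critical_domain: bool   # e.g., health, finance, infrastructure

def screen_risk_tier(use_case: AIUseCase) -> str:
    """Map screening answers to an indicative governance tier."""
    score = sum([
        use_case.affects_individuals,
        use_case.uses_personal_info,
        use_case.operates_autonomously,
        use_case.safety_critical_domain,
    ])
    if score >= 3:
        return "high: apply all six practices, including impact assessment"
    if score >= 1:
        return "medium: apply governance, transparency and monitoring"
    return "low: baseline accountability and record-keeping"

# Example: an autonomous triage assistant handling patient data
print(screen_risk_tier(AIUseCase(True, True, True, True)))
```

The design point the sketch illustrates is proportionality: the more risk factors a use case triggers, the more of the six responsible AI practices it attracts.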
National AI Strategy
The government has committed to producing a National AI Strategy by the end of 2025. The strategy aims to set national priorities for innovation, adoption, international alignment and trust frameworks, integrating lessons from past consultations and sectoral reviews.
Recently completed reviews
In July 2025, the Therapeutic Goods Administration published "Clarifying and Strengthening the Regulation of Medical Device Software including Artificial Intelligence." The review found that Australia's risk-based, technology-neutral framework remains largely fit for purpose but needs refinement. The TGA recommended updating key definitions, such as manufacturer, sponsor and supply; refining adaptive-AI and change-control provisions; providing clearer guidance for digital scribes and clinical-assist tools; and strengthening evidence, monitoring and transparency requirements. Additional consultations are planned for 2025-26.
The Treasury's Final Report on AI and the Australian Consumer Law recommends targeted amendments to clarify the ACL's application to AI systems, especially the definitions of goods and services, manufacturer liability and algorithmic representations. It emphasizes reinforcing the ACL's existing principles rather than introducing AI-specific laws.
Intellectual property questions
Artificial intelligence and copyright have become central to current policy debate. Key stakeholders have agreed to explore compensation frameworks for the use of copyrighted material in AI training, signaling that licensing models may feature in future reforms. At the same time, the Productivity Commission has proposed a text-and-data-mining exception to the Copyright Act 1968 (Cth) to permit certain AI-training activities. That proposal has since been rejected by the federal government, which has ruled out any copyright carve-out for AI developers. The decision marks a decisive shift in bargaining power away from technology companies hoping for a legislative reprieve and toward the creative and union sectors, which are now positioned to shape the contours of future frameworks. Additionally, the government has announced that a working group will meet imminently to consider whether Australia's copyright framework needs updating.
Adopting OECD guidance to strengthen AI regulations and adoption
As an adherent to the OECD AI Principles and a nation striving for international compatibility in its standards-led approach, Australia can strengthen its current regulatory framework by formally adopting specific tools and policy recommendations derived from the OECD's guidance. These additions would help address the nation's persistent public trust deficit while advancing its goal of building an agile and effective governance environment.
Strengthen transparency and accountability through tools
Australia's current framework relies on existing legal systems and voluntary guidance. The OECD offers concrete tools that Australia can incorporate to enhance accountability and public transparency, moving beyond general principles.
Australia can establish a formal mechanism, or participate in existing frameworks, to monitor and understand AI incidents. The OECD's AI Policy Observatory hosts the AI Incidents and Hazards Monitor, a tool focused on AI risk and accountability. Integrating a mandatory incident reporting framework, especially for high-risk systems, would allow the government to track real-world harms, a necessary step for building public trust given that 78% of Australians are concerned about negative outcomes from AI.
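As a purely hypothetical illustration of what structured incident reporting might capture, the sketch below defines a minimal incident record. The fields are assumptions for demonstration only and are not drawn from the OECD AI Incidents and Hazards Monitor or any proposed Australian scheme.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of a structured AI incident report; field names
# are assumptions, not an official reporting schema.
@dataclass
class AIIncidentReport:
    system_name: str
    deployer: str
    incident_date: date
    harm_type: str                 # e.g., "discrimination", "privacy breach"
    severity: str                  # e.g., "low", "serious", "near miss"
    affected_groups: list[str] = field(default_factory=list)
    description: str = ""
    mitigation_taken: str = ""

# Example: a hypothetical hiring-tool incident
report = AIIncidentReport(
    system_name="resume-screening-model",
    deployer="ExampleCorp",
    incident_date=date(2025, 6, 1),
    harm_type="discrimination",
    severity="serious",
    affected_groups=["job applicants"],
    description="Model systematically downranked applicants over 55.",
    mitigation_taken="Model withdrawn; affected applications re-reviewed.",
)
print(report.harm_type, report.severity)
```

Even a minimal record like this would let a regulator aggregate harms by type, severity and sector, which is the kind of evidence base the OECD monitor is designed to build.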
Australia can require organizations developing advanced AI systems to participate in a structure similar to the Hiroshima AI Process Transparency Reporting Framework. This type of reporting facilitates transparency and comparability of risk mitigation measures across the industry, directly supporting the OECD value of transparency and explainability.
Formalize definitions for interoperability
Australia’s current approach involves reviewing and refining existing laws, such as consumer law and medical device regulation, rather than creating new legislation. To ensure its regulatory environment is interoperable and coherent, Australia can formally adopt the foundational OECD definitions.
Policymakers should explicitly leverage the OECD's definitions of an AI system and the AI system lifecycle in revised or new regulatory guidance. Jurisdictions including the EU and the U.S. already use these definitions in their frameworks, making their adoption by Australia crucial for ensuring global interoperability. This would provide clear, internationally recognized terminology for industry and regulators, streamlining Australia's hybrid regulatory model.
Expand focus to key policy areas
While Australia's National AI Strategy is forthcoming at the end of this year, the OECD's detailed policy focus areas can guide the strategy's content, ensuring all critical aspects of AI governance are addressed.
The OECD specifically tracks AI compute and the environment. Australia can integrate policies to manage the environmental impact of large-scale AI computing, an area currently overlooked in the government's legal review, which focuses primarily on consumer protection, privacy and intellectual property laws.
Australia's focus on productivity gains aligns with the OECD's work on the Future of Work and the Work, Innovation, Productivity and Skills in AI program. The National AI Strategy should incorporate the OECD recommendation to build human capacity and prepare for labour market transition, ensuring the workforce is equipped for the changes AI will bring.
Given the rapid rise of generative AI, Australia should dedicate targeted policy guidance, drawing on the OECD's focus area for managing the risks and benefits of generative AI. This would complement the OAIC’s existing guidance on training generative models under the Privacy Act.
Reinforce value-based principles
Australia's earlier proposals included mandatory guardrails focusing on elements like security, integrity and testing. Even with the shift toward standards-led governance, the OECD AI Principles provide a robust foundation for reinforcing core values.
Australia can explicitly structure its Guidance for AI Adoption and existing sector-specific regulations, like those governing critical infrastructure, to more strongly reflect the OECD's value of robustness, security and safety. This is critical for ensuring that AI systems are reliable and resilient, particularly in high-risk applications like medical devices and financial services.
While Australia focuses on productivity, the OECD framework promotes inclusive growth, sustainable development and well-being. Australia could embed a framework requiring AI actors to demonstrate how their systems contribute to broad societal well-being and adhere to human rights and democratic values, including fairness. This approach would help counteract known risks, like algorithmic bias, that are currently regulated primarily through existing antidiscrimination laws.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States
- Supplementary article: AI governance in the agentic era