The Australian government is the latest to unveil an artificial intelligence roadmap with a heavy focus on investment and economics after previously pursuing a more safety-oriented strategy.
The National AI Plan, announced 2 Dec., seeks to boost Australia's reputation as a place to invest in AI through digital and physical infrastructure, while promoting the use of AI to improve public services and encouraging AI adoption across the country. The plan also outlines how the recently announced AI Safety Institute will monitor, test and share information on AI capabilities, risks and harms.
Rather than establishing the mandatory guardrails for AI in high-risk settings the government was exploring last year, Australia will instead "continue to build on Australia's robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks," according to the plan.
It will do so in part through the work of the safety institute, which the government has promised AUD29.9 million to launch in early 2026. The plan notes the government "is committed to upholding international obligations, promoting inclusive governance and maintaining a resilient regulatory environment that provides certainty to business and responds quickly to new challenges."
"Guided by the plan, the government will ensure that AI delivers real and tangible benefits for all Australians," Minister for Industry and Innovation and Minister for Science Tim Ayres said in a press statement. "As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe."
The details
Rather than a concrete legislative vision, Australia's plan acts as a roadmap. The government next plans to develop data center principles and to create collaborative agreements with AI companies; Australia was the second-largest destination for data centers in 2024, behind the U.S., according to research from Knight Frank. It looks to expand access to AI skills and create workplace protections by collaborating with unions and civil society. And it wants to continue its engagement with international leaders on AI strategy.
The comprehensive vision for the future of AI comes after the Australian government had already staked out some notable policy positions.
The prior offerings included a voluntary safety standard and guidance on how to use, develop and train generative AI models with privacy in mind. Australian Privacy Commissioner Carly Kind vowed to work with other countries to develop innovative and privacy-protective AI.
However, Australian Competition and Consumer Commission Senior Investigator Rosie Evans, CIPP/E, wrote for the IAPP in March 2025 that those documents do not provide the legal certainty regulation would create.
"Without an enforceable regime specifically for AI, Australia may struggle to achieve the regulatory cohesion and effectiveness currently aspired to by government," she argued.
While the government does not plan to pursue AI Act-style regulation, the plan indicates "existing, largely technology-neutral legal frameworks" could apply to AI. Additionally, the government said it will respond to regulatory challenges as they come up.
To address AI risks, Australia envisions a "whole-of-government" approach.
The government promises to pursue its public-sector plan, released 12 Nov., which includes measures such as ensuring every agency has its own chief AI officer and creating a legal framework for using AI in services. That includes keeping a human involved in certain AI-assisted decisions and working with First Nations communities on how data is shared. And every organization using AI "is responsible for identifying and responding to AI harms and upholding best practice."
A changing global posture
Australia's plan marks the latest instance of a jurisdiction heeding growing concerns that AI regulation could hinder investment in and adoption of innovative technologies.
A consultation page for the mandatory guardrails proposal notes the national plan was informed, in part, by feedback on the government's prior plan. The country's Productivity Commission also advised this summer against any regulation except as a last resort, to avoid Australia falling behind economically, ABC News reported.
Other jurisdictions have taken similar approaches.
The EU is considering an AI omnibus proposal that would push back AI Act compliance timelines for high-risk AI systems and pare back documentation requirements for small and medium-sized enterprises. The Trump administration's AI policies have the U.S. exploring ways to stop state-level AI regulation and amend federal regulations, departing from the previous administration's safety-geared stance. And closer to Australia, South Korea's president recommended rolling back some AI regulations to make it easier for businesses to train models on data.
Early reception
The plan was supported by the Group of Eight, a coalition of Australia's leading research universities, with particular praise for its comprehensive nature and potential to boost the country's place in the AI ecosystem.
"Australia has deep capacity and capability in AI, through the research that we undertake and the graduates who not only know how to use AI — in whatever their field of endeavour — but are equipped to drive the future use of AI, ensuring we are not just a nation of AI adopters but of AI entrepreneurs," Chief Executive Vicki Thomson said.
The Australian Council of Trade Unions also welcomed the plan, saying its members were encouraged by the promise to put workers first, and urged the government to use its existing laws to hold big technology companies accountable.
"Workers are tired of being told by large tech companies that AI will bring improvements in the far distant future, when our rights and our jobs are under threat right now," ACTU Assistant Secretary Joseph Mitchell said. "The AI Safety Institute will play an important part in holding tech companies accountable for the products they are developing and ensuring they comply with all Australian laws before being placed on the market."
Caitlin Andrews is a staff writer for the IAPP.
