The Labor government has announced the launch of the Australian AI Safety Institute (AISI), a national watchdog dedicated to ensuring the safe and responsible development of AI in Australia, with rollout to begin in early 2026. The AISI will oversee the deployment and regulation of AI technology in Australia and ensure companies comply with Australian law when developing or utilising AI technologies.
The AISI initiative complements the government’s finalisation and release of its National AI Plan (the Plan), which facilitates the expansion of Australia’s AI ecosystem. Together, the AISI and the Plan aim to protect Australians from AI-related harms while ensuring innovation can prosper – a balance critical to keeping Australia internationally competitive and at the forefront of AI development.
As a whole-of-government hub, the AISI will perform several key functions including to:
Australia’s membership in the International Network of AI Safety Institutes bolstered the government’s work on establishing the AISI, giving it access to shared testing protocols, technical standards and risk-assessment frameworks developed with other leading AI nations. The AISI will not only continue these international partnerships but also work closely with domestic institutions such as Australia’s National AI Centre (NAIC).
The Plan outlines the roadmap to growing an AI-enabled economy. The Plan has three main goals to:
So, what does ‘support’ look like in the Plan? How exactly will the government grow the domestic AI sector and attract global investment?
Australia is one of the world’s leading data-centre destinations, thanks to our stable operating conditions, strong legal protections, renewable energy potential and available land. Between 2023 and 2025, more than $100b in projects were planned. To build on this momentum, the Plan ensures government support for major private investments. The Investor Front Door and Major Project Facilitation Agency will collaborate with proponents of nationally significant projects. Australia has already been attracting headline investments, for example Project Southgate and large commitments from Microsoft, Amazon and others. The government is also backing domestic AI growth financially – more than $460m is already available or committed across research grants, graduate programs, ecosystem building and SME adoption.
The Plan prioritises reskilling and labour mobility as AI changes job tasks, with particular focus on groups at higher risk of disruption – women, First Nations people, mature-age workers, people with disability and regional communities. AI adoption currently shows a regional–metro gap (about 29% of regional organisations versus 40% in metro areas), so existing initiatives – like the National Skills Agreement, FSO Skills Accelerator, First Nations Digital Support Hub and the Network of Digital Mentors – will be scaled to develop key skills and boost connectivity. Of note, Indigenous data governance is also being strengthened: the Framework for Governance of Indigenous Data (NIAA 2024) sets out principles for community engagement, consent, cultural protocols and collective rights when First Nations data are used in AI systems.
Of course, the government is also integrating AI into the public sector to improve efficiency and accessibility without leaving behind those delivering essential government services. Focus areas include expanding whole-of-government tools such as GovAI, trialling generative AI in schools, and the recent rollout of the APS AI Plan.
Notably, the Plan marks a reversal of the government’s previously proposed approach of mandating strict guardrails for high-risk AI, in favour of prioritising domestic AI growth and global investment. Instead of introducing new AI-specific legislation – which many feared could deter offshore investment and hinder innovation – the Plan relies on current laws and the AISI to manage AI risks and regulate AI development nationally. While certain legal updates may specifically mention AI, the government maintains that existing laws are fit for purpose and should be applied to AI just as they are to other emerging technologies.
This approach reflects the global trend to embrace the economic opportunities afforded by AI rather than to impose overbearing restrictions.
The approach has already drawn criticism from some quarters, where the view is that certain existing laws are themselves unfit for purpose, or are subject to reforms that, despite being under review for a number of years, have not yet been implemented – the Privacy Act reforms being a case in point.
It remains to be seen whether the AISI will push to fast-track some of these legal developments in its early years of operation – particularly those with a direct impact on the adoption of AI by business – though doing so would be a likely move to ensure alignment across the key laws affected.
The government’s view is that both the AISI and the Plan will strengthen public trust in AI technologies. While Australians remain optimistic about AI’s potential, research from the UTS Human Rights Technology Institute shows that consumer trust in AI currently remains low.
By launching regulatory initiatives focused on promoting both safety and innovation, Australia has the opportunity to position itself as a leader in responsible AI governance. While no binding requirements have yet been issued, both the AISI and the Plan point towards stricter – if more arm’s-length and technical – oversight of AI technology and regulation in Australia.
Funding for the AISI will be detailed in the government’s next Mid-Year Economic and Fiscal Outlook. The initiative forms part of a long-term national strategy, alongside the government’s APS AI Plan and the Data and Digital Government Strategy’s 2025 Implementation Plan, currently being developed by the DTA and the Department of Finance.
Together, these frameworks are intended to ensure the success of Australia’s digital transformation, with the AISI as the newest dedicated measure focused on protecting the safety of Australians in the use and development of AI.
If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.
Authored by:
Dudley Kneller, Partner
Sinead Lynch, Partner
Stephanie Rocher, Senior Associate
Laura Dowd, Seasonal Clerk
Maria Korotaeva, Seasonal Clerk