Australia’s new AI plans: what they mean for the Australian Public Service

Written by Alan Taychouri

5 December 2025

1 min read

Australia’s AI landscape just got an update with the release of two documents: the National AI Plan 2025, and the APS AI Plan 2025. Together, they signal a push for much faster AI uptake in government, anchored by stronger, more repeatable governance.

If you’ve been trying to make sense of the Government’s AI direction lately, there are two new documents to digest:

  1. the National AI Plan 2025, which sets the whole-of-economy ambition, and
  2. the APS AI Plan 2025, which translates that ambition and encourages a substantial increase in AI use across the Australian Public Service (APS).

One answers “where is Australia heading?”; the other “how can the APS use artificial intelligence?”

Here’s our breakdown.

What the National AI Plan is actually doing

At a high level, the National AI Plan is framed around three goals:

  • Capture the opportunity: building the infrastructure, capability and investment settings Australia needs to grow an AI-enabled economy.
  • Spread the benefits: making sure adoption isn’t narrowly concentrated, and that AI helps lift productivity and inclusion across the country, including through better public services.
  • Keep Australians safe: responsible practices, regulatory settings, and international cooperation on safety and standards.

It’s a "north star" and a signal of where government attention and energy will go - growth, broad-based adoption, and safety - with improved public services explicitly treated as part of spreading the benefits.

What the APS AI Plan is actually doing

The APS plan is more hands-on, and it’s structured around three pillars:

  • Trust: transparency, ethics, governance and clear accountability.
  • People: lifting baseline AI capability across the service.
  • Tools: safe access to approved AI services and shared infrastructure.

The headline promise is pretty clear: every public servant should have safe access to AI tools and foundational training, underpinned by strong governance. Chief AI Officers are positioned as the internal leaders who make that governance real within agencies.

What’s especially important is the governance uplift. The APS plan builds on the Responsible Use of AI policy that took effect on 1 September 2024, then sharpens it with more concrete obligations, including:

  • every agency having a clear strategic position on AI adoption, communicated to staff
  • an accountable officer assigned to every in-scope AI use case
  • an internal register of in-scope use cases (see the sketch below this list)
  • mandatory use of a standard AI impact assessment tool
  • an AI Review Committee to oversee higher-risk proposals and drive consistent standards across government
  • clearer expectations of external providers who deliver services involving AI.
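
To make those obligations concrete, here is a minimal, hypothetical sketch (in Python) of what a single register entry might capture. The field names, risk tiers and example values are our own assumptions for illustration; the plan prescribes the obligations, not a data model.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        """Illustrative tiers only; agencies would derive them from the standard impact assessment."""
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class AIUseCase:
        """One entry in a hypothetical internal register of in-scope AI use cases."""
        name: str                     # e.g. "Summarising public submissions"
        accountable_officer: str      # the named officer tied to this use case
        risk_tier: RiskTier           # outcome of the standard AI impact assessment
        impact_assessment_done: bool  # mandatory before deployment
        assessed_on: date | None = None

    # The register itself is then just a queryable collection of entries.
    register: list[AIUseCase] = [
        AIUseCase(
            name="Summarising public submissions",
            accountable_officer="Director, Policy Analysis",
            risk_tier=RiskTier.LOW,
            impact_assessment_done=True,
            assessed_on=date(2025, 11, 1),
        ),
    ]

The point is less the code than the discipline it encodes: every entry carries a named officer and a completed assessment before anything ships.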

You can read that as government saying: “We want adoption to move faster, but not at the expense of trust or consistency.”

On tools, the plan commits to safe sector-wide access. The Department of Finance has separately flagged the rollout and expansion of secure, sovereign APS tooling (including GovAI and a government-controlled chat capability) as part of delivering that pillar.

So what has actually changed since 2024?

The 2024 policy baseline introduced core requirements around accountability, transparency and foundational training.

The 2025 APS plan then goes further by making several elements explicitly mandatory and systematised:

  • impact assessments are now required for in-scope use cases
  • accountability is tied to individual use cases, with named officers
  • internal registers become standard practice
  • the AI Review Committee adds a consistent cross-government checkpoint for higher-risk deployments
  • and agencies must articulate and share a clear internal position on AI adoption.

Less “what good looks like,” more “how to do it, repeatedly and safely.”

If you’re an APS agency, here’s the practical takeaway

There are three short-term objectives.

1) Get your authorising environment sorted.

  • nominate or confirm your Chief AI Officer and their decision rights
  • publish (internally) your agency line on AI use: what’s encouraged, what’s restricted, and where staff go for help
  • make sure governance isn’t “special project” territory; it needs to be normalised.

2) Make governance repeatable.

  • embed the AI impact assessment into delivery lifecycles
  • stand up and maintain your internal register
  • assign accountable officers to in-scope use cases early
  • define escalation into the AI Review Committee for higher-risk work (a rough sketch follows this list).
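
As a thought experiment, that escalation logic might look something like the following sketch. The gate conditions, tier labels and messages are our assumptions for illustration; the plan mandates the assessment and the committee, not any particular mechanics.

    def ready_to_proceed(
        impact_assessment_done: bool,
        accountable_officer: str,
        risk_tier: str,  # e.g. "low", "medium" or "high", per the standard assessment
    ) -> tuple[bool, str]:
        """Hypothetical lifecycle gate: the checks run before an AI use case ships."""
        if not impact_assessment_done:
            return False, "Complete the standard AI impact assessment first."
        if not accountable_officer:
            return False, "Assign an accountable officer to this use case."
        if risk_tier == "high":
            return False, "Escalate to the AI Review Committee for endorsement."
        return True, "Proceed under agency-level governance."

    # Example: a low-risk drafting tool with its paperwork in order can proceed.
    ok, reason = ready_to_proceed(True, "Director, Service Delivery", "low")
    print(ok, reason)

Embedding a gate like this into the delivery lifecycle is what turns the impact assessment from a paper exercise into routine practice.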

3) Build capability with a purpose, not just training for training’s sake.

  • add role-specific learning for policy, legal, procurement, audit, tech and service design
  • set up a small “AI helpdesk” function to coach teams and triage risk
  • focus on a small number of safe, measurable use cases first (drafting support, search/knowledge retrieval, summarisation, low-risk analytics augmentation)
  • agree success measures with service owners and Comms early, so wins are credible.

And don’t forget procurement hygiene: agency statements of work and data clauses need to line up with the plan’s expectations for providers using AI in service delivery.

What's missing from these new plans?

We've split this into two buckets.

1) Things the documents don’t spell out clearly.

  • Funding and resourcing: the APS plan materials don’t describe a specific funding envelope for experimentation or scale-up. That may widen gaps between large and small agencies unless something else fills the void.
  • Public performance measures: the National plan summary lays out goals but doesn’t set portfolio-level “success metrics” for improved public services.

2) Genuine design trade-offs still in play.

  • Central oversight vs agency autonomy: the Review Committee helps consistency, but too much central gating could slow delivery. The balance will determine whether adoption accelerates or bogs down.
  • Risk-based gating: impact assessments should focus effort where risk is higher. The trick will be avoiding unnecessary drag on low-risk, high-volume use (like everyday drafting support).
  • Tooling strategy: sovereign, shared services lift baseline safety and access, but agencies still face integration headaches and data-quality constraints. Those dependencies aren’t detailed in the public extracts, so agencies will need to work through them locally.

What to keep an eye on next

You’ll know this is working when you start seeing:

  • agencies publishing and standing behind internal AI positions
  • impact assessments and registers used as routine, not paper exercises
  • the AI Review Committee visibly operating, with early precedent-setting decisions
  • broader rollout of safe AI access and foundational learning across agencies
  • and most importantly: a handful of measurable, low-risk service improvements that can be replicated across government.

The real test now is simple: if agencies can turn these commitments into safe, measurable service improvements in the next 12 months, the Government will have earned the right to scale AI with public trust.