As an accounting firm coach, consultant and mentor, it would surprise no one that one of the things I get asked about a lot is AI and how to use it to good effect inside firms. I’ve also been a little disappointed in some instances that firms have not invested time in understanding this topic. AI has become an arms race and it is not going away. There are some challenges, but there are also lots of opportunities for AI to make accountants’ lives better. So, with a bit of help from ChatGPT, here are my thoughts on some of the foundations to have in place in your firm.

AI is increasingly being embedded in everyday tools such as Microsoft 365, Xero, MYOB, document management systems and practice management platforms. This is happening fast and it can be hard to keep up! Lean on the vendors and understand what they are doing.

I believe the firms gaining real value from AI are not simply “turning it on”. They are deliberately putting foundations in place to ensure AI is used safely, confidently and productively. Achieving this requires more than curiosity—it requires structure.

Outlined below are the key elements firms should have in place to support effective and sustainable use of AI.

1. A Clear AI Use Policy (Practical, Not Legalistic)

An AI use policy is the starting point. Its purpose is not to restrict innovation, but to provide clarity and confidence and to put some guard rails in place.

A good policy should address:

  • What tools are approved (e.g. ChatGPT Team, Microsoft Copilot, specific tax or research tools)
  • What data must never be entered (client names, TFNs, financial statements, identifiable information unless explicitly approved)
  • Where AI output can and cannot be used (e.g. draft emails and summaries vs final tax advice)
  • Human review requirements—AI assists, it does not sign off

In the Australian context, the policy should reference:

  • Privacy Act obligations
  • Confidentiality under APES 110
  • Risk management requirements under APES 325

Importantly, the policy should be short, readable and reviewed regularly. (AI is moving so fast I reckon a review every month or two is wise.) A policy no one reads is not a control—it’s a risk.

 

2. Training Focused on Practical Use, Not Just Awareness

Many firms run an initial “AI awareness” session and stop there. The firms seeing real gains invest in ongoing, role-based training.

Effective training includes:

  • How to write effective prompts (context, role, output format)
  • Where AI is strong (summaries, drafts, checklists, explanations)
  • Where AI is weak (technical interpretation, edge cases, judgement)
  • Real firm examples: emails, meeting notes, workflow instructions, file reviews

Training should differ by role:

  • Partners: strategy, risk, client communication
  • Managers: review support, delegation, quality control
  • Accountants: drafting, research support, explanations

AI capability is a skill, not a one-off event. I believe each and every person in an accounting firm will need to develop the skill of working with AI tools.

3. Permission and Psychological Safety to Experiment

One of the most overlooked elements is permission to play.

Team members need to know:

  • They are allowed to experiment
  • They will not be criticised for “getting it wrong”
  • Learning is valued over perfection

Firms that progress quickly often:

  • Allocate short “AI sandbox” time
  • Encourage sharing of wins and failures
  • Normalise experimentation in team meetings

Without psychological safety, AI adoption stalls quietly. Don’t let that happen in your firm.

4. Paid Licences for Approved Tools (Not Personal Accounts)

Free tools are tempting—but they create risk, inconsistency and fragmentation.

Firms should invest in:

  • Paid, business-grade licences (e.g. ChatGPT, Microsoft Copilot, or potentially Gemini if your firm works in the Google ecosystem)
  • Centralised control of access
  • Clarity over data handling and retention

This achieves three things:

  1. Better security and compliance
  2. Consistent capability across the firm
  3. A signal that AI is a supported, strategic tool—not a side hobby

Work with your IT support people (internal and external) to get their input on what is appropriate from a security standpoint.

5. A Shared Prompt Library (Your Hidden Productivity Asset)

One of the highest-return investments is a central library of proven prompts.

These might include:

  • Client email templates (extensions, queries, explanations)
  • File review checklists
  • Meeting note summaries
  • First drafts of advice letters
  • Training explanations for less experienced team members

Over time, this becomes institutional knowledge, not individual IP. It also dramatically reduces variability in output quality. A simple SharePoint or Teams channel is often sufficient.

6. Clear Review and Quality Control Expectations

AI does not remove professional responsibility.

Firms should be explicit that:

  • AI output is a draft, not an answer
  • Review standards are unchanged
  • Responsibility remains with the accountant or partner

Some firms introduce:

  • Mandatory “AI used” flags on workpapers
  • Checklist items confirming review
  • Guidance on verifying sources and assumptions

This protects both the firm and the profession.

7. Alignment with Workflow and Practice Management

AI works best when embedded into how work already flows.

Examples include:

  • Using AI to summarise client meetings into CRM notes
  • Drafting task instructions inside practice management systems
  • Preparing first-pass file reviews before manager sign-off
If AI sits outside normal workflows, usage remains sporadic.

8. Leadership Modelling and Sponsorship

Finally, AI adoption is cultural.

When partners:

  • Use AI themselves
  • Share examples openly
  • Talk about learning, not just efficiency

…it signals that AI is part of the firm’s future, not a passing experiment.

Team members take cues from behaviour far more than from policies.

AI Readiness Checklist

(Use as a quick diagnostic – Yes / In progress / No)

Governance & Risk

  • AI use policy in place and understood
  • Clear rules on client data and confidentiality
  • Mandatory human review of AI output
  • Alignment with APES 110 and APES 325

Tools & Access

  • Paid, business-grade AI tools approved
  • No reliance on personal AI accounts
  • Centralised access management
  • Understanding of data storage and retention

Capability & Culture

  • Staff trained in practical prompting
  • Permission to experiment safely
  • Leaders actively use AI themselves
  • AI learning shared across the firm

Knowledge & Workflow

  • Shared prompt library exists
  • AI embedded into daily workflows
  • Consistent tone and quality of output
  • AI knowledge treated as firm IP