AI has officially crossed the line from “interesting” to “expected.”
Clients often assume you’re using it, and your competitors already are. The truth is, AI has become table stakes, and the firms that refuse to touch it will increasingly feel slower, more expensive, and harder to scale.
But there’s a second truth that matters just as much: unmanaged AI is a growing liability for all businesses, accounting firms included.
According to Gartner, unmanaged or “shadow” AI is already present in nearly 70% of organizations, and security or compliance incidents linked to it could affect thousands of businesses over the next few years.
In a recent episode of Anchor’s Prep Your Firm webinar series, Duncan Will, Loss Prevention Director & A&A Specialist at CAMICO Mutual, discussed this at length: sitting on the sidelines is no longer a safe option, but neither is using AI without guardrails.
The safe route is using AI with the same discipline you use everywhere else in your practice: confidentiality first, review always, and clear rules for your staff.
AI is a tool. Treat it like one. The problems start when the tool becomes a shortcut.
Key takeaways
- AI is no longer optional, but unmanaged AI is a liability: Firms that avoid AI fall behind, and firms that use it without controls take on unnecessary professional risk.
- Confidentiality is non-negotiable: You can’t treat a public AI tool like a private workspace, so client data must be protected by policy and settings.
- “Set it and forget it” is a trap: AI tools can change their defaults with updates, so your data controls need to be verified regularly.
- AI output needs review discipline, not trust: AI drafts can sound right even when they’re wrong, so due care and professional judgment still sit with your team.
Why AI suddenly feels “mandatory”
A couple of years ago, a lot of firms could still say, “We’re watching it. We’ll adopt when it’s safer.” And that sounded like a perfectly prudent posture.
But that’s no longer the environment. AI has already started touching the day-to-day mechanics of firm delivery, and it’s showing no signs of letting up.
What changed is the baseline. Clients have been trained by every other industry to expect faster turnaround, cleaner communication, and fewer back-and-forth steps. Meanwhile, staff have started using AI in their personal workflows, which means they bring those expectations into your firm whether you formalize it or not.
In his session, A CPA’s Guide to Avoiding Trouble in 2026, Will isn’t telling firms to adopt every new AI tool. He’s saying the firms that wait will be competing against firms that can deliver the same work faster and more efficiently.
And in a profession where clients already push on fees, responsiveness, and speed, “we don’t use AI” is not a positioning statement. It’s an explanation you’ll have to keep giving, over and over, while other firms quietly get faster.
That’s why AI feels mandatory now. Not because it’s perfect, and not because it replaces judgment, but because the market is moving. The real decision is whether your firm deliberately adopts AI, or whether it seeps in informally without standards.
The real liability: Client confidentiality meets default settings
If you want one line that sums up professional risk in the AI era, it’s this:
Client confidentiality is fundamental. AI does not get a special exception.
Will’s warning here was direct. You can’t share client information with a large language model without ensuring you’re not creating a confidentiality problem for yourself or your client. In public accounting, that’s not “nice to do.” It’s the foundation.
So where do firms usually get burned? Assumptions.
- Someone assumes their AI tool is private.
- Someone assumes “we turned that setting off once.”
- Someone assumes that a paid plan guarantees their data won’t be used in ways they never intended.
- Someone assumes the team knows what not to paste into a prompt.
Then an update rolls out, and the defaults change.
Will called out a specific behavior pattern that’s worth turning into policy language: “set it and forget it” does not work. Some AI tools change settings when versions update. You might have disabled “train on my data,” and a later update flips that default without making it obvious.
The point isn’t the exact menu path. It’s the habit: your firm needs a recurring check. Not a one-time setup.
Treat AI tools the same way you treat software permissions and security controls. Verify them. Re-verify them. Document who owns that check.
AI can sound certain even when it’s wrong
The second liability is quieter, and it’s already showing up in how people work.
AI is confident.
Will reminded the audience of something every firm owner has seen: an AI response can be wrong and still sound completely sure of itself. That tone is part of what makes it dangerous under pressure. When the output reads clean and assertive, people trust it. They move on. They ship it.
That’s where professional standards collide with modern speed.
You’re still required to exercise due care and professional judgment. AI doesn’t reduce that obligation. If anything, it increases the need for review because it introduces a new failure mode: a plausible answer that’s still incorrect.
This is also where “AI as a shortcut” sneaks in.
During busy season, nobody sets out to be reckless. But when time is tight, sending something that merely looks right becomes harder and harder to resist.
Will said he sees that exact rush-to-solve mindset show up in claims history, and he expects AI to make it worse. Not because AI is evil, but because it’s efficient, and efficiency amplifies existing habits.
If your culture already leans toward rushing, AI speeds it up.
The rule that keeps you safe: Use AI like staff work
Here’s a practical way to frame it for your team.
AI output isn’t an answer. It is a staff draft.
If a junior staff member hands you a draft engagement letter clause, a client email, or a technical summary, you don’t rubber-stamp it. You review it. You apply judgment. You adjust it based on the context that the junior staff member lacks.
Will used a similar mental model. Take the AI suggestion the same way you would take a suggestion from someone helpful but inexperienced. It might be good. It might be incomplete. It might miss the nuance that matters. Your job is still to own the final call.
That brings us to the heart of the issue: AI must be treated as a tool, not a shortcut.
Tools make you faster. Shortcuts make you sloppy.
A firm that uses AI well isn’t the firm that uses it everywhere. It’s the firm that knows exactly where it belongs, what can go into it, and how outputs get reviewed before they become client-facing.
What “AI governance” looks like in a real firm
Many firms hear “governance” and picture bureaucracy. That’s not the goal.
The goal is a light system that prevents predictable mistakes.
Will’s broader risk management guidance in the session, even when he was talking about quality management standards, translates cleanly to AI adoption. The strongest theme was that modern systems aren’t static. They’re iterative. They require evaluation, adjustment, and documentation.
That mindset is exactly what AI needs inside a firm.
Here’s what that looks like in plain English:
- You assign ownership. Someone is accountable for approved tools, settings, and updates.
- You make rules explicit. Staff should not have to guess what is allowed.
- You build review into the workflow. AI cannot be the last step. It can be step one.
- You document decisions. If something goes wrong, you want to know why and fix the root cause, not just scold someone.
- You include junior staff. They see the “real workflow” and where shortcuts happen.
None of this requires a 30-page policy. In fact, a long policy is often a policy that never gets read. A tight, usable AI policy is usually a one-pager plus examples.
Don’t let “we’re careful” be your only control
Most firms believe they are careful. That’s not a control. Carefulness is a mindset. Controls are behaviors you can verify.
This is where many AI discussions go off track. Firm leaders say, “We told the team not to put client info into AI.” Then they move on.
Meanwhile, a staff member uses AI to rewrite an email and pastes in a full client paragraph without thinking. Or someone tries to “summarize” a sensitive document to save time. Or someone uses AI to draft an engagement clause and forgets to validate it against firm standards.
These aren’t edge cases. They’re normal human behavior under deadline pressure.
If you want AI to reduce work without increasing risk, you need controls that survive busy season.
That usually means:
- Clear examples of what counts as “client confidential information” in prompts
- Approved tools and approved settings, with someone responsible for verifying them
- A review expectation that AI-assisted client deliverables get the same review as non-AI work
- A simple escalation rule: if you are unsure, ask before you paste
Again, the point is not perfection. It is consistency.
Make AI part of your engagement letter story
One of the most actionable ideas in Will’s session was this: if your firm uses AI, say so in your engagement letters, in a way that protects you and sets expectations.
This doesn’t mean oversharing. It means you’re not letting AI usage become a surprise later, after a client is upset and looking for leverage.
A broader engagement letter lesson from the session applies here, too. The purpose of documentation isn’t just to be understood. It’s to prevent misunderstanding.
That principle matters even more with AI because clients have their own assumptions about what AI means.
- Some clients hear “AI” and assume you are replacing humans.
- Some assume you are sending their data to public tools.
- Some assume AI means lower fees.
- Some assume AI means faster turnaround with no risk.
If you don’t define the boundaries, the client will.
Treat AI as part of your service delivery method. Clarify the limits. Reinforce the fact that professional judgment remains with your firm.
The reputation test: Speed isn’t the product
The session closed with a mantra that fits this topic perfectly: you can sell your services daily, but your reputation only once. No shortcuts.
This is the real tension AI introduces in accounting firms. It can help with productivity. It can help with drafting. It can help you scale certain repetitive tasks. But none of that is worth it if it weakens the two things firms actually sell:
Trust and judgment.
The firms that win with AI will be those that stay conservative where it matters and aggressive where it’s safe. They’ll automate the repeatable work, keep humans accountable for the decisions, and protect client data like it’s a crown jewel, because it is.
AI isn’t optional. Neither is discipline.
A practical starting point for the next 30 days
If you want to act on this without turning it into a massive internal project, here’s a simple implementation path based on the session’s core themes:
- Start with one approved AI tool. Do not start with five.
- Confirm settings and document them. Assign an owner and a recurring verification cadence.
- Write a one-page acceptable use guideline. Include examples of what cannot go into prompts.
- Add a review rule. If AI touches a client-facing deliverable, it must be reviewed like staff work.
- Add a short engagement letter clause. Keep it plain English and consistent with how you actually operate.
- Run one training session. Make it practical. Show good prompts, bad prompts, and what “confidential” really means.
- Then revisit in 60 days. AI changes quickly, and so do defaults.
That loop is the difference between AI as a tool and AI as a liability.
Make AI less risky by making work more systematic
One reason AI becomes risky within firms is that too much remains ad hoc. Engagement terms live in PDFs. Billing happens inconsistently. Invoicing depends on someone remembering. Payment follow-through depends on awkward manual steps.
When your systems are loose, people reach for shortcuts. AI becomes one more shortcut.
Anchor helps firms tighten the system between “work performed” and “cash collected” by connecting proposals, agreements, invoicing, payments, and amendments into a single automated flow. That means fewer manual steps, fewer opportunities for things to be missed, and more consistency in how client work becomes billed work.
If you want to reduce operational chaos while you adopt AI, start by systemizing the workflows that create the most stress. Billing and collections are usually at the top of the list.
FAQ
Is using AI in an accounting firm inherently risky?
Using AI is not inherently reckless. The risk comes from unmanaged use: putting client information into tools without confidentiality controls, trusting outputs without review, and letting staff guess what is acceptable.
What’s the biggest AI risk for firms right now?
Client confidentiality and overreliance. Confidentiality failures create professional exposure. Overreliance leads to technical errors that look “clean” and are shipped under deadline pressure.
How should firms review AI output?
Treat it like staff work. AI output is a draft. It requires due care, professional judgment, and review before it becomes part of a client deliverable.
Should firms disclose AI use to clients?
If your firm uses AI in ways that could affect deliverables or client expectations, it is smart to address it in engagement documentation, in plain English, consistent with your actual process.
What’s a simple first step to adopt AI safely?
Pick one approved tool, verify and document settings, publish a one-page acceptable use guideline with examples, and enforce a review rule for AI-assisted client-facing work.
Want to learn more? Book a call with one of our advisors and see how Anchor helps firms operationalize billing so getting paid is predictable, not reactive.