With some claiming that artificial intelligence could bring us biological immortality by 2030, it is not a major leap to think it could complete an advisory engagement end to end today, with no human input. It cannot, but clients who fervently believe it can may be less likely to call their human accountants when, in their mind, a chatbot would do just fine.
Joe Woodard, head of accounting coaching and education firm Woodard, said that this is part of how AI is slowly eating into accounting advisory services. The threat is not so much that the boss replaces everyone with an AI chatbot, but that the client decides an AI chatbot is good enough for their purposes and so never calls their accountant in the first place.
“Among ChatGPT adopters — which, let’s remember, is still a small percentage of the business community — people are asking AI first. If they have gray areas or need a judgment call, then they reach out to their CPA. … This is how AI is undercutting the value proposition of accountants. CPAs are increasingly becoming a second opinion rather than the first source of expertise,” said Woodard.
(See our feature story, “Staying ahead of AI.”)
This is a bad idea for several reasons. The obvious one is that the less a client calls their accountant, the fewer opportunities there are to offer the services that keep firms running. Another is that people believe AI can do things it simply cannot do today, which can lead to poor decisions that could have been averted if only they’d consulted their human advisor. Hannah Dameron, an estate attorney with ArentFox Schiff who has spoken at accounting conferences on how AI is impacting her field, noted that people who try to draft wills using AI might be very disappointed when those documents are tested in court.
“If you put in a ‘Draft a will for me’ type of prompt, it might not actually be appropriate for your situation and might not be up to date depending on when tax laws have changed or probate laws have changed. So I think there is still great value in working with [human professionals]. AI just might create a challenge in communicating that with the public more broadly,” she said. What is needed, first, is education on what exactly AI can and cannot do. Media portrayals have given people the impression that AI is close to fictional entities like Iron Man’s Jarvis or 2001’s HAL 9000, which simply does not match reality. And even closer to Earth, people may not realize that, as powerful as AI is today, it still comes with real limits that humans must compensate for.
“Right now, all we have is narrow AI, which means AI can answer highly specific questions. For example, if I ask, ‘What should my firm’s headcount be?’ AI can generate an informed response. But it can’t yet think holistically like a human does — it can’t ask leading questions, collect relevant information beyond the prompt, or make judgment-based decisions. AI can provide an analysis, but it’s not an advisor,” said Woodard.
David Zweighaft, a partner with RSZ Forensics, added that clients need to be aware that AI can be biased and, in the case of generative tools accessed through public models, extremely inconsistent. Because generative AI works on a probabilistic basis, even the same prompt can produce different results from one run to the next. “We need to be able to look at AI-generated output either that we commissioned or that the client has relied on and say, ‘OK, here’s the same data set; let’s run this through a different AI platform and see if you get the same results.’ … We can use AI to do that just as a safety check for us,” he said.
David Nelson, an estate planning specialist with Top 25 Firm Aprio, said that ensuring accountants remain the first opinion, not the second, means being proactive. Professionals should not wait around for clients to call and then lament that they’re using a chatbot instead; they should take the initiative to reach out when circumstances change in ways that might affect those clients.
“For example, if a client’s net worth is rising rapidly and there’s potential for increased estate tax, it’s important that our colleagues in other departments plant the idea: ‘Hey, you might want to talk to our estate planning team.’ That way, clients aren’t suddenly faced with estate planning questions and turning to the internet for answers instead of calling us. It’s about being proactive,” said Nelson.
He said misinformation has become a challenge: clients come in with ideas they got from ChatGPT or other tools, which can draw on outdated information. A chatbot might give clients a rough approximation of an answer, but it may lack nuance, so it falls to professionals like him to explain the complexities. Still, it is better for clients to come in and have that conversation than to simply accept the AI answer and not come in at all.
“That said, I don’t mind if a client comes in with their own idea. I haven’t personally encountered resistance when explaining why an AI-generated response might not fully address their needs. But AI’s growing presence in society means we’ll likely see more of this,” he said.
Lari Masten, a valuation specialist who heads Masten Valuation, raised a similar point. She described a client who relied on an AI’s answer, much to his own detriment. It wasn’t that he didn’t want to call her; it was the weekend and he was under intense time pressure, so he consulted a bot instead and went with its suggestion.
“And then later in the week, I hear from him and he’s like, ‘Yeah, by the way,’ and what he did was not what I would have ever recommended. And I said, ‘I wouldn’t necessarily have recommended that because here are the other things that you weren’t considering, right?’ and it was thankfully not something that was costly for him, but he was just like, ‘My gosh, I didn’t think about that,’” she said.

However, while she acknowledges that hers might be an unusual position, she doesn’t mind if people turn to AI for simple matters that are easily looked up online and save the complex questions for her. She’s not eager to spend all day answering basic questions when she could be doing higher-value work.

“I don’t have any issue at all, to be honest, with a client that has a question, but they can go ask ChatGPT or whoever they’re using. Because if it’s something simple and they can get the answer, that’s great. Because that frees me up; I can do better work. I can do higher-level stuff. I mean, I don’t want to be answering things like, ‘What’s the maximum amount that I can get this year?’ Go look it up. That’s easy, so I’m fine with that because then that makes my conversations with clients more meaningful,” she said.
(Read more: “AI in advisory: What work is at risk?”)
Both Masten and Nelson leveraged existing relationships to prove their value as human professionals. Woodard said this is ultimately what accountants will need to do if they want to remain the first opinion: if their idea of advisory consists of just handing people an analytics report, there’s not much of a relationship to draw on, and AI will indeed start eating into their client base. Building those relationships was always a good idea, but Woodard said it is now essential.
“AI will not disrupt advisory that is built on relationships, strategic insight, and deep business understanding. If your advisory work is truly advisory — guiding clients through long-term decision-making — it will endure. However, if your advisory service is transactional or just financial reporting with a cover sheet, that is highly disruptable. Our thought leadership has aligned around this: Advisory must take place in the context of a relationship. It must be strategic, rather than transactional. Up until now, that was a best practice; going forward, it will be a necessity for survival,” said Woodard.