
Agentic AI: The next big thing for accountants?

As the world continues to digest the rise of generative AI, agentic AI lies waiting on the bleeding edge, and while few accounting firms are using it at the moment, major players in the space have already made significant investments in what they believe to be the next step in the AI revolution. 

Very broadly, an AI agent is software that is capable of at least some degree of autonomy to make decisions and interact with things outside itself in order to achieve some sort of goal—whether booking a flight, sending a bill or buying a gift—without needing constant human guidance.

The concept of an AI agent is not new: computer scientists and software engineers have used the term for years, and such agents already appear in commercial applications. Sage Copilot, for instance, uses purpose-built AI agents, each with its own area of specialization, whose efforts are coordinated by Copilot itself, which acts as an interpreter between the agents and the human users requesting that they perform a task.

AI using AI (Antony Weerut – stock.adobe.com)

Given this, one might wonder why interest in agentic AI and AI agents seems to have risen only towards the end of last year (at least as measured by the volume of Google searches on the subject). One answer is that while agents have been used for years, advances in generative AI made them much easier to create and deploy, according to Hamid Vakilzadeh, an accounting professor at the University of Wisconsin-Whitewater who has written extensively about AI.

“Agents have been around but we had to program them in a logical format. But because of large language models who can understand natural language, it makes it much more flexible to create very sophisticated systems without having to know so much coding, and it’s much easier to implement on a larger scale,” he said. 

He contrasted this with classic AI models. 

“If you look at a machine learning thing like recommendations, those are pretty sophisticated, they’re pretty useful in today’s market in entertainment but those’re not really doing a task, they make up a menu like if you need to find Christmas movies. They don’t make a decision by themselves, you make the final decision that I am going to see this, they propose but you make the decision. [In contrast] an AI agent can accomplish a task,” he said. 

Beyond this, advances in generative AI have also made AI agents themselves more effective in the field. Pascal Finette, co-founder and "chief heretic" at tech advisory firm Be Radical, contrasted this with robotic process automation. While he said RPA is not to be underestimated even today, it tends to be very rigid in its setup, operating mostly on if/then/else rules that work very well for defined use cases but struggle in the face of unstructured data or unusual edge cases. Agents, bolstered with generative AI, become much more flexible.
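
To make that contrast concrete, here is a minimal, hypothetical Python sketch: the rule-based function mirrors a classic RPA setup, while the agent-style function hands unstructured text to a model. The call_llm() function and the category names are illustrative placeholders, not any vendor's actual API.

```python
# Illustrative contrast between rigid RPA rules and an LLM-backed agent step.
# call_llm() is a stand-in for whatever model a firm actually uses.

def route_invoice_rpa(vendor: str) -> str:
    # Classic RPA: if/then/else rules that work for known vendors
    # but fall back to manual review for anything unexpected.
    if vendor == "Acme Office Supply":
        return "office_supplies"
    elif vendor == "Cloud Hosting Inc":
        return "it_services"
    else:
        return "needs_manual_review"

def call_llm(prompt: str) -> str:
    # Placeholder for a generative AI call that can read unstructured text.
    return "office_supplies"

def route_invoice_agent(invoice_text: str) -> str:
    # Agent-style routing: pass the raw invoice text to the model and ask it
    # to pick a category, even for vendors it has never seen before.
    return call_llm(f"Classify this invoice into an expense category: {invoice_text}")

if __name__ == "__main__":
    print(route_invoice_rpa("Unknown Vendor LLC"))                      # needs_manual_review
    print(route_invoice_agent("Unknown Vendor LLC, 3 reams of paper"))  # office_supplies
```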

“The reason I think why this is happening is we now have this superpower of an LLM which allows us to look at the world and look at data in a much more unstructured way and still get some really interesting insight from it which we can then use to automate stuff, to execute on our behalf. … the beauty of LLMs and gen AI is it has the flexibility to be able to actually create meaningful interactions,” he said. 

David Wood, an accounting professor at Brigham Young University whose research also heavily involves AI, noted that ‘agents’ can be thought of as a framework for applying technology: agents are programmed to do a task and can use other tools to accomplish it. Rather than being some sort of evolution of traditional RPA or generative AI, an agent is something that will use RPA and generative AI. 

“This is a different framework for how we do programming. We program an agent to do something, it could be to use generative AI, it could be to do a machine learning algorithm, it could be to simply change the color of the font. You can program an agent to do what you want and agents can work together or even compete against each other to do something, so it is not just a generative AI topic but highly valuable now because agents can use generative AI,” he said. 
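
A minimal sketch of that framing, assuming nothing about any particular product: the "agent" below is just a dispatcher that picks a tool for each task, where one tool stands in for a generative AI call, one for a machine learning model and one for a trivial formatting change. All function names are hypothetical.

```python
# Hypothetical sketch of an agent as a framework that dispatches to tools.
from typing import Callable, Dict

def summarize_with_llm(text: str) -> str:
    # Stand-in for a generative AI call.
    return f"[summary of {len(text)} characters of text]"

def classify_expense(description: str) -> str:
    # Stand-in for a classic machine learning classifier.
    return "travel" if "flight" in description.lower() else "general"

def change_font_color(document: str, color: str) -> str:
    # Stand-in for a trivial deterministic tool.
    return f"<span style='color:{color}'>{document}</span>"

TOOLS: Dict[str, Callable[..., str]] = {
    "summarize": summarize_with_llm,
    "classify": classify_expense,
    "recolor": change_font_color,
}

def run_agent(task: str) -> str:
    """Pick a tool based on the task description and run it."""
    if "summarize" in task:
        return TOOLS["summarize"](task)
    if "classify" in task:
        return TOOLS["classify"](task)
    return TOOLS["recolor"](task, "blue")

if __name__ == "__main__":
    print(run_agent("classify this expense: flight to the client site"))  # travel
```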

This increased flexibility has led to major investments in the technology from significant players. Big Four firm KPMG, for example, announced in October a minority equity investment in Ema, an agentic AI startup building universal AI employees, as part of the firm’s overall vision of action-oriented assistants working seamlessly alongside and augmenting human teams. 

Around the same time, accounting solutions provider Thomson Reuters announced it had acquired Materia, a U.S.-based startup that specializes in the development of an agentic AI assistant for the tax, audit and accounting profession. The acquisition, which complements Thomson Reuters’ AI roadmap, accelerates the company’s vision of providing generative AI tools to the professions it serves.

That same month, Microsoft announced the addition of its own agentic AI capabilities, namely the ability for users to create their own autonomous AI agents with Copilot Studio, as well as the release of ready-made agents in Dynamics 365 that can handle things such as sales, finance and supply chain management. 

Despite these high-profile announcements, though, the field is very young, with many applications still in the experimental phase. Finette said it isn’t even necessarily bleeding edge so much as jagged edge. However, based on announcements like these, it appears this is the direction the AI community wants to go next. 

Wood agreed, saying there are not a lot of agentic AI solutions right now that are fully production-ready, but he sees great potential in the technology once it matures. For example, many accounting firms bill based on how time is spent, which can be very time-consuming to track effectively. Agentic AI would be able to observe an accountant working on company A for 45 minutes and company B for 60 minutes and bill accordingly. He said this might lead people to get rid of timekeeping altogether because a computer can do it for them. 

He also raised the idea that it could greatly increase efficiency for audits. Imagine, he said, if an agentic AI bot could automatically do most audit confirmations, send them to the humans for approval, and flag the things it couldn’t do itself, “so you could build tools for end to end processes to do full tasks together.” 
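
A hedged sketch of what that end-to-end confirmation workflow might look like in code, with the agent drafting routine confirmations, queuing them for human approval and flagging exceptions. The data model and the flagging rule here are purely illustrative.

```python
# Hypothetical human-in-the-loop confirmation workflow.
from dataclasses import dataclass
from typing import List

@dataclass
class Confirmation:
    customer: str
    balance: float
    status: str = "pending"

def draft_confirmations(receivables: List[Confirmation]) -> None:
    # The agent handles routine items and flags anything it can't decide.
    for item in receivables:
        if item.balance < 0:
            item.status = "flagged_for_auditor"
        else:
            item.status = "awaiting_human_approval"

def approve_and_send(item: Confirmation) -> None:
    # The human auditor stays in the loop for the final send.
    if item.status == "awaiting_human_approval":
        item.status = "sent"

if __name__ == "__main__":
    batch = [Confirmation("Acme Co", 12500.00), Confirmation("Globex", -300.00)]
    draft_confirmations(batch)
    for c in batch:
        approve_and_send(c)
    print([(c.customer, c.status) for c in batch])
```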

Finette also saw great potential, saying it could act like a full AI worker capable of complex tasks. He said people eventually should be able to go to their AI and say they’re having a meeting in two days with someone and they need a flight and a hotel within their preferred parameters (e.g., cost and distance). The AI would then perform all the research, compare prices, maybe even generate its own spreadsheet to aggregate the options, then make a judgment call on which flight to book and which hotel to reserve, and actually do it. While an agent might struggle with novel tasks, it should be able to handle most routine work. 

“You can translate that into a tax practice, where you have these complex workflows which a human breaks down into individual steps, each influencing the next: you take a document, extract the info from the document, put it into your accounting system, classify it, and do the booking in the system. All of this in theory agentic AI should be able to do for you,” said Finette.  
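
A minimal sketch of that document-to-booking chain, under the assumption that extraction and classification are delegated to a model and the final step posts to an accounting system. Every function here is a labeled stand-in, not a real integration.

```python
# Hypothetical agentic pipeline: extract -> classify -> book.
from typing import Dict

def extract_fields(document_text: str) -> Dict[str, str]:
    # Stand-in for model-based extraction of vendor, date and amount.
    return {"vendor": "Example Vendor", "date": "2025-01-15", "amount": "149.00"}

def classify_transaction(fields: Dict[str, str]) -> str:
    # Stand-in for mapping the transaction to a chart-of-accounts category.
    return "6200 - Office Supplies"

def book_entry(fields: Dict[str, str], account: str) -> str:
    # Stand-in for posting the entry to the accounting system.
    return f"Booked {fields['amount']} from {fields['vendor']} to {account}"

def process_document(document_text: str) -> str:
    fields = extract_fields(document_text)
    account = classify_transaction(fields)
    return book_entry(fields, account)

if __name__ == "__main__":
    print(process_document("Invoice #123 from Example Vendor, $149.00, Jan. 15, 2025"))
```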

Other accounting-specific applications he could envision include anything that has to do with data entry, reconciliation of accounts and classification of information in systems, as well as expense management, which he said is already semi-automated. 

“Right now it is semi-automated where you upload something into Expensify or something and it does image recognition on those expenses and pulls them in, but in the future it should do the report for me, there’s no reason why it should not take all this information and put the report together and submit it on my behalf,” he said. 

However, Wood warned that agentic AI will still carry many of the risks that generative AI has today, especially the risk of making up information or being inconsistent with its outputs. While companies might promise a genie-like wish fulfillment, it will be especially important for people to understand the limitations of this technology. 

“These systems interact, you won’t always get a deterministic outcome like they think computers will generate, it’s not a calculator, so if you give an agent the ability to be creative, sometimes it might produce output A and sometimes produce output B, and in accounting and business that can be a great strength, like in marketing, but when you do a tax form you don’t want that, you want income to be correct every single time. So the risk is that everyone gets hyped up and excited and applies it in the wrong place, you gotta use these tools and understand their strengths and weaknesses,” he said. “It’s sort of like gen AI right now, they think it will solve everything, but it solves these specific sets of issues and problems, so knowing where to use it and how to use it will be important.” 

Finette agreed that the tendency of generative AI to make up information would still be a risk, and that there will likely be a lot of hype trying to minimize this risk as well. But he also noted that the fact that agents can actually act semi-autonomously and make decisions means the consequences of these risks can be bigger. 

“In the flight booking use case, do you really trust the AI to actually book the flight for you? Will you be on the right flight at the right time? This is a silly example but a very real one,” he said. “The other [risk] is when you let AI make ‘moral’ decisions like letting AI do promotion decisions or suggesting out of 100 people here are the people who are top performers, where you get into issues like bias, which we know exists in AI. So all the issues we have with AI will be amplified with agentic AI.” 

While theoretically these AI agents will be supervised by humans, Finette wondered about the degree to which people will actually do so, especially when AI can be so convincing in its reasoning even when wrong. 

“These systems are so overly confident in their responses it is hard for some humans to step back and say don’t trust it. We all have experiences where you use ChatGPT and it tells you something wrong but they tell it to you in such a convincing way that if you didn’t have the knowledge you’d take it as gospel. … It is amplified if you let the system execute on this information. The human challenge is, and there are a bunch of research papers showing AIs are as convincing or even more so than humans, we need to get our workforce to understand that they should tread with caution and not let the AI bully you into a corner,” he said. 

IAASB tweaks standards on working with outside experts

The International Auditing and Assurance Standards Board is proposing to tailor some of its standards to align with recent additions to the International Ethics Standards Board for Accountants’ International Code of Ethics for Professional Accountants when it comes to using the work of an external expert.

The proposed narrow-scope amendments involve minor changes to several IAASB standards:

  • ISA 620, Using the Work of an Auditor’s Expert;
  • ISRE 2400 (Revised), Engagements to Review Historical Financial Statements;
  • ISAE 3000 (Revised), Assurance Engagements Other than Audits or Reviews of Historical Financial Information;
  • ISRS 4400 (Revised), Agreed-upon Procedures Engagements.

The IAASB is asking for comments by July 24, 2025, via a digital response template that can be found on the IAASB website.

In December 2023, the IESBA approved an exposure draft of proposed revisions to its Code of Ethics related to using the work of an external expert. The proposals included three new sections to the Code of Ethics, with provisions for professional accountants in public practice, professional accountants in business, and sustainability assurance practitioners. The IESBA approved the provisions on using the work of an external expert at its December 2024 meeting, establishing an ethical framework to guide accountants and sustainability assurance practitioners in evaluating whether an external expert has the competence, capabilities and objectivity necessary for their work to be used, as well as provisions on applying the Ethics Code’s conceptual framework when using the work of an outside expert.  

Tariffs will hit low-income Americans harder than richest, report says

President Donald Trump’s tariffs would effectively cause a tax increase for low-income families that is more than three times higher than what wealthier Americans would pay, according to an analysis from the Institute on Taxation and Economic Policy.

The report from the progressive think tank outlined the outcomes for Americans of all backgrounds if the tariffs currently in effect remain in place next year. Those making $28,600 or less would have to spend 6.2% more of their income due to higher prices, while the richest Americans with income of at least $914,900 are expected to spend 1.7% more. Middle-income families making between $55,100 and $94,100 would pay 5% more of their earnings. 

Trump has imposed the steepest U.S. duties in more than a century, including a 145% tariff on many products from China, a 25% rate on most imports from Canada and Mexico, duties on some sectors such as steel and aluminum and a baseline 10% tariff on the rest of the country’s trading partners. He suspended higher, customized tariffs on most countries for 90 days.

Economists have warned that costs from tariff increases would ultimately be passed on to U.S. consumers. And while prices will rise for everyone, lower-income families are expected to lose a larger portion of their budgets because they tend to spend more of their earnings on goods, including food and other necessities, compared to wealthier individuals.

Food prices could rise by 2.6% in the short run due to tariffs, according to an estimate from the Yale Budget Lab. Among all goods impacted, consumers are expected to face the steepest price hikes for clothing at 64%, the report showed. 

The Yale Budget Lab projected that the tariffs would result in a loss of $4,700 a year on average for American households.

At Schellman, AI reshapes a firm’s staffing needs

Artificial intelligence is just getting started in the accounting world, but it is already helping firms like technology specialist Schellman do more things with fewer people, allowing the firm to scale back hiring and reduce headcount in certain areas through natural attrition. 

Schellman CEO Avani Desai said there have definitely been some shifts in headcount at the Top 100 Firm, though she stressed it was nothing dramatic, as it mostly reflects natural attrition combined with more selective hiring. She said the firm has already made an internal decision not to conduct a reduction in force, as that would just indicate it didn’t hire properly the first time. 

“It hasn’t been about reducing roles but evolving how we do work, so there wasn’t one specific date where we ‘started’ the reduction. It’s been more case by case. We’ve held back on refilling certain roles when we saw opportunities to streamline, especially with the use of new technologies like AI,” she said. 

One area where the firm has found such opportunities has been in the testing of certain cybersecurity controls, particularly within the SOC framework. The firm examined all the controls it tests on the service side and asked which ones require human judgment or deep expertise. The answer was a lot of them. But for the ones that don’t, AI algorithms have been able to significantly lighten the load. 

“[If] we don’t refill a role, it’s because the need actually has changed, or the process has improved so significantly [that] the workload is lighter or shared across the smarter system. So that’s what’s happening,” said Desai. 

Outside of client services like SOC control testing and reporting, the firm has found efficiencies in administrative functions as well as certain internal operational processes. On the latter point, Desai noted that Schellman’s engineers, including the chief information officer, have been using AI to help develop code, which means they’re not relying as much on outside expertise on the internal service delivery side of things. There are still people in the development process, but their roles are changing: They’re writing less code, and doing more reviewing of code before it gets pushed into production, saving time and creating efficiencies. 

“The best way for me to say this is, to us, this has been intentional. We paused hiring in a few areas where we saw overlaps, where technology was really working,” said Desai.

However, even in an age awash with AI, Schellman acknowledges there are certain jobs that need a human, at least for now. For example, the firm does assessments for the FedRAMP program, which is needed for cloud service providers to contract with certain government agencies. These assessments, even in the most stable of times, can be long and complex engagements, to say nothing of the less predictable nature of the current government. As such, it does not make as much sense to reduce human staff in this area. 

“The way it is right now for us to do FedRAMP engagements, it’s a very manual process. There’s a lot of back and forth between us and a third party, the government, and we don’t see a lot of overall application or technology help… We’re in the federal space and you can imagine, [with] what’s going on right now, there’s a big changing market condition for clients and their pricing pressure,” said Desai. 

As Schellman reduces staff levels in some places, it is increasing them in others. Desai said the firm is actively hiring in certain areas. In particular, it’s adding staff in technical cybersecurity (e.g., penetration testers), the aforementioned FedRAMP engagements, AI assessment (in line with recently becoming an ISO 42001 certification body) and in some client-facing roles like marketing and sales. 

“So, to me, this isn’t about doing more with less … It’s about doing more of the right things with the right people,” said Desai. 

While these moves have resulted in savings, she said that was never really the point, so whatever the firm has saved from staffing efficiencies it has reinvested in its tech stack to build its service line further. When asked for an example, she said the firm would like to focus more on penetration testing by building a SaaS tool for it. While Schellman has a proof of concept developed, she noted it would take a lot of money and time to deploy a full solution — both of which the firm now has more of because of its efficiency moves. 

“What is the ‘why’ behind these decisions? The ‘why’ for us isn’t what I think you traditionally see, which is ‘We need to get profitability high. We need to have less people do more things.’ That’s not what it is like,” said Desai. “I want to be able to focus on quality. And the only way I think I can focus on quality is if my people are not focusing on things that don’t matter … I feel like I’m in a much better place because the smart people that I’ve hired are working on the riskiest and most complicated things.”
