
Responsible AI in accounting: Addressing firms’ top 5 concerns


Generative artificial intelligence is making inroads into the accounting industry, promising to greatly increase efficiency and productivity while offering real-time, deep insights that help improve performance. As firms deal with labor shortages and expand their services amid elevated client expectations, they are avidly exploring AI’s possibilities.

AI doesn’t come without caveats, particularly for accounting firms that handle their clients’ highly sensitive personal and financial information. Although generative AI’s potential benefits are considerable, firms should proceed cautiously and understand its impact on their business.

For all of its potential, AI may not immediately solve all of the industry’s challenges. As the initial excitement subsides, it’s critical that IT teams ensure that any AI initiatives align with the objectives of their stakeholders — including the firm itself, clients and regulatory bodies. 

The steps to implementing responsible AI

Building a responsible AI strategy starts with a clear understanding of the specific problems or opportunities the firm aims to address with AI, coupled with a commitment to educating leadership and employees on what AI can and cannot achieve. This foundation ensures AI is implemented and used thoughtfully, with resources aligned to deliver maximum impact. 

Accounting firms also need a strong data and analytics strategy to ensure their data is well-structured before implementing AI. Structured data is the backbone of responsible AI, enabling faster, more accurate insights and transforming data into a powerful decision-making tool. Without it, AI risks stumbling on inconsistencies and poor-quality data, leading to misguided outcomes and wasted resources. In short, well-structured data unlocks AI’s full potential.

Once these fundamentals are in place, firms can assess their current maturity and readiness for AI implementation. Using a Capability Maturity Model specific to knowledge work automation provides a structured framework for this purpose, helping firms evaluate their competencies across five key considerations when adopting new technologies:

  • Information strategy;
  • Governance/resourcing;
  • Technology/IT infrastructure;
  • Level of automation; and
  • End-user capabilities.

By using the model, firms can identify their capability levels in each category, ranging from beginner to advanced. For example, in the area of information strategy, a firm with minimal IT and business alignment may be considered a beginner, whereas one with integrated alignment across IT, business and executive functions may be classified as more advanced.
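To make the assessment concrete, here is a minimal sketch of how a firm might record its self-assessment across the five considerations. The category names mirror the list above; the 1–3 scoring scale and level names are assumptions for illustration, not part of any formal Capability Maturity Model.

```python
# Hypothetical maturity self-assessment across the five considerations.
# The 1-3 scale and level labels are assumed for illustration.
CATEGORIES = [
    "Information strategy",
    "Governance/resourcing",
    "Technology/IT infrastructure",
    "Level of automation",
    "End-user capabilities",
]

LEVELS = {1: "beginner", 2: "developing", 3: "advanced"}

def maturity_report(scores: dict) -> dict:
    """Map each category's 1-3 score to a named maturity level."""
    return {cat: LEVELS[scores[cat]] for cat in CATEGORIES}

scores = {
    "Information strategy": 1,          # minimal IT/business alignment
    "Governance/resourcing": 2,
    "Technology/IT infrastructure": 3,  # integrated IT/business/executive alignment
    "Level of automation": 2,
    "End-user capabilities": 1,
}
print(maturity_report(scores)["Information strategy"])  # → beginner
```

A structured record like this makes it easy to see, category by category, where the firm is ready for AI and where foundations are still missing.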

Responsible AI prioritizes safety, transparency and trustworthiness. Firms need to strike a delicate balance between innovation and security, which first requires a thorough evaluation of data connectivity, curation and confidentiality.

To properly incorporate responsible AI, there are five essential areas accounting firms should consider:

Protecting client privacy

Because safeguarding client information is the foundation of building trust with clients, privacy protections must be a top priority when accounting firms add solutions to their tech stack or develop new tools.

Firms can meet client expectations of confidentiality by practicing techniques like data minimization: handling only the minimum information required for a specific purpose. That reduces the risk of data breaches, privacy violations and misuse.
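Data minimization can be as simple as filtering a record down to the fields a task actually needs before anything is sent to an AI service. The sketch below illustrates the idea; the field names and record are hypothetical, not a real firm's schema.

```python
# Illustration of data minimization: pass an AI service only the fields
# a task needs. Field names and the record below are hypothetical.
REQUIRED_FIELDS = {"expense_category", "amount", "date"}  # assumed task needs

def minimize(record: dict) -> dict:
    """Drop everything except the fields required for the task."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

client_record = {
    "client_name": "Jane Doe",       # PII: excluded from the payload
    "ssn": "000-00-0000",            # PII: excluded from the payload
    "expense_category": "travel",
    "amount": 1250.00,
    "date": "2025-03-14",
}
prompt_payload = minimize(client_record)
assert "ssn" not in prompt_payload  # identifying data never leaves the firm
```

The point of the design is that exclusion is the default: anything not explicitly required never reaches the model, so a breach or misuse exposes far less.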

Firms should also never share client information on public platforms like ChatGPT, which are vulnerable to cybersecurity threats that the firm has no control over.

Guarding against bias

An AI model trains by analyzing enormous volumes of data and applying what it learns to perform its tasks. Data scientists and developers need to be wary of the information they use to train and build AI algorithms. If biases exist in the training data, those biases will be replicated in the AI model’s work, generating skewed or incorrect information.

For example, a model may be trained to scrutinize a particular account that has a history of misstatements while overlooking new accounts in the current year. Or it may apply a biased risk profile to particular groups of clients based on historical data rather than client-specific information. IT teams should scrutinize inputs and outputs regularly to detect biased results.
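One simple form of the scrutiny described above is comparing how often the model flags accounts across client groups. This is a minimal sketch under assumed inputs: the group labels, flag data and the notion of an acceptable disparity are all illustrative, not a standard audit procedure.

```python
# Hedged sketch of a basic bias check: compare the model's flag rate
# across client groups. Groups and flags below are made-up examples.
from collections import defaultdict

def flag_rates(results):
    """results: list of (group, flagged) pairs -> flag rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in results:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def disparity(rates: dict) -> float:
    """Ratio of highest to lowest group flag rate; near 1.0 is balanced."""
    return max(rates.values()) / min(rates.values())

results = [("new_client", True), ("new_client", False),
           ("legacy_client", True), ("legacy_client", True)]
rates = flag_rates(results)
print(disparity(rates))  # 1.0 / 0.5 = 2.0
```

A ratio well above 1.0, as in this toy example, suggests the model treats one group differently from another and warrants a closer look at the training data.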

Promoting trust through transparency

AI’s performance should not be a mystery; the models used by accounting firms should be simple, auditable and explainable. Explainable AI methods and tools can show how AI arrives at its decisions, allowing humans to understand the outcomes or identify and address potential issues. Establishing this level of transparency will help foster and demonstrate trust and respect with customers, users, and stakeholders.

Enforcing accountability

Better transparency enables better accountability. A user or group of users — which can include developers, deployers and even end users — should be assigned to regularly monitor and audit the firm’s AI models. They should be able to explain the rationale behind the AI’s outputs and perform updates or make adjustments to correct issues or errors. 

Redefining roles

The truth is that AI isn’t going to replace accountants, but it will redefine their roles. AI has the power to transform the way accountants work, freeing employees from mundane tasks to drive growth. Accountants need to grasp the power of pairing their expertise with AI and learn to work with it to improve performance and efficiency.

AI will still need extensive monitoring and oversight from accountants. But by taking over many of the routine tasks accountants spend time on now, AI will allow them to focus on more complex, high-level initiatives. In the process, it can help alleviate the labor shortage and could improve firm retention.

Future-forward accounting firms can reap immense benefits from generative AI as they embark on their digital transformation journeys. However, they need to ensure they protect privacy and security. Implementing AI within a capable knowledge work automation framework can, for example, help ensure that data remains confidential, stays within internal system boundaries, and that employees have access only to the data they need.

Making sure AI models are trained on complete, bias-free data, and having accountants monitor AI’s outputs, will maintain transparency and ensure efficient, effective use of the technology. AI is part of the path forward for the industry, but firms need to be sure they step carefully.

IAASB tweaks standards on working with outside experts

The International Auditing and Assurance Standards Board is proposing to tailor some of its standards to align with recent additions to the International Ethics Standards Board for Accountants’ International Code of Ethics for Professional Accountants when it comes to using the work of an external expert.

The proposed narrow-scope amendments involve minor changes to several IAASB standards:

  • ISA 620, Using the Work of an Auditor’s Expert;
  • ISRE 2400 (Revised), Engagements to Review Historical Financial Statements;
  • ISAE 3000 (Revised), Assurance Engagements Other than Audits or Reviews of Historical Financial Information;
  • ISRS 4400 (Revised), Agreed-upon Procedures Engagements.

The IAASB is asking for comments, due by July 24, 2025, via a digital response template available on the IAASB website.

In December 2023, the IESBA approved an exposure draft of proposed revisions to its Code of Ethics related to using the work of an external expert. The proposals included three new sections to the Code of Ethics, with provisions for professional accountants in public practice, professional accountants in business, and sustainability assurance practitioners. The IESBA approved the provisions at its December 2024 meeting, establishing an ethical framework to guide accountants and sustainability assurance practitioners in evaluating whether an external expert has the competence, capabilities and objectivity needed to use their work. It also added provisions on applying the Ethics Code’s conceptual framework when using the work of an outside expert.

Tariffs will hit low-income Americans harder than richest, report says

President Donald Trump’s tariffs would effectively cause a tax increase for low-income families that is more than three times higher than what wealthier Americans would pay, according to an analysis from the Institute on Taxation and Economic Policy.

The report from the progressive think tank outlined the outcomes for Americans of all backgrounds if the tariffs currently in effect remain in place next year. Those making $28,600 or less would have to spend 6.2% more of their income due to higher prices, while the richest Americans with income of at least $914,900 are expected to spend 1.7% more. Middle-income families making between $55,100 and $94,100 would pay 5% more of their earnings. 
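The back-of-the-envelope arithmetic below translates those income-share estimates into rough dollar amounts at the thresholds quoted above. This uses only the figures in this article; the report's own dollar estimates may differ.

```python
# Rough dollar impact implied by the ITEP income-share figures quoted
# above, evaluated at the stated income thresholds (illustrative only).
estimates = {
    "low income ($28,600)":    (28_600, 0.062),
    "middle income ($55,100)": (55_100, 0.050),
    "high income ($914,900)":  (914_900, 0.017),
}
for label, (income, share) in estimates.items():
    print(f"{label}: ~${income * share:,.0f} more per year")
# 28,600 * 0.062 ≈ $1,773; 55,100 * 0.05 ≈ $2,755; 914,900 * 0.017 ≈ $15,553
```

In absolute dollars the wealthiest pay more, which is why the report frames the burden as a share of income: 6.2% of a low-income budget bites far harder than 1.7% of a high one.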

Trump has imposed the steepest U.S. duties in more than a century, including a 145% tariff on many products from China, a 25% rate on most imports from Canada and Mexico, duties on some sectors such as steel and aluminum and a baseline 10% tariff on the rest of the country’s trading partners. He suspended higher, customized tariffs on most countries for 90 days.

Economists have warned that costs from tariff increases would ultimately be passed on to U.S. consumers. And while prices will rise for everyone, lower-income families are expected to lose a larger portion of their budgets because they tend to spend more of their earnings on goods, including food and other necessities, compared to wealthier individuals.

Food prices could rise by 2.6% in the short run due to tariffs, according to an estimate from the Yale Budget Lab. Among all goods impacted, consumers are expected to face the steepest price hikes for clothing at 64%, the report showed. 

The Yale Budget Lab projected that the tariffs would result in a loss of $4,700 a year on average for American households.

At Schellman, AI reshapes a firm’s staffing needs

Artificial intelligence is just getting started in the accounting world, but it is already helping firms like technology specialist Schellman do more things with fewer people, allowing the firm to scale back hiring and reduce headcount in certain areas through natural attrition. 

Schellman CEO Avani Desai said there have definitely been some shifts in headcount at the Top 100 Firm, though she stressed it was nothing dramatic, as it mostly reflects natural attrition combined with being more selective about hiring. She said the firm has already decided internally not to conduct a reduction in force, as that would just indicate it didn’t hire properly the first time.

“It hasn’t been about reducing roles but evolving how we do work, so there wasn’t one specific date where we ‘started’ the reduction. It’s been more case by case. We’ve held back on refilling certain roles when we saw opportunities to streamline, especially with the use of new technologies like AI,” she said. 

One area where the firm has found such opportunities has been in the testing of certain cybersecurity controls, particularly within the SOC framework. The firm examined all the controls it tests on the service side and asked which ones require human judgment or deep expertise. The answer was a lot of them. But for the ones that don’t, AI algorithms have been able to significantly lighten the load. 

“[If] we don’t refill a role, it’s because the need actually has changed, or the process has improved so significantly [that] the workload is lighter or shared across the smarter system. So that’s what’s happening,” said Desai. 

Outside of client services like SOC control testing and reporting, the firm has found efficiencies in administrative functions as well as certain internal operational processes. On the latter point, Desai noted that Schellman’s engineers, including the chief information officer, have been using AI to help develop code, which means they’re not relying as much on outside expertise on the internal service delivery side of things. There are still people in the development process, but their roles are changing: They’re writing less code, and doing more reviewing of code before it gets pushed into production, saving time and creating efficiencies. 

“The best way for me to say this is, to us, this has been intentional. We paused hiring in a few areas where we saw overlaps, where technology was really working,” said Desai.

However, even in an age awash with AI, Schellman acknowledges there are certain jobs that need a human, at least for now. For example, the firm does assessments for the FedRAMP program, which is needed for cloud service providers to contract with certain government agencies. These assessments, even in the most stable of times, can be long and complex engagements, to say nothing of the less predictable nature of the current government. As such, it does not make as much sense to reduce human staff in this area. 

“The way it is right now for us to do FedRAMP engagements, it’s a very manual process. There’s a lot of back and forth between us and a third party, the government, and we don’t see a lot of overall application or technology help… We’re in the federal space and you can imagine, [with] what’s going on right now, there’s a big changing market condition for clients and their pricing pressure,” said Desai. 

As Schellman reduces staff levels in some places, it is increasing them in others. Desai said the firm is actively hiring in certain areas. In particular, it’s adding staff in technical cybersecurity (e.g., penetration testers), the aforementioned FedRAMP engagements, AI assessment (in line with recently becoming an ISO 42001 certification body) and in some client-facing roles like marketing and sales. 

“So, to me, this isn’t about doing more with less … It’s about doing more of the right things with the right people,” said Desai. 

While these moves have resulted in savings, she said that was never really the point, so whatever the firm has saved from staffing efficiencies it has reinvested in its tech stack to build its service line further. When asked for an example, she said the firm would like to focus more on penetration testing by building a SaaS tool for it. While Schellman has a proof of concept developed, she noted it would take a lot of money and time to deploy a full solution — both of which the firm now has more of because of its efficiency moves. 

“What is the ‘why’ behind these decisions? The ‘why’ for us isn’t what I think you traditionally see, which is ‘We need to get profitability high. We need to have less people do more things.’ That’s not what it is like,” said Desai. “I want to be able to focus on quality. And the only way I think I can focus on quality is if my people are not focusing on things that don’t matter … I feel like I’m in a much better place because the smart people that I’ve hired are working on the riskiest and most complicated things.”
