
Accounting

AI Leaders on: 2025 and AI regulation


While AI is still in the wild west phase that many new technologies go through, as the technology has spread there have been increasing calls from both organizations and individuals to make the field slightly less wild. Not so much that regulation completely kills the innovation and vibrancy of this burgeoning field, but enough that serious players will feel safe entering the space without worrying that they're putting themselves at risk.

In particular, our experts are interested in measures that can improve the transparency and accountability of AI systems, such as clear labeling of AI-generated content, the ability to trace the model’s decision-making process, and disclosure of the data and algorithms involved. There was also strong support for ensuring these systems are explainable and, especially important for the accounting community, auditable. 

“An AI regulation that emphasizes transparency in the training of large language models (LLMs) would be highly beneficial. Understanding how these models are trained, including the data sources and methodologies used, is crucial for ensuring accountability and trust in AI systems. This transparency would be particularly advantageous in fields like accounting, where leveraging AI to enhance audit quality requires a clear understanding of how AI decisions are made,” said Mike Gerhard, chief data and AI officer with BDO USA. 

Respondents also expressed strong support for regulations aligned with principles-based or risk-based approaches, such as the EU AI Act, which focus on safety, fairness and non-discrimination while still providing space for innovation. This is especially important given the stakes involved with AI’s ascendency, especially for traditionally marginalized communities. 

“I believe we need to get ahead of the eight ball when it comes to the ethical issues stemming from AI’s inherent bias problem. When we let AI perform tasks such as sifting through resumes, making creditworthiness decisions, or assessing job interviews, we ought to be sure it does so without (hidden) biases. Part of this problem is on the vendor side, but part of this ought to be codified (and thus protected) by law,” said Pascal Finette, founder and CEO of training and advisory firm Be Radical. 

At the same time, virtually everyone cautioned against going too hard on regulation, especially at this early stage of the technology’s evolution.

“As further governance emerges, I hope we don’t see overly restrictive rules that stifle creativity and progress. Rather, I’d love to see further regulations that strike the right balance between ensuring the ethical and secure use of AI while encouraging innovation. Public-private partnerships and feedback loops from organizations doing the assessments will be crucial in getting that right,” said Avani Desai, CEO of Top 50 firm Schellman.

Will we see more focus on AI regulation in 2025? Well, the only thing we know for sure is we don’t know anything for sure. But we can make educated guesses. While no one outright said we’d definitely see new regulations rolled out, some predicted scandals that would likely draw attention to the need for further oversight for AI systems. 

“AI’s capability will continue to evolve. The cost of using AI (e.g., OpenAI’s API service) will continue to go down. There will be more AI applications. At the same time, we will also see more AI-related negative incidents, particularly those that raise important ethical concerns and debates,” said Abigail Zhang-Parker, an accounting professor at the University of Texas at San Antonio.

Overall, when asked for their most confident predictions, many said the widespread integration of AI into workflows will accelerate, especially given the rising prevalence of autonomous AI agents with limited decision-making power. The rise of these virtual workers is widely predicted to increase productivity and efficiency at firms. At the same time, some experts warned that this might shift employment dynamics, as well as increase the risk of ethical dilemmas.

“I am confident that AI will either reduce the number of new hires the largest accounting firms plan to hire or lead to further staff reductions, if not both. The largest firms have planned for this stage of AI for years and they thought this day would come sooner. They know they can do more with less. I’m also quite confident we’ll see a scandal where a firm misuses AI or subjugates its judgment to AI that leads to a fraud or material error getting through an audit.  We’ve already seen this occur in the legal field. It’s only a matter of time until it happens to an accounting firm,” said Jack Castonguay, a Hofstra University accounting professor and the vice president of learning and development at Surgent. 

In this, the second of three parts, we look at our experts’ answers to: 

  • What is an AI regulation you’d love to see? What is an AI regulation you’d hate to see?
  • What AI prediction for 2025 are you most certain of? Something you are very confident we’ll all see next year?

We’ll have our third and final part—where we get into one of the more esoteric aspects of AI—next week.


IAASB tweaks standards on working with outside experts


The International Auditing and Assurance Standards Board is proposing to tailor some of its standards to align with recent additions to the International Ethics Standards Board for Accountants’ International Code of Ethics for Professional Accountants when it comes to using the work of an external expert.

The proposed narrow-scope amendments involve minor changes to several IAASB standards:

  • ISA 620, Using the Work of an Auditor’s Expert;
  • ISRE 2400 (Revised), Engagements to Review Historical Financial Statements;
  • ISAE 3000 (Revised), Assurance Engagements Other than Audits or Reviews of Historical Financial Information;
  • ISRS 4400 (Revised), Agreed-upon Procedures Engagements.

The IAASB is requesting comments by July 24, 2025, via a digital response template available on the IAASB website.

In December 2023, the IESBA approved an exposure draft of proposed revisions to its Code of Ethics related to using the work of an external expert. The proposals added three new sections to the code, with provisions for professional accountants in public practice, professional accountants in business, and sustainability assurance practitioners. The IESBA approved the provisions on using the work of an external expert at its December 2024 meeting, establishing an ethical framework to guide accountants and sustainability assurance practitioners in evaluating whether an external expert has the necessary competence, capabilities and objectivity for their work to be used, along with provisions on applying the code’s conceptual framework when relying on the work of an outside expert.


Tariffs will hit low-income Americans harder than richest, report says


President Donald Trump’s tariffs would effectively cause a tax increase for low-income families that is more than three times higher than what wealthier Americans would pay, according to an analysis from the Institute on Taxation and Economic Policy.

The report from the progressive think tank outlined the outcomes for Americans of all backgrounds if the tariffs currently in effect remain in place next year. Those making $28,600 or less would have to spend 6.2% more of their income due to higher prices, while the richest Americans with income of at least $914,900 are expected to spend 1.7% more. Middle-income families making between $55,100 and $94,100 would pay 5% more of their earnings. 
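To see why these percentages describe a regressive burden, it helps to translate them into dollar terms. The sketch below uses the income thresholds quoted above as representative incomes; the middle-income midpoint of $74,600 is an assumption for illustration, not a figure from the ITEP report:

```python
# Rough illustration of the ITEP figures quoted above: annual tariff cost
# as income multiplied by the reported share of income lost to higher prices.
# Representative incomes are assumptions based on the article's brackets.
brackets = {
    "low-income (<= $28,600)": (28_600, 0.062),
    "middle-income ($55,100-$94,100)": (74_600, 0.050),  # midpoint, assumed
    "richest (>= $914,900)": (914_900, 0.017),
}

for label, (income, share) in brackets.items():
    cost = income * share
    print(f"{label}: ~${cost:,.0f} per year ({share:.1%} of income)")
```

The dollar cost for the richest households is larger in absolute terms, but as a fraction of income the burden on the lowest earners is more than three times as heavy, which is the report's central claim.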

Trump has imposed the steepest U.S. duties in more than a century, including a 145% tariff on many products from China, a 25% rate on most imports from Canada and Mexico, duties on some sectors such as steel and aluminum and a baseline 10% tariff on the rest of the country’s trading partners. He suspended higher, customized tariffs on most countries for 90 days.

Economists have warned that costs from tariff increases would ultimately be passed on to U.S. consumers. And while prices will rise for everyone, lower-income families are expected to lose a larger portion of their budgets because they tend to spend more of their earnings on goods, including food and other necessities, compared to wealthier individuals.

Food prices could rise by 2.6% in the short run due to tariffs, according to an estimate from the Yale Budget Lab. Among all goods impacted, consumers are expected to face the steepest price hikes for clothing at 64%, the report showed. 

The Yale Budget Lab projected that the tariffs would result in a loss of $4,700 a year on average for American households.


At Schellman, AI reshapes a firm’s staffing needs


Artificial intelligence is just getting started in the accounting world, but it is already helping firms like technology specialist Schellman do more things with fewer people, allowing the firm to scale back hiring and reduce headcount in certain areas through natural attrition. 

Schellman CEO Avani Desai said there have definitely been some shifts in headcount at the Top 100 Firm, though she stressed it was nothing dramatic, as it mostly reflects natural attrition combined with being more selective with hiring. She said the firm has already decided internally not to carry out a reduction in force, as layoffs would just indicate it didn’t hire properly the first time.

“It hasn’t been about reducing roles but evolving how we do work, so there wasn’t one specific date where we ‘started’ the reduction. It’s been more case by case. We’ve held back on refilling certain roles when we saw opportunities to streamline, especially with the use of new technologies like AI,” she said. 

One area where the firm has found such opportunities has been in the testing of certain cybersecurity controls, particularly within the SOC framework. The firm examined all the controls it tests on the service side and asked which ones require human judgment or deep expertise. The answer was a lot of them. But for the ones that don’t, AI algorithms have been able to significantly lighten the load. 

“[If] we don’t refill a role, it’s because the need actually has changed, or the process has improved so significantly [that] the workload is lighter or shared across the smarter system. So that’s what’s happening,” said Desai. 

Outside of client services like SOC control testing and reporting, the firm has found efficiencies in administrative functions as well as certain internal operational processes. On the latter point, Desai noted that Schellman’s engineers, including the chief information officer, have been using AI to help develop code, which means they’re not relying as much on outside expertise on the internal service delivery side of things. There are still people in the development process, but their roles are changing: They’re writing less code, and doing more reviewing of code before it gets pushed into production, saving time and creating efficiencies. 

“The best way for me to say this is, to us, this has been intentional. We paused hiring in a few areas where we saw overlaps, where technology was really working,” said Desai.

However, even in an age awash with AI, Schellman acknowledges there are certain jobs that need a human, at least for now. For example, the firm does assessments for the FedRAMP program, which is needed for cloud service providers to contract with certain government agencies. These assessments, even in the most stable of times, can be long and complex engagements, to say nothing of the less predictable nature of the current government. As such, it does not make as much sense to reduce human staff in this area. 

“The way it is right now for us to do FedRAMP engagements, it’s a very manual process. There’s a lot of back and forth between us and a third party, the government, and we don’t see a lot of overall application or technology help… We’re in the federal space and you can imagine, [with] what’s going on right now, there’s a big changing market condition for clients and their pricing pressure,” said Desai. 

As Schellman reduces staff levels in some places, it is increasing them in others. Desai said the firm is actively hiring in certain areas. In particular, it’s adding staff in technical cybersecurity (e.g., penetration testers), the aforementioned FedRAMP engagements, AI assessment (in line with recently becoming an ISO 42001 certification body) and in some client-facing roles like marketing and sales. 

“So, to me, this isn’t about doing more with less … It’s about doing more of the right things with the right people,” said Desai. 

While these moves have resulted in savings, she said that was never really the point, so whatever the firm has saved from staffing efficiencies it has reinvested in its tech stack to build its service line further. When asked for an example, she said the firm would like to focus more on penetration testing by building a SaaS tool for it. While Schellman has a proof of concept developed, she noted it would take a lot of money and time to deploy a full solution — both of which the firm now has more of because of its efficiency moves. 

“What is the ‘why’ behind these decisions? The ‘why’ for us isn’t what I think you traditionally see, which is ‘We need to get profitability high. We need to have less people do more things.’ That’s not what it is like,” said Desai. “I want to be able to focus on quality. And the only way I think I can focus on quality is if my people are not focusing on things that don’t matter … I feel like I’m in a much better place because the smart people that I’ve hired are working on the riskiest and most complicated things.”
