
Inside Schellman’s journey to provide certification for ISO 42001 AI framework


As a technology assurance specialist, Top 50 firm Schellman was already familiar with AI when the technology captured the public’s attention a few years back. But as clients began making major investments in it — and as regulators became increasingly wary — CEO Avani Desai knew they would need more support with the increasingly vital matter of AI governance. To this end, the firm embarked on an eight-month journey to become the first ANSI-accredited body authorized to audit and grant certification for compliance with the new ISO 42001 standard on artificial intelligence management systems, which it accomplished in September. 

ISO 42001 sets out a structured way for organizations to manage risks and opportunities associated with AI, balancing innovation with governance. Desai said that while there have been other AI-related standards, this is a comprehensive framework covering multiple aspects of the technology. 

The rise in AI-related regulatory measures over the last few years — from the White House executive order to the EU AI Act — signaled to Desai that there would soon be a need to help clients demonstrate responsible use of the technology through strong AI governance. So the firm decided in January to place particular emphasis on the matter, as clients planned to make major AI investments over the next few years. Becoming an accredited certification body for the new ISO standard was a key part of how Schellman planned to support these clients.


“So, we had to get accredited. It’s not about checking the box or offering another service but all about helping our clients responsibly leverage emerging technology. We want to be a true partner to organizations. We’re not check-the-box auditors. We want to make sure our clients can navigate the opportunities and risks of AI adoption,” she said. 

What followed was a major undertaking that lasted from the beginning of February to around the end of September, working directly with the ISO’s U.S. representative, the American National Standards Institute’s National Accreditation Board (ANAB). The process involved partnering with organizations to audit them while the ANAB observed to determine whether Schellman was capable of acting as a certification body. 

Schellman first partnered with Evisort, an AI-driven contract management company, to undertake the “rigorous, detailed” process of auditing the company’s AI governance, then receiving feedback from ANAB and adjusting as needed. The process went through several iterations before Schellman repeated it with another company, StackAware, an AI risk solutions provider. Once the audits were done, ANAB conducted an office visit, where it examined Schellman’s own policies and procedures as well. 

While all this was going on, Schellman was also working to comply with the framework themselves, as the firm deploys its own custom AI tools in its work. “We eat our own dog food,” said Desai, so the firm needed to train its own people in the standard and all the necessary processes, too. 

This was the first time anyone had gone through this process, including the ANAB, and as such it was a learning experience for all sides. For example, at first the certification process required companies to make their algorithms transparent, which Desai said few organizations would ever want to do, as algorithms are often proprietary information. 

“At the end of the day, they’re not practitioners. We’re practitioners and our clients are innovators and the last thing that frameworks and laws should do is stifle innovation. So we had to make sure we pushed back on certain things,” she said. 

In this case, the ANAB eventually agreed with Schellman, as it is a matter of intellectual property, though this raised questions of how to account for things like bias and hallucinations without revealing proprietary information. 

“So we had this kind of back and forth and that is why [this feedback] is really important. Now I understand why ISO does these witness audits, it’s very important to have the practitioner and operator saying, ‘This doesn’t work, this is physically impossible for us to meet this standard without potentially being detrimental to our business,'” she said. 

Ready to meet demand

With the accreditation now granted, Schellman became the first ANAB-authorized body to provide independent third-party certification for compliance with ISO 42001. So, for example, if a client with this certification is producing large language models, it can tell its own customers that it is meeting global standards and has controls in place for responsible AI. This commitment to responsible innovation can give them a competitive edge, as the certification speaks to a certain level of trust and differentiation in a fast-moving market. 

While technically 42001 certification is now available as a standalone service, Desai noted that modern AI models typically touch other domains like cybersecurity, privacy, operational resilience and data integrity. She anticipates, then, that this will usually be bundled with other certification processes, such as compliance with ISO 27001, which concerns information security. 

There is already significant demand for ISO 42001 certification. Desai said the firm has 26 contracts signed with clients who want to undergo the process themselves. People interested in it, she said, generally fall into three categories: those who are building AI on top of their services, AI developers themselves, and those who are building and running their own bespoke models, though she added that it seems everyone is talking about it these days. 

For instance, one client is a very large real estate company with buildings all over the world. It has access to AI systems that can identify how many square feet a tenant actually needs. While the company does not fit the typical profile, she said, people are concerned about the data collection implications of this AI system, and it believes certification can help quell some of those worries. 

Desai doesn’t think these sorts of worries will be going away anytime soon, which underscores the importance of certifications like this. The technology is moving fast, and regulations rarely keep pace. 

“We went from regular AI to generative AI and now agentic AI — none of these frameworks talk about agentic AI — and I can say people are probably not trained for the next thing… . The way we audit today will be very different from how we audit next year because I think the technology will really change as well,” she said. 

Schellman is currently in the process of getting similar accreditation in the U.K. as well.


Poll: People trust AI less, but use AI more


People trust AI tools less and are more worried about their negative impacts than they were two years ago, but despite this, their use has been growing steadily, as many feel the benefits still outweigh the risks. 

This is according to what Big Four firm KPMG said was the largest survey of its kind, polling over 48,000 people across 47 countries, including 1,019 people in the U.S. 

The poll found, among other things, that the proportion of people who said they were willing to rely on AI systems went from 52% in 2022 to 43% in 2024; the proportion of those saying they perceive AI systems as trustworthy went from 63% to 56%; and the proportion of those saying they were worried about AI systems rose from 49% to 62%. 

Yet, at the same time, most people use AI today in some form or another. The poll found that the proportion of organizations reporting that they’ve adopted AI technology went from 34% in 2022 to 71% in 2024, and the proportion of employees who use AI at work went from 54% to 67% over the same period.

Outside of work, 20% of respondents said they never use AI, but 51% said they use it daily, weekly or monthly. For the most part, when people use AI, it is usually a general-purpose public model: 73% said this is what they use for work, versus 18% who use AI tools developed or customized for their particular organization. 

However, while more people are using AI, fewer say they know enough about it. The poll found that nearly half, 48%, reported their AI knowledge as “low” while a further 31% rated it as “moderate.” Only 21% said they had a high amount of knowledge on AI. 

Despite this, most who use AI believe they’re pretty good at using it effectively. The poll found 62% saying they could skillfully use AI applications to help with daily work or activities; 60% said they could communicate effectively with AI applications; 59% said they can choose the most appropriate AI tool for the task; and 55% said they can evaluate the accuracy of AI responses. Those saying they lacked confidence in any area hovered between 21% and 24%.

The report suggested that this disparity might be due to AI solutions having intuitive interfaces that people can quickly grasp: just as one may not need to know much about cars to drive one, maybe people don’t need to know how AI works to use it well. 

This could be borne out by the benefits people say they have personally witnessed from using AI. A clear majority, 67%, of those using AI at work said they have become more efficient, 61% say it has improved access to accurate information, 59% say it has improved idea generation and innovation, 58% say the quality or accuracy of work and decisions has improved, and 55% say they have used it to develop skills and ability. 

However, other findings are more contentious. Yes, 36% say it has saved them time on repetitive and mundane tasks, but 39% say it has cost them time; 40% say it has decreased their workload, but 26% say it has increased it; meanwhile, 36% say it has led to less pressure and stress at work, but 26% say it has added more. Tellingly, while 19% say AI has reduced privacy and compliance risks, 35% say it has made them worse; and while 13% think it has led to less monitoring and surveillance of employees, 42% say AI has amplified it.  

While more people are using AI, they are not always doing so in ways their organizations would approve. The poll found, for example, that about 31% have contravened specific AI policies at their organizations, 34% admit they uploaded copyright material or intellectual property to a generative AI tool, and 34% said they uploaded company information. Meanwhile, 38% admitted to using AI tools when they weren’t sure it was allowed, and 31% used AI tools in ways that might be considered inappropriate (though the specifics of what that might mean were not mentioned). 

People are also not entirely forthcoming when they have used AI, as the survey found 42% avoided revealing AI use in their work and 39% have passed off generative AI content as their own. 

The poll also found that AI has had impacts on how people work: 51% concede they’ve gotten lazier because of AI, 42% say they’ve relied on AI output without evaluating the information, and 31% admit they’ve made mistakes in their work because of AI. 

This might explain, at least partially, why 43% overall reported personally witnessing negative outcomes from AI. The three biggest problems people have personally seen with AI are “loss of human interaction and connection,” with 55% saying they’ve seen this; inaccurate outcomes, at 54%; and misinformation or disinformation, at 52%. Meanwhile, though they remain the lowest on the list, a still-troubling 31% said they saw bias or unfair treatment due to AI, 34% witnessed environmental impacts and the undermining of human rights due to AI, and 40% said they have seen manipulation and harmful use of AI (though, again, the specifics were not elaborated upon). While many still believe the benefits outweigh the risks, this proportion has fallen from 50% in 2022 to 41% in 2024. 

However, 83% report they would be more willing to trust an AI system when assurance mechanisms are in place. The survey also found strong support for the right to opt out of having one’s data used by AI systems (86%), as well as for monitoring for accuracy and reliability (84%), training employees on safe and responsible AI use (84%), allowing humans to override the system’s recommendations and output (84%), and effective AI laws or regulations (84%). The poll also found that a clear majority, 74%, support third-party independent assurance for AI systems. 

“Employees are asking for greater investments in AI training and the implementation of clear governance policies to bridge the gap between AI’s potential and its responsible use,” said Bryan McGowan, trusted AI leader for KPMG. “It’s not enough for AI to simply work; it needs to be trustworthy. Building this strong foundation is an investment that will pay dividends in future productivity and growth.”


PCAOB wants examples of CAMs and KAMs


The Public Company Accounting Oversight Board’s Investor Advisory Group is asking for examples of critical audit matters or key audit matters that can be used for analysis.

The PCAOB’s Office of the Investor Advocate released an advisory last week asking the public to submit examples to the Investor Advisory Group by June 30. 

The nominations can come from public company issuers (both management and boards), auditors, financial analysts and investors. The IAG is looking for the most decision-useful CAMs or KAMs contained in public company audit reports included in 2024 Form 10-Ks and Form 20-Fs. The PCAOB began requiring the disclosure of CAMs in 2019, while the International Auditing and Assurance Standards Board began requiring KAM disclosures in 2016.

The IAG plans to choose what it believes to be the top three decision-useful CAMs or KAMs for 2024 from among those nominated. The CAMs or KAMs selected will be identified and discussed in an IAG report expected to be issued publicly later this year.

Each nomination (which may be submitted anonymously) should include an explanation (maximum 500 words) of why the nominated CAM or KAM provides decision-useful information to investors.

The IAG has asked the public to provide submissions by June 30, 2025. For more details, see the IAG announcement. A similar announcement went out last year.


IRS reduced workforce 11% so far, TIGTA reports


More than 11,400 Internal Revenue Service employees have either received termination notices as probationary employees or voluntarily resigned, representing an 11% reduction to the agency’s workforce, according to a report released Monday by the Treasury Inspector General for Tax Administration. 

In February, the IRS had around 103,000 employees, but that number has dropped by about 11% due to a series of executive orders from President Trump since his inauguration in January and the downsizing instigated by the Elon Musk-led Department of Government Efficiency, also known as the U.S. DOGE Service. Specifically, 7,315 probationary employees received termination notices, according to the report, and 4,128 employees were approved to accept the Deferred Resignation Program, a voluntary buyout program offered by the Trump administration, which has been rolled out in phases to IRS employees amid court challenges and appeals. Another 522 employees are pending approval under the program.

Monday’s report was TIGTA’s first on IRS workforce reductions and it focuses on the probationary employees identified for termination and the employees who voluntarily participated in the initial Deferred Resignation Program. TIGTA plans to periodically update the report to highlight further reductions, including the impacts of the second Deferred Resignation Program and Reductions in Force. The DRP allowed federal employees to voluntarily resign with pay through Sept. 30, 2025.  

In conjunction with the reduction in force, the IRS is offering three voluntary separation programs: the Treasury Deferred Resignation Program, which mirrors the benefits of the first DRP offering; the Voluntary Separation Incentive Payment; and the Voluntary Early Retirement Authority. In April, the IRS extended the TDRP offer to its employees. According to the IRS, over 23,000 employees had applied for the TDRP, and 13,124 were approved as of April 22. 

The report includes a look at the IRS business units affected by the layoffs and voluntary departures. The separations disproportionately impacted employees in certain positions. For example, approximately 31% of revenue agents left the IRS under the program, while 5% of information technology management departed. Revenue agents conduct examinations and audits by reviewing the financial records of individuals and businesses to verify what is reported, and they can work in several IRS business units examining different types of taxpayers. 

The Tax-Exempt/Government Entities division lost 31% of its staff, representing 694 employees, while the Large Business and International division lost 25%, or 1,733 employees. The Small Business/Self-Employed division lost 23% of its staff, or 5,765 employees. In contrast, 7% of the Human Capital Office (207 employees), 5% of IT (450 employees) and 4% of taxpayer services (1,714 employees) were affected.

In March, a federal court ruled that the probationary employees needed to be reinstated. The IRS recalled the terminated employees but put them on paid administrative leave. There have since been various court cases challenging the terminations and reinstatements. “Currently, it is unclear what the final disposition will be for probationary employees who received termination notices,” the report stated. 

Most of the 7,315 probationary employees who received termination notices had little tenure with the IRS, according to the report: 6,669 had one year of service or less, 615 had between one and five years of service, and 31 had more than five years of service. 

“The termination of probationary employees will have a greater impact on certain age groups within the IRS workforce,” said the report. The probationary employees who received termination notices tended to be under the age of 40, including 549 probationary employees who were less than 25 years old, representing 14% of all IRS employees in that age group. 

Various estimates have been given of the number of IRS employees who will ultimately be affected by the layoffs, ranging from about 20% to 50%. The budget proposal released last week by the Trump administration would cut $2.5 billion from the IRS budget, following on the heels of clawbacks of about half of the $80 billion in extra funding that was supposed to be provided to the IRS under the Inflation Reduction Act of 2022.

Amid all the turmoil this year, the IRS has also seen a number of high-profile departures, including former commissioner Danny Werfel and former acting commissioners Douglas O’Donnell, Melanie Krause and Gary Shapley.

Some former IRS employees and government accountants may be attractive hires for accounting firms and departments that need to fill their ranks amid the ongoing talent shortage.

“We’re certainly seeing more interest in the hiring of former IRS employees and government accountants in conversations we’re having with clients, and this tracks given the current talent shortage in both the finance and accounting fields,” said Kyle Allen, executive vice president of sales and recruiting for Vaco by Highspring, a recruiting and staffing firm. “These folks bring strong regulatory and audit experience to the table, and their insider knowledge of tax compliance is a big plus for private companies. They can often jump into roles like compliance or advisory work quickly, which is a huge benefit. It’s not a silver bullet for the talent gap, but having more qualified professionals in the mix is definitely a step in the right direction.” 
