As a technology assurance specialist, Top 50 firm Schellman was already familiar with AI when the technology captured the public’s attention a few years back. But as clients began making major investments in it — and as regulators became increasingly wary of it — CEO Avani Desai knew they would need more support with the increasingly vital matter of AI governance. To this end, the firm embarked on an eight-month journey to become the first ANSI-accredited body allowed to audit and grant certification for compliance with the new ISO 42001 standard on artificial intelligence management systems, which it accomplished in September.
ISO 42001 sets out a structured way for organizations to manage risks and opportunities associated with AI, balancing innovation with governance. Desai said that while there have been other AI-related standards, this is a comprehensive framework covering multiple aspects of the technology.
The rise in AI-related regulatory measures over the last few years — from the White House executive order to the EU AI Act — signaled to Desai that there would soon be a need to work with clients to demonstrate responsible use of the technology through strong AI governance. To this end, the firm decided in January to place particular emphasis on the matter, as clients planned to make major AI investments over the next few years. Becoming an accredited certification body for the new ISO standard was a key part of how Schellman planned to support these clients.
“So, we had to get accredited. It’s not about checking the box or offering another service but all about helping our clients responsibly leverage emerging technology. We want to be a true partner to organizations. We’re not check-the-box auditors. We want to make sure our clients can navigate the opportunities and risks of AI adoption,” she said.
What followed was a major undertaking that lasted from the beginning of February to around the end of September, working directly with the ISO’s U.S. representative, the American National Standards Institute’s National Accreditation Board (ANAB). The process involved partnering with organizations to audit while the ANAB observed to determine whether Schellman was capable of acting as a certification body.
Schellman first partnered with Evisort, an AI-driven contract management company, for the “rigorous, detailed” process, which involved auditing the company’s AI governance, then receiving and responding to feedback from ANAB and adjusting as needed. After several iterations, Schellman repeated the entire process with another company, StackAware, itself an AI risk solutions provider. Once the audits were done, ANAB conducted an office visit, where it examined Schellman’s own policies and procedures as well.
While all this was going on, Schellman was also working to comply with the framework itself, as the firm deploys its own custom AI tools in its work. “We eat our own dog food,” said Desai, so the firm needed to train its own people in the standard and all the necessary processes, too.
This was the first time anyone had gone through this process, including the ANAB, and as such it was a learning experience for all sides. For example, at first the certification process required companies to make their algorithms transparent, which Desai said few organizations would ever want to do, as algorithms are often proprietary information.
“At the end of the day, they’re not practitioners. We’re practitioners and our clients are innovators and the last thing that frameworks and laws should do is stifle innovation. So we had to make sure we pushed back on certain things,” she said.
In this case, the ANAB eventually agreed with Schellman, as algorithms are a matter of intellectual property, though this raised the question of how to account for things like bias and hallucinations without revealing proprietary information.
“So we had this kind of back and forth and that is why [this feedback] is really important. Now I understand why ISO does these witness audits, it’s very important to have the practitioner and operator saying, ‘This doesn’t work, this is physically impossible for us to meet this standard without potentially being detrimental to our business,'” she said.
Ready to meet demand
With the accreditation now granted, Schellman became the first ANAB-authorized body to provide independent third-party certification for compliance with ISO 42001. So, for example, if a client with this certification is producing large language models, it can tell its own customers that it is meeting global standards and has controls in place for responsible AI. This commitment to responsible innovation can give clients a competitive edge, as the certification speaks to a certain level of trust and differentiation in a fast-moving market.
While ISO 42001 certification is technically now available as a standalone service, Desai noted that modern AI models typically touch other domains like cybersecurity, privacy, operational resilience and data integrity. She anticipates, then, that it will usually be bundled with other certification processes, such as compliance with ISO 27001, which concerns information security.
There is already significant demand for ISO 42001 certification. Desai said the firm has 26 contracts signed with clients who want to undergo the process themselves. Those interested, she said, generally fall into three categories: those building AI on top of their services, AI developers themselves, and those building and running their own bespoke models, though she added that it seems everyone is talking about it these days.
For instance, one client is a very large real estate company with buildings all over the world. It has access to AI systems that can identify how many square feet a tenant actually needs. While this client does not fit the typical profile, Desai said, people are concerned about the data collection implications of the AI system, and the company believes certification can help quell some of those worries.
Desai doesn’t think these sorts of worries will be going away anytime soon, which underscores the importance of certifications like this. The technology is moving fast, and regulations rarely keep pace.
“We went from regular AI to generative AI and now agentic AI — none of these frameworks talk about agentic AI — and I can say people are probably not trained for the next thing… . The way we audit today will be very different from how we audit next year because I think the technology will really change as well,” she said.
Schellman is currently in the process of getting similar accreditation in the U.K. as well.