AI governance was a major theme of 2024, and as the technology continues to evolve, oversight and control—as well as ways to demonstrate it to others—will become even more important this year.
This was the assessment of Danny Manimbo, a principal with Top 50 firm Schellman, who is primarily responsible for leading the firm’s AI and ISO practices. Speaking during the firm’s Schellmancon event today, he said that last year saw the release of a number of AI governance frameworks, including the National Institute of Standards and Technology’s AI Risk Management Framework, the International Organization for Standardization’s ISO 42001, and Microsoft’s revisions to its Supplier Security and Privacy Assurance Program to account for AI. Meanwhile, actual regulation is also gaining momentum, with Manimbo pointing to the EU’s AI Act, South Korea’s AI Basic Act, and a number of state-level rules such as California’s recent AI laws.
“That kind of set the tone for a lot of the inquiries and the interest that we saw, and for the trends on where GRC was going in 2024, maybe not so much immediately in the beginning of the year, because the frameworks were so new, but I think they were boosted by a number of things in the regulatory standpoint,” said Manimbo.
The other panelist, Lisa Hall, chief information security officer for the trust platform SafeBase, added that, given the pace of AI advances, it is likely that last year’s measures were not the end but just the beginning, especially considering how widely used even the current generation of solutions is.
“I think it’s only going to increase, and everyone seems to have some type of AI offering,” said Hall. “Regulations and standards will likely become more demanding, and even with the shadow IT capabilities we have now, I worry that we may be underestimating how often AI technologies are actually used by our employees. And also, on the flip side, how can we best leverage these to make our lives easier?”
Manimbo noted that, with this rise in control frameworks and regulation, this year will also see a rise in demand for ways to demonstrate that one is aligned and compliant with them. The ISO 42001 certification, for which Schellman recently became the first ANSI-accredited body allowed to audit and grant certification for compliance with the standard, is one example, but he anticipated other avenues will open this year. “For example, I sit on the [Cloud Security Alliance] AI Control Framework [board], and they are launching a program scheduled for the second half of this year which is going to be very similar to their [Security Trust Assurance and Risk] program for cloud security but specific to AI risk. That’ll be another avenue,” he said. He added that other standard setters, like the AICPA, might also decide to update their frameworks to account for AI risk.
Such demonstrations are vital for establishing customer trust in a world that is increasingly connected. Hall noted that supply chains have grown much more complex, which has given attackers new opportunities to target vendors or third-party software providers and compromise multiple downstream organizations at once. In such an environment, establishing trust with a customer is vital, but doing so can often involve lengthy and tedious audits filled with manual processes. While she has had success with some automation, such as using AI to reduce time spent on customer questionnaires and to automate access controls, many tasks still need human intervention.
“I’ve definitely struggled with that, like where an auditor is asking for data sets, you’re coming back with a sample set, you’re bouncing back and forth from a tool to gather evidence, and it becomes even more complex when you’re dealing with customer audits and you’re talking to more than one auditor, and you can only reuse evidence for so long before that evidence goes stale,” she said. “And then a lot of times, auditors have competing platforms and tools that may not integrate with yours. So it’s still a manual process. There’s a ton of back-and-forth communication there. I’m still copying and pasting, I’m still downloading from here and uploading to here. So I’d love to see this process improve.”
Manimbo said AI has also been helping with processes like this, noting that AI can itself bolster an organization’s controls by automating routine tasks and reducing dependence on manual processes.
“On this front, some of the things that have plagued us in the past is the amount of context that we need as professionals to know if something is something that needs to be addressed immediately as part of a control failure that may be detected. And I think AI will help provide that context there… It may not necessarily be [about] what the controls may be, but how efficient are the models in augmenting existing automation to find those failures in a way that we can effectively address those findings in a way that we can again improve on those and so hopefully reducing additional burden on team members,” he said.
However, with all these different frameworks coming out, and with current ones being revised to account for AI, professionals may be challenged to keep up with all the changes. They need to know not only how to apply these frameworks but also how to scale them as time goes on. Hall said professionals can meet this challenge by maintaining a security-focused mindset and being proactive, so that the organization is better able to respond to change.
“If we build and buy with security in mind and find ways to leverage automation and AI to enable us to quickly adjust, … we’re just going to be way better off,” said Hall. “Instead of looking at ‘here’s the strict regulation, here’s what I have to do,’ [it is] kind of this afterthought, by being more proactive and just having these things in mind… I think it’s about us having that mindset of: How is the security built in? How can I be accountable and prove that I’m doing what I’m doing? And think about that before the auditors show up and before the regulations show up.”