Though the Securities and Exchange Commission has yet to issue regulations specific to AI, this doesn’t mean companies are off the hook when it comes to disclosures, as the technology’s use can easily be slotted into other, already existing requirements.
Speaking today at a virtual conference hosted by Financial Executives International, Scott Lesmes, partner in charge of public company advisory and governance with law firm Morrison Foerster, noted that AI comes with many risks, including false or misleading information, data breaches, cyberattacks, intellectual property exposure and much more. He said people need to take these risks seriously.
“These mistakes are in the real world and have had significant consequences,” he said.
He pointed to a case where a chatbot advised small business owners that it was legal to fire people for complaining about sexual harassment, which it absolutely is not. He also referred to another case where a real estate company was forced to take a $300 million write-off after relying on a faulty AI algorithm for property pricing decisions, and another where an AI model used by hospitals to determine which patients are high risk and need extra care was found to be biased against Black patients, as it was far less likely to identify them as needing that care.
Incidents like these underscore the need for robust AI governance. He noted that there has been a rise in companies forming cross-disciplinary AI governance committees encompassing finance, legal, product, cybersecurity, compliance and, in some cases, HR and marketing; failing that, he has also seen companies add AI oversight to the duties of existing committees. While some companies have established dedicated AI departments, more commonly they have assigned AI oversight duties to the chief information security officer or another relevant C-suite position.
He also noted that there has been a dramatic increase in board supervision of AI, saying that in the most recent 10-K season many clients added oversight of AI to the board's stated responsibilities; while it was still a small percentage, he was certain the number would increase over time. He has also found that many boards either designate a single AI expert who handles such matters or place the responsibility on existing technology committees or (more commonly) audit committees.
“There is certainly a tension: audit committees already have such a full plate, so adding another responsibility, especially with such a broad mandate, can be a little unsettling. But that is where many companies are putting this, if they handle it on the board level. Audit committee does make some sense, because it is very focused on internal controls as well as compliance,” he said.
Boards generally need to consider the legal and regulatory factors that may impact operations, and just as many have management frameworks for other areas of oversight, so too should they have AI frameworks for how the board fulfills these responsibilities. In executing these duties, boards need to understand the critical AI uses and risks in the company, how they integrate with business processes, the nature of the AI systems involved, how the company mitigates risk, how oversight responsibility is divided between the board and management, and any material AI incidents.
“The board does not need to know about every AI incident altogether; there needs to be a level of understanding of what’s important enough to share and what’s not. The board should understand the material incidents, how the company responded and the material impact,” he said.
SEC Disclosures
Ryan Adams, another Morrison Foerster partner in the same practice area, noted that even though regulators like the Securities and Exchange Commission have yet to issue specific rules or guidance around AI, they have stressed the importance of complying with existing obligations, which can include disclosures regarding the company’s use of AI and its impact, particularly where it concerns business operations. Companies already need to report material risks and changes in their filings, and as AI further embeds itself in the global economy, it will almost certainly be a factor.
Further, companies should not make false claims or mislead potential investors in general, and this applies to AI as well. He noted that the government has been especially interested in “AI washing,” that is, exaggerating or making false claims about a company’s AI capabilities or use. He pointed to one example where the SEC brought charges against the CEO and founder of a startup who claimed to have a proprietary AI system that could help clients find job candidates from diverse backgrounds, when the AI did not in fact exist. He pointed out that this didn’t even involve a public company, just a private one that was trying to raise investment capital.
“So it makes clear that the SEC will scrutinize all AI-related claims made by any company, public or private, trying to get investors to raise capital,” he said.
He added that AI washing can be thought of much like inflating financial results or making up the numbers entirely. Also, just as an entity should not overstate the capabilities of its AI systems, the same already applies to automation technology in general. Regulators want clear and candid disclosures about how a company uses AI and the material risks it presents. In this regard, he also warned against generic or boilerplate disclosures regarding AI.
“Regardless of the type of company you are, you have to take this seriously. Anyone touting the benefits of AI with customers or the public needs to make sure what they say is truthful and accurate and can be substantiated, or risk potential legal consequences,” he said.
It is important to keep materiality in mind. Neither investors nor regulators want to read a list of every conceivable AI-related risk a company faces when only one or two are relevant. He conceded that this might require slightly different thinking, as accountants tend to lean on quantitative factors to assess materiality, but AI can carry qualitatively material factors as well. There is the risk that AI could inadvertently breach confidentiality agreements through sensitive information in the training data; it could completely disrupt traditional business functions if used properly, or completely disrupt new ones if used improperly; the experts needed to properly monitor an AI system may be hard to find; there could be third-party fees for things like data storage or increased energy use; AI can disrupt competitive dynamics in the market; and there are ethical risks, like the aforementioned racially biased algorithm, along with legal and regulatory risks.
“You could go on forever with these AI risks… Just because you use AI and a risk is potential does not necessarily mean disclosure is appropriate. You need to spend time thinking about whether AI-related risks are appropriate to disclose, and if they are, they should be narrowly tailored to describe the material risk,” he said.
When assessing materiality, he said to apply the same standard accountants have used for ages: is there a substantial likelihood that a reasonable investor would consider the information important in deciding whether to buy, sell or hold a security? Where AI introduces a slight wrinkle is that, given the pace of change in the field, it is important for companies to review and reevaluate their risk factors every quarter.
But risks are not the only thing one should disclose. Adams noted that companies should also consider AI impacts when drafting the management discussion and analysis or the executive overview, pointing out major developments, initiatives or milestones related to the technology. AI could also come up in discussions of capital expenditures: if the entity has made big AI investments that are material and known to the business, those need to be disclosed. Another area AI plays into is cybersecurity disclosures, which already carry a number of SEC requirements. The two topics, he said, often go hand in hand, so if AI interacts with cybersecurity in any way, it might be worth disclosing.
Overall, Adams recommended that companies fully and accurately disclose their AI use; avoid overly vague or generic language, given the wide variation in how AI is used; avoid exaggerated claims about what their AI is capable of doing, taking particular care not to discuss capabilities in hypothetical terms; be specific about the nature and extent of the entity's AI use and the role AI plays in business operations; and have a good understanding of vendors and other third parties who use AI, as their risks could ripple outward. He also recommended that companies establish, or at least begin to establish, an AI governance framework; train staff in AI so they understand what it can and cannot do; actively monitor company AI usage; regularly update stakeholders on changes, progress and improvements in company AI use; and have either the legal department or outside counsel review any public statements or marketing materials mentioning AI.
While the current administration has emphasized a less regulated approach to AI, Adams noted that the SEC is still active in its dialogues with the business community around potential regulation, mentioning a recent meeting with the investment advisor community as well as a strategy roundtable with the financial services community.
“The big takeaway here is that both the SEC and industry are saying ‘we want to have active and ongoing communications as this develops’ … any regulations we do see, if any, in the future [will be] informed by what is actually happening in the marketplace,” he said.