Accounting

No AI disclosure rules doesn’t mean no AI disclosures at all

Though the Securities and Exchange Commission has yet to issue regulations specific to AI, this doesn’t mean companies are off the hook when it comes to disclosures, as the technology’s use can easily be slotted into other, already existing requirements. 

Speaking today at a virtual conference hosted by Financial Executives International, Scott Lesmes, partner in charge of public company advisory and governance with law firm Morrison Foerster, noted that there are many risks that come with AI, including false or misleading information, data breaches, cyberattacks, intellectual property risks and more. He said people need to take these risks seriously.

“These mistakes are in the real world and have had significant consequences,” he said. 

He pointed to a case where a chatbot advised small business owners that it was legal to fire people for complaining about sexual harassment, which it absolutely is not. He also referred to another case where a real estate company was forced to take a $300 million write-off after relying on a faulty AI algorithm for property pricing decisions, and another where an AI model used by hospitals to determine which patients are high risk and need extra care was found to be biased against Black patients, as it was far less likely to identify them. 

Incidents like these underscore the need for robust AI governance. He noted that there has been a rise in companies forming cross-disciplinary AI governance committees encompassing finance, legal, product, cybersecurity, compliance and, in some cases, HR and marketing; failing that, he has also seen companies add AI oversight to the duties of existing committees. While some companies have established dedicated AI departments, more commonly they have given AI oversight duties to the chief information security officer or another relevant C-suite position. 

He also noted that there has been a dramatic increase in board supervision of AI, saying that in the most recent 10-K season many clients added oversight of AI to the board's stated responsibilities; while it was a small percentage, he was certain it would increase over time. He has also found that many boards either designate a single AI expert who handles such matters or place the responsibility on existing technology committees or (more commonly) audit committees. 

“There is certainly a tension; audit committees already have such a full plate, so adding another responsibility, especially with such a broad mandate, can be a little unsettling. But that is where many companies are putting this, if they handle it at the board level. The audit committee does make some sense, because it is very focused on internal controls as well as compliance,” he said. 

Boards generally need to consider the legal and regulatory factors that may impact operations, and just as many have management frameworks for oversight, so too should there be AI frameworks for how the board fulfills these responsibilities. In executing these duties, boards need to understand the company's critical AI uses and risks, how they integrate with business processes, the nature of the AI systems, how the company mitigates risk, how oversight responsibility is divided between board and management, as well as any material AI incidents. 

“The board does not need to know about every AI incident; there needs to be a level of understanding of what’s important enough to share and what’s not. The board should understand the material incidents, how the company responded and the material impact,” he said. 

SEC Disclosures

Ryan Adams, another Morrison Foerster partner in the same practice area, noted that even though regulators like the Securities and Exchange Commission have yet to issue specific rules or guidance around AI, they have stressed the importance of complying with existing obligations, which may include disclosures regarding the company’s use of AI and its impact, particularly where it concerns business operations. Already, companies need to report material risks and changes in their filings, and as AI further embeds itself into the global economy, it will almost certainly be a factor. 

Further, companies should not make false claims or mislead potential investors in general, and this applies to AI as well. He noted that the government has been especially interested in “AI washing,” that is, exaggerating or making false claims about a company’s AI capabilities or use. He pointed to one example where the SEC brought charges against the CEO and founder of a startup who claimed to have a proprietary AI system that could help clients find job candidates from diverse backgrounds, when this AI did not in fact exist. He pointed out that this didn’t even involve a public company, just a private one that was trying to raise investment capital. 

“So it makes clear that the SEC will scrutinize all AI-related claims made by any company, public or private, trying to get investors to raise capital,” he said. 

He added that AI washing can be thought of much like inflating financial results or simply making up the numbers entirely. And just as an entity should not overstate the capabilities of its AI systems, the same has long applied to automation technology in general. Regulators want clear and candid disclosures about how a company uses AI and how it presents material risks. In this regard, he also warned against generic or boilerplate disclosures regarding AI. 

“Regardless of the type of company you are, you have to take this seriously. Anyone touting the benefits of AI with customers or the public needs to make sure what they say is truthful and accurate and can be substantiated, or risk potential legal consequences,” he said. 

It is important to keep materiality in mind. Neither investors nor regulators want to read a list of every conceivable AI-related risk a company faces when only one or two are relevant. He conceded that this might require slightly different thinking, as accountants tend to lean on quantitative factors to assess materiality, but AI can carry qualitatively material factors as well. AI could inadvertently breach confidentiality agreements through sensitive information in its training data; it could completely disrupt traditional business functions if used properly, or disrupt new ones if used improperly; a company might be unable to find the experts needed to properly monitor an AI system; there could be third-party fees for things like data storage or increased energy use; AI can disrupt competitive dynamics in the market; and there are ethical risks, like the aforementioned biased algorithm, as well as legal and regulatory risks. 

“You could go on forever with these AI risks… Just because you use AI and a risk is potential does not necessarily mean disclosure is appropriate. You need to spend time thinking about whether AI-related risks are appropriate to disclose, and if they are, they should be narrowly tailored to describe the material risk,” he said. 

When assessing materiality, he said to go with the same standard accountants have used for ages: is there a substantial likelihood a reasonable investor would consider this information important in deciding whether to buy, sell or hold a security? Where AI introduces a slight wrinkle is that, given the pace of change in the field, companies should review and reevaluate their risk factors every quarter. 

But risks are not the only thing one should disclose. Adams noted that companies should also consider AI impacts when drafting the management discussion and analysis or the executive overview, pointing out major developments, initiatives or milestones related to the technology. AI could also come up in discussions of capital expenditures: if the entity made big AI investments that are material and known to the business, those need to be disclosed. Another area AI plays into is cybersecurity disclosures, which are already subject to a number of SEC requirements. The two topics, he said, often go hand in hand, so if AI interacts with cybersecurity in any way, it might be worth disclosing. 

Overall, Adams recommended companies fully and accurately disclose their AI use; avoid overly vague or generic language, given AI’s wide variations; avoid exaggerated claims about what their AI is capable of doing, taking care especially not to discuss capabilities in terms of hypotheticals; be specific about the nature and extent of the entity’s AI use and the role AI plays in business operations; have a good understanding of vendors and other third parties that use AI, as their risks could ripple outwards; establish, or at least begin to establish, an AI governance framework; train staff in AI so they understand what it can and cannot do; actively monitor company AI usage; regularly update stakeholders on changes, progress and improvements in company AI use; and have either the legal department or outside counsel review any public statements or marketing materials mentioning AI. 

While the current administration has emphasized a less regulated approach to AI, Adams noted that the SEC is still active in its dialogues with the business community around potential regulation, mentioning a recent meeting with the investment advisor community as well as a strategy roundtable with the financial services community. 

“The big takeaway here is that both the SEC and industry are saying ‘we want to have active and ongoing communications as this develops’ … any regulations we do see, if any, in the future [will be] informed by what is actually happening in the marketplace,” he said.

Acting IRS commissioner reportedly replaced

Gary Shapley, who was named only days ago as the acting commissioner of the Internal Revenue Service, is reportedly being replaced by Deputy Treasury Secretary Michael Faulkender amid a power struggle between Treasury Secretary Scott Bessent and Elon Musk.

The New York Times reported that Bessent was outraged that Shapley was named to head the IRS without his knowledge or approval and complained to President Trump about it. Shapley was installed as acting commissioner on Tuesday, only to be ousted on Friday. He first gained prominence as an IRS Criminal Investigation special agent and whistleblower who testified in 2023 before the House Oversight Committee that then-President Joe Biden’s son Hunter received preferential treatment during a tax-evasion investigation, and he and another special agent had been removed from the investigation after complaining to their supervisors in 2022. He was promoted last month to senior advisor to Bessent and made deputy chief of IRS Criminal Investigation. Shapley is expected to remain now as a senior official at IRS Criminal Investigation, according to the Wall Street Journal. The IRS and the Treasury Department press offices did not immediately respond to requests for comment.

Faulkender was confirmed last month as deputy secretary at the Treasury Department and formerly worked during the first Trump administration at the Treasury on the Paycheck Protection Program before leaving to teach finance at the University of Maryland.

Faulkender will be the fifth head of the IRS this year. Former IRS commissioner Danny Werfel departed in January, on Inauguration Day, after Trump announced in December he planned to name former Congressman Billy Long, R-Missouri, as the next IRS commissioner, even though Werfel’s term wasn’t scheduled to end until November 2027. The Senate has not yet scheduled a confirmation hearing for Long, amid questions from Senate Democrats about his work promoting the Employee Retention Credit and so-called “tribal tax credits.” The job of acting commissioner was initially filled by Douglas O’Donnell, who had been deputy commissioner under Werfel. However, O’Donnell abruptly retired as the IRS came under pressure to lay off thousands of employees and share access to confidential taxpayer data. He was replaced by IRS chief operating officer Melanie Krause, who resigned last week after coming under similar pressure to provide taxpayer data to immigration authorities and employees of the Musk-led U.S. DOGE Service. 

Krause had planned to depart later this month under the deferred resignation program at the IRS, under which approximately 22,000 IRS employees have accepted the voluntary buyout offers. But Musk reportedly pushed to have Shapley installed on Tuesday, according to the Times, and he remained working in the commissioner’s office as recently as Friday morning. Meanwhile, plans are underway for further reductions in the IRS workforce of up to 40%, according to the Federal News Network, taking the IRS from approximately 102,000 employees at the beginning of the year to around 60,000 to 70,000 employees.

On the move: EY names San Antonio office MP

Carr, Riggs & Ingram appoints CFO and chief legal officer; TSCPA hosts accounting bootcamp; and more news from across the profession.

Tech news: Certinia announces spring release

Certinia announces spring release; Intuit acquires tech and experts from fintech Deserve; Paystand launches feature to navigate tariffs; and other accounting tech news and updates.
