Accounting
Humans: the main cause of, and solution to, AI mishaps
Published 5 months ago
When it goes right, AI can seem like magic—all the busywork done for you, data shaped into clear insights, problems solved before you even notice. But when it goes wrong, it can be a nightmare of wasted time and expensive mistakes. And, unfortunately, there are a lot of ways it can go wrong. While it can be tempting to blame the software, most cases of AI error come down not to the bot but to the user.
One of the biggest issues, mentioned over and over, is that people simply do not double-check the outputs of their AI models. Despite repeated warnings that AI models can make mistakes, most users do not take the time to confirm what the AI is telling them.
David Wood, an accounting professor at Brigham Young University, has seen this firsthand.
“I think the number one error in terms of frequency is some type of small hallucination, and the user copies and pastes and doesn’t review. They don’t review the output. That is the biggest one. They go too fast without considering what it actually says. That’s, oof, 95 percent of errors I’ve seen,” he said.
Jeff Seibert, founder and CEO of accounting solutions provider Digits, made a similar point. He noted that Digits’ goal as a company is to automate the preparation of the books, not to review or verify them; that is up to the user. He stressed the importance of making sure humans double-check what the AI is telling them.
“That is the biggest risk: a business owner just blindly trusting the AI bookkeeping model. It may look correct, but might not follow the guidelines because it will be biased by what has been previously done in their books… Our guidance is that every business should still work with a firm or have someone qualified who can review and sign off on the finances,” said Seibert in an interview.
Gina Montgomery, director of AI, automation and analytics at top 25 firm Armanino, said that the damage from ignoring the human review element may not just be monetary but legal and reputational as well, as it could lead firms to submit flawed deliverables just because the AI sounded confident. Large organizations, she noted, have already turned in work with AI-generated errors and suffered for it.
“That’s the exact kind of instance where you talk about human review: fabricated data, hallucinated citations, misapplied logic can reach regulators or investors before [the firm detects it] if you don’t have that layer built in,” she said.
This type of risk grows as oversight weakens. Firms should not take a “set it and forget it” approach to AI, she said, particularly where automation is concerned. It is important to clearly define what an AI is and is not allowed to do, but too many organizations ignore this control.
“The most common missing control is a clear definition of what the AI is allowed to do. So [firms are] installing an automation tool and assuming the guardrails are just built in, but don’t document approval thresholds or decision boundaries,” she said. An AI system without these controls operates without context and so becomes more likely to make errors.
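To make that concrete, a documented decision boundary can be as simple as a lookup that routes anything outside its limits to a person. This is only an illustrative sketch; the action names and dollar thresholds below are invented, not drawn from any real firm or vendor.

```python
# Hypothetical guardrail table: each automated action the AI may take,
# mapped to the maximum dollar amount it may handle without sign-off.
APPROVAL_RULES = {
    "expense_classification": 0,   # may classify, never move money
    "vendor_payment": 5_000,       # above this, a human must approve
    "journal_entry": 1_000,
}

def requires_human_review(action: str, amount: float) -> bool:
    """Return True when an AI-proposed action falls outside its documented boundary."""
    limit = APPROVAL_RULES.get(action)
    if limit is None:
        return True   # undocumented action: no boundary means no autonomy
    return amount > limit
```

The key property is the default: an action with no documented boundary is treated as out of bounds rather than silently allowed.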
But even if controls are built into the process, there is also the matter of getting people to follow them, as the phenomenon of “shadow AI” shows.
“It’s not Walmart putting this into their AP process and it goes awry. If there’s mistakes, most will be from shadow AI or shadow IT. Someone is using it and not letting other people know. It’s not part of the regularly designed process,” he said.
Ellen Choi, the founder and CEO of accounting AI consultancy Edgefield Group, noted that many large accounting firms already have strong control frameworks in place, and so, like Wood, she has not seen much in the way of catastrophic AI failures. However, she did point out a consistent thread of people not knowing who is and is not using AI within the firm.
“I’m not sure what it is but people don’t seem to want to talk about AI use with each other within the firm. … You get these meetings where I ask about AI use, and one person waxes poetic about how they love it and everyone else looks surprised. It happens all the time. It seems pretty obvious that when firms don’t know what their people are doing, this can have unintended consequences of not using AI in a way that is compliant according to the guardrails the company may have set up,” she said.
For instance, she spoke of one firm where a junior associate put a client document into ChatGPT despite being explicitly warned against doing that specific thing. The only reason this person was caught was that the firm happened to have IT controls that allowed it to detect the upload; otherwise no one would have known. While the firm was able to work with OpenAI to get the documents removed from its training data, Choi said this is very rare and most commonly “it’s like ink and water. Once something has been added to the training data, it’s very, very difficult to isolate and remove it.”
“That person did get fired. Did it have actual business consequences beyond the risk exposure, the bottom line financial impact? No. But I think this kind of stuff is definitely something firms should be vigilant about,” she said.
Beyond expense and embarrassment, failing to implement proper controls over AI and keep the human in the loop can further degrade the usefulness of a firm’s AI model. This is because many models actively learn from what the humans do, picking up the processes and procedures particular to that practice. Much like any human worker, a model given bad information will produce bad results, and if it is not corrected by a supervisor, it will assume that is what it is meant to be doing.
“It learns the best practices of each accounting firm. Obviously, if you have a rogue accountant in your firm doing bad accounting, yes, the model could pick up some of those practices,” said Seibert from Digits, though he stressed that there is also a global model that can correct the local firm model if it acquires bad habits.
Montgomery, from Armanino, said that letting mistakes into the training data can have cascading effects, which underscores the need for leaders to have independent verification layers before anything reaches the general ledger.
“AI errors that reach the ledger could directly affect reported earnings or compliance status. The most frequent examples might be misclassifications, incorrect accruals or unauthorized payments. These are not code failures but governance failures. When AI is trained on inconsistent data or outdated coding logic, it can replicate past mistakes at scale. A misclassification error that a human might make one month could be repeated thousands of times automatically, which could certainly cause some big issues,” she said.
To illustrate her point, she spoke about a medical organization client whose AI would have made a very expensive error if not for last-minute human intervention. The AI was attached to the procurement system, where it was responsible for ordering supplies as needed. Given a great degree of autonomy, the AI did most of the work itself. In this case, the AI had to order more gloves. Unfortunately, it confused 20 boxes with 20 cases after misinterpreting a unit field.
“That discrepancy was caught during manual review. So those validation points are not bottlenecks. A lot of people think about it that way, thinking ‘oh no, I’ve got to add a human to that.’ But it’s more like the brakes that make the automation safe,” she said.
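The boxes-versus-cases mix-up is a unit-of-measure error, and the “brakes” Montgomery describes can be as lightweight as a sanity check against order history before anything is submitted. A rough sketch, with invented field names and thresholds:

```python
def order_review_reason(item, qty, unit, history):
    """Return a reason to hold an order for manual review, or None if it looks normal.

    history: list of (qty, unit) tuples from past orders of the same item.
    """
    past_units = {u for _, u in history}
    if unit not in past_units:
        return f"unit '{unit}' has never been used for {item}"
    past_qtys = [q for q, u in history if u == unit]
    average = sum(past_qtys) / len(past_qtys)
    if qty > 3 * average:
        return f"quantity {qty} is over 3x the historical average of {average:.0f}"
    return None

glove_history = [(20, "box"), (25, "box"), (18, "box")]
# Gloves have only ever been ordered by the box, so 20 cases gets held:
print(order_review_reason("gloves", 20, "case", glove_history))
# An in-range order passes through:
print(order_review_reason("gloves", 22, "box", glove_history))  # None
```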
Controls also need to account for the type of AI used; if someone uses the wrong kind of model for a task, it won’t matter how good the data is or how strong the guardrails are, because the model won’t be able to do the job well. Using the right tool for the right job matters in AI as it does everywhere else. Yet Seibert, from Digits, has seen too many people expect large language models to do math when that is not their strong suit. He noted that Digits deliberately does not use LLMs for any of the actual accounting work, relying instead on deterministic models and calculators to do the math, the results of which can then be communicated via LLM.
“When you ask it a question we don’t want it to make up an answer. You can, in Digits, ask a question about finances, it can be ‘how much did we spend on marketing this year versus last year.’ Hand that to ChatGPT and it will literally make up an answer and the math may look right but may be subtly wrong since it’s not really doing math. … We have to go to extreme lengths to prevent our models from doing math. This is a common failure place. A lot of companies are not treating it that seriously. There will be subtle issues in the predictions because the models are hallucinating,” he said.
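The division of labor Seibert describes, deterministic code for the arithmetic and a language model only for the wording, can be sketched roughly as follows. The function names and the template standing in for the LLM are illustrative; Digits has not published its implementation.

```python
# Deterministic layer: the math happens in ordinary code, never in an LLM.
def spend_comparison(ledger, category, year_a, year_b):
    a = sum(amt for (yr, cat), amt in ledger.items() if yr == year_a and cat == category)
    b = sum(amt for (yr, cat), amt in ledger.items() if yr == year_b and cat == category)
    return {"current": a, "prior": b, "delta": a - b}

# Narration layer: the already-computed numbers are handed over verbatim.
# (A plain template stands in here for the LLM, which would only rephrase.)
def narrate(result, category, year_a, year_b):
    return (f"You spent ${result['current']:,.2f} on {category} in {year_a}, "
            f"versus ${result['prior']:,.2f} in {year_b}, "
            f"a difference of ${result['delta']:,.2f}.")

ledger = {(2025, "marketing"): 48_000.0, (2024, "marketing"): 41_500.0}
result = spend_comparison(ledger, "marketing", 2025, 2024)
print(narrate(result, "marketing", 2025, 2024))
```

The point of the separation is that the numbers in the final sentence can never be hallucinated, because the model never computes them.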
But sometimes the right model does not remain the right model. Especially when a firm licenses an AI model rather than building its own, the company that controls it may release a new version or patch the current one, and even small changes can produce wildly different behavior.
“You see it occasionally, where you’ve designed some kind of process with the API where the old model is not as good as the new one but they didn’t update to the newer model. Or the model changes and they don’t go back and test it,” he said, adding that it is not necessarily true that the most recent version is the best for your particular purpose. “People always expect 5 to be better than 4 but that’s not how generative AI works. It might work better at 95% [of things] but that 5% catches you.”
Montgomery also talked about this risk, adding that often vendors won’t even provide notice that they’re changing the behavior of their AI model.
“Contracts with vendors should have a certain level of transparency and notice on what they’re doing and you need a right to audit the AI’s behavior based on the way it is stated it is going to work… Firms have to assess not only their own AI solutions or models, but also the controls of every provider that they depend on,” she said.
This speaks to the fact that even if a firm’s own controls are sterling, there is still the matter of third parties. This is especially the case with AI agents, which can act semi-autonomously. The issue with agents interacting with agents is that, by definition, a human is not involved. If every AI agent is using accurate information in the proper context, this is not such a big deal. But if an agent is acting on bad information, it can create a cascade that spreads through an entire system.
“As AI digital workers interact, there’s a growing risk of error propagation: one AI system can accept and reinforce another one’s incorrect output, and that is not good. … Without human validation, misinformation can spread faster than any individual could ever have done. So organizations have to design interaction protocols where AIs do not self validate. That’s where I think we are. Self validation between AIs is not a good idea. Any system to system exchange should include human confirmation or independent verification logic. Something like that has to happen. Human oversight being the last line of defense ensures accountability even when machines are collaborating,” she said.
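Her rule that AIs should not self-validate amounts to a gate in the message path between agents: nothing crosses a system boundary without an independent check or a human sign-off. A simplified, entirely illustrative sketch of such an interaction protocol:

```python
def exchange(message, independent_check, human_approve):
    """Hand one agent's output to another only after independent verification.

    independent_check: a validator that is not the sending agent itself.
    human_approve: callback asking a person to sign off when the check fails.
    """
    ok, reason = independent_check(message)
    if ok:
        return message                      # independently verified
    if human_approve(message, reason):
        return message                      # a person explicitly signed off
    raise ValueError(f"blocked at system boundary: {reason}")

# Example check: a downstream agent only accepts invoices whose totals re-add.
def totals_match(msg):
    recomputed = sum(msg["line_items"])
    return recomputed == msg["total"], f"stated total {msg['total']} != {recomputed}"

good = {"line_items": [100, 250], "total": 350}
bad = {"line_items": [100, 250], "total": 400}
exchange(good, totals_match, human_approve=lambda m, r: False)   # passes
# exchange(bad, ...) would raise unless a human approves it
```

The design choice is that the human callback is the last line of defense, exactly as Montgomery describes: a failed independent check never resolves itself between the two machines.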
With all this in mind, Montgomery said firm leaders need to design their AI control structure with the same rigor as they would any internal control.
“Every model should have a named owner, a review schedule, an audit trail. Accountability has to be specific, not collective. … I think the human relationship defines accountability for both the human side and the machine side. It’s easy to write really beautiful code. It’s just that the AI is going to execute exactly as it’s designed. So if you don’t have that validation in place, you’re in trouble,” she said.
The Financial Accounting Standards Board met this week to discuss its projects on accounting for transfers of cryptocurrency assets and enhancing the disclosures around certain digital assets, such as stablecoins.
During Wednesday’s meeting, FASB’s board made certain tentative decisions.
At a future meeting, the board plans to consider clarifying the derecognition guidance for crypto transfer arrangements to assess whether the control of a crypto asset has been transferred.
FASB also began deliberations on the disclosure project.
The board decided to provide illustrative examples in Topic 230, Statement of Cash Flows, to clarify whether certain digital assets such as stablecoins can meet the definition of cash equivalents. It also decided to include the following concepts in the illustrative examples:
- Interpretive explanations that link to the current cash equivalents definition;
- The amount and composition of reserve assets; and,
- The nature of qualifying on-demand, contractual cash redemption rights directly with the issuer.
FASB plans to clarify that an entity should consider compliance with relevant laws and regulations when creating a policy concerning which assets satisfying the Master Glossary definition of “cash equivalents” will be treated as cash equivalents.
“I agree with the staff suggestion to look at examples,” said FASB vice chair Hillary Salo. “From my perspective, I think that is going to help level the playing field. People have been making reasonable judgments. I agree with that. And I think that this is really going to help show those goalposts or guardrails of what types of stablecoins would be in the scope of cash equivalents, and which ones would not be in the scope of cash equivalents. I certainly appreciate that approach, and I think it has the least potential impact of unintended consequences, because I do agree with my fellow board members that we shouldn’t be changing the definition of cash equivalents, and it’s a high bar to get into the cash equivalent definition.”
“I’m definitely supportive of not changing the definition of cash equivalents,” said FASB chair Richard Jones. “I believe that’s settled GAAP in a way, and we’re not really seeing a call to change it for broader issues. I am supportive of the example-based approach. The challenge with examples, though, is everybody’s going to want their exact pattern, but that’s not what we’re doing.”
The examples will explain the rationale for how digital assets such as stablecoins do or do not qualify as cash equivalents and give a roadmap for other types of digital assets with varying fact patterns to be able to apply.
“We really don’t want to be as a board facing a situation where something was a cash equivalent and then no longer is at a later date,” said Jones. “That’s not good for anyone, so keeping it as a high bar with certain rigid criteria, I think, is fine.”
Stablecoins are supposed to be pegged to fiat currencies such as U.S. dollars and thus provide more stability to investors. “In my view, while a stablecoin may meet the accounting definition established for cash equivalents, not every one of those stablecoins in the cash equivalent classification represents the same level of risk,” said FASB member Joyce Joseph.
She noted that the capital markets recognize the distinctions and have established a Stablecoin Stability Assessment Framework to evaluate a stablecoin’s ability to maintain its peg to a fiat currency. Such assessments look at the legal and regulatory framework associated with the stablecoin, and provide investors with information that could enable them to do forward-looking assessments about the stability of the stablecoin.
“However, for an investor to consider and utilize such information for a company analysis the financial statement disclosures would need to include information about the stablecoin itself,” Joseph added. “In outreach, the staff learned that investors supported classifying certain stablecoins as cash equivalents when transparent information is available about the entities at which the reserve assets are held. Therefore, in my view, taking all of this into consideration a relevant and informative company disclosure would include providing investors with the name of the stablecoin and the amount of the stablecoin that is classified as a cash equivalent, so investors can independently assess the liquidity risks more meaningfully and more comprehensively by utilizing broader information that is available in the capital markets and its emerging information.”
Such information could include the issuer, reserves, governance and management, she noted, so investors would get a more holistic look at the risks that holding the stablecoin would entail for a given company.
The board decided to require all entities to disclose the significant classes and related amounts of cash equivalents on an annual basis for each period that a statement of financial position is presented.
Entities should apply the amendments related to the classification of certain digital assets as cash equivalents on a modified prospective basis as of the beginning of the annual reporting period in the year of adoption.
FASB decided that entities should apply the amendments related to the disclosure of the significant classes and amounts of cash equivalents on a prospective basis as of the date of the most recent statement of financial position presented in the period of adoption.
The board will allow early adoption in both interim and annual reporting periods in which financial statements have not been issued or made available for issuance.
FASB also decided to permit entities to adopt the amendments to be illustrated in the examples related to the classification of certain digital assets as cash equivalents without the need to perform a preferability assessment as described in Topic 250, Accounting Changes and Error Corrections.
The board directed the staff to draft a proposed accounting standards update to be voted on by written ballot. The proposed update will have a 90-day comment period.
Lawmakers propose tax and IRS bills as filing season ends
Published April 17, 2026

Senators introduced several pieces of tax-related legislation this week, including measures aimed at improving customer service at the Internal Revenue Service, cracking down on tax evasion and curbing the carried interest tax break, in addition to efforts in the House to repeal the Corporate Transparency Act.
Senators Bill Cassidy, R-Louisiana, and Mark Warner, D-Virginia, teamed up this week to introduce a bipartisan bill aimed at improving customer service at the IRS.
The bill would establish a dashboard to inform taxpayers of backlogs and wait times; expand electronic access to information and refunds; expand callback technology and online accounts; and inform individuals facing economic hardship about collection alternatives.
“Taxpayers deserve a simple, stress-free experience when dealing with the IRS,” Cassidy said in a statement Wednesday. “This bill makes the process quicker and easier for taxpayers to get the information they need.”
He also mentioned the bill during a recent hearing.
“I’m happy to meet with the team … and do all I can to make it as good as you want it to be,” said Bisignano.
“My bill would equip the IRS with the legislative mandate to create an online dashboard so that taxpayers can monitor average call wait time and budget time accordingly,” said Cassidy. He noted that the bill would allow a callback for taxpayers that might need to wait longer than five minutes to speak to a representative, and establish a program to identify and support taxpayers struggling to make ends meet by providing information about alternative payment methods, such as installments, partial payments and offers in compromise.
“I know people are kind of desperate and don’t know where to turn for cash, so I think this could really ease anxiety,” he added. “This legislation is bipartisan and is likely to pass this Congress.”
Cassidy and Warner previously introduced the same legislation in an earlier Congress.
“Taxpayers shouldn’t have to jump through hoops to get basic answers from the IRS — and in the last year, those challenges have only gotten worse,” Warner said in a statement. “I am glad to reintroduce this bipartisan legislation on Tax Day to ease some of this frustration by increasing clear communication and making IRS resources more readily available.”
Stop CHEATERS Act
Also on Tax Day, a group of Senate Democrats and an independent who usually caucuses with Democrats teamed up to introduce the Stop Corporations and High Earners from Avoiding Taxes and Enforce the Rules Strictly (Stop CHEATERS) Act.
Senate Finance Committee ranking member Ron Wyden, D-Oregon, joined with Senators Angus King, I-Maine, Elizabeth Warren, D-Massachusetts, Tim Kaine, D-Virginia, and Sheldon Whitehouse, D-Rhode Island. The bill would provide additional funding for the IRS to strengthen and expand tax collection services and systems and crack down on tax cheating by the wealthy.
“Wealthy tax cheats and scofflaw corporations are stealing billions and billions from the American people by refusing to pay what they legally owe, and far too many of them are getting a free pass because Republicans gutted the enforcement capacity of the IRS,” Wyden said in a statement. “A rich tax cheat who shelters mountains of cash among a web of shell companies and passthroughs is likelier to be struck by lightning than face an IRS audit, and Republicans want to keep it that way. This bill is about making sure the IRS has the resources it needs to go after wealthy tax cheats while improving customer service for the vast majority of American taxpayers who follow the law every year.”
The Stop CHEATERS Act would provide the IRS with additional funding for tax enforcement focused on high-income tax evasion, technology operations support, systems modernization, and taxpayer services like free taxpayer assistance.
“As Congress seeks ways to fund much-needed policy priorities and address our growing national debt, there is one common sense solution that should have unanimous bipartisan support: let’s enforce the tax laws already on the books,” said King in a statement. “Our legislation will make sure the IRS has the resources it needs to confront the gap between taxes owed and taxes paid – while ensuring that our tax enforcement professionals are focused on the high-income earners who account for the most tax evasion. This is a serious problem with an easy solution; let’s pass this legislation and make sure every American pays what they owe in taxes.”
Carried interest
Wyden, King and Whitehouse also teamed up on another bill Thursday to close the carried interest tax break for hedge fund managers.
Carried interest is a form of compensation received by a fund manager in exchange for investment management services.
“Our tax code is rigged to favor ultra-wealthy investors who know how to game the system to dodge paying a fair share, and there is no better example of how it works in practice than the carried interest loophole,” Wyden said in a statement. “For several decades now we’ve had a tax system that rewards the accumulation of wealth by the rich while punishing middle-class wage earners, and the effect of that system has been the strangulation of prosperity and opportunity for everybody but the ultra-wealthy. There are a lot of problems to fix to restore fairness and common sense to our tax code, and closing the carried interest loophole is a great place to start.”
Repealing Corporate Transparency Act
The House Financial Services Committee is also planning to mark up a bill next Tuesday that would fully repeal the Corporate Transparency Act, which has already been significantly scaled back.
If enacted, the repeal would eliminate beneficial ownership reporting requirements, removing a transparency measure designed to help law enforcement and national security officials identify who is behind U.S. companies.
“This repeal would turn the United States back into one of the easiest places in the world to set up anonymous shell companies, something Congress worked for years to fix,” said Erica Hanichak, deputy director of the FACT Coalition, in a statement. “These entities are routinely used to facilitate corruption, financial crime, and abuse. Rolling back the CTA doesn’t just weaken transparency, it signals to bad actors around the world that the U.S. is once again open for illicit business.”
IRS struggles against nonfilers with large foreign bank accounts
Published April 15, 2026

The Internal Revenue Service rarely penalizes taxpayers who have high balances in foreign bank accounts and fail to file the proper forms, according to a new report.
The new report, from the Treasury Inspector General for Tax Administration, examined IRS enforcement of the Foreign Account Tax Compliance Act, known as FATCA.
Taxpayers with specified foreign financial assets that meet a certain dollar threshold are also required to report the information to the IRS by filing Form 8938. Failure to file the form can result in penalties of up to $60,000. However, TIGTA’s previous reports have demonstrated that the IRS rarely enforces these penalties.
The IRS created an Offshore Private Banking Campaign initiative to address tax noncompliance related to taxpayers’ failure to file Form 8938 and information reporting associated with offshore banking accounts, but it’s had limited success.
Even though the initiative identified hundreds of individual taxpayers with significant foreign bank account deposits who failed to file Forms 8938, the campaign only resulted in relatively few taxpayer examinations and a small number of nonfiling penalties. The campaign identified 405 taxpayers with significant foreign account balances who appeared to be noncompliant with their FATCA reporting requirements.
The IRS used two ways to address the 405 noncompliant taxpayers: referral for examinations and the issuance of letters to them.
- 164 taxpayers (who had an average unreported foreign account balance of $1.3 billion) were referred for possible examination, but only 12 of the 164 were examined, with five having $39.7 million in additional tax and $80,000 in penalties assessed.
- 241 noncompliant taxpayers (who had an average unreported account balance of $377 million) received a combination of 225 educational letters (requiring no response from the taxpayers) and 16 soft letters (requiring taxpayers to respond). None of the 241 taxpayers were assessed the initial $10,000 FATCA nonfiling penalty.
“While taxpayers can hold offshore banking accounts for a number of legitimate reasons, some taxpayers have also used them to hide income and evade taxes,” said the report.
Significant assets and income are factors considered by the IRS when assessing whether taxpayers intentionally evaded their tax responsibilities, the report noted. Given the large size of the average unreported foreign account balances, these taxpayers probably have higher levels of sophistication and an awareness of their obligation to comply with the law.
TIGTA believes the IRS needs to establish specific performance measures to determine the effectiveness of the FATCA program. “If the IRS does not plan to enforce the FATCA provisions even where obvious noncompliance is identified, it should at least quantify the enforcement impact of its efforts,” said the report. “This will ensure that IRS decision makers have the information they need to determine if the FATCA program is worth the investment and improves taxpayer compliance.”
TIGTA made three recommendations in the report, including revising Campaign 896 processes to include assessing FATCA failure to file penalties; assessing the viability of using Form 1099 data to identify Form 8938 nonfilers; and implementing additional performance measures to give decision makers comprehensive information about the effectiveness of the FATCA program. The IRS disagreed with two of TIGTA’s recommendations and partially agreed with the remaining recommendation. IRS officials didn’t agree to assess penalties in Campaign 896 or with implementing performance measures to assess the effectiveness of the FATCA program.
“From our perspective, TIGTA’s conclusions regarding IRS Campaign 896 are based, in part, on a misguided premise and overgeneralizations, including the treatment of ‘potential noncompliance’ as tantamount to ‘egregious noncompliance’ that warrants a monetary penalty without contemplating the variety of justifications that may exempt a taxpayer from having to file Form 8938,” wrote Mabeline Baldwin, acting commissioner of the IRS’s Large Business and International Division, in response to the report.