As AI works its way into more and more business processes, it has become increasingly important for auditors to understand where, why, when and how organizations use it and what impact it is having not only on the entity itself but its various stakeholders as well.
Speaking at a virtual conference on AI and finance hosted by Financial Executives International, Ryan Hittner, an audit and assurance principal with Big Four firm Deloitte, noted that since the technology is still relatively new it has not yet had time to significantly impact the audit process. However, given AI’s rapid rate of development and adoption throughout the economy, he expects this will change soon, and it won’t be long before auditors are routinely examining AI systems as a natural part of the engagement. As auditors are preparing for this future, he recommended that companies do as well.
“We expect lots of AI tools to inject themselves into multiple areas. We think most companies should be getting ready for this. If you’re using AI and doing it in a way where no one is aware it is being used, or without controls on top of it, I think there is some risk for audits, both internal and external,” he said.
There are several risks that are especially relevant to the audit process. The primary risk, he said, is accuracy. While models are improving in this area, they still have a tendency to make things up, which might be fine for creative writing but is terrible for financial data reporting. Second, AI tends to lack transparency, which is especially problematic for auditors: a model's decision-making process is often opaque, so unlike a human, an AI may not be able to explain why it classified an invoice a particular way, or how it decided on a specific chart of accounts for that invoice. Finally, there is the fact that AI can be unpredictable. Auditors, he said, are used to processes with consistent steps and consistent results that can be reviewed and tested; AI, however, can produce wildly inconsistent outputs even from the same prompt, making it difficult to test.
This does not mean auditors are helpless, but that they need to adjust their approach. Hittner said that an auditor will likely need to consider the impact of AI on the entity and its internal controls over financial reporting; assess the impact of AI on their risk assessment procedures; consider an entity’s use of AI when identifying relevant controls and AI technologies or applications; and assess the impact of AI on their audit response.
In order to best assist auditors evaluating AI, management should be able to answer relevant questions about their AI systems. Hittner said auditors might want to know how the entity assesses the appropriateness of AI for the intended purpose, what governance controls are in place around the use of AI, how the entity measures and monitors AI performance metrics, whether or how often it backtests the AI system, what level of human oversight exists over the model, and what approach the entity takes to overriding outputs when necessary.
“Management should really be able to answer these kinds of questions,” he said, adding that one of the biggest questions an auditor might ask is “how did the organization get comfortable with the result of what is coming out of this box. Is it a low risk area with lots of review levels? … How do you measure the risk and how do you measure whether something is acceptable for use or not, and what is your threshold? If it’s 100% accurate, that’s pretty good, but no backtesting, no understanding of performance would give auditors pause.”
He also said that it's important for organizations to be transparent about their AI use, not just with auditors but with stakeholders as well. He said cases are already starting to appear where people were unaware that generative AI had produced the information they were reviewing.
Morgan Dove, a Deloitte senior manager within the AI & Algorithmic Assurance practice, stressed the importance of human review and oversight of AI systems, as well as documenting how that oversight works for auditors. When should there be human review? Anywhere in the AI lifecycle, according to Dove.
“Even the most powerful AIs can make mistakes, which is why human review is essential for accuracy and reliability. Depending on use case and model, human review may be incorporated in any stage of the AI lifecycle, starting with data processing and feature selection to development and training, validation and testing, to ongoing use,” she said.
But how does one perform this oversight? Dove said data control is a big part of it, as the quality and accuracy of a model hinge on its data stores. Organizations need to verify the quality, completeness, relevance and accuracy of any data they put into an AI, not just the training data but also what is fed into the AI in its day-to-day functions.
She also said that organizations need to archive the inputs and outputs of their AI models; without this documentation it becomes very difficult for auditors to review the system, since tracing inputs to outputs is how they test its consistency and reliability. When archiving data, she said, organizations should include details like the name and title of the dataset and its source. They should also document the prompts fed into the system, with timestamps, so they can be linked with related outputs.
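As a purely hypothetical sketch of what that archiving could look like in practice, the snippet below appends each prompt and output, along with a timestamp and the dataset details Dove mentions, to an append-only log. The file name, record format and function name are illustrative assumptions, not a prescribed approach.

```python
# Illustrative prompt/output archiving; the fields follow the details described
# above (dataset name and source, prompts, timestamps). The JSON Lines format
# and file path are assumptions made for the sake of the example.
import json
from datetime import datetime, timezone

ARCHIVE_PATH = "ai_audit_log.jsonl"  # hypothetical location

def archive_interaction(prompt: str, output: str,
                        dataset_name: str, dataset_source: str) -> None:
    """Append one prompt/output pair to an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_name": dataset_name,
        "dataset_source": dataset_source,
        "prompt": prompt,
        "output": output,
    }
    with open(ARCHIVE_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up invoice data:
archive_interaction("Classify invoice INV-1042 into the chart of accounts",
                    "6100 - Office supplies",
                    "AP invoices Q2", "ERP export")
```

A record like this lets an auditor match each archived prompt, via its timestamp, to the output it produced.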
Dove added that effective change management is also essential, as even small changes in model behavior can create large variations in performance and outputs. It is therefore important to document any changes to the model, along with the rationale for the change, the expected impact and the results of testing, all of which supports a robust audit trail. She said this should be done regardless of whether the organization is using its own proprietary models or a third-party vendor model.
“There are maybe two nuances. One is, as you know, vendor solutions are proprietary so that contributes to the black box lack of transparency, and consequently does not provide users with the appropriate visibility … into the testing and how the given model makes decisions. So organizations may need to arrange for additional oversight in outputs made by the AI system in question. The second point is around the integration and adoption of a chosen solution, they need to figure out how they process data from existing systems, they also need to devote necessary resources to train personnel in using the solution and making sure there’s controls at the input and output levels as well as pertinent data integration points,” she said.
When monitoring an AI, what exactly should people be looking for? Dove said people have already developed many different metrics for AI performance. Some include SemScore, which measures how similar the meaning of the generated text is to the reference text; BLEU (bilingual evaluation understudy), which measures how many words or phrases in the generated text match the reference text; and ROC-AUC (receiver operating characteristic area under the curve), which measures the overall ability of an AI model to distinguish between positive and negative classes.
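As a rough illustration of how two of these metrics are computed, the snippet below uses the widely available scikit-learn and NLTK libraries on made-up data; the labels, scores and sentences are purely hypothetical and not drawn from any audit engagement.

```python
# Hypothetical examples of ROC-AUC and BLEU on toy data.
from sklearn.metrics import roc_auc_score
from nltk.translate.bleu_score import sentence_bleu

# ROC-AUC: how well the model's scores separate positive from negative cases.
y_true = [1, 0, 1, 1, 0]                # ground-truth labels
y_score = [0.9, 0.3, 0.4, 0.8, 0.5]     # model confidence scores
print("ROC-AUC:", roc_auc_score(y_true, y_score))  # ~0.83

# BLEU: n-gram overlap between generated text and a reference text
# (unigrams and bigrams only here, for brevity).
reference = [["net", "income", "rose", "five", "percent"]]
candidate = ["net", "income", "increased", "five", "percent"]
print("BLEU:", sentence_bleu(reference, candidate, weights=(0.5, 0.5)))
```

SemScore, by contrast, compares sentence embeddings to judge similarity of meaning rather than surface wording, so it requires an embedding model rather than simple counting.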
Mark Hughes, an audit and assurance consultant with Deloitte, added that humans can also monitor the Character Error Rate, which measures the exact accuracy of an output down to the character (important for processes like calculating the exact dollar amount of an invoice); the Word Error Rate, which is similar but evaluates at the word level; and the “Levenshtein distance,” defined as the number of single-character edits needed to turn an extracted text into the ground-truth text, which shows how far the output is from the truth.
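These edit-distance metrics are straightforward to compute directly. The minimal sketch below, with made-up invoice text and illustrative function names, shows one common way to calculate the Levenshtein distance and derive character and word error rates from it by dividing by the length of the ground-truth text.

```python
# Minimal, illustrative implementations of the edit-distance metrics above.

def levenshtein(a, b):
    """Number of single-element edits (insert, delete, substitute) turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(extracted: str, truth: str) -> float:
    """Character-level edits divided by the length of the ground-truth text."""
    return levenshtein(extracted, truth) / len(truth)

def word_error_rate(extracted: str, truth: str) -> float:
    """The same idea applied at the word level."""
    return levenshtein(extracted.split(), truth.split()) / len(truth.split())

# Example: transposed digits in an extracted invoice amount.
print(character_error_rate("$1,205.00", "$1,250.00"))                        # 2 edits / 9 chars
print(word_error_rate("total due 1205 dollars", "total due 1250 dollars"))   # 1 edit / 4 words
```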
Hittner said that even if an organization is only just experimenting with AI now, it is critical to understand where AI is used, what tools the finance and accounting function has at its disposal, and how it will impact the financial statement process.
“Are they just drafting emails, or are they drafting actual parts of the financial statements or management estimates or [are] replacing a control? All these are questions we have to think about,” he said.
U.S. homeowners saw their property taxes rise more slowly last year compared with 2023, while the number of counties where the average bill tops $10,000 continued its steady increase, according to a new study.
The average U.S. homeowner paid $4,172 in property taxes last year, according to a report by real estate data firm ATTOM. That’s an increase of 2.7% from 2023 — roughly in step with headline inflation, which was 2.9% in the period, and down from the 4.1% average tax increase in 2023.
The analysis is based on bills for 85.7 million single-family homes nationwide. Breaking them down regionally, the report shows that 19 counties had an average bill that exceeded $10,000 last year, the most on record. That suggests plenty of homeowners in those areas would need to come up with $1,000 a month or more, once insurance is included, even if their mortgage is paid off.
New York was excluded from ATTOM’s 2024 analysis due to data availability limitations. Almost half of the most expensive counties that did feature in the report are in New Jersey — where the statewide average bill topped the $10,000 threshold — including high-population areas such as Bergen, Monmouth and Middlesex.
Other places with high average taxes include the San Jose-Sunnyvale-Santa Clara metro area in California — where the average bill was $12,293 — along with San Mateo and San Francisco.
By state, the highest average tax bills tended to be in the Northeast. Top-ranked New Jersey was followed by Connecticut ($8,402), New Hampshire ($7,723), Massachusetts ($7,720) and California ($7,131).
In its 2023 analysis, which included data for New York, ATTOM’s data showed that counties such as Nassau, Rockland, Suffolk and Westchester all had average property taxes that exceeded $10,000.
Harvard University pushed back against the U.S. government after President Donald Trump said the school should lose its tax-exempt status, warning that such a move would endanger its ability to carry out its mission and threaten higher education in America.
“There is no legal basis to rescind Harvard’s tax-exempt status,” university spokesman Jason Newton said in a statement, adding that such a move would damage Harvard’s medical research efforts and ability to offer financial aid for students. He also cautioned that using this “instrument” would have “grave consequences for the future of higher education in America.”
Trump has escalated his fight with the oldest and richest U.S. university after the school refused to bow to his administration’s demands. The U.S. froze more than $2.2 billion of multiyear grants this week, Trump suggested the Internal Revenue Service should tax the university as a “political entity,” and then his homeland security chief threatened to prevent the school from enrolling foreign students.
The White House has sought to overhaul elite education, arguing that schools need to combat antisemitism after protests against Israel broke out on campuses across the U.S. in the wake of Hamas’s attack on the Jewish state and the resulting war in Gaza — but its efforts have sparked concern that the administration is trying to suppress free speech and imperil academic freedom.
Harvard president Alan Garber said the university was willing to work with the administration to fight antisemitism, but the U.S. demands made clear that wasn’t their intent. Instead Harvard said the government was seeking to determine what the university teaches and who it hires and admits — and that it won’t “surrender its independence or its constitutional rights.”
CNN first reported the IRS was looking at revoking the tax-exempt status, a move that would deal a significant financial blow to the university and send a message to other institutions that they face a similar risk. A Bloomberg News analysis estimated that Harvard’s tax benefits totaled at least $465 million in 2023.
The White House hasn’t yet formally confirmed it’s pursuing the IRS path, which would almost certainly face legal challenges.
Harrison Fields, a White House spokesman, said the IRS was investigating Harvard’s tax status before Trump called for the school to pay taxes, adding that “any forthcoming actions by the IRS will be conducted independently of the president.”
With a $53 billion endowment, Harvard has emerged as the highest-profile university to contest Trump’s attempts to force sweeping changes. Other university leaders, including Princeton’s, have expressed support for Harvard’s stance, but they also face pressure from the White House. The administration has already canceled $400 million in federal money to Columbia University and frozen dozens of research contracts at Princeton, Cornell and Northwestern universities.
While Harvard’s rebuke sparked cheers from Democrats and many Harvard alumni, including former President Barack Obama, the university’s resistance has come at a cost that could spiral. U.S. agencies previously said they are reviewing about $9 billion of grants and contracts to the Cambridge, Massachusetts-based school.
On Wednesday, Homeland Security Secretary Kristi Noem also threatened to prevent Harvard from enrolling international students if it doesn’t comply with sending the government records related to foreign student visa holders. This would threaten Harvard’s ability to attract students from foreign countries, who make up more than 27% of the university’s student body this academic year.
“Harvard bending the knee to antisemitism — driven by its spineless leadership — fuels a cesspool of extremist riots and threatens our national security,” Noem said in a statement.
A Harvard spokesperson said in a statement that the federal scrutiny, including funding cuts and the threat to its tax status, “follows on the heels of our statement that Harvard will not surrender its independence or relinquish its constitutional rights.”
“We will continue to comply with the law and expect the Administration to do the same,” according to the statement.
The Trump administration is reportedly making plans to shut down the Internal Revenue Service’s Direct File free tax prep system next year.
The Associated Press reported Wednesday about the plans, which come amid widespread layoffs at the IRS. Elon Musk had posted on X in February that he had “deleted” 18F, a digital services team that helped build the Direct File system ahead of its initial pilot test last year. The IRS staff who had taken over development of the program were reportedly told last month to end their work on developing the system for next tax season. The U.S. Digital Service that also worked on developing Direct File has been renamed the U.S. DOGE Service after a takeover by Musk’s Department of Government Efficiency.
Senate Finance Committee ranking member Ron Wyden, D-Oregon, blamed the move on lobbying by the tax prep software industry, as well as Treasury Secretary Scott Bessent.
“No one should have to pay huge fees just to file their taxes,” Wyden said in a statement Wednesday. “Direct File was a massive success, saving taxpayers millions in fees, saving them time and cutting out an unnecessary middleman that took money out of Americans’ pockets for no good reason,” Wyden said. “Trump and Secretary Bessent are robbing regular American families to pay back lobbyists that spend millions to make tax filing more expensive and more difficult.”
The Direct File system expanded from pilot tests in 12 states last year to 25 states this year, aided by the nonprofit group Code for America and its FileYourStateTaxes project. A survey of over 1,000 Direct File and FileYourStateTaxes users reportedly found that 98% of respondents said they were either satisfied or very satisfied with the programs, according to the Federal News Network. Last year, former IRS commissioner Danny Werfel announced plans to make the Direct File program permanent, but the program has been repeatedly attacked by Republican lawmakers in Congress and the tax prep industry.