External auditors have long been tasked with ensuring financial integrity, detecting fraud and providing an independent opinion on a company’s financial statements.
Now, with the rise of continuous auditing, this role is evolving. Should auditors be involved in real-time financial monitoring? Will continuous auditing enhance audit quality or introduce new risks? And will AI and automation make continuous audits more efficient, or will they drive up complexity and costs?
These questions go beyond technology — they redefine the audit function, independence and financial reporting expectations. The potential is huge, but so are the challenges that come with it.
What is continuous auditing?
Think of a traditional audit like an annual medical check-up — you go in once a year, the doctor reviews your health and gives you an assessment based on that visit. Continuous auditing? That’s more like wearing a smartwatch that tracks your health 24/7, constantly looking for issues as they happen. It uses AI, automation and analytics to monitor transactions in real time. Instead of waiting until the end of the reporting cycle, risks, anomalies and possible control issues are flagged as they happen.
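To make the smartwatch analogy concrete, here is a minimal Python sketch of the kind of rule a real-time monitor might apply. The function name, the transaction stream and the z-score threshold are illustrative assumptions, not any vendor’s actual logic; production platforms use far richer models.

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(transactions, window=500, z_threshold=3.0, warmup=30):
    """Scan a stream of (txn_id, amount) pairs and yield outliers as they arrive.

    A toy z-score rule: flag any amount more than `z_threshold` standard
    deviations from the rolling baseline of the last `window` transactions.
    """
    history = deque(maxlen=window)
    for txn_id, amount in transactions:
        if len(history) >= warmup:  # wait for enough history to form a baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(amount - mu) / sigma > z_threshold:
                yield txn_id, amount  # surfaced as it happens, not at year-end
        history.append(amount)
```

Even this toy version captures the core shift: the anomaly is surfaced the moment the transaction arrives, rather than months later during year-end fieldwork.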
At first glance, continuous auditing seems like a clear win — faster fraud detection, stronger financial oversight and fewer year-end surprises. But it also raises a critical question: If auditors are reviewing financial data year-round, are they expected to report findings externally in real time? And if they are not, could that expose them to greater liability?
The shift from traditional audits to continuous audits
Auditors traditionally provide independent opinions after management closes the books, but continuous auditing challenges this boundary. When auditors monitor financials year-round, the distinction between independent oversight and management’s control function can become blurred — at least in perception.
Flagging issues at many touchpoints during the year may also raise concerns about auditors’ accountability for financial outcomes before the final opinion is issued.
Independence will always be a core pillar of auditing, both in fact and perception. As auditors engage in real-time monitoring, the challenge becomes ensuring they remain objective third parties rather than part of management’s oversight process. Regulators must then establish clear safeguards to uphold auditor independence while leveraging continuous auditing’s benefits.
AI and automation
This shift isn’t just happening because companies want it — it’s happening because AI and automation have made it possible. And let’s be honest: this technology is a game-changer. AI is transforming auditing by enabling real-time anomaly detection, predictive risk assessment and full population testing with greater accuracy than traditional sampling.
For audit firms, this means a fundamental shift in how audits are conducted. AI isn’t just making audits faster — it’s enabling full population analysis to catch risks that sampling might miss, automating repetitive tasks to give auditors more time for complex judgment calls, and strengthening fraud detection with continuous monitoring that builds investor confidence. How ready are firms to embrace this transformation?
What about the cost of continuous auditing?
Cost is another part of this debate around continuous auditing. Continuous auditing smooths workloads year-round, optimizing firm resources and specialists. AI handles routine transactions, freeing auditors to focus on complex, high-risk, high-value areas requiring expert judgment. It also gives management visibility into how the audit fee builds up, distinguishing between tasks that can be automated with AI and the specialized work that demands deeper professional judgment.
While continuous auditing offers those advantages, one could argue it may lead to higher audit fees if auditors are “on the ground” 24/7, along with the cost of upfront investment in AI tools and the added complexity of complying with new regulations. The final answer depends on how firms adopt it, but in the long run, efficiency gains and stronger risk detection (i.e., preventing costly year-end financial restatements) may strongly justify the investment.
Will auditors fully embrace continuous auditing?
The demand for faster financial assurance is already here. Shareholders want more transparency and faster reporting, regulators want better oversight, and companies see AI-driven monitoring as an advantage. For continuous auditing to take hold, regulatory standards will need to evolve to address real-time assurance and how it aligns with auditor independence. Audit firms will need to balance technology investment with governance structures that ensure objectivity, transparency and liability mitigation.
As companies (and internal audit practitioners) adopt rolling and periodic assurance models with AI-driven monitoring, the shift to a fully continuous audit model for external audit is not just a possibility — it’s within reach. But getting there requires more than just technology; it demands clear regulatory frameworks, strategic investment, and strong legal protection and independence safeguards to maintain trust in the audit process.
AI and automation will rewrite the playbook, shifting audit expectations from a single annual opinion to rolling, real-time insights. With historical audits losing their shine, more stakeholders are asking for a better solution.
Continuous auditing is no longer theoretical — it’s happening now. The challenge is ensuring it enhances audit quality while maintaining independence. With AI redefining expectations, are audit firms, regulators and businesses ready to embrace this shift? The conversation is just beginning — where do you stand?
As AI works its way into more and more business processes, it has become increasingly important for auditors to understand where, why, when and how organizations use it and what impact it is having, not only on the entity itself but also on its various stakeholders.
Speaking at a virtual conference on AI and finance hosted by Financial Executives International, Ryan Hittner, an audit and assurance principal with Big Four firm Deloitte, noted that since the technology is still relatively new, it has not yet had time to significantly impact the audit process. However, given AI’s rapid rate of development and adoption throughout the economy, he expects this will change soon, and it won’t be long before auditors are routinely examining AI systems as a natural part of the engagement. As auditors prepare for this future, he recommended that companies do the same.
“We expect lots of AI tools to inject themselves into multiple areas. We think most companies should be getting ready for this. If you’re using AI and doing it in a way where no one is aware it is being used, or without controls on top of it, I think there is some risk for audits, both internal and external,” he said.
There are several risks that are especially relevant to the audit process. The primary risk, he said, is accuracy. While models are improving in this area, they still have a tendency to make things up, which might be fine for creative writing but is terrible for financial data reporting. Second, AI tends to lack transparency, which is especially problematic for auditors: a model’s decision-making process is often opaque, so unlike a human, an AI may not be able to explain why it classified an invoice a particular way, or how it decided on a specific chart of accounts for that invoice. Finally, there is the fact that AI can be unpredictable. Auditors, he said, are used to processes with consistent steps and consistent results that can be reviewed and tested; AI, however, can produce wildly inconsistent outputs even from the same prompt, making it difficult to test.
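That last risk is straightforward to quantify. The sketch below, where `model_call` is a hypothetical stand-in for whatever AI system is under review, runs an identical prompt repeatedly and scores how often the outputs agree; a deterministic process scores 1.0, while a nondeterministic model can score much lower.

```python
from collections import Counter

def consistency_score(model_call, prompt, runs=20):
    """Feed an identical prompt to the model `runs` times and return the
    share of outputs agreeing with the most common answer (1.0 = fully
    repeatable; lower = the unpredictability auditors struggle to test)."""
    outputs = Counter(model_call(prompt) for _ in range(runs))
    _, modal_count = outputs.most_common(1)[0]
    return modal_count / runs
```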
This does not mean auditors are helpless, but that they need to adjust their approach. Hittner said that an auditor will likely need to consider the impact of AI on the entity and its internal controls over financial reporting; assess the impact of AI on their risk assessment procedures; consider an entity’s use of AI when identifying relevant controls and AI technologies or applications; and assess the impact of AI on their audit response.
In order to best assist auditors evaluating AI, management should be able to answer relevant questions about their AI systems. Hittner said auditors might want to know how the entity assesses the appropriateness of AI for the intended purpose, what governance controls are in place around the use of AI, how the entity measures and monitors AI performance metrics, whether and how often they backtest the AI system, the level of human oversight over the model, and what approach the entity takes for overriding outputs when necessary.
“Management should really be able to answer these kinds of questions,” he said, adding that one of the biggest questions an auditor might ask is “how did the organization get comfortable with the result of what is coming out of this box. Is it a low risk area with lots of review levels? … How do you measure the risk and how do you measure whether something is acceptable for use or not, and what is your threshold? If it’s 100% accurate, that’s pretty good, but no backtesting, no understanding of performance would give auditors pause.”
He also said that it’s important that organizations be transparent about their AI use, not just with auditors but with stakeholders as well. He said cases are already starting to appear where people were unaware that generative AI was producing the information they were reviewing.
Morgan Dove, a Deloitte senior manager within the AI & Algorithmic Assurance practice, stressed the importance of human review and oversight of AI systems, as well as documenting how that oversight works for auditors. When should there be human review? Anywhere in the AI lifecycle, according to Dove.
“Even the most powerful AIs can make mistakes, which is why human review is essential for accuracy and reliability. Depending on use case and model, human review may be incorporated in any stage of the AI lifecycle, starting with data processing and feature selection to development and training, validation and testing, to ongoing use,” she said.
But how does one perform this oversight? Dove said data control is a big part of it, as the quality and accuracy of a model hinge on its data stores. Organizations need to verify the quality, completeness, relevance and accuracy of any data they put into an AI, not just the training data but also what is fed into the AI in its day-to-day functions.
She also said that organizations need to archive the inputs and outputs of their AI models; without this documentation it becomes very difficult for auditors to review the system, since the archive is what allows them to trace inputs to outputs to test consistency and reliability. When archiving data, she said, organizations should include details like the name and title of the dataset and its source. They should also document the prompts fed into the system, with timestamps, so they can be linked with related outputs.
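As a rough illustration of what such an archive could look like, the Python sketch below appends each interaction to a JSON-lines log. The field names are assumptions of mine, not a prescribed schema, but they capture the details Dove mentions: dataset name and source, the timestamped prompt, and the output it can later be linked to.

```python
import json
from datetime import datetime, timezone

def archive_interaction(log_path, dataset_name, dataset_source, prompt, output):
    """Append one AI interaction to an append-only JSON-lines audit archive,
    timestamping the prompt so it can later be traced to its output."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_name": dataset_name,      # name/title of the dataset
        "dataset_source": dataset_source,  # where the data came from
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```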
Dove added that effective change management is also essential, as even small changes in model behavior can create large variations in performance and outputs. It is therefore important to document any changes to the model, along with the rationale for the change, the expected impact and the results of testing, all of which supports a robust audit trail. She said this should be done regardless of whether the organization is using its own proprietary models or a third-party vendor model.
“There are maybe two nuances. One is, as you know, vendor solutions are proprietary so that contributes to the black box lack of transparency, and consequently does not provide users with the appropriate visibility … into the testing and how the given model makes decisions. So organizations may need to arrange for additional oversight in outputs made by the AI system in question. The second point is around the integration and adoption of a chosen solution, they need to figure out how they process data from existing systems, they also need to devote necessary resources to train personnel in using the solution and making sure there’s controls at the input and output levels as well as pertinent data integration points,” she said.
When monitoring an AI, what exactly should people be looking for? Dove said people have already developed many different metrics for AI performance. Some include what’s called a SemScore, which measures how similar the meaning of the generated text is to the reference text; BLEU (bilingual evaluation understudy), which measures how many words or phrases in the generated text match the reference text; and ROC-AUC (receiver operating characteristic area under the curve), which measures the overall ability of an AI model to distinguish between positive and negative classes.
Mark Hughes, an audit and assurance consultant with Deloitte, added that humans can also monitor the Character Error Rate, which measures the exact accuracy of an output down to the character (important for processes like calculating the exact dollar amount of an invoice); the Word Error Rate, which is similar but does the evaluation at the word level; and the Levenshtein distance, defined as the number of single-character edits needed to transform the extracted text into the ground-truth text, a direct measure of how far the output is from the reference.
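These three metrics are simple enough to sketch in a few lines of Python. The implementations below are minimal versions of my own, not Deloitte’s tooling, but they match the definitions Hughes gives: an edit distance, then the same distance normalized per reference character or per reference word.

```python
def levenshtein(a, b):
    """Minimum number of single-element insertions, deletions or
    substitutions needed to turn sequence `a` into sequence `b`."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete x
                            curr[j - 1] + 1,            # insert y
                            prev[j - 1] + (x != y)))    # substitute x -> y
        prev = curr
    return prev[-1]

def character_error_rate(output, reference):
    """Edit distance per reference character: an invoice amount $1,024.00
    extracted as $1,024.80 scores 1/9, one wrong character out of nine."""
    return levenshtein(output, reference) / max(len(reference), 1)

def word_error_rate(output, reference):
    """Same idea at the word level: edit distance over word sequences."""
    ref_words = reference.split()
    return levenshtein(output.split(), ref_words) / max(len(ref_words), 1)
```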
Hittner said that even if an organization is only just experimenting with AI now, it is critical to understand where AI is used, what tools the finance and accounting function has at its disposal, and how it will impact the financial statement process.
“Are they just drafting emails, or are they drafting actual parts of the financial statements or management estimates or [are] replacing a control? All these are questions we have to think about,” he said.
Big Four firm EY announced a collection of new AI solutions made to enhance the work of its assurance professionals, the latest development in its $1 billion investment in AI.
“Through its $1 billion technology investment, EY is bringing AI right to the heart of the audit, accelerating its transformation,” said Marc Jeschonneck, EY’s global assurance digital leader, in a statement. “This elevates the attractiveness of EY for talent and equips EY professionals with technology capabilities to shape the future with confidence.”
Among the new AI-powered solutions is EYQ Assurance Knowledge, which uses generative AI to help with detailed searches and summarization of accounting and auditing content. By integrating EYQ Assurance Knowledge directly into the workflow of the EY Assurance technology platform, the AI can tailor responses based on the profile and context of the audit engagements for companies served, including geography, industry and complexity.
Another product is a new release of EY Intelligent Checklists with AI, which recommends responses to questions in disclosure checklists to support audit professionals in addressing accounting standards and legal requirements. EY is also releasing enhancements to EY Financial Statement Tie Out, which supports audit professionals with accuracy and integrity checks. The enhancements help manage changes between different iterations of company financial statements, alongside existing technology features.
“This launch of new AI capabilities is the first of a series of generative and agentic AI technologies which build on the strong foundations established by integrated and transformative technology,” said Paul Goodhew, EY’s global assurance innovation and emerging technology leader. “This supports the EY organization’s ambition to become the world’s most trusted AI-powered assurance provider.”
This is not the only area where EY is adding AI. For example, the firm recently announced new artificial intelligence capabilities in its EY Blockchain Analyzer: the Smart Contract and Token Review tool, designed to enhance vulnerability detection in smart contracts through greater code coverage and streamline the contract simulation process. A smart contract is a type of self-executing agreement that is usually implemented through a blockchain; the concept undergirds blockchain platforms like Ethereum. The advanced AI feature enables users to automate and simulate the entire contract review process using natural language prompts and the tool’s testing engine. Trained on a library of existing tests and simulations, the feature supports the reviewer’s ability to detect vulnerabilities with higher test coverage while using the same level of resources. The SC&TR tool’s new AI capabilities eliminate several manual steps for smart contract deployment, such as sandbox simulations and test creation.
The American Institute of CPAs asked officials in the Treasury Department and the Internal Revenue Service for changes in the final regulations governing generation-skipping transfer tax exemptions.
Last April, the Treasury and the IRS issued final regulations providing guidance describing the circumstances and procedures under which an extension of time will be granted to make certain allocations and elections related to the GST tax. The relief provisions are supposed to offer a kind of safety net for taxpayers so their estate planning and tax strategies can be effectively implemented even if some errors are made initially.
The AICPA sent a comment letter last week in response to the recent GST final regulations in which Treasury and the IRS said they’re prepared to issue further revenue procedures or other guidance when they identify situations for which simplified or automatic relief under Section 2642(g)(1) of the Tax Code would be appropriate and administrable.
The AICPA suggested the Treasury and IRS should extend the relief provided by Rev. Proc. 2004-46 to tax years 2001 and later. The Treasury and the IRS should also provide a similar revenue procedure to Rev. Proc. 2004-46 for situations in which the donor’s GST exemption has been automatically allocated to a prior transfer, but the donor either did not intend for GST exemption to be allocated or the donor was not aware that GST exemption was allocated to the transfer, the AICPA suggested.
The administrative burden on both taxpayers and the IRS to process private letter rulings for small amounts is disproportionate to the amounts involved, the AICPA pointed out. Extending the relief provided by Rev. Proc. 2004-46 to tax years 2001 and later would streamline the process for taxpayers seeking to allocate their GST exemption to post-2000 transfers, thus reducing the administrative burdens and costs associated with PLRs for both taxpayers and the government.
In addition, extending relief to tax years 2001 and later would help taxpayers who didn’t file gift tax returns for certain gifts to trusts, the AICPA recommended. That would enable taxpayers to make more informed decisions and fix their past mistakes so their GST exemptions can better match their tax planning goals.
“While the final regulations offer a safety net for missed GST elections, the high cost and complexity make the Private Letter Ruling approach impractical for many taxpayers,” said Eileen Sherr, the AICPA’s director of tax policy and advocacy, in a statement Wednesday. “The AICPA’s suggestions will help taxpayers effectively utilize their GST exemptions and reduce unnecessary administrative burdens on taxpayers and the IRS.”