Accounting profs. adapt to AI amid cheating concerns, other challenges

The rise of generative AI in society has also given rise to AI-guided cheating in schools, a problem that has challenged educators’ capacity to adapt. While this issue is primarily associated with the humanities, accounting educators report that they are seeing this in their own classrooms as well. 

Generative AI is known not for its skill with numbers but with words, which makes it an unfortunately ideal cheating tool for humanities courses that use written essays as major components of their programs. However, while accounting is not exactly 19th-century Romantic literature, language and writing are not entirely irrelevant to it. An accounting student may not need to analyze the major themes in Ulysses, but they may be called upon to interpret an accounting standard, tax regulation or audit document, which can be just as dense and confusing. So while there are not as many opportunities for AI-guided cheating as in other fields, students are still finding places where bots can do their work for them, much to the chagrin of their professors.

“This is definitely something I have heard quite a bit about from my colleagues in the humanities and other fields, but it is becoming an issue for accounting/finance classes as well. Students still need to understand the implications of ASUs, disclosures, etc., and if they rely entirely on AI for assignment completion that knowledge will fade away,” said Sean Stein Smith, a professor at Lehman College who teaches intermediate accounting, cost accounting, advanced accounting and forensic accounting. He also leads Lehman’s development of AI business courses, as well as its crypto/blockchain content.

He added that he has seen AI-guided cheating firsthand, especially on short-form essay assignments and on exercises where he requires students to perform financial analyses using specific ratios.

Douglas Carmichael, former chief auditor of the PCAOB and currently a professor at Baruch College, where he teaches auditing, noted that while he does not give the kind of writing assignments a student could use generative AI to cheat on, students have still found ways to use AI to undermine the purpose of an assignment, though the practice became less common once they realized he was on to them.

“I do ask students to submit at least one question before class on something in the text or recorded lectures they found difficult to understand or want additional information about. My experience in prior semesters was that about half of the students submitted a question that seemed suspicious to me, given the language used and the generality of the issue. The lack of specific reference to the topic in the text or recorded lecture was also apparent. These kinds of questions did not earn any credit, and as word got out about that, use of ChatGPT is infrequent,” he said.

But even if students are not out-and-out cheating, some educators have observed an unhealthy reliance on generative AI starting to form. Jack Castonguay, vice president of learning and development at Surgent and a Hofstra University professor who teaches advanced courses in accounting and auditing theory, has seen students struggle to understand and communicate core concepts, at least in part due to their reliance on generative AI.

“We see the reliance significantly when they have to give a presentation or take an in-person exam. It’s clear they have gotten to that point by using AI and can’t apply the logic on their own. Maybe in 3-10 years (given the speed of the improvement in LLMs) they won’t have to do it on their own, but it’s a large problem now for client relationships and having conversations with this in practice. They need to look up everything and use AI as a crutch. Seminar discussions are like pulling teeth oftentimes for me,” he said. 

With this in mind, accounting educators — much like those in other fields — are currently in conversation about how to respond. Richard C. Jones, a Hofstra University accounting professor and former technical staff member at the Financial Accounting Standards Board, said this is a major topic of debate and discussion among college faculty and administrators, noting that it seems to come up in nearly every meeting. It is obvious, he said, that students will use LLMs on assignments, so the challenge for faculty is to assign projects and papers that require students to actually demonstrate their knowledge rather than just hand in a paper or presentation.

“Fortunately, I teach classes that require the application of accounting rather than accounting theory. Therefore, my exams and other assessments are specific to case information provided and application of the accounting rules in providing the journal entries and the related disclosure information. So, my students do not have as much of an opportunity to use LLMs to answer the questions,” he said. 

Additionally, he mentioned that educators are trying to find ways to work AI into their assignments, considering how quickly accounting firms themselves have taken to it. 

Tracey Niemotko — a Marist University professor who teaches accounting and auditing as well as sustainability, taxation and forensic accounting — said that she views AI as more of a tool than a cheating mechanism, pointing out how models can be used to expedite audit procedures or clear away the busy work that eats up the day of many professionals. Consequently, she is a little more sanguine about AI-guided cheating, noting that even if students do use AI in their assignments, the nature of the work makes cheating difficult. 

“Even with electronic testing in the classroom, I do not see cheating as a concern overall. I think the accounting students are perhaps a bit more disciplined than most students, so I don’t think they have the mindset to cheat. Even for writing assignments in my upper-level accounting courses, students may use AI to assist them, but they are required to write ‘in their own words.’ Overall, the majority do their own written work but may use AI as a tool to help them develop an outline or get them started,” she said. 

Abigail Zhang Parker, a University of Texas at San Antonio professor whose research specialty is AI in accounting, has also directly worked AI into her classes. For example, her Accounting Information Systems courses include hands-on workshops where students learn to operate different accounting software solutions. She noted that AI can be a useful tool for finding relevant information and understanding difficult concepts.

Therefore, her overall philosophy is that students can use generative AI to help with assignments but not on exams, since exams are where they are tested on their actual understanding of the topic. So long as AI is used only for assignments and not exams, she does not consider it cheating. She added that it would be impractical to prevent the use of AI entirely anyway, so it is better for educators to find ways to use it too. However, she noted that teaching students the proper use of AI can itself present a challenge.

“Perhaps we need to guide them how to use it properly. This is not easy. One method that came to my mind is to make the parts that demonstrate students’ own skills take a greater portion in the grading components. … For example, there are three exams throughout the semester, and they take 60% of the total grade, while assignments take 10%. For classes where students need to submit a report and make a presentation, maybe the report itself will not take up a high portion of the grade, but the in-person presentation will, as it better reflects students’ true understanding of the subject. And once students know that they will be mainly graded on their own performance, they are more incentivized to think through the problem than simply over-relying on AI,” she said.
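
Her point about weighting is easy to make concrete. The sketch below assumes the 60% exam and 10% assignment weights from her example; the split between a written report and an in-person presentation is purely hypothetical, added here to show how the components students must complete on their own come to dominate the final grade.

```python
# A minimal sketch of the weighting scheme Zhang Parker describes.
# The 60% exam and 10% assignment weights come from her example; the
# report/presentation split is a hypothetical illustration.
weights = {
    "exams": 0.60,         # in-person exams: unaided understanding
    "assignments": 0.10,   # take-home work, where AI assistance is allowed
    "report": 0.10,        # hypothetical: written deliverable
    "presentation": 0.20,  # hypothetical: in-person, weighted above the report
}

scores = {"exams": 82, "assignments": 95, "report": 90, "presentation": 78}

# Weighted average: the components done on one's own dominate the grade.
final_grade = sum(weights[part] * scores[part] for part in weights)
print(f"Final grade: {final_grade:.1f}")  # 83.3
```

Under a split like this, even a perfect AI-assisted score on the take-home components moves the final grade relatively little, which is exactly the incentive she describes.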

Another reason to bring AI into the classroom is that, once students are working as professional accountants, their clients will likely be using AI as well, and they will need to be able to understand and explain what is missing from the AI’s answers. However, Castonguay, from Hofstra, voiced concerns that over-reliance on AI is eroding the critical thinking and reasoning skills needed to properly evaluate those answers in the first place. He does an in-class exercise where students have ChatGPT summarize a FASB Accounting Standards Update and then review its output. Some, he said, don’t even know where to start, as they have evidently been relying on ChatGPT to do that understanding for them.

“My bigger concern is [that] by such a reliance on AI they will lack the critical thinking and synthesizing skills that are still valued even with AI. To use a sports analogy, they are only bowling with gutter guards – what happens when those aren’t there?” he said. 

Smith, from Lehman, said these kinds of things underscore the need to teach responsible AI usage in a way that does not degrade the human skills students will be relying on in the professional world. Unfortunately, he felt this could be an uphill battle.

“I do think that as AI becomes more integrated into the classroom and profession, we are going to have to really double-down on making sure students still have the ability to think critically. Especially in cases where questions or data may change on-the-fly, students are seeming to have a harder time pivoting and adapting to analyze said data on the spot. It’s a growing problem with no cookie-cutter or easy solution, but is definitely something I know is being talked about in pretty much every accounting department/School of Business,” he said. 

IAASB tweaks standards on working with outside experts

The International Auditing and Assurance Standards Board is proposing to tailor some of its standards to align with recent additions to the International Ethics Standards Board for Accountants’ International Code of Ethics for Professional Accountants when it comes to using the work of an external expert.

The proposed narrow-scope amendments involve minor changes to several IAASB standards:

  • ISA 620, Using the Work of an Auditor’s Expert;
  • ISRE 2400 (Revised), Engagements to Review Historical Financial Statements;
  • ISAE 3000 (Revised), Assurance Engagements Other than Audits or Reviews of Historical Financial Information;
  • ISRS 4400 (Revised), Agreed-upon Procedures Engagements.

The IAASB is asking for comments by July 24, 2025, via a digital response template that can be found on the IAASB website.

In December 2023, the IESBA approved an exposure draft of proposed revisions to its Code of Ethics related to using the work of an external expert. The proposals added three new sections to the code, with provisions for professional accountants in public practice, professional accountants in business, and sustainability assurance practitioners. The IESBA approved the final provisions at its December 2024 meeting, establishing an ethical framework to guide accountants and sustainability assurance practitioners in evaluating whether an external expert has the competence, capabilities and objectivity needed for their work to be relied upon, along with provisions on applying the code’s conceptual framework when using the work of an outside expert.

Tariffs will hit low-income Americans harder than richest, report says

President Donald Trump’s tariffs would effectively cause a tax increase for low-income families that is more than three times higher than what wealthier Americans would pay, according to an analysis from the Institute on Taxation and Economic Policy.

The report from the progressive think tank outlined the outcomes for Americans of all backgrounds if the tariffs currently in effect remain in place next year. Those making $28,600 or less would have to spend 6.2% more of their income due to higher prices, while the richest Americans with income of at least $914,900 are expected to spend 1.7% more. Middle-income families making between $55,100 and $94,100 would pay 5% more of their earnings. 
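
To put those shares in dollar terms, a rough back-of-the-envelope calculation at the income levels the report cites (using the midpoint of the middle-income range, an assumption made here purely for illustration) looks like this:

```python
# Back-of-the-envelope conversion of ITEP's percent-of-income figures
# into annual dollar amounts, at the income levels cited in the report.
# The middle-income figure uses the midpoint of the quoted range, an
# assumption for illustration only.
groups = {
    "Low income (<= $28,600)":         (28_600, 0.062),
    "Middle income ($55,100-$94,100)": (74_600, 0.050),  # midpoint of range
    "Richest (>= $914,900)":           (914_900, 0.017),
}

for name, (income, share) in groups.items():
    extra_cost = income * share
    print(f"{name}: about ${extra_cost:,.0f} a year ({share:.1%} of income)")
```

The dollar amounts grow with income, but as a share of the household budget the burden is heaviest at the bottom, which is the regressive pattern the report describes.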

Trump has imposed the steepest U.S. duties in more than a century, including a 145% tariff on many products from China, a 25% rate on most imports from Canada and Mexico, duties on some sectors such as steel and aluminum and a baseline 10% tariff on the rest of the country’s trading partners. He suspended higher, customized tariffs on most countries for 90 days.

Economists have warned that costs from tariff increases would ultimately be passed on to U.S. consumers. And while prices will rise for everyone, lower-income families are expected to lose a larger portion of their budgets because they tend to spend more of their earnings on goods, including food and other necessities, compared to wealthier individuals.

Food prices could rise by 2.6% in the short run due to tariffs, according to an estimate from the Yale Budget Lab. Among all affected goods, consumers are expected to face the steepest price hikes for clothing, at 64%, the same analysis showed.

The Yale Budget Lab projected that the tariffs would result in a loss of $4,700 a year on average for American households.

At Schellman, AI reshapes a firm’s staffing needs

Artificial intelligence is just getting started in the accounting world, but it is already helping firms like technology specialist Schellman do more things with fewer people, allowing the firm to scale back hiring and reduce headcount in certain areas through natural attrition. 

Schellman CEO Avani Desai said there have definitely been some shifts in headcount at the Top 100 Firm, though she stressed nothing dramatic: the changes mostly reflect natural attrition combined with more selective hiring. She said the firm decided early on against a formal reduction in force, as that would just indicate it didn’t hire properly the first time.

“It hasn’t been about reducing roles but evolving how we do work, so there wasn’t one specific date where we ‘started’ the reduction. It’s been more case by case. We’ve held back on refilling certain roles when we saw opportunities to streamline, especially with the use of new technologies like AI,” she said. 

One area where the firm has found such opportunities has been in the testing of certain cybersecurity controls, particularly within the SOC framework. The firm examined all the controls it tests on the service side and asked which ones require human judgment or deep expertise. The answer was a lot of them. But for the ones that don’t, AI algorithms have been able to significantly lighten the load. 

“[If] we don’t refill a role, it’s because the need actually has changed, or the process has improved so significantly [that] the workload is lighter or shared across the smarter system. So that’s what’s happening,” said Desai. 

Outside of client services like SOC control testing and reporting, the firm has found efficiencies in administrative functions as well as certain internal operational processes. On the latter point, Desai noted that Schellman’s engineers, including the chief information officer, have been using AI to help develop code, which means they’re not relying as much on outside expertise on the internal service delivery side of things. There are still people in the development process, but their roles are changing: They’re writing less code, and doing more reviewing of code before it gets pushed into production, saving time and creating efficiencies. 

“The best way for me to say this is, to us, this has been intentional. We paused hiring in a few areas where we saw overlaps, where technology was really working,” said Desai.

However, even in an age awash with AI, Schellman acknowledges there are certain jobs that need a human, at least for now. For example, the firm does assessments for the FedRAMP program, which is needed for cloud service providers to contract with certain government agencies. These assessments, even in the most stable of times, can be long and complex engagements, to say nothing of the less predictable nature of the current government. As such, it does not make as much sense to reduce human staff in this area. 

“The way it is right now for us to do FedRAMP engagements, it’s a very manual process. There’s a lot of back and forth between us and a third party, the government, and we don’t see a lot of overall application or technology help… We’re in the federal space and you can imagine, [with] what’s going on right now, there’s a big changing market condition for clients and their pricing pressure,” said Desai. 

As Schellman reduces staff levels in some places, it is increasing them in others. Desai said the firm is actively hiring in certain areas. In particular, it’s adding staff in technical cybersecurity (e.g., penetration testers), the aforementioned FedRAMP engagements, AI assessment (in line with recently becoming an ISO 42001 certification body) and in some client-facing roles like marketing and sales. 

“So, to me, this isn’t about doing more with less … It’s about doing more of the right things with the right people,” said Desai. 

While these moves have resulted in savings, she said that was never really the point, so whatever the firm has saved from staffing efficiencies it has reinvested in its tech stack to build its service line further. When asked for an example, she said the firm would like to focus more on penetration testing by building a SaaS tool for it. While Schellman has a proof of concept developed, she noted it would take a lot of money and time to deploy a full solution — both of which the firm now has more of because of its efficiency moves. 

“What is the ‘why’ behind these decisions? The ‘why’ for us isn’t what I think you traditionally see, which is ‘We need to get profitability high. We need to have less people do more things.’ That’s not what it is like,” said Desai. “I want to be able to focus on quality. And the only way I think I can focus on quality is if my people are not focusing on things that don’t matter … I feel like I’m in a much better place because the smart people that I’ve hired are working on the riskiest and most complicated things.”
