Ramp, a spend management solutions provider, released a new detection feature within 24 hours in direct response to recent advances in AI image generation that make it easy to create extremely convincing fake receipts for use in financial fraud.
Dave Wieseneck, an “expert in residence” at Ramp who administers the company’s own internal instance of the platform, noted that faking receipts is not a new practice. What has changed, with OpenAI’s recent image generation update, is that it is now much easier, turning what was once a painstaking effort into something done casually in minutes.
“So while it’s always been possible to create fake receipts, AI has made it super duper easy, especially OpenAI with their latest model. So I think it’s just super easy now and anybody can do it, as opposed to experts that are in the know,” he said in an interview.
[Image: an AI-generated receipt]
Rather than trying to assess the image itself, the software examines the file’s metadata for markers particular to generative AI systems. When those markers are present, it flags the receipt as a probable fake.
“When we see that these markers are present, we have really high confidence of high accuracy to identify them as potentially AI generated receipts,” said Wieseneck. “I was the first person to test it out as the person that owns our internal instance of Ramp and dogfoods the heck out of our product.”
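Wieseneck didn’t specify which markers the software looks for, though OpenAI’s image tools are known to attach C2PA provenance metadata to their output. Purely as an illustrative sketch, with an assumed and deliberately incomplete marker list rather than anything Ramp has published, a metadata check along these lines might look like:

```python
# Minimal sketch of a metadata-based check; not Ramp's implementation.
# The marker list is an assumption for illustration only.
from PIL import Image  # pip install Pillow

AI_MARKERS = ("c2pa", "openai", "dall-e", "gpt-4o")

def looks_ai_generated(path: str) -> bool:
    """Flag a file as a probable AI fake if known generator markers appear."""
    # Scan the raw bytes, which covers embedded XMP/C2PA segments.
    with open(path, "rb") as f:
        raw = f.read().lower()
    if any(m.encode() in raw for m in AI_MARKERS):
        return True
    # Also check the structured metadata Pillow parses (e.g., PNG text chunks).
    blob = " ".join(str(v) for v in Image.open(path).info.values()).lower()
    return any(m in blob for m in AI_MARKERS)
```

The obvious limitation, and part of why Ramp frames this as only a first line of defense, is that metadata is trivial to strip: screenshotting or re-saving an image can discard the markers entirely.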
While the speed at which it produced this feature may be remarkable, he said it is part of the company culture: the team, especially small pods within it, will observe a problem and stop what they’re doing to focus on that specific need. They get a group together in a Slack channel, work through the problem, code it late at night and push it out in the morning.
Wieseneck conceded it is not a total solution but rather a first line of defense meant to deter the casual fraudster. He compared it to locking your door before going out. If the front door is unlocked, a person can simply stroll in and steal everything, but will likely give up if it is locked. A professional criminal with plenty of breaking-and-entering experience, however, is unlikely to be deterred by a lock alone; that takes a lock plus an alarm system plus an actual security guard.
“But that doesn’t mean that you don’t lock your door and you don’t add pieces of defense to make it harder for people to either rob your house or, in this case, defraud your company,” he said.
This isn’t to say there are no plans to bolster the solution further. After all, the feature is only days old. He said the company is already looking into things like pixel analysis and textual analysis of the document itself to further enhance its AI detection capabilities, though he stressed that Ramp wants to be very confident such checks work before pushing them out to customers.
“We’re focused on giving finance teams confidence that legitimate receipts won’t be falsely flagged. So we want to tread carefully. We have lots of ideas. We’re going to work through them and kind of solve them in the same process we’ve always done here at Ramp,” he said.
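Neither of those future checks has shipped, and Ramp hasn’t described how they would work. As a speculative sketch of one form textual analysis could take, a receipt whose extracted line items don’t add up to its printed total is a natural candidate for flagging:

```python
# Speculative sketch of a textual-analysis check, not Ramp's method:
# verify that extracted line items plus tax match the printed total.
from decimal import Decimal

def totals_consistent(line_items: list[Decimal], tax: Decimal,
                      printed_total: Decimal) -> bool:
    """Return True if items + tax match the printed total within a cent."""
    return abs(sum(line_items) + tax - printed_total) <= Decimal("0.01")

items = [Decimal("12.99"), Decimal("4.50")]
print(totals_consistent(items, Decimal("1.40"), Decimal("18.89")))  # True
print(totals_consistent(items, Decimal("1.40"), Decimal("21.15")))  # False: flag it
```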
This is likely only the beginning of AI image generators being used to fake documentation. For instance, it has recently been found that AI chatbots are also very good at forging passports.
AI fraud ascendant
This speaks to an overall trend of AI being used in financial crime, one highlighted in a recent report from financial and risk advisory solutions provider Kroll, which surveyed about 600 CEOs, chief compliance officers, general counsel, chief risk officers and other financial crime compliance professionals. It found that experts in the area are growing alarmed at the rising use of AI by cybercriminals and other bad actors, and that few are confident their own programs are ready to meet the challenge.
The poll found that 61% of respondents say the use of AI by cybercriminals is a leading catalyst for risk exposure, such as through the generation of deepfakes and, likely, AI-generated financial documents. While 57% think AI will help the fight against financial crime, 49% think it will hinder it (Kroll said both are likely right).
“The rapid-fire adoption of AI tools can be a blessing and a curse when it comes to financial crime, providing new and more efficient ways to combat it while also creating new techniques to exploit the broadening attack surface — be it via AI-powered phishing attacks, deepfakes, or real-time mimicry of expected security configurations,” said the report.
Yet many professionals do not feel their current programs are up to the task. The rise in AI-guided fraud feeds a broader expectation of growing exposure, with 71% of respondents projecting an increase in financial crime risks in 2025. Meanwhile, only 23% rate their compliance programs as “very effective,” with a lack of technology and investment named as prime reasons. Many also lack confidence in the governance infrastructure overseeing financial crime, with just 29% describing it as “robust.”
They’re also not entirely convinced that more AI is the solution. The poll found that confidence in AI technology has dropped dramatically over the past two years: the share who say AI tools have had a positive impact on financial crime compliance fell from 39% in 2023 to just 20% today. Despite this, investment in AI remains heavy. The poll found 25% already say AI is an established part of their financial crime compliance program, and 30% say they are in the early stages of adoption. Meanwhile, in the year ahead, 49% expect their organization to invest in AI solutions to tackle financial crime, and 47% say the same about their cybersecurity budgets.
To help combat AI-enabled financial crime, Kroll recommended companies form cross-functional teams that go beyond IT and cybersecurity to involve those in AML, compliance, legal, product and senior management. It also called for focused, hands-on training with new AI tools, updated and repeated as the organization implements new AI capabilities and as the regulatory and risk landscape changes. Finally, Kroll recommended companies maintain a “back to the basics” approach, focusing on fundamental human intervention and confirmation procedures regardless of how convincing or time-sensitive circumstances appear.
The International Auditing and Assurance Standards Board is proposing to tailor some of its standards to align with recent additions to the International Ethics Standards Board for Accountants’ International Code of Ethics for Professional Accountants when it comes to using the work of an external expert.
The IAASB is asking for comments by July 24, 2025, via a digital response template that can be found on the IAASB website.
In December 2023, the IESBA approved an exposure draft of proposed revisions to its Code of Ethics related to using the work of an external expert. The proposals would add three new sections to the code, with provisions for professional accountants in public practice, professional accountants in business, and sustainability assurance practitioners. The IESBA approved the provisions at its December 2024 meeting, establishing an ethical framework to guide accountants and sustainability assurance practitioners in evaluating whether an external expert has the necessary competence, capabilities and objectivity for their work to be relied upon, as well as provisions on applying the code’s conceptual framework when using the work of an outside expert.
President Donald Trump’s tariffs would effectively cause a tax increase for low-income families that is more than three times higher than what wealthier Americans would pay, according to an analysis from the Institute on Taxation and Economic Policy.
The report from the progressive think tank outlined the outcomes for Americans of all backgrounds if the tariffs currently in effect remain in place next year. Those making $28,600 or less would have to spend 6.2% more of their income due to higher prices, while the richest Americans with income of at least $914,900 are expected to spend 1.7% more. Middle-income families making between $55,100 and $94,100 would pay 5% more of their earnings.
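A quick back-of-the-envelope pass over those figures makes the gap concrete (the income levels use the report’s thresholds; the middle-income dollar figure assumes a midpoint of the reported range):

```python
# Rough check of the ITEP figures cited above; the middle-income
# dollar amount uses an assumed midpoint of the reported range.
tiers = {
    "lowest (<= $28,600)":      (28_600, 0.062),
    "middle ($55,100-$94,100)": (74_600, 0.050),
    "richest (>= $914,900)":    (914_900, 0.017),
}
for name, (income, share) in tiers.items():
    print(f"{name}: {share:.1%} of income, roughly ${income * share:,.0f} a year")

# 6.2% / 1.7% is about 3.6, hence "more than three times higher" as a share
# of income, even though the richest pay more in absolute dollars.
print(f"ratio of income shares: {0.062 / 0.017:.1f}x")
```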
Trump has imposed the steepest U.S. duties in more than a century, including a 145% tariff on many products from China, a 25% rate on most imports from Canada and Mexico, duties on some sectors such as steel and aluminum and a baseline 10% tariff on the rest of the country’s trading partners. He suspended higher, customized tariffs on most countries for 90 days.
Economists have warned that costs from tariff increases would ultimately be passed on to U.S. consumers. And while prices will rise for everyone, lower-income families are expected to lose a larger portion of their budgets because they tend to spend more of their earnings on goods, including food and other necessities, compared to wealthier individuals.
Food prices could rise by 2.6% in the short run due to tariffs, according to an estimate from the Yale Budget Lab. Among all goods impacted, consumers are expected to face the steepest price hikes for clothing at 64%, the report showed.
The Yale Budget Lab projected that the tariffs would result in a loss of $4,700 a year on average for American households.
Artificial intelligence is just getting started in the accounting world, but it is already helping firms like technology specialist Schellman do more with fewer people, allowing the firm to scale back hiring and reduce headcount in certain areas through natural attrition.
Schellman CEO Avani Desai said there have definitely been some shifts in headcount at the Top 100 Firm, though she stressed nothing dramatic: the changes mostly reflect natural attrition combined with more selective hiring. The firm has made an internal decision not to carry out a reduction in force, she said, reasoning that layoffs would just indicate it didn’t hire properly the first time.
“It hasn’t been about reducing roles but evolving how we do work, so there wasn’t one specific date where we ‘started’ the reduction. It’s been more case by case. We’ve held back on refilling certain roles when we saw opportunities to streamline, especially with the use of new technologies like AI,” she said.
One area where the firm has found such opportunities has been in the testing of certain cybersecurity controls, particularly within the SOC framework. The firm examined all the controls it tests on the service side and asked which ones require human judgment or deep expertise. The answer was a lot of them. But for the ones that don’t, AI algorithms have been able to significantly lighten the load.
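Desai didn’t specify which tests were automated. As a hypothetical illustration of the kind of judgment-free check that lends itself to automation, consider a control stating that every active account must have multifactor authentication enabled; given a client’s access export, testing it is purely mechanical:

```python
# Hypothetical illustration only; Schellman has not published its tooling.
# A judgment-free control test: "all active accounts must have MFA enabled,"
# checked against an access-review export with assumed column names.
import csv

def mfa_exceptions(access_export_csv: str) -> list[str]:
    """Return the user IDs of active accounts that lack MFA."""
    with open(access_export_csv, newline="") as f:
        return [row["user_id"] for row in csv.DictReader(f)
                if row["status"] == "active" and row["mfa_enabled"].lower() != "true"]

# A human tester reviews only the exceptions instead of sampling every row;
# controls that require judgment (e.g., assessing policy adequacy) stay manual.
```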
“[If] we don’t refill a role, it’s because the need actually has changed, or the process has improved so significantly [that] the workload is lighter or shared across the smarter system. So that’s what’s happening,” said Desai.
Outside of client services like SOC control testing and reporting, the firm has found efficiencies in administrative functions, as well as in certain internal operational processes. On the latter point, Desai noted that Schellman’s engineers, including the chief information officer, have been using AI to help develop code, which means they rely less on outside expertise for internal service delivery. There are still people in the development process, but their roles are changing: they’re writing less code and doing more review of code before it gets pushed into production, saving time and creating efficiencies.
“The best way for me to say this is, to us, this has been intentional. We paused hiring in a few areas where we saw overlaps, where technology was really working,” said Desai.
However, even in an age awash in AI, Schellman acknowledges there are certain jobs that need a human, at least for now. For example, the firm performs assessments for the FedRAMP program, which cloud service providers need in order to contract with certain government agencies. These assessments, even in the most stable of times, can be long and complex engagements, to say nothing of the less predictable nature of the current government. As such, it makes less sense to reduce human staff in this area.
“The way it is right now for us to do FedRAMP engagements, it’s a very manual process. There’s a lot of back and forth between us and a third party, the government, and we don’t see a lot of overall application or technology help… We’re in the federal space and you can imagine, [with] what’s going on right now, there’s a big changing market condition for clients and their pricing pressure,” said Desai.
As Schellman reduces staff levels in some places, it is increasing them in others. Desai said the firm is actively hiring in certain areas. In particular, it’s adding staff in technical cybersecurity (e.g., penetration testers), the aforementioned FedRAMP engagements, AI assessment (in line with recently becoming an ISO 42001 certification body) and in some client-facing roles like marketing and sales.
“So, to me, this isn’t about doing more with less … It’s about doing more of the right things with the right people,” said Desai.
While these moves have resulted in savings, she said that was never really the point, so whatever the firm has saved from staffing efficiencies it has reinvested in its tech stack to build out its service lines further. When asked for an example, she said the firm would like to focus more on penetration testing by building a SaaS tool for it. While Schellman has a proof of concept developed, she noted it would take a lot of money and time to deploy a full solution, both of which the firm now has more of because of its efficiency moves.
“What is the ‘why’ behind these decisions? The ‘why’ for us isn’t what I think you traditionally see, which is ‘We need to get profitability high. We need to have less people do more things.’ That’s not what it is like,” said Desai. “I want to be able to focus on quality. And the only way I think I can focus on quality is if my people are not focusing on things that don’t matter … I feel like I’m in a much better place because the smart people that I’ve hired are working on the riskiest and most complicated things.”