People trust AI tools less, and worry more about their negative impacts, than they did two years ago. Despite this, use of the technology has grown steadily, as many feel the benefits still outweigh the risks.
This is according to what Big Four firm KPMG said was the largest survey of its kind, polling over 48,000 people across 47 countries, including 1,019 people in the U.S.
The poll found, among other things, that the proportion of people who said they were willing to rely on AI systems fell from 52% in 2022 to 43% in 2024; the proportion saying they perceive AI systems as trustworthy fell from 63% to 56%; and the proportion saying they were worried about AI systems rose from 49% to 62%.
Yet, at the same time, most people use AI today in some form or another. The poll found that the proportion of organizations reporting they have adopted AI technology rose from 34% in 2022 to 71% in 2024; correspondingly, the proportion of employees who use AI at work rose from 54% to 67% over the same period.
Outside of work, in their personal lives, 20% of respondents said they never use AI, while 51% said they use it daily, weekly or monthly. For the most part, when people use AI it is a general-purpose public model: 73% said this is what they use for work, versus 18% who use AI tools developed or customized for their particular organization.
However, while more people are using AI, fewer say they know enough about it. The poll found that nearly half, 48%, rated their AI knowledge as "low," while a further 31% rated it as "moderate." Only 21% said they had a high level of knowledge about AI.
Despite this, most who use AI believe they're pretty good at using it effectively. The poll found 62% saying they could skillfully use AI applications to help with daily work or activities; 60% said they could communicate effectively with AI applications; 59% said they could choose the most appropriate AI tool for the task; and 55% said they could evaluate the accuracy of AI responses. Those saying they lacked confidence in any given area hovered between 21% and 24%.
The report suggested that this disparity might be due to AI solutions having intuitive interfaces that people can quickly grasp: just as one may not need to know much about cars to drive one, maybe people don’t need to know how AI works to use it well.
This could be borne out by the benefits people say they have personally witnessed from using AI. A clear majority, 67%, of those using AI at work say they have become more efficient; 61% say it has improved access to accurate information; 59% say it has improved idea generation and innovation; 58% say the quality or accuracy of work and decisions has improved; and 55% say they have used it to develop skills and abilities.
However, other viewpoints are more contentious. Yes, 36% say AI has saved them time on repetitive and mundane tasks, but 39% say it has increased the time they spend; 40% say it has decreased their workload, but 26% say it has increased it; meanwhile, 36% say it has led to less pressure and stress at work, but 26% say it has added more. Tellingly, while 19% say AI has reduced privacy and compliance risks, 35% say it has made them worse; and while 13% think it has led to less monitoring and surveillance of employees, 42% say AI has amplified it.
While more people are using AI, they are not always doing so in ways their organizations would approve. The poll found, for example, that about 31% have contravened specific AI policies at their organizations, 34% admit they uploaded copyrighted material or intellectual property to a generative AI tool, and 34% said they uploaded company information. Meanwhile, 38% admitted to using AI tools when they weren't sure it was allowed, and 31% used AI tools in ways that might be considered inappropriate (though the specifics of what that might mean were not given).
People are also not entirely forthcoming when they have used AI, as the survey found 42% avoided revealing AI use in their work and 39% have passed off generative AI content as their own.
The poll also found that AI has had impacts on how people work: 51% concede they’ve gotten lazier because of AI, 42% say they’ve relied on AI output without evaluating the information, and 31% admit they’ve made mistakes in their work because of AI.
This might explain, at least partially, why 43% overall reported personally witnessing negative outcomes from AI. The three biggest problems people have personally seen with AI are "loss of human interaction and connection," with 55% saying they've seen this; inaccurate outcomes, at 54%; and misinformation or disinformation, at 52%. Meanwhile, though they remain the lowest on the list, a still-troubling 31% said they saw bias or unfair treatment due to AI, 34% have witnessed both environmental impacts and the undermining of human rights due to AI, and 40% said they have seen manipulation and harmful use of AI (though, again, the specifics were not elaborated upon). While many still believe the benefits outweigh the risks, this proportion has fallen from 50% in 2022 to 41% in 2024.
At the same time, respondents showed strong support for safeguards: 83% report they would be more willing to trust an AI system when assurance mechanisms are in place. The survey also found strong support for the right to opt out of having one's data used by AI systems (86%), as well as for monitoring for accuracy and reliability (84%), training employees on safe and responsible AI use (84%), allowing humans to override a system's recommendations and output (84%), and effective AI laws or regulations (84%). A clear majority, 74%, also support third-party independent assurance for AI systems.
“Employees are asking for greater investments in AI training and the implementation of clear governance policies to bridge the gap between AI’s potential and its responsible use,” said Bryan McGowan, trusted AI leader for KPMG. “It’s not enough for AI to simply work; it needs to be trustworthy. Building this strong foundation is an investment that will pay dividends in future productivity and growth.”