AI adoption in the U.S. has outpaced most companies’ ability to govern AI use, according to KPMG’s latest global study on artificial intelligence. Half of the U.S. workforce reports using AI tools at work without knowing whether it is allowed, and more than four in 10 (44%) are knowingly using AI improperly. In addition, 58% of U.S. workers admit to relying on AI to complete work without properly evaluating the outcomes, and 53% claim to present AI-generated content as their own.
“This survey makes one thing clear: if you don’t give people access to AI, they’ll find their way into it anyway, often using it in ways that bypass policies, introduce errors, and blur accountability,” says Steve Chase, KPMG vice chair of AI and digital innovation. “We’re seeing this with clients, too, especially those that have been slow to roll out tools or encourage responsible experimentation. If you haven’t already, now’s the time to invest in strong trusted AI capabilities. And as agents become more and more a part of everyday workflows, getting this right only becomes more critical.”
Nearly half (44%) of employees are using AI tools at work in ways their employers haven’t authorized, with 46% uploading sensitive company information and intellectual property to public AI platforms, violating policies and creating vulnerabilities for their organizations.
Furthermore, while two-thirds of U.S. workers are leveraging AI at work, many are not properly evaluating the outcomes. Sixty-four percent of employees admit to putting less effort into their work, knowing they can rely on AI, and 58% rely on AI output without thoroughly assessing the information. This reliance has led 57% to make mistakes in their work, and 53% avoid disclosing when they have used AI, often presenting AI-generated content as their own.
“Half of U.S. workers are using AI tools without clear authorization, and many have admitted to using AI inappropriately,” says Samantha Gloede, trusted enterprise leader at KPMG LLP. “This highlights a significant gap in governance and raises serious concerns about transparency, ethical behavior, and the accuracy of AI-generated content. This should be a wake-up call for employers to provide comprehensive AI training to not only manage risks but also to maintain trust.”
While 70% of U.S. workers are eager to leverage AI’s benefits and 61% have already experienced positive impacts, 75% remain concerned about negative outcomes. And although a majority (80%) believe AI has improved operational efficiency and innovation, citing its ability to process massive volumes of data at remarkable speed and to strengthen human capabilities, insights, and productivity, trust in AI remains low: 43% have little confidence in both commercial and government entities to develop and use AI responsibly.
“Employees are asking for greater investments in AI training and the implementation of clear governance policies to bridge the gap between AI’s potential and its responsible use,” says Bryan McGowan, trusted AI leader at KPMG LLP. “It’s not enough for AI to simply work; it needs to be trustworthy. Building this strong foundation is an investment that will pay dividends in future productivity and growth.”
Only 54% of U.S. workers believe their organizations have policies for responsible AI use, and 25% think no such policies exist at all. Similarly, just 55% believe their organizations regularly monitor AI systems, and only 59% believe there are people within their organizations accountable for overseeing the use of AI.
“AI is advancing rapidly, yet governance in many organizations has not kept pace. Organizations must incorporate comprehensive safeguards into AI systems, proactively prepare for foreseeable challenges, and mitigate operational, financial, and reputational risks,” says Gloede.
Survey participants’ perceptions mirror these concerns: only 29% of U.S. consumers believe current regulations are sufficient for AI safety, and 72% say more regulation is needed. Trust in AI could improve if laws and policies were in place, as 81% of U.S. consumers say they would be more willing to trust AI systems under such conditions. For now, however, U.S. consumers have low confidence in commercial and government entities to develop and use AI, instead placing their trust in universities, research institutions, healthcare providers, and big technology companies to develop and use AI in the best interests of the public.
There are also specific areas where U.S. consumers are most keen to see additional government oversight; notably, 85% express a strong desire for laws and policies to combat AI-generated misinformation.
“U.S. consumers see the value in guardrails and accountability,” says McGowan. “The majority of our survey participants want regulation to combat AI-generated misinformation, and nearly all agreed that news and social media companies must ensure people can detect AI-generated content.”