
…Yet Only 16% Admit They Trust AI ‘A Great Deal’
In A Nutshell
- 58% say AI has influenced their opinions at least occasionally, positioning chatbots as persuaders rather than just information tools
- Regulation support surges to 79% as Americans recognize the stakes when technology influences opinions at scale
- The trust-use paradox deepens: Only 16% trust AI “a great deal,” yet people rely on confident AI answers over verified sources
- Transparency is missing: 32% don’t understand how AI generates answers, leaving no way to evaluate credibility or catch errors
Artificial intelligence isn’t just answering questions anymore. It’s changing minds. A new survey reveals that 58% of Americans say AI-generated answers have influenced their opinions at least occasionally, positioning chatbots and AI answer engines as emerging forces in how people form beliefs and make decisions.
The data comes from Shift Browser’s 2026 AI Consumer Insights Survey of over 1,400 nationally representative respondents. While the technology was designed to retrieve and summarize information, it has quietly assumed a more active role: persuader. Unlike traditional search engines that return a list of links for users to evaluate, AI answer engines deliver confident, declarative responses that can feel authoritative even when they shouldn’t.
Among the 32% who use AI daily, exposure to these persuasive answers is constant. Even occasional users report that AI has shaped their thinking, meaning the effect accumulates over time rather than requiring heavy engagement.
Trust Without Verification
Only 16% of respondents say they trust AI answer engines “a great deal.” Yet 60% trust them at least somewhat, a qualified confidence that sits uneasily alongside the influence these tools wield. People rely on AI-generated answers despite doubts about their reliability, a dynamic that mirrors how many consume social media: aware of its flaws but using it anyway.
Accuracy ranked as the second-biggest concern among respondents, with 36% citing it as a primary worry. Many users recognize the risk of error but proceed regardless. The convenience of an instant, readable answer often outweighs the caution that might lead someone to verify claims or consult multiple sources.
“AI is moving quickly and so are user expectations for transparency and control,” said Michael Foucher, Vice President of Product and Customer Success at Shift. “Consumers clearly see value in AI tools, yet they also want greater clarity and control over how those systems operate.”
The tension between use and trust becomes more troubling when combined with another finding: 32% of respondents said they don’t understand how AI systems generate answers. Without insight into the process, users can’t easily assess whether a response is well-supported or speculative, well-sourced or fabricated.
The Confidence Problem
AI answer engines speak with authority. They don’t hedge, equivocate, or present competing viewpoints unless explicitly programmed to do so. This tone can make answers seem more definitive than they are, particularly when the underlying model has stitched together information from unreliable sources or filled gaps with plausible-sounding fabrications.
Fifty-three percent of respondents said AI improves their online experience, a figure that reflects satisfaction with the user interface and ease of use. But satisfaction doesn’t equal accuracy. People may enjoy the streamlined experience of getting an answer in seconds without clicking through a dozen websites, even if that answer is incomplete or wrong.
The influence effect operates in subtle ways. A person researching a medical symptom, evaluating a product, or exploring a political issue may not realize how much weight they’re giving to AI-generated summaries. Over time, these small nudges accumulate, shaping opinions on everything from consumer purchases to public policy positions.
Privacy and Transparency Concerns Intensify
Eighty-one percent of respondents worry about AI systems accessing personal data or private conversations. Privacy topped the list of concerns at 48%, followed by accuracy and transparency. People fear both what AI knows about them and what it tells them.
Lack of transparency ranked third at 32%, a concern that directly relates to influence. When users can’t see how an answer was constructed or which sources informed it, they have no way to evaluate its credibility. Traditional journalism and academic research come with attribution and sourcing norms. AI-generated content often doesn’t, leaving users to accept or reject answers based on gut feeling rather than evidence.
Forty-four percent of respondents expressed worry about AI taking actions without approval. While this concern typically refers to autonomous features like scheduling meetings or drafting emails, it also applies to opinion formation. An AI that subtly steers a user toward one interpretation over another is taking an action, even if that action is cognitive rather than practical.
Who Uses AI and How Often
Daily engagement is highest among 25- to 34-year-olds and working professionals. These groups are most likely to integrate AI into workflows, using it for research assistance (54% of all respondents prioritized this), article summarization (34%), and task automation (32%). The more someone uses these tools, the more opportunities exist for influence to take hold.
Adults 65 and older are least likely to use AI, with 20% of all respondents saying they never engage with it. This age-related divide may mean older Americans are insulated from some of AI’s influence, though it also means they’re less equipped to recognize it when they encounter it secondhand through family members or media.
For many respondents, AI improves digital efficiency without delivering transformational time savings. Practical applications dominate, meaning people see these tools as helpful rather than revolutionary. But helpfulness can be deceptive. A tool that saves time while subtly shaping beliefs operates on two levels, and users may only notice the first.
The Regulation Response
Seventy-nine percent of respondents favor some level of government regulation for AI answer engines, with 35% calling for strong oversight. This near-consensus means the public recognizes the stakes. When a technology influences opinions at scale, the question of accountability becomes urgent.
Fifty-seven percent of respondents also expressed concern about the energy required to power AI systems, indicating that environmental impact is entering public discourse alongside privacy and accuracy.
Fifty-one percent said the ability to customize or limit AI features is important, and 26% report difficulty managing or turning off these features once enabled. Both figures point to a desire for control that current systems often don’t provide. Tools that auto-enable features or bury opt-out mechanisms feed distrust and reduce users’ ability to moderate AI’s influence on their thinking.
A New Information Gatekeeper
AI answer engines are becoming information gatekeepers in ways search engines never were. A search engine returns options; an AI delivers conclusions. That shift changes the relationship between user and tool. Instead of empowering people to evaluate sources and draw their own conclusions, AI encourages passive consumption of pre-digested answers.
The 58% who report AI influence may be underestimating the effect. Influence often operates below conscious awareness, shaping assumptions and framing issues without users realizing it. A person who asks an AI about climate policy, vaccine efficacy, or economic trends may walk away with opinions that reflect the biases embedded in the model’s training data rather than a balanced view of the evidence.
As these tools become more sophisticated and their use more widespread, the influence effect is likely to grow. The question is whether users, regulators, and developers will recognize the problem in time to build systems that inform without persuading, assist without steering, and answer without deciding.
Survey Methodology
The survey was conducted among 1,448 respondents and weighted to be nationally representative by income, ethnicity, age, gender, and region. Shift Browser, which commissioned the research, is part of the Redbrick portfolio of companies and operates as a Certified B Corp. The company produces a customizable browser designed for professionals managing multiple accounts and apps.