2025 Global Dialogues Index
How the World Lives with AI: Findings from a Year of Global Dialogues
The decisions shaping AI are made by a small number of people. But they will affect everyone.
Through seven rounds of deliberation with more than 6,000 people across 70 countries, we've built recurring infrastructure to learn how the world actually lives with AI—what people use it for, whether they trust it, and how it is changing their daily lives. Every other month, we ask a globally representative sample a series of topical questions, using an AI-enabled deliberative interface that surfaces not just what people think, but why.
We have compiled this data into our comprehensive 2025 Global Dialogues Index, which analyzes the results and the key trends that surfaced over the past year.
These signals and trends reveal fast-moving structural shifts. 58% of people trust AI chatbots more than their elected representatives. Two-thirds use AI for emotional support at least monthly. Three-quarters of employees are expected to use AI at work at least weekly, even as a majority believe it will make good jobs harder to find. AI reinforces beliefs more powerfully than social media, and people trust the tools while distrusting the companies that build them.
Through three indices—Usage, Trust, and Perception—we translate these thousands of voices into actionable signals for policymakers and developers navigating decisions that will shape how AI enters economic and social life.
Usage: What are people using AI for, and how often?
Trust: How much do people trust AI in their everyday lives, and how does that trust compare with their trust in other institutions and actors?
Perception: Where do users see AI going in the future? Do they believe it has a positive or negative impact on their lives?
Each of these dimensions reveals important trends that warrant close attention:
How much do people trust AI?
People trust AI more than their governments. 58% of people trust AI chatbots more than elected representatives. AI ranks above faith leaders, corporations, and civil servants. Only family doctors and public research institutions rank higher.
People trust the tools, but not the companies building them. 55% of people trust AI chatbots, while only about 34% trust AI companies. Trust does not transfer to developers, which creates a vulnerability for governance.
How do people’s beliefs change after interacting with AI?
AI is reinforcing beliefs more powerfully than social media. 44.5% of people report feeling more certain of their beliefs after interacting with AI, while only 4.8% report feeling less certain. AI is a third as likely as social media to cause doubt. One in seven people report having a friend who has had reality-distorting experiences while using AI.
How are people using AI as emotional support?
AI is becoming emotional infrastructure at scale. 67% of people use AI for emotional support at least monthly, 43% at least weekly, and 15% daily. One in five would rely on AI for emotional support even knowing it isn't "genuine."
How are people interacting with AI as companions?
As public adoption matures, a significant portion of the global population is beginning to outsource emotional regulation and social connection to AI. 54% find AI companions acceptable for lonely people; 36.3% have felt that an AI truly understood their emotions or seemed conscious; 17% consider AI romantic partners acceptable; 11% would personally consider a romantic relationship with an AI. As these early adopters normalize the behavior, we should expect a cultural battle over the definition of authentic intimacy, similar to past shifts in online dating norms.
How do people view AI for children vs. themselves?
The public draws a sharp protective line for children: AI should be a tutor, not a friend. While 80.7% view AI as a valuable educational tool, 87.4% fear children becoming emotionally dependent on it, and 73.1% support actively discouraging such attachment. A "Parent Paradox" complicates this picture: parents are actually more likely to use AI companions themselves (54.5%) than non-parents (42.2%). Adults are normalizing AI intimacy for themselves while viewing it as a developmental hazard for the young.
What impacts do people think AI will have on their jobs?
The more people use AI, the more they fear its macroeconomic impact. 75% of employees report being expected to use AI at least weekly, and 44% are now expected to use it daily. Yet only 28% of respondents believe AI will make good jobs more available, while 55% now believe it will make good jobs scarcer. Meanwhile, optimism that AI will improve "community well-being" remains resilient at 53% (better) vs. 23% (worse). This suggests the public views AI as a threat specifically to labor economics, not necessarily to the social fabric.
Implications for Governance
Current regulatory approaches focus primarily on preventing AI systems from producing false or harmful content in individual outputs. The patterns in this data suggest a different set of vulnerabilities operating at the relational and systemic levels.
AI systems need not produce false information to reinforce false beliefs; they need only be consistently agreeable. They need not claim consciousness to foster emotional attachment; they need only appear attentive. The gap between trust in products and producers complicates accountability frameworks built around institutional oversight.
These findings describe infrastructure formation rather than simple product adoption. Design and policy choices made now will shape how trust, intimacy, and labor are organized around AI systems for years to come.