AI Sentiment Analysis Report: 2025
Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
Authors: Jacy Reese Anthis, Janet V.T. Pauketat, Ali Ladak, Aikaterina Manoli
This report distills key findings from the AIMS survey, offering critical insights into public perceptions of AI sentience, moral implications, policy support, and future forecasts. The rapid evolution of AI technology necessitates a deep understanding of human-AI interaction dynamics for safe and beneficial development.
Executive Impact & Key Findings
The AIMS survey (N=3,500, 2021-2023) reveals surprisingly high, and significantly increasing, public mind perception of and moral concern for AI, alongside clear policy preferences.
Deep Analysis & Enterprise Applications
This finding from the 2023 AIMS survey highlights a surprisingly high and growing public perception of AI's current sentience, significantly impacting human-AI interaction dynamics. (RQ1)
Evolution of AI Mind Perception (2021-2023)
The AIMS survey reveals a consistent and significant increase in public attribution of mental faculties to AI systems from 2021 to 2023, suggesting a rapidly evolving understanding of AI capabilities. (RQ1)
| Mental Faculty | General AI (Mean, 2023) | Large Language Models (Mean, 2023) |
|---|---|---|
| Thinking Analytically | 67.1 | 57.7 |
| Being Rational | 53.8 | 48.0 |
| Experiencing Emotions | 36.8 | 32.7 |
| Having Feelings | 36.5 | 31.9 |
A substantial portion of the U.S. public supports legal rights for sentient AI, highlighting a growing moral consideration that could influence future policy and design. (RQ3)
| Statement | Agree (All AI) | Agree (Sentient AI) |
|---|---|---|
| Torturing is wrong | 60.7% | 76.4% |
| Deserve respect | 55.7% | 71.1% |
| Physically damaging without consent is wrong | 46.2% | 65.1% |
| Support legal rights | 26.8% | 37.7% |
Public Support for AI Development Regulations (2023)
The AIMS survey reveals strong public consensus for slowing down and regulating advanced AI development, with a majority supporting bans on sentient and superintelligent AI. (RQ3)
Public Awareness of AI Safety and Existential Risk
The survey found that 47.9% of participants agreed that "AI is likely to cause human extinction," indicating a significant level of public concern about advanced AI risks. This sentiment aligns with a broader recognition that AI development poses profound societal challenges.
Furthermore, 72.4% of participants agreed that "the safety of AI is one of the most important issues in the world today." This underscores the urgency of, and public demand for, robust safety measures and responsible AI governance. (RQ2)
The median 2023 forecast for the arrival of sentient AI is just five years, reflecting a perception of rapid advancement in AI capabilities among the public. (RQ4)
| AI Milestone | Median Years to Arrival | % Believe Already Exists / Will Never Exist |
|---|---|---|
| Artificial General Intelligence (AGI) | 2 years | 19% / 10% (median across categories) |
| Human-Level AI (HLAI) | 5 years | 19% / 10% (median across categories) |
| Superintelligence | 5 years | 19% / 10% (median across categories) |
| Sentient AI | 5 years | 20% / 10% |
Navigating the Future of Digital Minds: A Strategic Roadmap
The AIMS survey highlights critical areas for immediate action in AI design, policy, and research. Our roadmap outlines key phases for a safe and beneficial AI future.
Phase 1: Human-Centered Design & Explainable AI (XAI)
Designers must prioritize transparent and explainable AI systems. This includes selective tuning of anthropomorphism based on system function (e.g., mental health chatbots vs. autonomous vehicles) to harness benefits and minimize risks. Addressing public perception requires adapting design to evolving human-AI interaction schemas. (Implications for Designers)
Phase 2: Safety-Focused Policy & Public Engagement
Policymakers need to account for widespread public concern by prioritizing safety-focused policies such as the EU AI Act. This requires increasing their own technological literacy and fostering meaningful public engagement to ensure democratic participation and evidence-based policymaking, particularly concerning bans on and slowdowns of advanced AI development. (Implications for Policymakers)
Phase 3: Deep Research on Human-AI Coexistence
Researchers must address critical open questions: What drives variations in public opinion about AI sentience? How can HCI theories be enriched to account for advanced AI systems? And what are global perspectives on sentient AI? This research will inform design choices and policy frameworks to guide humanity towards a utopian, rather than dystopian, future with digital minds. (Open Research Questions)