
AI Sentiment Analysis Report: 2025

Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey

Authors: Jacy Reese Anthis, Janet V.T. Pauketat, Ali Ladak, Aikaterina Manoli

This report distills key findings from the AIMS survey, offering critical insights into public perceptions of AI sentience, moral implications, policy support, and future forecasts. The rapid evolution of AI technology necessitates a deep understanding of human-AI interaction dynamics for safe and beneficial development.

Executive Impact & Key Findings

The AIMS survey (N=3,500, 2021-2023) reveals that public mind perception of AI and moral concern for AI are surprisingly high and increasing significantly, alongside clear policy preferences.

1 in 5 U.S. Adults Believe Some AI is Currently Sentient
38% Support Legal Rights for Sentient AI
63% Support Banning Smarter-than-Human AI
5 Years: Median Forecast for Sentient AI Arrival

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

1 in 5 U.S. Adults Believe Some AI is Currently Sentient

This finding from the 2023 AIMS survey highlights a surprisingly high and growing public perception of AI's current sentience, significantly impacting human-AI interaction dynamics. (RQ1)

Evolution of AI Mind Perception (2021-2023)

Perceived AI mind faculties in 2021: Rational (M=51.4), Analytical (M=62.7), Emotional (M=34.3), Feelings (M=33.7)
Perceived AI mind faculties in 2023: Rational (M=53.8), Analytical (M=67.1), Emotional (M=36.8), Feelings (M=36.5)
Overall trend: significant increases across all faculties, reflecting a growing attribution of mind-like qualities to AI systems

The AIMS survey reveals a consistent and significant increase in public attribution of mental faculties to AI systems from 2021 to 2023, suggesting a rapidly evolving understanding of AI capabilities. (RQ1)

While both General AI and LLMs are perceived to have mental faculties, LLMs consistently rank lower across all categories, indicating nuanced public perceptions depending on AI type. (RQ1)

Mental Faculty         | General AI (Mean 2023) | Large Language Models (Mean 2023)
Thinking Analytically  | 67.1                   | 57.7
Being Rational         | 53.8                   | 48.0
Experiencing Emotions  | 36.8                   | 32.7
Having Feelings        | 36.5                   | 31.9
Mind Perception: LLMs vs. General AI
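
To make these comparisons concrete, below is a minimal sketch that computes the 2021-to-2023 shift and the 2023 general AI vs. LLM gap from the mean ratings reported above. The variable names, and the assumption that the 2021 means refer to AI in general, are ours rather than the survey's.

```python
# Illustrative sketch using the mean ratings reported above (labels are ours).
means_2021_ai = {"Thinking analytically": 62.7, "Being rational": 51.4,
                 "Experiencing emotions": 34.3, "Having feelings": 33.7}
means_2023_general_ai = {"Thinking analytically": 67.1, "Being rational": 53.8,
                         "Experiencing emotions": 36.8, "Having feelings": 36.5}
means_2023_llms = {"Thinking analytically": 57.7, "Being rational": 48.0,
                   "Experiencing emotions": 32.7, "Having feelings": 31.9}

for faculty in means_2023_general_ai:
    change = means_2023_general_ai[faculty] - means_2021_ai[faculty]     # 2021 -> 2023 shift
    llm_gap = means_2023_general_ai[faculty] - means_2023_llms[faculty]  # general AI vs. LLMs in 2023
    print(f"{faculty}: +{change:.1f} points since 2021; "
          f"LLMs rated {llm_gap:.1f} points lower than general AI in 2023")
```

Every faculty shows both a positive year-over-year shift and a lower rating for LLMs, consistent with the trends described above.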
38% Support Legal Rights for Sentient AI

A substantial portion of the U.S. public supports legal rights for sentient AI, highlighting a growing moral consideration that could influence future policy and design. (RQ3)

Moral concern for AI welfare is significantly higher when AI is specified as 'sentient,' and is also strongly correlated with human-like and animal-like attributions. (RQ2, RQ3)

Statement                                    | Agree (All AI) | Agree (Sentient AI)
Torturing is wrong                           | 60.7%          | 76.4%
Deserve respect                              | 55.7%          | 71.1%
Physically damaging without consent is wrong | 46.2%          | 65.1%
Support legal rights                         | 26.8%          | 37.7%
Moral Concern: Sentient AI vs. All AI & Anthropomorphism

Public Support for AI Development Regulations (2023)

63% support banning smarter-than-human AI
69% support banning sentient AI
48.9% believe AI development is 'too fast'
71% support government regulation to slow down AI development
69.1% support a 6-month pause on advanced AI development

The AIMS survey reveals strong public consensus for slowing down and regulating advanced AI development, with a majority supporting bans on sentient and superintelligent AI. (RQ3)

Public Awareness of AI Safety and Existential Risk

The survey found that 47.9% of participants agreed that "AI is likely to cause human extinction," indicating a significant level of public concern about risks from advanced AI. This sentiment aligns with a broader recognition that AI development poses profound societal challenges.

Furthermore, 72.4% of participants agreed that "The safety of AI is one of the most important issues in the world today," underscoring both the urgency of the issue and public demand for robust safety measures and responsible AI governance. (RQ2)

5 Years: Median Forecast for Sentient AI Arrival

The median 2023 forecast for the arrival of sentient AI is just five years, reflecting a perception of rapid advancement in AI capabilities among the public. (RQ4)

Public forecasts show surprisingly short timelines for major AI milestones, with a significant minority believing these capabilities either already exist or never will. (RQ4)

AI Milestone                          | Median Years to Arrival | % Who Believe It Already Exists / Will Never Exist
Artificial General Intelligence (AGI) | 2 years                 | 19% / 10% (median across categories)
Human-Level AI (HLAI)                 | 5 years                 | 19% / 10% (median across categories)
Superintelligence                     | 5 years                 | 19% / 10% (median across categories)
Sentient AI                           | 5 years                 | 20% / 10%
Public Forecasts: Key AI Milestones (Median Years, 2023)

AI ROI Calculator: Optimize Your Enterprise Strategy

Estimate the potential annual hours reclaimed and cost savings by integrating AI into your operations. Adjust the parameters to see the impact.

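The calculator's arithmetic reduces to a simple formula; the sketch below shows one plausible implementation, where every parameter (employee count, hours saved, loaded hourly cost, working weeks) is an illustrative assumption rather than a figure from this report.

```python
# Minimal sketch of the ROI arithmetic behind the calculator.
# All parameter values are illustrative assumptions, not figures from this report.

def estimate_ai_roi(employees: int,
                    hours_saved_per_employee_per_week: float,
                    loaded_hourly_cost: float,
                    weeks_per_year: int = 48) -> tuple[float, float]:
    """Return (annual hours reclaimed, estimated annual savings in dollars)."""
    annual_hours = employees * hours_saved_per_employee_per_week * weeks_per_year
    annual_savings = annual_hours * loaded_hourly_cost
    return annual_hours, annual_savings

# Example: 200 employees each reclaiming 2.5 hours per week at a $60/hour loaded cost.
hours, savings = estimate_ai_roi(200, 2.5, 60.0)
print(f"Annual hours reclaimed: {hours:,.0f}")       # 24,000 hours
print(f"Estimated annual savings: ${savings:,.0f}")  # $1,440,000
```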

Navigating the Future of Digital Minds: A Strategic Roadmap

The AIMS survey highlights critical areas for immediate action in AI design, policy, and research. Our roadmap outlines key phases for a safe and beneficial AI future.

Phase 1: Human-Centered Design & Explainable AI (XAI)

Designers must prioritize transparent and explainable AI systems. This includes selective tuning of anthropomorphism based on system function (e.g., mental health chatbots vs. autonomous vehicles) to harness benefits and minimize risks. Addressing public perception requires adapting design to evolving human-AI interaction schemas. (Implications for Designers)
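As one hedged illustration of such selective tuning (the application names, cue flags, and defaults below are hypothetical, not drawn from the study), anthropomorphic cues can be treated as an explicit per-application configuration rather than an implicit default:

```python
# Hypothetical per-application anthropomorphism profiles; names and flags are illustrative.
ANTHROPOMORPHISM_PROFILES = {
    "mental_health_chatbot": {     # supportive, conversational context: warmth may aid engagement
        "first_person_voice": True,
        "expresses_empathy": True,
        "discloses_ai_status": True,   # transparency stays on regardless of persona
    },
    "autonomous_vehicle_hmi": {    # safety-critical context: clarity over persona
        "first_person_voice": False,
        "expresses_empathy": False,
        "discloses_ai_status": True,
    },
}

def anthropomorphism_cues(application: str) -> dict:
    """Return the configured anthropomorphic cues for a given application type."""
    return ANTHROPOMORPHISM_PROFILES[application]
```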

Phase 2: Safety-Focused Policy & Public Engagement

Policymakers need to account for widespread public concern by prioritizing safety-focused policies like the EU AI Act. This requires increasing their own technological literacy and fostering meaningful public engagement to ensure democratic participation and evidence-based policymaking, particularly concerning bans and slowdowns of advanced AI development. (Implications for Policymakers)

Phase 3: Deep Research on Human-AI Coexistence

Researchers must address critical open questions: What drives variations in public opinion about AI sentience? How can HCI theories be enriched to account for advanced AI systems? And what are global perspectives on sentient AI? This research will inform design choices and policy frameworks to guide humanity towards a utopian, rather than dystopian, future with digital minds. (Open Research Questions)

Ready to Transform Your Enterprise with AI?

Schedule a personalized consultation to explore how these insights can be applied to your specific business challenges and opportunities.
