Shilun Zhou, The University of Hong Kong, Faculty of Law (Published: 19 November 2025)
Deconstructing 'Responsible AI': An Examination of Legal and Ethical Accountability Through Virtue Jurisprudence
This article deconstructs the legal semiotics of "Responsible AI" through the lens of virtue jurisprudence, addressing ethical dilemmas in technology-driven knowledge creation within the humanities. It critiques the misleading anthropomorphisation of AI, arguing that "Responsible AI" should be understood as "responsible in name only" but "accountable in reality". By distinguishing between moral agency and legal accountability, it highlights AI's dual legal attributes: its anthropomorphic intelligent dimension and its distinct artificial nature. Although "responsibility" and "accountability" may seem semantically interchangeable at first glance, the virtue jurisprudence approach distinguishes the semiotic implications of "responsible AI" and "accountable AI" by highlighting the uniquely human capacity for moral assessment, which AI lacks; AI can therefore be accountable but not responsible. Emphasising this moral capacity not only justifies humans' refusal to be treated like machines but also provides a theoretical basis for a human-centred AI framework and guides the development of accountable AI in current legal practice. By examining the interplay between human virtue and technological systems, the article calls for a renewed focus on human-centric ethical principles in the age of AI-driven knowledge production.
Keywords: Responsible artificial intelligence, Legal responsibility, Legal accountability, Virtue jurisprudence, Ethical accountability
Unpacking 'Responsible AI': A Virtue Jurisprudence Perspective on Accountability
This research delves into the critical distinction between 'Responsible AI' and 'Accountable AI', leveraging virtue jurisprudence to argue for a human-centred approach to AI ethics.
AI: Responsible in Name, Accountable in Reality
Deconstructing AI's Dual Nature: Artificial & Intelligent
AI's intelligent attribute refers to its capacity for generating new outputs, engaging in human-like interactions, and exhibiting anthropomorphic traits. This leads to discussions about its potential 'responsibility' for unforeseeable errors, mirroring human agency. However, this is largely an anthropomorphic metaphor. For example, Ai-Da, a humanoid robot artist, and conversational AIs such as ChatGPT exhibit advanced autonomy and human-like interaction. This attribute raises questions about foreseeability and liability when errors occur beyond direct human control. The principle of foreseeable liability suggests that if an error's outcome is unforeseeable to anyone, no one should be held directly liable for it.
AI's artificial attribute emphasizes its constructability and human-controlled nature. From 'birth' (development) to 'death' (decommissioning), humans remain in control. AI lacks intrinsic personality attributes and genuine autonomy, distinguishing it from humans. It is an instrument, not a moral agent. This perspective asserts that 'responsible AI' is merely a metaphor; true responsibility, tied to moral assessment, resides with humans. Entities with personality attributes possess rights and obligations, which AI, being artificial, does not. Thus, AI is accountable but not genuinely responsible, reinforcing a human-centered view of liability.
The Moral Capacity Gap
Distinguishing Accountability from Responsibility: AI Lacks Moral Assessment Capacity, Humans Possess It
AI can be accountable as a traceable tool, meaning its actions and outputs can be monitored, explained, and attributed to human creators or operators. Accountability emphasizes external traceability and transparency, allowing for auditing and consequence management. However, AI lacks the capacity for independent moral judgment, self-discipline, or the cultivation of virtue through experience and reflection. Its behavior is strictly governed by algorithms and data, making it a procedural instrument rather than a true moral agent. Therefore, AI can be held *accountable* for its outputs, but it cannot be *responsible* in the ethical sense.
Humans are genuinely responsible because they possess moral assessment capacity, the ability to make independent judgments, exercise self-discipline, and cultivate virtue. This includes the capacity to discern right from wrong, refrain from immoral actions, and engage in self-reflection and moral improvement. Responsibility requires not only bearing consequences but also acting based on moral reasoning. This uniquely human capacity ensures that humans remain the ultimate moral agents in AI development and deployment, safeguarding against anthropomorphizing AI and preserving human autonomy.
Human-Centred AI Governance Flow
| Entity | Scope of Subjects | Scope of Objects |
|---|---|---|
| EU AI Act | Providers, users, distributors, notifying bodies, public authorities | |
| China's Frameworks | Developers, providers, users (New-Generation AI; Generative AI Services) | |
The Hong Kong Tycoon Case: AI Error and Legal Action
The case of Samathur Li Ki-kan, a Hong Kong tycoon, suing investment broker Raffaela Costa after losing over US$20 million when an algorithm operating through a robo-advisory system malfunctioned, highlights the practical implications of AI errors. It is the first known instance of an individual pursuing legal action over investment losses attributed to an autonomous machine. From a virtue jurisprudence perspective, the case underscores the importance of human oversight of and accountability for AI systems, even when those systems exhibit high autonomy, reinforcing that human agents ultimately bear responsibility for AI's outputs.
Your Journey to Accountable AI
A strategic roadmap for integrating virtue jurisprudence into your AI governance, ensuring ethical and responsible deployment.
Phase 1: Virtue-Centred AI Policy Development
Establish clear policies emphasizing human moral agency and oversight in AI design and deployment, integrating virtue jurisprudence principles.
Phase 2: Technical Accountability Frameworks
Implement robust technical systems for AI transparency, explainability, and traceability, ensuring human review and intervention points.
Phase 3: Ethical Training & Human-AI Collaboration
Develop training programs for developers and users to cultivate moral assessment capacities and foster responsible human-AI collaboration.
Phase 4: Continuous Auditing & Iteration
Regularly audit AI systems for ethical alignment and performance, iterating on both technology and governance to uphold human-centric values.