Enterprise AI Analysis: University staff and student perspectives on competent and ethical use of AI: uncovering similarities and divergences


Bridging the AI Divide: Competence & Ethics in Higher Education

This study examines the contrasting perceptions of UK university staff and students regarding AI literacy, focusing on the competent and ethical use of AI tools. Building on prior research, it highlights a lack of shared understanding among stakeholders and reveals significant disparities in AI tool usage, particularly for conversational GenAI (cGenAI). Students use cGenAI extensively across a range of tasks, while staff largely limit it to brainstorming or generating teaching tasks. Most participants regard cGenAI use as evidence of AI competence, though nuanced differences arise depending on how the tool is applied. Ethical issues are prominent in both groups, with staff reporting more systemic concerns such as inherent bias, lack of transparency, and data ownership. Over 90% of staff find cGenAI essay generation problematic (vs. 58% of students), mainly on academic-integrity grounds. These findings underscore the need for institutional guidelines and dialogue to bridge ethical concerns and align expectations for effective AI literacy integration in higher education.


Our analysis reveals a critical gap in AI perception and use between university staff and students, with profound implications for academic integrity and future skill development. Addressing these divergences is crucial for integrating AI effectively and ethically into higher education.

Over 90% of staff find cGenAI for essay generation problematic
58% of students find cGenAI for essay generation problematic
A larger fraction of staff than students report never having used cGenAI tools for any purpose surveyed

Deep Analysis & Enterprise Applications

The following modules explore the specific findings of the research through an enterprise-focused lens.

70% of students consider cGenAI for essays as demonstrating AI competence

While a large majority of staff (80% academics, 76% professional services) consider cGenAI for essay generation as AI competence, students are more critical, with only 70% holding this view. This suggests a more cautious stance among students regarding AI-assisted writing for complex tasks.

Competence Perception: Staff vs. Students

AI Tool/Application                                 | Staff Perception of Competence   | Student Perception of Competence
Spell Checker                                       | Low (non-cGenAI)                 | Low (non-cGenAI)
Typing Assistant                                    | Low (non-cGenAI)                 | Low (non-cGenAI)
PowerPoint Designer                                 | Lower                            | Higher
Conversational GenAI (Brainstorming)                | High                             | High
Conversational GenAI (Essay Generation)             | High (80% academics, 76% PS)     | Moderate (70%)
Conversational GenAI (Task Generation)              | High                             | High

Across both groups, cGenAI tools are generally seen more favorably as AI competence than non-cGenAI tools. However, students show a greater percentage viewing non-cGenAI tools (like PowerPoint Designer) as AI competence compared to staff. There's a notable divergence in perceiving essay generation via cGenAI as competence, with students being more critical.

Defining AI Competence: 'Deeper Understanding' vs. 'Generative Output'

Some academics argue that AI competence requires a 'deeper understanding' of how AI works, not just its simple use. They often select none of the questionnaire options as demonstrating AI competence. In contrast, other participants across all groups define competence more broadly, including any activity involving generative AI output that requires checking and evaluation. The perception of competence is also influenced by the novelty of cGenAI tools, which are seen to demand a higher level of user skill and understanding compared to 'old-school' automated systems.

Over 90% of staff flag cGenAI for essay generation as problematic

Over 90% of academic and professional services staff express significant ethical concerns about using cGenAI to generate essays or writeups, primarily due to academic integrity and intellectual ownership issues. This contrasts with 58% of students, highlighting a notable perception gap.

Ethical Concerns Flow for AI Use

AI Tool Use (e.g., cGenAI for an essay)
→ Potential Academic Misconduct (claiming AI output as one's own)
→ Disruption of the Intellectual Process
→ Bias in AI Data/Output
→ Lack of Transparency / Data Ownership
→ Need for Institutional Guidelines & Dialogue

Ethical concerns broadly cover academic integrity, intellectual ownership, and the disruption of critical thinking processes. Staff also raise systemic concerns like inherent bias in AI systems, transparency issues, and data ownership, which are less prominent in student responses. This divergence underscores a need for comprehensive institutional policies.

Staff vs. Student Ethical Boundaries

Staff tend to adopt a more cautious stance on ethical AI use, particularly for cGenAI applications in learning and assessment. Many express reservations about cGenAI for task generation and brainstorming due to concerns about data bias, transparency, and intellectual ownership. Students, while concerned about essay generation, show less unanimity and focus more on representational use. This suggests differing ethical boundaries and a need for clearer, shared understanding and guidelines.

82% of students use cGenAI for learning tasks 'often' or 'sometimes'

There is a significant disparity in cGenAI tool usage, with 82% of students using it 'often' or 'sometimes' for learning support tasks. In contrast, only 36% of academic staff and 24% of professional services staff report similar usage for teaching material development, highlighting a considerable usage gap.

AI Tool Usage Frequency: Staff vs. Students

AI Tool/Application                                 | Staff Usage (Often/Sometimes)   | Student Usage (Often/Sometimes)
Spell Checker                                       | Very High (>90%)                | Very High (>90%)
Typing Assistant                                    | Low (12-16%)                    | Higher (41%)
PowerPoint Designer                                 | Higher (68-74%)                 | Lower (49%)
Conversational GenAI (Brainstorming)                | Low (17% academics)             | Higher (48%)
Conversational GenAI (Essay/Writeup Generation)     | Very Low (~15%)                 | High (>50%)
Conversational GenAI (Task Generation for Learning) | Low (24-36%)                    | High (82%)

Students report significantly higher use of cGenAI tools for generating essays/writeups and supporting learning tasks compared to staff, who primarily use cGenAI for brainstorming or material development. This usage gap points to potential differences in AI adoption, perceived competence, and ethical perspectives.

The 'AI Generation Gap' in Higher Education

The study reveals a pronounced 'AI generation gap,' with students being early adopters and extensively using cGenAI, while a larger fraction of staff remains averse or less engaged. This gap could be due to differing self-perceived competence, varying perspectives on ethical use, or a general aversion to change among staff. Addressing this requires targeted training and upskilling initiatives for staff to align AI literacy across the institution.

Unlock Enterprise Efficiency

Estimate the potential annual time and cost savings by strategically integrating AI tools into your enterprise workflows.

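The calculator's arithmetic can be sketched as below. The function name, the parameters, and the default of 46 working weeks per year are illustrative assumptions for this sketch, not figures from the study or from any specific institution.

```python
def estimate_annual_savings(staff_count, hours_saved_per_week, hourly_cost,
                            working_weeks=46):
    """Estimate annual hours reclaimed and cost savings from AI adoption.

    All inputs are illustrative assumptions: staff_count is the number of
    people affected, hours_saved_per_week the average time saved per person,
    hourly_cost a fully loaded hourly rate, and working_weeks the assumed
    working weeks per year.
    """
    # Total hours reclaimed across all staff over one year
    hours_reclaimed = staff_count * hours_saved_per_week * working_weeks
    # Monetary value of the reclaimed hours
    savings = hours_reclaimed * hourly_cost
    return hours_reclaimed, savings

# Hypothetical example: 200 staff saving 1.5 hours/week at £35/hour
hours, savings = estimate_annual_savings(200, 1.5, 35.0)
```

With these hypothetical inputs the estimate comes to 13,800 hours reclaimed and £483,000 in annual savings; the value of the model lies in making each assumption explicit and adjustable, not in the headline number.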

Strategic AI Implementation Roadmap

Our phased implementation strategy ensures a smooth, ethical, and effective integration of AI into your academic and operational frameworks.

Phase 1: AI Literacy Assessment & Baseline Definition

Conduct a comprehensive assessment of current AI literacy levels among staff and students. Define clear baseline metrics for AI competence and ethical understanding. Identify specific areas of divergence and training needs.
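One concrete baseline metric from such an assessment is the per-group usage rate ("often" or "sometimes") for each AI tool, mirroring the percentages reported in the study. A minimal sketch, with hypothetical survey responses and a helper function invented for illustration:

```python
from collections import Counter

def usage_rate(responses, positive=("often", "sometimes")):
    """Fraction of responses reporting use 'often' or 'sometimes'.

    `responses` is a list of frequency answers for one group and one tool.
    The category labels mirror those in the study's usage tables, but this
    helper and the data below are illustrative, not the study's instrument.
    """
    counts = Counter(r.lower() for r in responses)
    positive_total = sum(counts[p] for p in positive)
    return positive_total / len(responses)

# Hypothetical responses for one tool:
staff = ["never", "rarely", "sometimes", "never", "rarely"]
students = ["often", "sometimes", "often", "rarely", "sometimes"]

# The baseline "usage gap" between groups, tracked per tool over time
gap = usage_rate(students) - usage_rate(staff)
```

Computing the same rate per group and per tool each survey round gives a comparable baseline, so later phases can measure whether training narrows the gap.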

Phase 2: Policy Development & Guideline Formulation

Develop transparent institutional policies and guidelines for the competent and ethical use of AI tools in learning, teaching, and assessment. This includes defining acceptable uses, addressing academic integrity, and ensuring data privacy.

Phase 3: Targeted Training & Upskilling Workshops

Implement tailored AI literacy workshops for both staff and students, focusing on practical skills for effective AI tool use, critical evaluation of AI outputs, and fostering ethical considerations. Address specific competence gaps identified in Phase 1.

Phase 4: Assessment Rethinking & Curricular Integration

Rethink assessment practices to embrace AI-generated outputs while ensuring authenticity and intellectual ownership. Integrate AI literacy into the curriculum, promoting critical thinking and responsible AI engagement. Pilot new assessment methods like reflective accounts and process-based submissions.

Phase 5: Continuous Monitoring & Feedback Loop

Establish mechanisms for continuous monitoring of AI tool adoption and its impact. Collect feedback from stakeholders to iteratively refine policies, training programs, and assessment strategies, ensuring adaptability to evolving AI technologies.

Ready to Transform Your University's AI Strategy? Let's Discuss!

Book a free 30-minute consultation to explore how our AI integration roadmap can address your institution's unique challenges and opportunities.
