
Enterprise AI Analysis

Beyond the Benefits: A Systematic Review of the Harms and Consequences of Generative AI in Computing Education

Authors: Seth Bernstein, Ashfin Rahman, Nadia Sharifi, Ariunjargal Terbish, Stephen MacNeil

A systematic literature review (SLR) focusing on GenAI's harms in computing education found 224 relevant papers (2022-2025). The review identified six broad categories of harm: cognitive, metacognitive, assessment and academic integrity, equity, social, and instructional/logistical. Cognitive and metacognitive harms were most prevalent, while social harms were least frequently discussed. Many harms still lack strong empirical evidence, often being based on speculation rather than measurement. The study highlights methodological gaps and under-explored harms (e.g., social interactions, collaboration). It aims to provide educators, students, researchers, and developers with a clear picture of GenAI's negative impacts to enable effective mitigation.

Key Findings from the Research

This systematic review consolidates critical data points, offering a clear perspective on the landscape of GenAI's impact in computing education.

224 Papers Reviewed
32.4% Cognitive Harms (Most Prevalent)

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Systematic Literature Review Process

Identification → Screening → Eligibility → Included

GenAI Harms by Category

Each category below pairs its definition with key findings from the paper analysis.

Cognitive — Impacts how students acquire and apply knowledge.
  • Most prevalent category (32.4%).
  • Often related to hallucinations, shallow learning, and declines in problem-solving and coding skills.
  • High proportion of evidence-based harms (28.2%).

Metacognitive — Disrupts planning, monitoring, and adapting problem-solving strategies; includes over-reliance and trust-calibration issues.
  • Second most frequent category.
  • Students often over-rely on AI, struggle to detect its mistakes, or parrot its solutions.

Assessment & Academic Integrity — Covers AI-enabled cheating, plagiarism, and fairness and enforcement issues.
  • Third most common category.
  • Many papers focused on factors leading to academic misconduct (time pressure, lack of motivation).
  • Often speculative, lacking measured evidence (8.4%).

Equity — Model biases (language, race, gender) and unequal access and benefits.
  • Raised as a concern in 29% of papers.
  • Poorer performance for low-resourced languages; reinforcement of stereotypes.
  • Uneven usage and benefits (e.g., experienced users benefit more).

Social — Reduces collaboration, increases isolation, and affects peer interactions.
  • Least frequently mentioned (8.5%).
  • Students feel pressured to use AI, which strains social interactions and can lead to guilt.

Instructional & Logistical — Challenges for instructors: data privacy, new competencies, the changing relevance of skills, and scaffolding AI use.
  • Includes concerns about data privacy, undermined motivation, and the need for AI literacy.
  • AI changes the skills required (e.g., prompt engineering).

Impact of GenAI on Learning Communities

A recent study by Hou et al. [93] showed negative impacts on peer interactions and learning communities. Students reported relying on ChatGPT to avoid the socio-emotional barriers of help-seeking, with one explaining that it is 'easier for me to ask ChatGPT because I don't have to worry... I feel like it is kind of rude to continuously ask for more and more details.' Arora et al. [20] found that LLMs hindered collaboration because LLM-generated code from different team members was often incompatible.


Calculate Your Potential AI Impact

Understand the potential efficiency gains or risks by modeling AI integration within your organization. This calculator provides a preliminary estimate based on key operational inputs.

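The estimate behind such a calculator can be sketched as a simple model. This is an illustrative sketch only: the function name and every input (staff count, hours saved per week, hourly cost, adoption rate) are hypothetical placeholders, not figures from the study.

```python
# Illustrative sketch of an "AI impact" estimate.
# All inputs are hypothetical placeholders, not figures from the study.

def estimate_ai_impact(num_staff: int,
                       hours_saved_per_week: float,
                       hourly_cost: float,
                       adoption_rate: float = 1.0,
                       weeks_per_year: int = 48) -> dict:
    """Return rough annual hours reclaimed and cost savings."""
    hours_reclaimed = num_staff * adoption_rate * hours_saved_per_week * weeks_per_year
    cost_savings = hours_reclaimed * hourly_cost
    return {"hours_reclaimed": hours_reclaimed, "cost_savings": cost_savings}

# Example: 20 staff, half adopting, each saving 2 hours/week at $50/hour.
result = estimate_ai_impact(num_staff=20, hours_saved_per_week=2.0,
                            hourly_cost=50.0, adoption_rate=0.5)
```

Any real estimate should also discount for the risks cataloged above (e.g., time spent detecting and correcting AI mistakes), which a savings-only model ignores.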

Your AI Implementation Roadmap

A strategic phased approach to integrate GenAI while mitigating risks and maximizing benefits in computing education.

Phase 1: AI Readiness Assessment

Evaluate current computing education curricula, instructor preparedness, and student AI literacy levels. Identify key areas where GenAI tools are currently being used and where harms are emerging.

Phase 2: Policy & Ethical Framework Development

Establish clear guidelines for GenAI use in assignments, assessments, and learning activities. Develop ethical frameworks addressing academic integrity, data privacy, and fairness. Communicate policies effectively to students and staff.

Phase 3: Pedagogical Re-design & Scaffolding

Revise assignments to be 'AI-resistant' or 'AI-integrated', focusing on higher-order thinking, problem specification, and critical evaluation of AI outputs. Implement scaffolding strategies that guide responsible and effective AI tool use.

Phase 4: Instructor Training & Support

Provide professional development for educators on integrating AI tools, detecting misuse, and fostering critical AI literacy. Support instructors in adapting their teaching practices to address new challenges and opportunities.

Phase 5: Continuous Monitoring & Research

Regularly collect data on student AI usage, learning outcomes, and perceptions of harms. Conduct empirical studies to build evidence-based practices and inform ongoing adjustments to policies and pedagogies.

Ready to Navigate the Future of AI in Education?

Schedule a personalized session with our experts to discuss how to strategically implement GenAI, mitigate risks, and empower your students and instructors.
