
Enterprise AI Analysis

Comparing Student Preferences for AI-Generated and Peer-Generated Feedback in AI-driven Formative Peer Assessment

Authored by Insub Shin, Su Bhin Hwang, Yun Joo Yoo, Sooan Bae, and Rae Yeong Kim

Formative assessment can enhance student learning and improve teaching practices by identifying areas for growth and providing feedback. However, practical obstacles remain, such as time constraints, students' passive participation, and the low quality of peer feedback. Artificial intelligence (AI) has been explored for its potential to automate grading and provide timely feedback, making it a valuable tool in formative assessment. Nevertheless, there is still limited research on how AI can be used effectively in the context of formative peer assessment. In this study, we conducted an AI-driven formative peer assessment with 108 secondary school students. During the peer assessment process, students not only evaluated peers' responses and received peer-generated feedback, but also evaluated AI-generated responses and received AI-generated feedback. This research analyzed differences in preference between AI-generated and peer-generated feedback using trace data and dispositional data. In scenarios where student participation was low or the quality of peer feedback was insufficient, students showed a higher preference for AI-generated feedback, demonstrating its potential utility. However, students with high Math Confidence and AI Interest preferred peer-generated feedback. Based on these findings, we propose practical strategies for implementing AI-driven formative peer assessment.

Unlocking Learning Potential: The Role of AI in Formative Assessment

Our analysis of 'Comparing Student Preferences for AI-Generated and Peer-Generated Feedback in AI-driven Formative Peer Assessment' reveals a nuanced landscape where AI significantly augments traditional peer assessment, addressing critical limitations and enhancing learning outcomes in diverse educational settings. Enterprise-level integration can streamline feedback processes, improve quality, and personalize student support, particularly in challenging learning environments.


Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Peer Assessment: Benefits vs. Challenges

Benefits of Peer Feedback
  • Gives a sense of autonomy and ownership; improves motivation.
  • Encourages responsibility for learning and development.
  • Treats assessment as part of learning; mistakes are seen as opportunities.
  • Practices transferable skills for life-long learning.

Challenges in Implementation
  • Negative attitudes and opposition to peer assessment.
  • Low participation, insufficient engagement, inadequate discussions.
  • Poor quality feedback due to lack of experience.
  • Often perceived as inaccurate and unreliable.
AI Feedback Addresses Time Constraints & Quality Issues in Formative Assessment

AI can automate grading and provide timely, valid feedback, especially in text-based assessments, making it a valuable tool for overcoming practical obstacles in formative peer assessment.

AI-Driven Formative Peer Assessment Workflow

Ask Stage (teacher, AI, or peer elicits a question)
Answer Stage (student provides a response)
Analyze Stage (teacher, peer, or AI reviews the response and gives feedback)
Adapt Stage (student receives feedback and makes a final attempt)
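
The cycle maps naturally onto a small data model. Below is a minimal Python sketch of how the Ask-Answer-Analyze-Adapt loop could be orchestrated; the class and function names are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Stage(Enum):
    ASK = "ask"          # teacher, AI, or peer poses a question
    ANSWER = "answer"    # student submits a response
    ANALYZE = "analyze"  # peers and/or AI review the response and write feedback
    ADAPT = "adapt"      # student revises using the feedback

@dataclass
class AssessmentItem:
    question: str
    response: str = ""
    feedback: list[str] = field(default_factory=list)
    revision: str = ""
    stage: Stage = Stage.ASK

def run_cycle(item: AssessmentItem,
              get_response: Callable[[str], str],
              reviewers: list[Callable[[str, str], str]],
              revise: Callable[[str, list[str]], str]) -> AssessmentItem:
    """Drive one Ask -> Answer -> Analyze -> Adapt pass for a single item."""
    item.stage = Stage.ANSWER
    item.response = get_response(item.question)          # Answer: student responds

    item.stage = Stage.ANALYZE
    item.feedback = [review(item.question, item.response)
                     for review in reviewers]             # Analyze: peers and/or AI give feedback

    item.stage = Stage.ADAPT
    item.revision = revise(item.response, item.feedback)  # Adapt: student makes a final attempt
    return item
```

The `reviewers` list is the key design choice: human peers and an AI reviewer can be passed through the same interface, so the workflow does not change when AI-generated feedback is added alongside peer-generated feedback.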

Classroom Environment Impact on AI Feedback Preference

Our study revealed significant variations in AI feedback preference across classroom environments. Classes with lower engagement and math achievement (e.g., Classes A and C) showed a higher preference for AI-generated feedback, suggesting AI can play a complementary role in environments challenged by low participation or poor feedback quality. Conversely, Class D, characterized by high math confidence, interest, and AI interest, preferred peer-generated feedback and often evaluated AI feedback critically due to perceived hallucinations.

Dispositional Influences on Feedback Preference

Disposition Level | Preferred Feedback Type | Key Finding
Low Math Confidence, Math Value, AI Relevance, and AI Interest | AI-Generated Feedback | AI feedback is significantly preferred, especially when students struggle or perceive math as less important.
High Math Confidence and AI Interest | Peer-Generated Feedback | Students with strong subject confidence and AI interest tend to prefer human-generated feedback, valuing nuance and potentially being more critical of AI.

Practical Strategies for AI-Driven Peer Assessment

Based on our findings, AI-generated feedback is recommended to:

  • Activate peer assessment in classes with low participation rates.
  • Support formative assessments in classes experiencing math learning difficulties.
  • Be utilized during peer assessments on problems with low correct response rates.

For students with high AI Interest, design the assessment so AI feedback can be validated, since these students are more sensitive to hallucinations. For students with high Math Confidence and Math Value, consider a pairing algorithm that prioritizes human peers over AI reviewers, as sketched below.
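
A minimal sketch of one way such a pairing heuristic could work follows; the profile fields, thresholds, and routing rule are illustrative assumptions rather than the study's procedure.

```python
from dataclasses import dataclass

@dataclass
class StudentProfile:
    name: str
    math_confidence: float  # survey scores, assumed normalized to the 0-1 range
    math_value: float
    ai_interest: float

def prefers_human_reviewer(s: StudentProfile, threshold: float = 0.7) -> bool:
    # Students with high Math Confidence and Math Value tended to prefer
    # peer-generated feedback, so route them to human peers first.
    return s.math_confidence >= threshold and s.math_value >= threshold

def assign_reviewers(students: list[StudentProfile]) -> dict[str, str]:
    """Assign each student a reviewer type: 'peer' or 'ai'."""
    return {
        s.name: "peer" if prefers_human_reviewer(s) else "ai"
        for s in students
    }

# Example roster: one confident, subject-valuing student and one low-confidence student.
roster = [
    StudentProfile("A", math_confidence=0.9, math_value=0.8, ai_interest=0.9),
    StudentProfile("B", math_confidence=0.3, math_value=0.4, ai_interest=0.2),
]
print(assign_reviewers(roster))  # {'A': 'peer', 'B': 'ai'}
```

In practice the thresholds would be calibrated against the dispositional survey instrument, and peer capacity constraints would need to be balanced so that no class routes every student to the same reviewer type.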

Advanced ROI Calculator

Quantify the potential efficiency gains from integrating AI into your feedback processes.

Outputs: Projected Annual Savings and Annual Hours Reclaimed.
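
The page's calculator is interactive, but the underlying arithmetic is simple. The sketch below shows one plausible version of the calculation; every input value is a placeholder assumption, not a figure from the study.

```python
def feedback_roi(items_per_year: int,
                 minutes_saved_per_item: float,
                 hourly_cost: float) -> tuple[float, float]:
    """Estimate annual hours reclaimed and projected savings from automating feedback."""
    hours_reclaimed = items_per_year * minutes_saved_per_item / 60
    annual_savings = hours_reclaimed * hourly_cost
    return hours_reclaimed, annual_savings

# Placeholder inputs: 5,000 feedback items per year, 4 minutes saved per item, $40/hour staff cost.
hours, savings = feedback_roi(5000, 4, 40)
print(f"Annual hours reclaimed: {hours:.0f}, projected savings: ${savings:,.0f}")
```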

Your AI Integration Roadmap for Formative Assessment

A phased approach to seamlessly integrate AI into your educational or enterprise feedback systems.

Phase 1: Pilot & Data Collection (1-3 Months)

Implement AI-driven feedback in a controlled pilot environment. Collect trace data on student interactions, feedback preferences, and dispositional data to understand initial impact and identify areas for optimization. Focus on low-stakes assessments and classrooms with identified challenges (e.g., low participation).
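
To make "trace data" concrete, the sketch below shows one possible logging record for student-feedback interactions during a pilot; the field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class FeedbackTraceEvent:
    student_id: str
    item_id: str
    feedback_source: str          # "peer" or "ai"
    action: str                   # e.g., "viewed", "rated_helpful", "applied_in_revision"
    rating: Optional[int] = None  # optional helpfulness rating (e.g., 1-5)
    timestamp: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self))

event = FeedbackTraceEvent(
    student_id="s-014",
    item_id="q-03",
    feedback_source="ai",
    action="rated_helpful",
    rating=4,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(event.to_json())
```

Joining these events with dispositional survey data is what lets the later phases compare AI-generated and peer-generated feedback preferences by class and by student profile.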

Phase 2: Feedback Quality Refinement & Training (3-6 Months)

Utilize collected data to refine AI prompts and models, improving feedback accuracy and relevance. Develop training modules for educators and students on effective peer assessment and AI feedback utilization, addressing potential sensitivities to AI hallucination, especially for high-achieving students.
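
As an illustration of prompt refinement, the template below shows the kind of structure one might iterate on, with grounding instructions intended to curb hallucinated feedback; the wording is an assumption, not the prompt used in the study.

```python
FEEDBACK_PROMPT = """You are giving formative feedback on a secondary-school math response.

Question:
{question}

Student response:
{response}

Rubric:
{rubric}

Instructions:
- Point out at most two strengths and two areas to improve, citing the rubric.
- Quote the student's own words when identifying an error; do not invent steps
  the student did not write (this guards against hallucinated feedback).
- End with one concrete suggestion for the student's next attempt.
"""

def build_feedback_prompt(question: str, response: str, rubric: str) -> str:
    """Fill the template for a single assessment item."""
    return FEEDBACK_PROMPT.format(question=question, response=response, rubric=rubric)
```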

Phase 3: Scaled Deployment & Customization (6-12 Months)

Expand AI-driven formative assessment across more courses/departments. Implement adaptive pairing algorithms to match students (human vs. AI) based on their dispositional data (e.g., prioritizing human feedback for high math confidence/AI interest students). Continuously monitor feedback effectiveness and student outcomes.

Ready to Transform Your Assessment Processes?

Book a strategic consultation to explore how AI-driven formative assessment can elevate learning outcomes and operational efficiency in your organization.
