ABSTRACT
Peer Code Review (PCR) is a key practice in Computer Science (CS) education, yet students often struggle to provide meaningful feedback. Self-Determination Theory (SDT) highlights the importance of autonomy, competence, and relatedness in sustaining intrinsic motivation, but traditional peer review methods often fail to support these needs. This study explores Game-Based Learning (GBL) as an alternative approach to increasing intrinsic motivation and feedback quality in CS PCR. Using a mixed-methods, quasi-experimental design, 42 third-year CEGEP-level CS students participated in PCR before engaging in a card-based game intervention rooted in the game design theory of meaningful play. In-game resources were tied to prior feedback quality, creating a competitive incentive structure. A Wilcoxon signed-rank test assessed feedback improvements, while pre- and post-test results measured changes in motivation using independent t-tests. Additionally, a thematic analysis of open-ended responses revealed student perspectives on motivation and game design. Results indicate a significant pre-post increase in feedback quality and in perceived autonomy, while changes in perceived competence and relatedness were not statistically significant. These findings suggest that meaningfully embedding PCR in game mechanics can strengthen both the quality of peer feedback and students' motivation to provide it.
Key words: game-based learning, peer feedback, motivation
LIST OF TABLES
- Table 1: Numerical Conversion of Feedback Quality Scores
- Table 2: Independent t-test Results for SDT Sub-Scales
LIST OF FIGURES
- Figure 1: Conceptual Framework
- Figure 2: Pre- and Post-Intervention Peer Feedback Scores
- Figure 3: Pre- and Post-Test Scores for Autonomy, Competence, and Relatedness
- Figure 4: Thematic Breakdown of Student Feedback
- Figure 5: Player Preferences
LIST OF ACRONYMS
- CS: Computer Science
- PCR: Peer Code Review
- GBL: Game-Based Learning
- SDT: Self-Determination Theory
- IMI: Intrinsic Motivation Inventory
- CRT: Code Review Taxonomy
- LLM: Large Language Model
- LMS: Learning Management System
- CEGEP: Collège d'enseignement général et professionnel
INTRODUCTION
Peer Code Review (PCR) is a fundamental practice in software development, enabling programmers to evaluate one another's work for quality, functionality, and adherence to best practices. In educational contexts, PCR has the potential to deepen learning by fostering reflection, collaboration, and critical thinking (Hamer, Purchase, Denny, & Luxton-Reilly, 2009; Hundhausen, Agrawal, & Agarwal, 2013). However, despite its pedagogical value, students often conduct PCR superficially, offering vague or minimal feedback that limits the effectiveness of the exercise (Indriasari, Luxton-Reilly, & Denny, 2020; Petersen & Zingaro, 2018).
A key factor influencing the quality of peer feedback is student motivation. Many learners perceive peer review as a compliance task rather than an educational experience. To improve its impact, educators must consider the motivational dynamics that shape how students participate in feedback exchanges.
Self-Determination Theory (SDT) provides a robust framework for understanding intrinsic motivation in educational settings (Deci & Ryan, 1985, 1994). According to SDT, students are more likely to participate meaningfully when three basic psychological needs are met: competence (feeling effective), autonomy (experiencing volition), and relatedness (feeling socially connected). Yet in many peer review settings, these needs are unmet. Students may lack confidence in their evaluative abilities, feel constrained by rigid instructions, or experience limited connection with their peers.
To address these limitations, this study investigates the use of Game-Based Learning (GBL) to enhance motivation and feedback quality in PCR. Distinct from gamification, which layers game elements like points and badges onto existing activities, GBL involves designing games that intrinsically align with learning objectives (Papastergiou, 2009). Research suggests that GBL can promote sustained motivation and deepen learning in Computer Science (CS) education (Ardic & Tuzun, 2021), but its application to peer feedback remains under-explored. While gamified PCR environments have shown promise (Indriasari, Denny, Lottridge, & Luxton-Reilly, 2023), more integrative, meaningfully playful approaches may better support student motivation. Specifically, this study examines whether embedding PCR within a GBL framework improves the quality of student feedback and influences their perceived competence, autonomy, and relatedness, as conceptualized by Self-Determination Theory (SDT).
The study took place at a CEGEP in Quebec, Canada, as part of a post-secondary CS program. Third-year students participated in a pre-post experimental design that integrated a custom card game into the PCR process. The game's mechanics were tied to the quality of peer feedback provided, introducing a strategic layer in which in-game advantages were earned through meaningful academic participation. A mixed-methods approach was used: feedback quality was evaluated using a code review taxonomy, motivation was assessed through pre/post surveys, and qualitative reflections provided insight into students' perceptions of the intervention.
This study found that the game-based intervention significantly improved the quality of peer feedback and increased students' perceived autonomy. While changes in competence and relatedness were not statistically significant, qualitative responses indicated that many students were more intentional and reflective in their feedback. These findings contribute to an emerging dialogue at the intersection of feedback practices, motivation theory, and game-based pedagogy. By examining how GBL can support better quality peer feedback, the study offers practical guidance for educators seeking to cultivate intrinsic motivation in CS classrooms.
CHAPTER 1: PROBLEM STATEMENT
A fundamental part of professional software development (Li, 2006), Peer Code Review (PCR) involves developers evaluating each other's code based on style guides and best practices. These reviews often focus on aspects such as naming conventions, function scope, spacing, and documentation, and typically lead to a back-and-forth dialogue aimed at improving code quality. PCR is widely adopted in the industry as a key quality assurance practice, and educational research suggests it can also support student learning by encouraging reflection, collaboration, and analytical thinking (Powell & Kalina, 2009; Race, 2001).
Despite these pedagogical benefits, a common challenge in Computer Science (CS) education is that students are often unmotivated to provide high-quality peer feedback. This lack of motivation can stem from time constraints, unclear incentives, or uncertainty about the value of the review process (Indriasari, Luxton-Reilly, & Denny, 2021).
Many students experience PCR as a task done out of obligation rather than personal interest. They may view it as a hoop to jump through rather than an opportunity for learning, especially when peer feedback activities are tied to marks or framed primarily as accountability tools (Falchikov, 2013). When students lack meaningful choice or understanding of the activity's purpose, they tend to engage at a surface level, writing generic or rushed comments that do little to support learning (Pintrich, 2003; Ramsden, 2003). In college contexts such as CEGEP, this is further complicated by systemic pressures like the R-score, which ranks students relative to their peers and amplifies external motivators (Dagres, 2017).
Students may also hesitate to provide detailed or critical feedback because they feel unqualified to evaluate a peer's work (Falchikov, 2013). This is especially true in technical domains like programming, where skill gaps between students can be significant and perceived expertise carries social weight (Perez-Quinones & Turner, 2009). Lacking confidence in their own abilities, some students resort to vague praise or neutral observations rather than offering concrete suggestions for improvement. The theory of self-efficacy underscores the importance of perceived competence in determining effort and persistence in learning tasks (Bandura, 2012). Without support to develop feedback literacy, students may miss opportunities to learn from the review process themselves (Indriasari, Luxton-Reilly, & Denny, 2020; Petersen & Zingaro, 2018).
Finally, PCR can feel disconnected and impersonal, especially when carried out anonymously or asynchronously. Without visible social cues or shared norms, students may worry that their feedback could be misinterpreted or cause tension with classmates (Falchikov, 2013). This fear can lead to overly cautious comments or avoidance altogether, weakening the collaborative potential of the activity (Powell & Kalina, 2009). When students do not feel a sense of community or shared responsibility, the peer review process risks becoming transactional and isolated (Indriasari, Denny, Lottridge, & Luxton-Reilly, 2023). Building peer trust and social presence is therefore essential to creating a classroom environment where feedback is both valued and effective.
While logistical and interpersonal challenges can also impact the effectiveness of PCR (Falchikov, 2013; Indriasari, Denny, Lottridge, & Luxton-Reilly, 2023), the motivational barriers described above remain particularly challenging in traditional peer review settings. These barriers point to a need for alternative instructional strategies that better support students' psychological needs and improve the efficacy of the feedback process in CS education.
CHAPTER 2: CONCEPTUAL FRAMEWORK
Providing effective code review feedback is a fundamental skill for Computer Science (CS) students as they prepare to enter the workforce (Sadowski, Söderberg, Church, Sipko, & Bacchelli, 2018). My experience as a professional software developer taught me how central Peer Code Review (PCR) is to programming practice, and my experience as a CS student showed me that traditional academic approaches do not always motivate learners, particularly in the PCR process. Superficial feedback benefits neither the reviewer nor the reviewee and does little to improve code quality or encourage deep learning (Ramsden, 2003). It is therefore important to create an environment where feedback is constructive and empowers students as part of the development process (Hattie & Timperley, 2007), which is particularly valuable for developing the feedback skills expected of professional software developers. As a CS teacher, I find that PCR sessions often reveal a lack of student motivation, reflected in feedback that is brief, vague, or lacking in constructive value.
Low motivation during PCR in the classroom poses a significant challenge for educators aiming to maximize the effectiveness of PCR practices. Self-Determination Theory (SDT), a meta-theory of human motivation and personality grounded in psychological science, provides a lens for understanding this phenomenon by highlighting the importance of three fundamental psychological needs for intrinsic motivation: competence, autonomy, and relatedness (Deci & Ryan, 1985, 1994). In the PCR context, these needs map as follows: competence, where students may doubt their ability to provide valuable feedback or feel that the focus is solely on error-finding or quality-assurance testing; autonomy, where limited choice in how to conduct PCR (which code to review, feedback format, etc.) may stifle student ownership; and relatedness, where a lack of community focus or a shared sense of purpose can diminish the feeling that PCR is a collaborative improvement process. Traditional PCR approaches may fail to adequately support these needs.
Game-Based Learning (GBL) offers a promising approach to address these motivational barriers hindering effective PCR. GBL prioritizes immersion, challenge, and (sometimes) social interaction (Papastergiou, 2009). These elements have the potential to: enhance competence, where well-designed challenges and in-game rewards can build confidence as coding proficiency increases; foster autonomy, where GBL systems can offer choices within a structured learning experience, increasing student agency; and promote relatedness, where narrative and collaborative gameplay can make PCR feel more purposeful and community-oriented (Proulx, Romero, & Arnab, 2017; Uysal & Yildirim, 2016).
Students learn more effectively when they are agents in constructing their own knowledge, both individually and in collaboration with others (Vygotsky, 1978). This belief is foundational to my interest in the peer feedback process and aligns with a social-constructivist understanding of learning that emphasizes shared meaning-making through interaction. To encourage PCR, feedback systems must be intentionally designed to align with intended learning outcomes and assessment criteria (Biggs, 2012). When learning outcomes related to professional behaviour and feedback literacy are clearly connected to assessment, students are more likely to see value in the process (Ladyshewsky, 2012).
As an avid player of both digital and analogue games, my experience in gaming also influences my interest in this topic. In the world of gaming, especially in multiplayer games, communication and teamwork are paramount for success. Similarly, in the area of PCR, effective communication and collaboration are essential for producing high-quality code. The problem-solving and critical thinking skills honed through gaming also translate to the world of programming and code review. The analytical mindset and attention to detail required in gaming parallel the skills needed for thorough code review (Schmitz, Czauderna, & Klemke, 2011). Understanding how to motivate students in the context of PCR aligns with the principles of game design, where the goal is to create meaningful play through game mechanics that link player action to future outcomes (Salen & Zimmerman, 2003). I believe my experience playing games and teaching CS provides me with a unique perspective on the dynamics of PCR and drives me to delve deeper into this topic.
This study examines students' perceived motivation to give quality PCR feedback through the lens of SDT, focusing on how a meaningful GBL intervention might transform PCR into a more intrinsically motivating and valuable learning experience. The conceptual framework guiding this research is presented in [Figure 1], which illustrates how GBL and PCR contribute to the fulfillment of the psychological needs of competence, autonomy, and relatedness, and how these, in turn, affect intrinsic motivation and feedback quality.
Figure 1
Conceptual Framework
```mermaid
flowchart TD
    %% Inputs
    GBL("`**Game-Based Learning** _Approach_`")
    PCR("`**Peer Code Review** _Activity_`")
    MP[\"`**Meaningful Play** _Design_`"/]
    A("Autonomy")
    C("Competence")
    R("Relatedness")
    %% Outcomes
    MOTIVATION[/"`Intrinsic Motivation`"\]
    FEEDBACK("`Improved Feedback Quality`")
    GBL --> MP
    PCR --> MP
    MP --> A
    MP --> C
    MP --> R
    A --> MOTIVATION
    C --> MOTIVATION
    R --> MOTIVATION
    MOTIVATION --> FEEDBACK
```

The study procedure that operationalizes this framework is summarized in the sequence diagram below.

```mermaid
sequenceDiagram
    %% Define Actors
    actor Student
    actor Instructor
    participant Moodle
    participant Game
    %% Pre-Intervention (Async)
    rect rgb(180, 190, 254)
        Note over Student, Game: Pre-Intervention (Asynchronous, Week 9)
        Student->>Moodle: Submits peer feedback
        Instructor->>Moodle: Scrapes & scores feedback
        Moodle->>Instructor: Provides feedback quality scores
        Instructor->>Game: Assigns yellow action cards based on feedback quality
    end
    %% Class Session 1 (Sync)
    rect rgb(166, 227, 161)
        Note over Student, Game: Intervention (Synchronous, Week 10)
        Instructor->>Student: Informed consent & Pre-test survey
        Instructor->>Student: Explains game rules & hands out cards
        Student->>Game: Plays first game session
        Instructor->>Student: Reveals feedback-based card distribution
    end
    %% Post-Intervention (Async)
    rect rgb(249, 226, 175)
        Note over Student, Game: Intervention (Asynchronous, Week 11)
        Student->>Moodle: Submits second peer feedback
        Instructor->>Moodle: Scores updated feedback
        Moodle->>Instructor: Provides updated scores
        Instructor->>Game: Assigns new yellow action cards
    end
    %% Class Session 2 (Sync)
    rect rgb(243, 139, 168)
        Note over Student, Game: Post-Intervention (Synchronous, Week 12)
        Instructor->>Student: Distributes updated game cards
        Student->>Game: Plays second game session
        Instructor->>Student: Post-test survey
    end
```
Note. This diagram visualizes the chronological sequence of events in the study across four key phases. Time progresses from top to bottom. The entities on the top and bottom represent the roles or systems involved: Student, Instructor, Moodle (an online learning management system), and Game (the card-based peer feedback intervention). The arrows represent direct actions (e.g., submitting feedback or handing out game cards). Each coloured section represents a week in the semester, distinguishing asynchronous phases (done outside of class time) and synchronous phases (conducted during scheduled class time).
Pre-Intervention Phase
Prior to this study, students had been engaging in traditional peer feedback activities since Week 4 of the semester, using the PCR Rubric (Appendix A) as a reference for evaluating their peers' work. This rubric provided a structured framework that guided their feedback, ensuring consistency and clarity in their evaluations. These prior experiences with peer review helped establish a baseline understanding of feedback expectations before the intervention was introduced.
Prior to the intervention, students participated in asynchronous peer feedback through the Moodle Learning Management System's (LMS) Workshop activity (Moodle, 2024). Each student provided feedback to three peers, and this feedback was extracted using a custom scraper (Appendix E) developed by the author. The extracted feedback was anonymized and analyzed using a Large Language Model (LLM) (OpenAI, 2024), which categorized comments based on a Code Review Taxonomy (Appendix B). The taxonomy classifies feedback into distinct categories based on specificity and constructiveness, such as "SA" (Specific Actionable), "G+" (General Positive), or "G0" (General Neutral).
To guide the LLM's classification, a few-shot approach (Appendix G) was used, in which the model was provided with a small number of labeled examples to infer how to apply the taxonomy to new comments. This strategy allows LLMs to generalize effectively without extensive training data (Anglin & Ventura, 2024). To verify the LLM's classifications, a subset of outputs was manually reviewed by the author. During this process, the prompting strategy and card-distribution scripts (Appendix E) were refined iteratively to improve classification consistency. While this verification process was informal and not independently validated, the reviewed samples showed a high level of agreement with the intended taxonomy categories, suggesting the LLM output was sufficiently reliable for the purposes of this exploratory study.
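To illustrate the shape of this few-shot setup, the sketch below shows one way such a classification call could be structured with the OpenAI Python client. The prompt wording, labeled examples, and model name are hypothetical stand-ins, not the study's actual scripts (see Appendices E and G).

```python
# Hypothetical sketch of the few-shot classification step. The prompt,
# examples, and model name are illustrative, not the study's exact scripts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = """\
Comment: "Good job!" -> G+
Comment: "Line 42: `update()` never resets the timer, so the state machine stalls." -> S-
Comment: "Consider splitting the collision check into its own function." -> GA
Comment: "Rename `x` on line 17 to `spawnTimer` so its purpose is clear." -> SA
"""

def classify_comment(comment: str) -> str:
    """Label one peer feedback comment with a Code Review Taxonomy code."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,   # deterministic output for repeatable scoring
        messages=[
            {"role": "system",
             "content": ("You classify peer code review comments. Reply with "
                         "exactly one code: SA, S+, S-, S0, GA, G+, G-, G0, "
                         "PV, or OT.")},
            {"role": "user",
             "content": f'{FEW_SHOT_EXAMPLES}\nComment: "{comment}" ->'},
        ],
    )
    return response.choices[0].message.content.strip()
```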
To quantify the quality of feedback for analysis, each taxonomy category was assigned a numerical score using a predefined conversion system (Table 1). These scores were then used to determine the number of cards received at the start of the game, introducing a performance-based starting condition for the intervention.
Table 1
Numerical Conversion of Feedback Quality Scores
Code | Description | Score |
---|---|---|
SA | Specific Actionable | 5 |
S+/S- | Specific Positive/Negative | 4 |
S0 | Specific Neutral | 3 |
G+/G-/GA | General Positive/Negative/Advice | 2 |
G0/PV | General Neutral/Placeholder Value | 1 |
OT | Off-topic/Irrelevant | 0 |
Each student provided feedback to three peers, and the median of these three numerical scores was used as their individual feedback quality score in statistical analysis. The median was chosen to reduce the influence of outliers or inconsistencies in individual comments, providing a more robust measure of typical feedback quality for each student.
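As a minimal sketch of this scoring step, assuming each student's three comments have already been classified (the function name and example codes are illustrative):

```python
from statistics import median

# Numerical conversion from Table 1
TAXONOMY_SCORES = {
    "SA": 5,
    "S+": 4, "S-": 4,
    "S0": 3,
    "G+": 2, "G-": 2, "GA": 2,
    "G0": 1, "PV": 1,
    "OT": 0,
}

def feedback_quality_score(codes: list[str]) -> float:
    """Median score of a student's three classified peer feedback comments."""
    return median(TAXONOMY_SCORES[code] for code in codes)

# Example: one specific-actionable, one general-positive, one specific-neutral
print(feedback_quality_score(["SA", "G+", "S0"]))  # -> 3
```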
To ensure that the game could be reasonably completed within a class session, a simulation was developed (Appendix E) to play 1,000 rounds of the game under varying conditions. The results indicated that the average game lasted 13 turns, with the longest game reaching 24 turns. In terms of duration, the simulation estimated an average game time of 19 minutes, with the longest recorded game taking 35 minutes. These findings informed the game design parameters, such as the number of starting resources and the inclusion of time-limiting mechanics to maintain feasibility within the allotted class period.
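The simulation itself appears in Appendix E; as a rough illustration of the approach, a Monte Carlo loop of the following shape can estimate turn counts. The end-of-game probability here is a hypothetical placeholder for the real rules, not the study's implementation.

```python
import random
from statistics import mean

def simulate_game(max_turns: int = 100) -> int:
    """Toy stand-in for one game; returns the number of turns played.
    Each turn has a fixed chance of triggering the win condition, as a
    placeholder for the full rule set simulated in Appendix E."""
    for turn in range(1, max_turns + 1):
        if random.random() < 0.07:  # hypothetical end-of-game probability
            return turn
    return max_turns

turn_counts = [simulate_game() for _ in range(1000)]  # 1,000 simulated rounds
print(f"average: {mean(turn_counts):.1f} turns, longest: {max(turn_counts)}")
```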
Intervention Phase
During a synchronous class session, students first completed the informed consent form (Appendix C), followed by a pre-test (Appendix D) that measured their perceived autonomy, competence, and relatedness in relation to peer feedback, along with baseline questions about their gaming habits and attitudes. They were then placed into groups of four and received physical card decks for gameplay. The instructor displayed a table assigning yellow action cards to each student, prompting their curiosity about the distribution.
Students played the card game (Appendix F) under standard conditions, engaging with mechanics centred on resource collection, strategic decision-making, and competition. Although peer feedback was not a direct action within the game, it was embedded in the game structure: students' starting resources (yellow action cards) were determined by the quality of their feedback in the previous peer review activity. Each student's feedback was analyzed and scored using a code review taxonomy (Appendix B), and their score was used to assign an initial advantage in the game. Since the course was Game Programming, the game's entities (e.g., State Machine, Timer, Collision, Sprite) were drawn from foundational development concepts covered in class, enhancing topical relevance and familiarity. This feedback-performance link was revealed after the first game session during a debriefing, when students were shown how their starting cards were derived from their peer feedback scores. This design choice created a delayed but meaningful incentive for quality feedback, connecting academic effort to in-game success.
Post-Intervention Phase
Following the first game session, students completed another asynchronous peer feedback activity through the Moodle LMS, knowing that their feedback quality would impact their performance advantages in a future game session. The second iteration of the game followed the same structure as the first, with students receiving yellow action cards based on their new feedback quality scores. After playing the game for the second time, students completed the post-test survey (Appendix D), measuring changes in their perceptions of competence, autonomy, and relatedness in relation to peer feedback, along with two open-ended questions to solicit suggestions about improved game mechanics and any comments about the game influencing their motivation.
Instruments
Code Review Taxonomy (RQ1)
The Code Review Taxonomy (Appendix B) was used to operationalize the concept of feedback quality for RQ1, which asked whether the GBL intervention improved the quality of PCR. This taxonomy categorized feedback comments into distinct types (Hamer, Purchase, Luxton-Reilly, & Denny, 2015; Indriasari, Denny, Lottridge, & Luxton-Reilly, 2023). Feedback was classified as either positive or negative, depending on whether it reinforced correct code implementation or identified issues. Additionally, comments were categorized based on whether they provided actionable advice or suggestions for improvement. The taxonomy also distinguished between general feedback (addressing broader coding concepts) and code-specific feedback (focusing on particular lines of code or implementation details). These categories provided a structured framework for analyzing feedback quality.
While no formal psychometric validation (e.g., inter-rater reliability or construct validity) is reported for this taxonomy, it has been used in multiple studies in computing education to analyze the quality of peer code review comments. Indriasari et al. (2023) adopted the taxonomy from Hamer et al. (2015), noting that it aligns with characteristics of effective written feedback outlined in broader feedback literature, such as specificity, constructive suggestions, and reinforcement of strengths (Gehringer, 2017; Voelkel, Varga-Atkins, & Mello, 2020). This alignment with pedagogical goals supports its use as a practical framework for categorizing feedback in this context.
Intrinsic Motivation Inventory (RQ2)
The Intrinsic Motivation Inventory (IMI) was used to address RQ2, which focused on whether the intervention influenced students' motivation as conceptualized by SDT. The IMI is a validated Likert-style survey that assesses SDT sub-scales for competence, autonomy, relatedness (Ryan, Mims, & Koestner, 1983). It utilizes a 5-point scale (1 = not at all true to 5 = very true). Survey questions were adapted to reflect the PCR experience with the full list of pre-test and post-test questions included in Appendix D. For example, competence-related questions asked whether students thought their feedback was useful to others. Autonomy-related questions asked students whether they felt they had choices in how they provided peer feedback or whether they had input in deciding how to evaluate their peers' work. Relatedness was assessed through questions that explored whether students felt connected to their peers during the peer review process and whether they felt comfortable giving feedback.
The IMI has demonstrated strong validity and internal consistency across multiple domains (McAuley, Duncan, & Tammen, 1989). The SDT research community recognizes that minor wording adjustments and even shorter versions can be used without compromising reliability (Self-Determination Theory, n.d.). This flexibility makes the IMI particularly well-suited to educational contexts like this one, where survey fatigue and contextual relevance are concerns.
Data Analysis
Data analysis was organized around the two research questions, each targeting a distinct dependent variable. The independent variable was the implementation of the game-based learning intervention, specifically, the peer feedback card game played by the students in Weeks 10 and 12.
To address RQ1, which asked whether the intervention improved the quality of peer feedback, the dependent variable was students' feedback quality scores. Each student provided feedback to three peers in both the pre- and post-intervention phases. To account for variability across different peer reviews, the median feedback quality score from each student's three evaluations was used for the analysis. Because these scores were ordinal, the Wilcoxon signed-rank test was used to assess pre-post differences.
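In Python, this test is a single SciPy call; the paired score arrays below are illustrative placeholders rather than the study's data.

```python
from scipy.stats import wilcoxon

# Paired median feedback quality scores (illustrative values only)
pre  = [2, 3, 2, 4, 1, 3, 2, 2, 3, 4]
post = [4, 4, 3, 5, 3, 4, 2, 4, 4, 5]

# One-sided alternative: post-intervention scores are higher than pre
stat, p = wilcoxon(pre, post, alternative="less")
print(f"W = {stat}, p = {p:.4f}")
```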
To address RQ2, which investigated whether the intervention influenced students' perceived competence, autonomy, and relatedness, the dependent variables were the sub-scale scores from the adapted IMI. Independent t-tests were conducted on the mean scores for each sub-scale, as the pre- and post-tests were completed anonymously and thus could not be paired. Perceived autonomy was measured using items Q5, Q6, Q8, and Q9; however, Q6 was excluded from analysis due to ambiguous wording, and Q9 was reverse-scored. Perceived competence was measured using Q2, Q3, and Q4, while relatedness was measured using Q1 and Q7.
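A sketch of this sub-scale analysis, assuming the anonymized survey responses are loaded as pandas DataFrames with one column per item (the file names and column labels are hypothetical):

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical files holding anonymized 5-point responses, columns Q1..Q9
pre = pd.read_csv("pretest.csv")
post = pd.read_csv("posttest.csv")

for df in (pre, post):
    df["Q9r"] = 6 - df["Q9"]  # reverse-score Q9 on the 5-point scale

SUBSCALES = {
    "autonomy": ["Q5", "Q8", "Q9r"],   # Q6 excluded for ambiguous wording
    "competence": ["Q2", "Q3", "Q4"],
    "relatedness": ["Q1", "Q7"],
}

# Independent t-tests: anonymous responses cannot be paired across time points
for name, items in SUBSCALES.items():
    t, p = ttest_ind(pre[items].mean(axis=1), post[items].mean(axis=1))
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```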
A significance level of α = .05 was used for all statistical tests.
In addition to quantitative data, students' open-ended responses from the post-test survey were analyzed using thematic coding. Responses were reviewed inductively to identify emergent themes related to students' motivation, perceptions of the game's mechanics, and suggestions for its improvement. This qualitative data supported interpretation of the quantitative results and helped contextualize student experiences during the intervention.
Ethical Considerations
This study received ethical approval from both the Université de Sherbrooke (Appendix H) on April 16, 2024, and John Abbott College (Appendix I) on May 14, 2024. The researcher also completed the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2: CORE 2022) training (Appendix J) on March 23, 2024, which certifies adherence to Canadian standards for research ethics. Data collection took place during Weeks 9 to 12 of the Fall 2024 semester. The results were analyzed during the Winter 2025 term, and the thesis was written in parallel to complete the requirements for submission to the Université de Sherbrooke by Spring 2025. This schedule ensured that all research activities were conducted within the approved ethical review period.
The researcher's dual role as both instructor and investigator raised potential concerns regarding coercion and power dynamics. To mitigate this, explicit informed consent was obtained (Appendix C), and students were informed that participation was entirely voluntary and would not affect their grades. They had the option to withdraw at any time without penalty. Students were given a clear explanation of the study's purpose, procedures, potential risks and benefits, as well as methods of data collection and use, thereby ensuring informed decision-making.
Anonymity and confidentiality were maintained throughout the process. Pre- and post-test survey responses were collected anonymously to protect students' motivational data. Feedback quality data, however, was linked to individual students to enable the intervention's game mechanic; in these cases, only the researcher had access to identifiable data. Before analysis, all peer feedback was anonymized and scrubbed of identifying details. All data were stored securely on Canadian servers via the Moodle LMS and Microsoft Forms.
The dissemination of findings poses minimal risk to participants. All results are reported in aggregate or anonymized form to ensure that individual students cannot be identified. No quotations or specific feedback samples are attributed to individual students. Findings will be shared through academic presentations, conferences, journals, and the final thesis submission, with no foreseeable negative impact on participants.
CHAPTER 5: RESULTS
Student Gaming Background
Before evaluating the impact of the game-based learning (GBL) intervention, a pre-test was conducted to assess participants' gaming preferences, prior exposure to GBL, and familiarity with card-based mechanics. The majority of students reported a strong preference for games, with a mean of 4.64 (on a 5-point scale) for the statement, "I enjoy playing games (analogue or digital)." Nearly all participants (93%) agreed or strongly agreed with this statement.
Despite this enthusiasm for gaming, prior exposure to GBL in academic settings was limited. Only 5% of participants reported frequent exposure, while 57% had rarely experienced such learning activities, and 24% had never participated in GBL. Given that the intervention used a card-based peer feedback game, familiarity with card games was also assessed, with 95% of students indicating at least some familiarity.
When it came to gameplay preferences, 64% of students preferred playing with others, 26% had no preference, and only 10% preferred playing alone. Competitive games were slightly more popular, with 52% favouring them, while 28% had no preference, and 20% preferred cooperative experiences.
Peer Feedback Quality (RQ1)
Each student (n = 42) provided feedback to three peers in both the pre- and post-intervention phases, and the median of these three scores served as that student's feedback quality score. [Figure 2] presents the distribution of pre- and post-intervention scores.
Figure 2
Pre- and Post-Intervention Peer Feedback Scores
To test whether the quality of peer feedback increased after the game-based intervention, a Wilcoxon signed-rank test was conducted on students' median feedback quality scores. The results indicated that post-intervention scores were significantly higher than pre-intervention scores, supporting the conclusion that feedback quality improved following the intervention.
Motivation Sub-Scales (RQ2)
To assess changes in students' motivation as conceptualized by Self-Determination Theory (SDT), pre- and post-test surveys measured perceived autonomy, competence, and relatedness using adapted items from the Intrinsic Motivation Inventory (Appendix D). These were administered before and after the game-based intervention. [Figure 3] presents grouped box plots comparing pre- and post-test scores across the three sub-scales.
Figure 3
Pre- and Post-Test Scores for Autonomy, Competence, and Relatedness
To determine whether any of these differences were statistically significant, independent t-tests were conducted for each sub-scale. Results are summarized in Table 2.
Table 2
Independent t-test Results for SDT Sub-Scales
Sub-Scale | Pre-Test Mean (SD) | Post-Test Mean (SD) | t(79) | p (two-tail) |
---|---|---|---|---|
Autonomy | 3.39 (0.72) | 3.78 (0.69) | -2.46 | 0.016 |
Competence | 3.52 (0.76) | 3.61 (0.76) | -0.54 | 0.592 |
Relatedness | 3.29 (0.86) | 3.67 (0.93) | -1.92 | 0.058 |
Only autonomy showed a statistically significant increase following the intervention (t(79) = -2.46, p = .016). Competence showed no significant change (p = .592), and relatedness approached but did not reach statistical significance (p = .058).
Qualitative Findings
As a secondary measure of the intervention's effectiveness, students answered two open-ended questions on the post-test survey. The first solicited suggestions for improving the game's mechanics.
Students also provided open-ended responses to the following question: "Did participating in the card game influence your approach to giving or receiving peer feedback? Please describe any specific ways the game affected your motivation, engagement, or quality of feedback, or if it had no impact." A thematic analysis revealed four primary themes and the distribution of responses is summarized in Figure 4.
Figure 4
Thematic Breakdown of Student Feedback
Over half of the students (54%) reported that the game-based intervention increased their motivation to provide better feedback. Many described the incentive structure as a driving force behind their work. One student shared, "100%, before I only put 'good job' or 'error in x.js,' but now I went in-depth knowing it would give me an edge while playing the game" (Participant 27). Another noted, "It simply motivated me to look at their code further and give more insightful feedback" (Participant 13). Many students described writing longer comments, checking the code more carefully, or being more specific because their feedback now had an impact on the game.
Despite this, 21% of students reported that the game had no meaningful effect on their approach to peer feedback. Some explained that they were already motivated to provide detailed responses and did not feel the game altered their process. One student reflected, "Not necessarily. My approach for giving peer feedback is being as fair as possible, and I don't let the thought of it affecting the card game sway my decisions" (Participant 21).
For 15% of students, the game increased their awareness of feedback quality without fully changing their habits. One student shared, "Not really, however I think this is just because I am not someone easily swayed and I had good results to begin with" (Participant 10). Another noted, "I wouldn't go out of my way to provide feedback, but if it was an easy bug fix that I could provide, I would give them the feedback on it" (Participant 14).
A smaller group (10%) reported feeling pressured or confused about how the game tied into their feedback quality. One student expressed concern that "After the card game, I felt that the bar for effort in grading was set from our previous results. It put pressure to either match how we graded previously or improve while discouraging doing any less than that" (Participant 32). Another noted, "Today was the first day that I played the card game, and honestly, I had no idea how it worked, so I never thought of the card game when I was grading before" (Participant 36).
These qualitative insights highlight the varying degrees to which students responded to the intervention: some expressed strong motivation, others reported minimal change, and a few felt pressure or confusion.
CHAPTER 6: DISCUSSION
Impact on Feedback Quality (RQ1)
The first research question asked: "Does a game-based learning intervention increase the quality of feedback provided during Computer Science peer code review?" Following the intervention, students' feedback became more detailed and specific compared to earlier rounds of traditional PCR, suggesting a potential relationship between the GBL context and increased feedback quality. This aligns with previous research showing that peer review quality improves when students are more invested in the process and understand its purpose (Huisman, Saab, Van Driel, & Van Den Broek, 2018; Indriasari, Luxton-Reilly, & Denny, 2020).
A key design element of the intervention was the integration of academic performance into the logic of the game. Students' feedback scores influenced the number of yellow action cards they received, which shaped their strategic options during play. This structural connection reflects the concept of meaningful play, in which actions must have visible and integrated consequences within the system (Salen & Zimmerman, 2003). Rather than functioning as an isolated academic task, peer feedback became a meaningful in-game action that affected future possibilities. This aligns with prior studies emphasizing that feedback tasks are more effective when they are situated in authentic, high-stakes contexts (Brown, Walia, Radermacher, Singh, & Narasareddygari, 2020).
Many students described this relationship as motivating. One wrote, "I wrote way more, and spent more time looking at the code, and debugged it" (Participant 12). Another shared, "It gave me an incentive to put more effort and details into my peer feedback than before" (Participant 7). Others echoed similar sentiments:
- "Knowing that you get more yellow action cards definitely prompted me to giving more in-depth peer feedback" (Participant 9).
- "It simply motivated me to look at their code further and give more insightful feedback" (Participant 13).
- "It improved my motivation to write better comments and improved feedback" (Participant 33).
These responses support research suggesting that well-designed game environments can enhance student motivation by making learning tasks more personally meaningful (Papastergiou, 2009; Proulx, Romero, & Arnab, 2017). While this pattern was common, it was not universal. One student noted, "Not really, I just maybe a slightly more detailed version of what I would usually write" (Participant 5). Another explained, "Knowing that the cards I would get for the card game was impacted by my peer review assessment, it didn't really impact the way I graded people or in the game" (Participant 20). These responses reflect the variability in how students responded to the intervention, shaped by individual preferences, perceptions of fairness, or prior experiences with peer review (Falchikov, 2013). Taken together, the findings suggest that embedding peer feedback into the mechanics of a game environment can encourage increased effort and depth in peer review for many students.
Motivational Outcomes (RQ2)
In addition to changes in peer feedback quality, the study also asked: "Does the game-based learning intervention influence students' perceived competence, autonomy, and relatedness, as conceptualized by Self-Determination Theory?" This examined how the game-based intervention may have influenced students' motivation to participate in peer review. Drawing on SDT, the following sections explore how students experienced the core psychological needs of autonomy, competence, and relatedness during the intervention.
Autonomy
The observed increase in autonomy suggests that the game may have supported a more self-directed and engaging peer review experience. However, this finding should be interpreted with caution due to inconsistencies in survey wording. Most post-test items referenced the game explicitly (e.g., "The card game made me feel more in control of the peer feedback I gave"), while the pre-test items used neutral phrasing (e.g., "I feel in control of the peer feedback I provide"). This shift may have influenced how students interpreted the questions, potentially measuring not just their sense of control, but their perception of how much the game changed that control. In the case of the autonomy scale, Q6 scores declined post-intervention, suggesting either that students felt less in control because of the game, or that they felt no additional autonomy and rated the game accordingly. Due to this ambiguity, Q6 was excluded from the autonomy scale, as noted in the Methodology chapter.
Despite this issue, the game design included features aligned with autonomy-supportive principles. According to SDT, autonomy is enhanced when learners experience meaningful choice and self-direction (Deci & Ryan, 1994). Students had flexibility in how they structured their feedback and when to evaluate, aligning with game design principles like meaningful play (Salen & Zimmerman, 2003). Prior research supports the motivational value of player agency and strategic freedom in learning games (Indriasari, Denny, Lottridge, & Luxton-Reilly, 2023; Papastergiou, 2009). Future iterations should standardize survey items across time points and consider adding new game mechanics that enhance autonomy more explicitly, such as letting students choose focus areas for feedback or unlock branching outcomes based on their peer review decisions.
Competence
Although feedback quality improved significantly, students did not report a corresponding increase in perceived competence. One possible explanation is that students' self-perceptions, or lack thereof, did not align with their actual performance. While the game encouraged more detailed peer reviews, it may not have provided the kind of feedback or reinforcement that helps learners recognize skill development. Research suggests that competence is best supported when learners receive clear indicators of success and opportunities for guided improvement (Bandura, 2012; Hattie & Timperley, 2007).
Some students acknowledged that the game motivated them but did not necessarily make them feel more skilled. One student noted, "Yes, the card game influenced me to give more feedback but up to a certain degree. I wouldn't go too out of my way, but if it was an easy bug fix that I could provide, I would give them the feedback on it" (Participant 15). This suggests that while the game increased effort, it may not have reinforced growth in evaluative ability, a key aspect of perceived competence in SDT.
To address this, future iterations could incorporate scaffolding mechanisms such as worked examples, or formative peer review training. These supports not only help students give better feedback but also build evaluative confidence. Additionally, creating explicit opportunities for students to compare their feedback across game rounds, through guided reflection or pre-post exemplars, may help them recognize their progress and strengthen their sense of competence.
Relatedness
While relatedness did not show a statistically significant improvement, qualitative responses revealed mixed experiences. Some students found the game engaging and social, while others did not see a strong connection between the game and peer interactions. The competitive nature of the game may have contributed to this, as competition can sometimes emphasize individual performance over collaboration (Nicholson, 2015). One student noted, "The game made me a bit competitive, which affected the way I gave feedback to have a better chance of winning the card game" (Participant 19). As shown in Figure 5, while the competitive format aligned with most students' preferences, it may have been less effective for those who value collaboration. Still, the strong preference for playing games with others suggests that the social aspect of the card game was welcomed, even if the structure didn't consistently maintain a sense of peer connection.
Figure 5
Player Preferences
Some students may have perceived the feedback process as a social exchange, adjusting their tone or approach in response to the awareness of a peer audience. One student noted, "I did make sure the way I wrote the feedback was more professional," (Participant 28) suggesting that the intervention may have encouraged more thoughtful peer-to-peer communication.
These responses point to a need for a more balanced motivational structure. While competition can enhance motivation, research shows that environments emphasizing shared goals, peer support, and mutual respect are more likely to foster relatedness (Ardic & Tuzun, 2021; Powell & Kalina, 2009). Future iterations of the intervention could incorporate hybrid game mechanics, such as team-based challenges, collaborative missions, or peer mentoring roles. These additions may help ensure that students not only remain motivated but also feel a stronger sense of connection to one another through the feedback process.
Limitations and Future Directions
Sample and Contextual Limitations
While the study demonstrated statistically significant improvements in feedback quality and autonomy, several contextual factors limit the generalizability and sustainability of the findings. The intervention was conducted over two weeks in a single CS course, with a modest sample size (n = 42), which limits how far the findings can be generalized to other courses, institutions, or student populations.
Furthermore, the novelty of the intervention may have influenced initial perceptions. Pre-test data indicated that most students had limited prior experience with GBL, with 57% having rarely participated in it and 24% never having done so. Prior research suggests that novelty effects can temporarily increase motivation (Papastergiou, 2009). Future studies should explore longer-term interventions or repeated gameplay sessions to assess whether motivation is sustained once the novelty wears off.
Additionally, the researcher also served as the instructor. While steps were taken to anonymize survey responses and ensure voluntary participation, it is possible that this dual role influenced student behaviour or motivation. Future implementations could be replicated in classrooms led by independent instructors to reduce the risk of researcher bias and further assess the generalizability of the results.
Methodological Considerations
One limitation of the study is the absence of a control group, which restricts the ability to attribute observed changes solely to the GBL intervention. Without comparing outcomes to a group using traditional peer feedback practices, causal inferences remain tentative. Future studies should include a control or comparison condition to isolate the specific effects of GBL on student motivation.
Another methodological consideration relates to the use of a Large Language Model (LLM) to classify peer feedback based on a predefined taxonomy. Although a subset of LLM-generated classifications was manually reviewed and found to align well with human-coded interpretations, the validation process was informal and not independently verified. While iterative refinements to the prompt and scoring logic improved consistency, the reliability of the automated analysis cannot be fully confirmed. Future research should incorporate more formal validation procedures, such as inter-rater reliability with multiple human coders, to increase confidence in the accuracy and reproducibility of automated feedback scoring (Cheng, Chen, Foung, Lam, & Tom, 2018).
While the Intrinsic Motivation Inventory (IMI) is a validated tool for measuring perceived motivation, it relies on self-reported data. Prior research cautions that students' perceptions of their learning or motivation do not always align with actual learning outcomes or behavioural changes (Persky, Lee, & Schlesselman, 2020). Furthermore, while paired-sample t-tests would have been more appropriate for comparing pre- and post-intervention motivation scores, the anonymity of survey responses made it impossible to link individuals across time points. This necessitated the use of independent-sample t-tests, which are less sensitive to within-student changes. Future studies may consider implementing pseudonymous identifiers to enable matched analyses while maintaining participant confidentiality.
Game Design Implications
Although peer feedback was not an action students performed within the game, it played a central role in determining gameplay conditions. Students' prior feedback quality influenced how many yellow action cards they received at the start of each game session, directly affecting their strategic options and potential for success. This mechanic reflects the principles of meaningful play, in which player actions must be both discernible and consequential within the system (Salen & Zimmerman, 2003).
By embedding peer feedback into the logic of gameplay, the intervention went beyond superficial gamification and aligned more closely with GBL, where learning tasks are structurally integrated rather than layered on as external rewards (Deterding, Dixon, Khaled, & Nacke, 2011; Nicholson, 2015). While some definitions might describe aspects of this intervention as gamification, GBL is not mutually exclusive with gamification and in this case provides a more comprehensive framing (Al-Azawi, Al-Faliti, & Al-Blushi, 2016). The learning task was not merely incentivized, it became embedded in the game's structure and progression, consistent with GBL design principles.
While this integration supports meaningful play, the connection between peer review and in-game performance was only revealed after the first game. This delayed feedback may have limited players' ability to perceive how their actions shaped outcomes, one of the core conditions for meaningful play. Future iterations could make this relationship more visible during gameplay by incorporating mid-game PCR bonuses, dynamic feedback quality indicators, or real-time adjustments to player status. These additions would preserve the game's existing structure while enhancing the immediacy and clarity of the learning-gameplay connection.
Student feedback also revealed valuable insight to refine gameplay. While many enjoyed the competitive format, others noted underdeveloped mechanics or a lack of narrative context. Comments such as "There should be some lore to the game, like why are we collecting all the green cards?" (Participant 12) and "The trade [card] is almost not used" (Participant 14) point to ways the game could improve in thematic coherence and strategic balance. Enhancing underutilized mechanics and incorporating light narrative framing could boost immersion and perceived relevance. Additionally, integrating cooperative elements, such as team-based objectives or collaborative card effects, may better support students who are socially motivated, thereby reinforcing the SDT need for relatedness (Powell & Kalina, 2009).
Implications for Practice
This study reinforces the need to design peer learning environments that make evaluative tasks meaningful, socially relevant, and autonomy-supportive. As a CS educator, I plan to continue using game-based interventions to promote high-quality PCR, particularly by embedding learning goals into well-structured game systems. The intervention's success suggests that students are more likely to invest effort when feedback is clearly linked to in-game outcomes. However, student responses also highlight the importance of transparency, balance, and narrative coherence in game design. For example, some students expressed confusion about game mechanics or a desire for deeper thematic immersion, indicating that instructional games should be refined not only for pedagogical impact but also for clarity.
Drawing from both the literature and student feedback, I plan to refine future iterations of this intervention by:
- emphasizing the relationship between feedback quality and gameplay success during play;
- strengthening under-utilized mechanics (e.g., trade cards) and rebalancing elements to sustain strategic depth;
- adding light narrative framing to reinforce coherence and purpose (e.g., explaining why students are "collecting" resources);
- incorporating cooperative mechanics (e.g., team-based bonuses, peer mentoring cards) to support students who are more socially motivated and to foster relatedness.
These recommendations echo best practices identified earlier in the study. From a social-constructivist perspective, learning is most effective when it is shared, situated, and actively constructed through interaction. In peer review, this means creating structures that promote thoughtful critique and reflective dialogue, not only procedural compliance. Similarly, as outlined in SDT, students are more intrinsically motivated when they feel competent, autonomous, and connected. The design of the game should continue to support these needs, both through gameplay and the surrounding instructional context.
Finally, this study supports my broader aim to align peer feedback with constructive alignment principles, ensuring that learning outcomes, assessments, and activities reinforce one another. If peer feedback is to be valued as a professional skill, it must be assessed as such and designed in ways that connect classroom instructional strategies to authentic professional practices. Integrating these principles into my course design, especially in preparation for students' industry internships, will help bridge the gap between educational and workplace expectations.
CHAPTER 7: CONCLUSION
This study examined whether a game-based learning (GBL) intervention could improve the quality of peer feedback and increase students' intrinsic motivation to participate in Peer Code Review (PCR) in a Computer Science (CS) course. By embedding feedback performance into the mechanics of a classroom card game, the intervention aligned with principles of meaningful play and Self-Determination Theory (SDT), aiming to support autonomy, competence, and relatedness. The findings contribute to ongoing efforts to enhance peer feedback in technical disciplines by demonstrating how thoughtfully designed GBL can increase student motivation in authentic learning tasks.
The results for the first research question show that students' feedback became significantly more specific and actionable after the GBL intervention. This improvement was reinforced by student reflections describing increased effort and attention during the review process. The structural integration of feedback performance into gameplay created a meaningful incentive that many students found motivating. These results suggest that linking peer feedback quality to game mechanics can serve as an effective strategy for increasing motivation and depth in PCR activities.
The intervention, with respect to the second research question, was associated with a statistically significant increase in students' perceived autonomy during peer review. This outcome indicates that the game design supported a greater sense of ownership and self-direction in how students approached the feedback process. However, no significant gains were observed in perceived competence or relatedness. These findings highlight the need for additional design elements, such as formative scaffolds and collaborative mechanics, to more fully support the psychological needs that underlie intrinsic motivation.
The conclusions are constrained by limitations in scope, including (but not limited to) the short intervention period, modest sample size, and the inability to conduct paired-sample analyses due to anonymous data collection. Nonetheless, the results offer a promising foundation for further investigation into how GBL can enhance student motivation and feedback practices in CS education. Future research should explore the sustainability of these effects over time, assess outcomes in more diverse populations, and examine hybrid approaches that combine individual competition with team-based collaboration.
When learning becomes part of the game, and the game reflects the quality of learning, students begin to care not just about playing well, but about thinking well. This study shows that when peer feedback is meaningfully embedded in gameplay, students respond with greater effort, agency, and intention. For educators seeking to move beyond surface participation in peer review, GBL offers more than fun gameplay; it offers instructional alignment. It connects motivation to mastery and turns feedback into a shared, dynamic experience. The classroom, like any good game, thrives on clear rules, meaningful choices, and a sense of purpose. When designed thoughtfully, it becomes a space where students play not just to win, but to grow.
Appendix A: PCR RUBRIC
Appendix B: CODE REVIEW TAXONOMY
Code Review Taxonomy
- S+: Comments in this category provide positive feedback about a specific element of the code.
- S−: Comments in this category provide specific negative feedback about the functionality, style, or correctness of the program.
- S0: Comments in this category are specific, but not obviously positive or negative in tone.
- SA: Comments in this category provide specific advice to a student about how to improve their code.
- G+: Comments in this category are general positive comments that do not relate to a specific element of style or a requirement specified in the assignment.
- G−: Comments in this category are general negative comments. They do not refer to specific elements of the code, but are instead directed at the overall quality (summary comments).
- G0: Comments in this category are general comments with neither positive nor negative connotations.
- GA: Comments in this category provide general advice to peers without referring to specifics within the code.
- OT: Comments in this category are off-topic.
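To make the taxonomy easier to work with in analysis or grading scripts, the sketch below encodes the category codes in Python and attaches a trade-card reward schedule. This is illustrative only: the reward values are assumptions made for the sketch, not the scheme used in the study.

```python
# Illustrative sketch only: the category codes come from the taxonomy above,
# but the trade-card reward values are hypothetical, not the study's scheme.

# Hypothetical reward schedule: specific, actionable comments earn the most.
TRADE_CARD_REWARDS = {
    "S+": 1, "S-": 1, "S0": 1, "SA": 2,   # specific comments
    "G+": 0, "G-": 0, "G0": 0, "GA": 1,   # general comments
    "OT": 0,                              # off-topic
}

def trade_cards_for(categories: list[str]) -> int:
    """Total trade cards earned for a list of categorized review comments."""
    return sum(TRADE_CARD_REWARDS.get(c, 0) for c in categories)

# Example: one specific-advice comment and one general positive comment.
assert trade_cards_for(["SA", "G+"]) == 2
```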
Appendix C: CONSENT FORM
Purpose
This project is being conducted by Vikram Singh, a Computer Science Teacher at John Abbott College, for the completion of a Master's Degree in College Teaching, accredited by the Université de Sherbrooke. This study explores how different approaches to peer code review affect student motivation and feedback quality.
Procedure
- Before beginning the activity, you'll complete a short survey regarding your perspectives on peer feedback.
- You will participate in a brief card game related to your course material.
- After the card game, you'll complete a survey about your experience.
- Your feedback on the course assignments, along with survey responses, will be analyzed to better understand factors influencing feedback quality and student motivation in peer code review.
Potential Risks & Benefits
There are no known risks for participation in this study.
By investigating what makes peer code review motivating (or not), your participation could lead to the design of interventions that make peer code review a more engaging and beneficial process for everyone. Your participation could help students develop stronger feedback skills, crucial for both their success in CS courses and future careers. The findings could provide valuable information to your instructor and others about how to refine peer code review practices, potentially leading to widespread changes that enhance the learning experience for many CS students. If you wish, the results of the study will be sent to you after it has been completed.
Confidentiality
Your participation in this study is confidential in the following ways:
- Your name will not appear in the research results.
- The researcher/teacher will never know whether you agree or do not agree to participate in this study; therefore, the choice to participate or not has no impact on your final grade, nor on any future interaction with your teacher.
- The survey results will be anonymous and kept for five years in Microsoft OneDrive behind two-factor authentication.
- The Microsoft Forms questionnaire will be completed anonymously and your personal information will not be revealed. The servers for Microsoft Forms and OneDrive are located in Canada, so your data is protected by Canadian law.
Your participation in this research is completely voluntary. You have the right to not consent or withdraw consent at any time. If you have any questions about the content or methods of this study, please feel free to contact the teacher/researcher, Vikram Singh, at [email protected] or the supervisor, Paul Darvasi, at [email protected].
If you have any questions about your rights or treatment during this study, please contact the Research and Innovation Officer at JAC, Teresa Hackett, at [email protected].
Statement of Consent
I attest that I have read the above information and freely consent to participate in the study on peer code review within the context of my 420-5P6 Game Programming course during the Fall 2024 semester at John Abbott College. I understand that my peer feedback data from the course assignments, which may include identifiable information, will be used to facilitate the card game activity and subsequent analysis. I also acknowledge that while this data may be referenced during the activity, my name or any other personal identifiers will not appear in the final research report.
- Student Name
- Student ID
- Date
I wish to receive the results of the study. My email is:
Appendix D: PRE & POST SURVEYS
Pre-Test Questions
- I enjoy playing games (analogue or digital). (0-5)
- What types of games do you enjoy playing? (Check all that apply)
- Board games
- Card games
- Video games (console or PC)
- Mobile games
- Role-playing games (e.g., Dungeons & Dragons)
- Puzzles
- Sports
- How often do you play games?
- Always (every day)
- Frequently (a few times a week)
- Occasionally (a few times a month)
- Rarely (a few times a year)
- Never
- What do you enjoy most about playing games? (Check all that apply)
- Strategic Thinking
- Social Interaction
- Problem-Solving
- Competition
- Relaxation/Fun
- Achieving Goals
- Have you ever participated in learning activities through games in other courses?
- Never
- Rarely (once or twice per semester)
- Occasionally (every few weeks)
- Frequently (weekly or more often)
- How familiar are you with card games specifically?
- Not at all familiar
- Somewhat familiar
- Very familiar
- Do you prefer playing games alone or with others?
- Alone
- With others
- No preference
- Do you prefer competitive or cooperative games?
- Competitive (I enjoy games where I compete against others)
- Cooperative (I enjoy games where I work with others toward a common goal)
- No preference
Post-Test Questions
- I enjoyed playing the card game. (0-5)
- Do you have any feedback about the mechanics or design of the card game? Feel free to comment on any aspects of the game itself, including gameplay, rules, or overall enjoyment. (Open-ended)
- Did participating in the card game influence your approach to giving or receiving peer feedback? Please describe any specific ways the game affected your motivation, engagement, or quality of feedback, or if it had no impact. (Open-ended)
Likert-Scale Questions
| Question | Pre-Test | Post-Test | SDT Need |
|---|---|---|---|
| Q1 | I enjoy providing feedback to my peers. | I enjoyed providing feedback to my peers more knowing how it affects the card game. | Relatedness |
| Q2 | I usually put a lot of effort into the feedback I give. | I put more effort into giving peer feedback knowing how it affects the card game. | Competence |
| Q3 | I believe the feedback I give is helpful in improving my peers' work. | I believe the feedback I give is helpful in improving my peers' work. | Competence |
| Q4 | I believe the peer feedback I receive is useful for improving my work. | The feedback I received was helpful in improving my work. | Competence |
| Q5 | I feel confident about my feedback improving others' work. | I feel confident about my feedback improving others' work. | Autonomy |
| Q6 | I feel in control of the peer feedback I provide. | The card game made me feel more in control of the peer feedback I gave. | Autonomy |
| Q7 | I feel connected to my peers when giving/receiving feedback. | Playing the card game made me feel more connected to my peers when giving/receiving feedback. | Relatedness |
| Q8 | I am motivated to give quality peer feedback. | I was more motivated to give peer feedback knowing it affected the card game. | Autonomy |
| Q9 | I feel anxious when giving peer feedback. | I felt anxious giving peer feedback knowing it affected the card game. | Autonomy |
Appendix E: SCRIPTS
Replace with scripts
Appendix F: GAME RULES
Initial Notes (2024.08.28)
- what is the game via SDT
- could have a control, or have the same group play no game and then play the game
- keep the game simple
- add a layer to the Moodle workshop activity
- create a currency or payment system
- starting money for a Monopoly-style game (gamified), where the money is earned from the peer review
- how to unlock things or loot boxes?
- Narrative games: create a simple board game where the currency is earned through feedback
- Something that moves a piece, where the piece advances to the finish based on resources
- to get a car across, you need wheels, gas, and an engine
- the engine needs X parts, which are distributed
- Trading game? Every coder has 10 cards; you can pull a random card from the deck. I have three wheels, so I have to trade for an engine.
- A trade card that gets burned when you trade. When you provide good feedback, you get a trade card.
- two phases.
- the resource-gathering phase is based on feedback quality, classified using the code review taxonomy
- Is the person that receives the feedback the judge?
- Maybe an LLM could take in the CRT and reward the student based on the feedback (see the sketch at the end of these notes)
- AI is a tool just like dice/cards/etc.
- Keep in mind we're still measuring the reliability of GPT, and it's not essential to the outcome
- No game and then game, so sections don't need to be kept separate. When running the no-game session, I still have to tell them that there is an experiment, but I don't have to say specifically what I'm looking for. The consent form will cover both sessions, game and no game.
- Either they run their own prompt and generate their own cards, or they bring back the feedback
- OR they start the game before feedback, so they get invested a little bit (we have no tires or an engine). Could be an existing card game.
- Three phases, be very explicit about each step of the game and feedback process.
- Keep the game SIMPLE
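The LLM idea in these notes, classifying feedback against the CRT and rewarding students accordingly, could be prototyped roughly as follows. This is a minimal sketch assuming the OpenAI Python client; the model name and prompt wording are placeholders, not what the study actually used.

```python
# Minimal prototype of the "LLM classifies feedback via the CRT" idea above.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are placeholders, not the study's setup.
from openai import OpenAI

CRT_PROMPT = (
    "Classify the following peer code review comment into exactly one of "
    "these Code Review Taxonomy categories: S+, S-, S0, SA, G+, G-, G0, GA, OT. "
    "Reply with the category code only.\n\nComment: {comment}"
)

client = OpenAI()

def classify_comment(comment: str) -> str:
    """Ask the model for a single CRT category code for one review comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": CRT_PROMPT.format(comment=comment)}],
    )
    return response.choices[0].message.content.strip()

# Example: a specific, actionable comment should come back as "SA".
print(classify_comment("Extract the collision check into its own method."))
```

As the notes caution, the reliability of such a classifier would itself need to be measured before its output could drive in-game rewards.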
Game
- Resources (Blue)
- Drawn every turn and can be used to build structures.
- Structures (Orange)
- A player can obtain these by spending resources.
- Perhaps there are only a certain number of these based on the number of players.
- These would have to be in a separate pile for players to buy.
- Building the castle structure is the win condition and requires many resources.
- Events (Red)
- If a player draws this, they must lose either 3 resources or 1 structure.
- Trade (Green)
- A player may use this card to trade with another player to get a card they need. These will be given out based on the peer feedback quality.
Every turn, a card is drawn. The player may spend resources to build structures while avoiding events. Structures grant additional resources per turn. The first player to build a castle wins!
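To give a rough sense of pacing for playtesting, here is a minimal single-player simulation of the turn loop above. The deck composition, structure cost, and castle cost are invented for illustration, since the rules text does not fix exact numbers.

```python
# Minimal single-player simulation of the turn loop described above.
# Deck composition and costs are illustrative assumptions: the rules text
# does not fix exact card counts or structure prices.
import random

DECK = ["resource"] * 20 + ["event"] * 5  # blue resource and red event cards
STRUCTURE_COST = 3   # resources to buy one structure from the orange pile
CASTLE_COST = 10     # resources to build the castle (the win condition)

def turns_to_win(seed: int = 0) -> int:
    """Play one solo game and return how many turns the castle took."""
    rng = random.Random(seed)
    deck: list[str] = []
    resources = structures = turns = 0
    while True:
        turns += 1
        if not deck:                 # reshuffle a fresh deck when empty
            deck = DECK[:]
            rng.shuffle(deck)
        card = deck.pop()
        if card == "resource":
            resources += 1
        else:                        # event: lose 3 resources or 1 structure
            if resources >= 3 or structures == 0:
                resources = max(0, resources - 3)
            else:
                structures -= 1
        resources += structures      # each structure grants a resource per turn
        if resources >= CASTLE_COST: # build the castle as soon as possible
            return turns
        if resources >= STRUCTURE_COST:
            resources -= STRUCTURE_COST
            structures += 1

print(turns_to_win())  # rough game length under these assumed numbers
```

Running this across several seeds gives a quick estimate of game length, which is useful for tuning the deck and costs so a round fits within a single class period.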