Investigating Accuracy and Perceived Value of Feedback in Peer Code Review Using Gamification

Authors: Theresia Devi Indriasari, Andrew Luxton-Reilly, Paul Denny

Date: 2021-06-26

Abstract

The practice of peer code review has been shown to deliver a variety of benefits to programming students. These include learning from producing and receiving feedback, and from being exposed to a range of problem-solving approaches and solutions. However, the success of a peer code review activity depends on the quality and accuracy of the reviews that students produce, and prior work has shown that these can sometimes be poor. One approach for addressing this problem is to incorporate motivational incentives directly into the design of the code review platform. In this research, we explore the use of gamification in an online peer code review tool, where game-like elements are used to reward students for generating accurate and helpful reviews. We report the results of a randomized controlled study (n=171) that measures both review accuracy and the perceived value of the feedback produced. Although quantitative ratings of review quality did not differ significantly between the control and experimental conditions, we observed interesting trends relating to the perceived value of the feedback. Students in both groups had similar views regarding the usefulness of the feedback they received on their own work; however, students in the experimental condition tended to express more positive sentiments towards the quality of the feedback they produced for their peers and observed from other reviewers.