Solving the Feedback Problem in Crowdsourcing Games: Design Lessons from Smorball

Max Seidman, Mary Flanagan and Gili Freedman

Abstract

In recent work (Seidman, Flanagan, Rose-Sandler & Lichtenberg, 2016), we outlined the algorithms and processes by which we verify players’ responses in crowdsourcing games and determine which responses are accurate. In this paper, we focus on design. Crowdsourcing games, which seek to motivate users to complete human intelligence tasks through enjoyable gameplay, pose unique design challenges not encountered when designing entertainment-only (non-impactful) games or non-game crowdsourcing applications. These challenges often take the form of a trade-off: risk lower task efficiency or data quality to improve gameplay and user experience, or vice versa. Chief among the challenges of designing crowdsourcing games is the “feedback problem”: a game must be able to provide feedback to the player about whether her action was correct or incorrect. While a crowdsourcing application can thank a user for contributing data without commenting on whether that data was helpful, a crowdsourcing game, in order to be compelling, ought to reward the player for submitting good data and avoid rewarding the player for submitting low-quality data. Since the crowdsourcing game system by definition cannot tell whether player-submitted data is correct (otherwise the task could be automated), however, providing feedback to the player risks reinforcing the wrong behaviors: marking the player incorrect when she has completed the task correctly, or vice versa. This paper investigates how the feedback problem can be effectively addressed in the design of crowdsourcing games. We illustrate this using the design of the document transcription game Smorball (2015), winner of the Boston Festival of Indie Games’ Best Serious Game award, as a case study.