Tools and methods in evaluating citizen science
The tools and methods used for evaluation in citizen science largely follow standard social science practice, ranging from questionnaires and interviews to focus groups, participant observations, and documented self-reflections by the involved scientists and volunteers. In their overview of citizen science projects in biodiversity, Peter et al. (2019) report a great diversity of study designs and evaluation methods, with many projects relying on self-reported data.
Surveys are amongst the most frequently applied instruments for collecting self-reported data, aiming mainly at gathering evidence of participants' learning outcomes. Citizen science practitioners can nowadays turn to a number of shared online resources that help to collect insights into participants' motivations, satisfaction, benefits, self-efficacy, and related aspects (Phillips et al. 2018); see also the previous chapter on evaluating individual outcomes.
Interviews are another instrument frequently used for evaluation. These range from structured or semi-structured sets of questions to very open and exploratory formats. Scholars have published their interview guidelines for gathering insights into participants' motivations, engagement activities, and benefits, amongst other aspects (exemplary interview guidelines can be found in the previous chapter). Narrative and storytelling approaches also form part of the evaluation spectrum. For example, Constant and Roberts (2017) combine narrative interviews with instruments such as photo essays, research diaries, and storyboards to reveal the context-based, tacit, and intangible factors involved in personal outcomes.
Other evaluation approaches are built into the interaction process itself or are applied to the available data without an a priori evaluation design. An example of the former is the embedded assessment approach, where a series of games or quizzes forms part of the citizen science activity and helps to collect insights into participants' increased skills and knowledge in playful ways, without people being aware that their knowledge is being tested (Becker-Klein et al. 2016). The non-intrusive, non-design-specific approach can be exemplified by Luczak-Rösch et al. (2014), who analysed the comments shared by and amongst their online citizen scientists and measured how far participants adopted technical terms in their language as an indicator of knowledge gains.
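To make this kind of non-intrusive analysis more concrete, the sketch below shows one minimal way such a measurement could be set up: the share of participant comments containing domain-specific vocabulary is tracked per month as a rough proxy for knowledge gains. The comment data, the term list, and the monthly aggregation are hypothetical illustrations and are not taken from Luczak-Rösch et al. (2014); an actual study would require a validated term list and a more careful treatment of language use.

```python
# Illustrative sketch only: tracking adoption of technical vocabulary in
# participant comments over time. All data and terms below are hypothetical.

from collections import defaultdict
from datetime import date

# Hypothetical comment records: (date posted, comment text)
comments = [
    (date(2021, 3, 1), "I think I spotted something bright in the image"),
    (date(2021, 4, 12), "Could this be a gravitational lens near the galaxy core?"),
    (date(2021, 5, 3), "The light curve suggests a transit rather than noise"),
]

# Hypothetical list of domain-specific terms used as a proxy for knowledge gains
technical_terms = {"gravitational lens", "light curve", "transit", "galaxy core"}

def uses_technical_term(text: str) -> bool:
    """Return True if the comment contains at least one technical term."""
    lowered = text.lower()
    return any(term in lowered for term in technical_terms)

# Aggregate by month: [comments with technical terms, total comments]
monthly_counts = defaultdict(lambda: [0, 0])
for posted, text in comments:
    month = (posted.year, posted.month)
    monthly_counts[month][1] += 1
    if uses_technical_term(text):
        monthly_counts[month][0] += 1

for month in sorted(monthly_counts):
    technical, total = monthly_counts[month]
    print(f"{month[0]}-{month[1]:02d}: {technical}/{total} comments use technical terms")
```

A rising share of comments containing such vocabulary would, in this simplified illustration, be read as a sign that participants are picking up the project's technical language, which is the general idea behind analysing naturally occurring interaction data instead of administering a separate test.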