Research on Evaluation
-
Open Evaluation Science
I explain open science and describe how evaluators can adopt its practices. I anticipate a future survey of evaluators on their perceptions of open science and the prevalence of its practices in the field.
-
Published RoE
Dr. Alicia Stachowski and I are working on a follow-up to Coryn et al. (2017), examining the next five years of studies published in evaluation journals to assess whether the trend in published RoE has continued.
-
RoE in 2019 AEA Conference
Members of the AEA RoE TIG examined the accepted 2019 AEA conference proposals to determine the extent to which they constituted RoE. Overall, we found that 14.7% of proposals were RoE.
-
What is evaluation?
I asked evaluators and researchers to define evaluation and to differentiate evaluation from research. Evaluators are more likely than researchers to view evaluation as having aspects that distinguish it from research.
-
Who are evaluators?
I am working with Dr. Bianca Montrosse-Moorhead on evaluator identity and the professional boundaries between evaluation and similar professions. We interviewed 40 applied professionals, half of whom identified primarily as evaluators.
-
Retrospective pretests
In an easy-to-follow guide, we demonstrate how to use measurement invariance testing to check for retrospective bias. We provide R code, step-by-step guidance, and easy-to-understand figures for researchers and evaluators new to measurement invariance testing.
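As a minimal sketch of the approach (with hypothetical indicator names and data, using the lavaan R package rather than our exact code), comparing a configural model to a metric model across the pretest and retrospective pretest might look like:

  library(lavaan)

  # Configural model: same one-factor structure at both measurements,
  # all loadings freely estimated
  configural <- '
    skill_pre  =~ x1_pre + x2_pre + x3_pre       # traditional pretest items
    skill_then =~ x1_then + x2_then + x3_then    # retrospective ("then") items
  '
  fit_configural <- cfa(configural, data = survey_data)

  # Metric model: corresponding loadings constrained equal across time
  # (same label = same estimated value in lavaan)
  metric <- '
    skill_pre  =~ L1*x1_pre + L2*x2_pre + L3*x3_pre
    skill_then =~ L1*x1_then + L2*x2_then + L3*x3_then
  '
  fit_metric <- cfa(metric, data = survey_data)

  # Chi-square difference test between the nested models;
  # a nonsignificant difference supports metric invariance
  anova(fit_configural, fit_metric)

A fuller longitudinal model would also allow residual covariances between the same item measured at both time points. If invariance fails, mean comparisons between the pretest and retrospective pretest risk confounding real change with shifts in how respondents interpret the items.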
-
Titles in DataViz
MTurk participants were randomly assigned to conditions in a 2 (title: descriptive vs. informative) × 2 (graph: simple vs. complex) × 3 (valence: positive, mixed, negative) experiment. Graphs with informative titles required less mental effort and were rated as more aesthetically pleasing than graphs with descriptive titles.
-
Politics in Evaluation
Evaluators were asked to describe political situations they had experienced in the evaluation process and how they responded to those situations. Evaluations were most susceptible to politics during stakeholder identification and the reporting of findings.
-
Logic Models in Evaluation
MTurk participants were randomly assigned to view either the original logic model or a revised logic model, presented in color or in black and white. Overall, the revised logic models had greater visual efficiency than the originals, demonstrating the importance of applying data visualization principles to logic models.
-
Developmentally Appropriate Evaluations
My thesis asked evaluators to design an evaluation for a hypothetical program, with each evaluator randomly assigned to a program serving children, adolescents, or young adults. Evaluators proposed less participatory designs when program participants were children than when they were young adults.
-
Relationships in RPPs
I surveyed evaluators, researchers, and practitioners working in research-practice partnerships or evaluation partnerships about interpersonal factors, research/evaluation factors, and use. Interpersonal factors were the most strongly related to use, above and beyond research/evaluation factors.