Blake’s interests in evaluation capacity building lie at the intersection of the individual evaluator and the evaluation context. Although evaluation is touted as an agent of positive change, through its potential to facilitate evidence-based decision making, the use of evidence to make decisions is undervalued by decision makers and the general population alike. Understanding the systems in which evaluators find themselves, the factors influencing decision makers, and the evaluator’s potential to be influential in that context is key to understanding how to leverage evaluative findings to make a difference. Accordingly, Blake views one part of evaluation capacity as the capacity of the individual evaluator to: 1) accurately diagnose the system in which evaluation occurs and 2) exert influence in that system.
His research interests include understanding the role of interpersonal skills in evaluation, systems thinking, and pro-science advocacy.
The Role of Trustworthiness on Perceptions of the Accuracy, Relevance, and Utility of Evaluative Feedback
Abstract: The current study sought to understand the role of trustworthiness in a simulated evaluation context. First, it explored whether an evaluator could increase her/his perceived trustworthiness by modifying behaviors during brief interactions with study participants. Second, it examined whether feedback valence (i.e., feedback positivity or negativity) contributed to participants’ perceptions of feedback accuracy, relevance, and utility above and beyond the evaluator’s trustworthiness. Using random assignment across three separate samples of participants, the study found that trustworthiness behaviors could reliably increase perceptions of an evaluator’s trustworthiness. Additionally, after controlling for feedback valence, perceptions of the evaluator’s overall trustworthiness were found to be significantly associated with perceptions of feedback accuracy, and marginally associated with perceptions of feedback relevance and utility.
Colin received his Bachelor’s in Psychology from Brigham Young University and his MA in Psychology from California State University, Fullerton. He is currently a third-year PhD student in the Evaluation and Applied Research program at Claremont Graduate University. Colin’s interests primarily focus on evaluation research design, survey development within the context of evaluation, and quantitative data analysis. His Master’s thesis is titled Measuring Attitudes Towards Media Violence Using Item-Response Theory.
Since coming to CGU, Heather has worked on a number of evaluation projects, both international and domestic, in the areas of education and training, health, and professional development. Prior to beginning her doctoral studies, Heather worked in the Canadian government sector in the areas of post-secondary education policy and finance, international education, and intergovernmental relations.
Evaluative Thinking: The Myth or Magic of the Profession of Evaluation
Abstract: Evaluative thinking refers to critical thinking applied in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence. Applying evaluative thinking involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding, and informing decisions for action. Anecdotal evidence suggests that evaluative thinking is an important component of evaluation practice. However, little empirical research has been conducted on the topic, especially with regard to its significance to evaluation practitioners. Based on a survey of 592 American Evaluation Association (AEA) members, evidence of a five-factor model underlying one scale of evaluative thinking is presented. Additionally, the study found that, on average, respondents with more years of experience scored higher on evaluative thinking. Finally, a relationship between evaluative thinking and self-efficacy was identified such that higher levels of evaluative thinking were associated with higher levels of self-efficacy. Limitations and suggestions for future research are discussed.
Kathleen is a current doctoral student studying Evaluation and Applied Research Methods. She holds BAs in Political Science and Psychology, with a minor in Women’s Studies, from Chapman University, as well as an MA in Evaluation from Claremont Graduate University. During her time at CGU, Kathleen has had the privilege of collaborating on an array of evaluation projects spanning numerous topics, including evaluation capacity building.
Most recently, Kathleen served as an Evaluation Associate on the Evaluation of the Stronger Hearts™ Helpline Pilot and currently serves as the Strategic Learning and Evaluation Lead on the CGU-Pitzer Educating Socially Responsive Evaluators Project under Dr. Fierro.
Kathleen first fell in love with the field of evaluation during her year-long term of national service with AmeriCorps. After this transformative experience, Kathleen began her graduate studies at CGU and immersed herself in several evaluation projects. Every day, she is compelled by the power of evaluation to connect people and improve lives.
Kathleen strives to utilize systematic rigor and unwavering professional ethics as mechanisms for social change in her evaluation practice. She brings knowledge and experience in democratic/participatory approaches, evaluation design, survey creation, data analysis, and mixed methodology.
Democratic Evaluation in Action: Interviews with the Experts
Abstract: Democratic evaluation approaches strive to promote social justice, equity, and increased stakeholder inclusion in the evaluation process. While there is ample theoretical literature on democratic approaches, little research exists on how the theories are applied in practice. As such, ambiguity surrounds evaluators’ conceptualization of the theories in practice and the contextual considerations that promote or inhibit their use. This paper presents the initial qualitative interview findings from a mixed-methods study that investigated these topics. Interviews with six leading democratic evaluation theorists offer valuable insights into the application of democratic evaluation approaches. Most notably, the theorists’ commentary reveals the powerful influence of contextual considerations, especially time and stakeholder receptivity, on the implementation of democratic evaluation approaches. These theorists indicate that while democratic evaluations are demanding to implement, they nevertheless offer an approach to evaluation that meaningfully engages with the important values of equity and fairness.
Exploring the Motivations for Evaluation Capacity Building in Nonprofits
Abstract: Many conceptual models of evaluation capacity building (ECB) exist in the evaluation literature. Although the models often include motivation as a component, and scholars generally perceive motivation as important to understanding ECB, to date there is little information about motivating factors. In addition, evaluation of nonprofit organizations involves a unique dilemma between accountability concerns and the desire for programmatic improvement, a dilemma which ECB can help to alleviate. The present study employs a two-phase explanatory mixed-methods design to explore the internal and external motivations of nonprofits choosing to adopt ECB and to examine organizational factors that might moderate their effects. Findings from surveys of 16 individuals at 11 nonprofits suggested organizations were motivated by internal factors such as programmatic improvement and could be influenced by organizational size and maturity. Integrating these results with the findings from focus groups consisting of ten nonprofit employees and five ECB scholars suggests that nonprofits can differ in their motivations across levels of the organizational hierarchy and are driven to “tell the story” of their organizations by situating them in a complex environment, rather than to address accountability concerns to funders with varying requirements. Additional research is suggested to explore the complex interplay of motivations and the possible curvilinear relationship with organizational demographics.
First-year Project (i.e., thesis equivalent):
Evaluative Thinking in Two Parallel Contexts: Evaluation and Emergency Medicine
Abstract: Evaluative thinking has a multiplicity of definitions and descriptions, which hampers efforts to develop and apply it as a skill. These abundant definitions and descriptions may be consolidated through the dual systems theory of thinking, and this consolidation may enable a coherent understanding of how evaluative thinking manifests within and beyond evaluation. This thesis applied Q-methodology in two parallel contexts: evaluation and emergency medicine. Empirical results suggest that evaluative thinking manifests differently for different people. However, evaluative thinking may nonetheless manifest in similar ways among people belonging to the same setting, even when they are not engaged in the same work. The results further suggest that a single context may encompass multiple coexisting styles of evaluative thinking, and that different contexts may give rise to distinct styles. The information acquired by this thesis may help inform future work that builds upon existing knowledge of diverse evaluative thinking styles in various contexts.
As an NECB lab member, Nina is interested in increasing the evaluation capacity of evaluation practitioners beyond the scope of traditional program evaluation. Her MA thesis contributes a theoretical framework for evaluating cross-sector partnerships among businesses, governments, and nonprofits. Through her dissertation research, she hopes to increase the business capacity of evaluation entrepreneurs and develop an empirically driven workshop to foster entrepreneurial mindsets among evaluators. She is also conducting research on evaluation education in undergraduate programs to understand the extent of evaluation capacity fostered through different academic trajectories.
Jennifer has over 15 years of experience working in research, evaluation, and organizational development. She is currently an Evaluation Capacity Building Associate in the NECB Laboratory and an Evaluation Associate in the Positive Organizational Development Laboratory at CGU. She is also a Strategist for G&L Pacheco and Associates in Montebello, CA.
Prior to attending CGU, Jennifer was the Administrative Director of the Neuromodulation Division at UCLA, where she spent eleven years working with senior leadership to manage a dynamic research and clinical program and oversee a multimillion-dollar annual budget. She also recruited, trained, and supervised a team of clinical personnel, administrative staff, and volunteers, and oversaw the development, planning, and coordination of events, including local and national conferences and donor events.
Jennifer recently completed a participatory evaluation of Union Station Homeless Services in Pasadena, CA, where, in addition to evaluating their program, she worked closely with program stakeholders to build their evaluative capacity for future evaluation efforts. Jennifer’s primary research interest is exploring the organizational- and individual-level outcomes associated with evaluation capacity building. As such, she hopes to help streamline the evaluation capacity building process to include empirically tested components from organizational behavior, leadership, and evaluation theories. As an evaluator and organizational development practitioner, she believes that prioritizing capacity building is a critical step in sustaining evaluative thinking within organizations, which contributes to continued improvement and member well-being.