Evaluation is a relatively new discipline that is in high demand. As such, it is important for academic institutions like Claremont Graduate University to engage in efforts that help us better understand how to improve the effectiveness of evaluation practice. It is also imperative that we systematically generate insights about how to create contexts within nations that are conducive to commissioning, supporting, and using evaluations; this is the primary focus of our team's research.

Creating this conducive context requires that we think broadly about the many systems, institutions, and individuals that play a role in the evaluation enterprise. In our lab, we draw upon Bronfenbrenner's Ecological Model to guide our thinking about where to focus our research efforts. Accordingly, we consider what research to perform to enhance capacity at five levels:

  • Individual (e.g., evaluators, people who commission evaluations, individuals who consume evaluative information)
  • Interpersonal (i.e., groups of individuals such as teams and coalitions)
  • Organizational (e.g., federal/state/local agencies, foundations, nonprofit organizations, private firms)
  • Community (e.g., professional associations for disciplines in which evaluation is often performed, such as public health, education, social work, and public administration)
  • Societal (e.g., legislative bodies that fund evaluations and set overarching evaluation policies at the national, state, and local levels; systems of higher education)

Some examples of our research projects follow.

Evaluative Thinking in Practice: The Centers for Disease Control and Prevention’s (CDC) National Asthma Control Program

  • Levels addressed: Individual and interpersonal
  • Study description: “Although evaluative thinking lies at the heart of what we do as evaluators and what we hope to promote in others through our efforts to build evaluation capacity, researchers have given limited attention to measuring this concept. We undertook a research study to better understand how instances of evaluative thinking may present in practice-based settings—specifically within four state asthma control programs funded by the Centers for Disease Control and Prevention’s National Asthma Control Program. Through content analyses of documents as well as interviews and a subsequent focus group with four state asthma control programs’ evaluators and program managers we identified and defined twenty-two indicators of evaluative thinking. Findings provide insights about what practitioners may wish to look for when they intend to build evaluative thinking and the types of data sources that may be more or less helpful in such efforts.” (Fierro et al., 2018, p. 49)
  • Team members: NECB Lab – Heather Codd, Phung Pham, Piper Grandjean Targos; CDC – Sarah Gill & Maureen Wilce
  • Publication: Fierro, L. A., Codd, H., Gill, S., Pham, P. K., Grandjean Targos, P. T., & Wilce, M. (2018). Evaluative thinking in practice: The National Asthma Control Program. In A. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 49–72.

Organizational Unlearning

  • Levels addressed: Individual, interpersonal, and organizational
  • Study description: The overall purpose of this study is to develop new knowledge regarding the occurrence and nature of organizational unlearning in evaluation contexts. In particular, we are interested in collecting primary data on: (1) the extent to which organizational unlearning is a factor in evaluation contexts, (2) the implications of organizational unlearning for evaluation work, and (3) the strategies evaluation practitioners use to facilitate organizational unlearning among evaluation stakeholders.
  • Team members: Heather Codd (PI), Piper Grandjean Targos, Phung Pham

Evaluation Policies of U.S. Federal Government Agencies

  • Level addressed: Organizational
  • Study description: Evaluation policy has the potential to effect change within organizations. As defined by Trochim (2009), an evaluation policy is "…any rule or principle that a group or organization uses to guide its decisions and actions when doing evaluation" (p. 13). Despite the potential importance of these policies, empirical examinations of their development, content, and effects are scarce. Through this study, we seek a better understanding of: (1) which agencies have explicit evaluation policies, (2) the factors that led to the creation of these policies, and (3) the specific content of these evaluation policies.
  • Team members: NECB Lab – Carlos Echeverria-Estrada, Kandice Ocheltree, Piper Grandjean Targos; UCLA – Christina A. Christie & Emi Fujita-Conrads

The U.S. Federal Evaluation Marketplace

  • Level addressed: Societal
  • Study description: Demand for evaluation in the U.S. is widely thought to be high; however, very little is known about the resources currently expended on or allocated to external evaluation activities. In this study, we examined publicly available data to gain a better understanding of funding allocations for evaluation between fiscal years 2010 and 2017 across the majority of federal agencies subject to the provisions of the 2010 Government Performance and Results Modernization Act. In addition, we examined the Department of Health and Human Services in greater detail to learn more about the contracting of evaluation services.
  • Team members: NECB Lab – Leslie A. Fierro; UCLA – Sebastian Lemire, Alana Kinarsky, Emi Fujita-Conrads, Christina A. Christie
  • Publication: Lemire, S., Fierro, L. A., Kinarsky, A., Fujita-Conrads, E., & Christie, C. A. (2018). The U.S. federal evaluation market. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), Evaluation Marketplace. New Directions for Evaluation, 160, 63–80.