Daily Schedule

8:00 am–9:00 am: Check-In and Continental Breakfast
9:00 am–9:15 am: Morning Welcome and Introductions
9:15 am: Workshops Begin
10:45 am–11:00 am: Break
12:00 pm–1:30 pm: Lunch Break
3:00 pm–3:15 pm: Break
4:45 pm: Workshops Conclude

For those attending webcasts, please note that all times listed are in the Pacific time zone. The Morning Welcome will not be webcast; webcasts begin at 9:15 am, but webcast attendees are advised to log in by 9 am to test their connection.

Walk-in registration is still available for in-person attendance of workshops. To attend online workshops, please use the Registration button on the right side of this page.

Workshop Descriptions

Thursday, Aug. 17

Basics of Evaluation & Applied Research Methods
Stewart I. Donaldson and Christina A. Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research, basics of validity and design sensitivity, strengths and weaknesses of a variety of common applied research methods, and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth-generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series.

Recommended background readings include:

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Friday, Aug. 18

Quasi-Experimental Design
William D. Crano

Conducting, interpreting, and evaluating research are important aspects of the social scientist’s job description. To that end, many good educational programs provide opportunities for training and experience in conducting and evaluating true experiments (or randomized controlled trials, RCTs, as they are sometimes called). In applied contexts, the opportunity to conduct RCTs is often quite limited, despite strong demands on the researcher/evaluator to render “causal” explanations of results, as such explanations lead to more precise understanding and control of outcomes. In such restricted contexts, which are far more common than those supporting RCTs, quasi-experimental designs are sometimes employed. Though they usually do not support causal explanations (with some noteworthy exceptions), they sometimes provide evidence that helps reduce the range of plausible alternative explanations of results, and thus they can prove to be of real value. This workshop is designed to impart an understanding of quasi-experimental designs. After some introductory foundational discussion focused on “true” experiments, we will consider quasi-experimental designs that may be useful across a range of settings that do not readily allow experimentation. These designs will include time series and interrupted time series methods, nonrandomized designs with and without control groups, case control (or ex post facto) designs, regression-discontinuity analysis, and other esoterica. Participants are encouraged to bring to the workshop design issues they are facing in real-world contexts.

Questions regarding this workshop may be addressed to William.Crano@cgu.edu.

Survey Research Methods
Jason T. Siegel

This hands-on workshop will teach attendees how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create; an effective one requires a thorough understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only with adequate training can a good survey be distinguished from a bad one. The daylong workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman’s (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz’s (1996) cognitive approach, attendees will then be informed of the various ways question order can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the keys to optimal survey design will be presented, and the design of national surveys will be critiqued as practice. Attendees will then work with the survey items they created and ordered in class to propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

Culturally Responsive Evaluation
Katrina L. Bledsoe

The beauty of the field of evaluation is its potential responsiveness to the myriad contexts in which people, programs, policies, and the like exist. As the meaning and construction of the word “community” expands, the manner in which evaluation is conducted must parallel that expansion. Evaluations must be less about a community and more situated and focused within the community, thereby increasing their responsiveness to the uniqueness of the setting/system. Doing so, however, requires an expanded denotative and connotative meaning of community. Moreover, it requires us to think innovatively about how we construct and conduct evaluations, and to consider broadly the kinds of data that will be credible to stakeholders and consumers. The goal of this workshop is to engage attendees in thinking innovatively about what evaluation looks like within a community, rather than simply about a community. We will engage in a process called “design thinking” (inspired by the design innovation consultancy IDEO and Stanford’s Design School) to help us consider how we might creatively design responsive and credible community-based evaluations. This interactive course includes some necessary foundation-laying, plenty of discussion, and, of course, opportunities to think broadly about how to construct evaluations with the community as the focal point.

Questions regarding this workshop may be addressed to Katrina.Bledsoe@gmail.com.

Saturday, Aug. 19

Grant Writing
Allen M. Omoto

This workshop covers some of the essential skills and strategies needed to prepare successful grant applications for education, research, and/or program funding. It will provide participants with tools to help them conceptualize and plan research or program grants, offer ideas about where to seek funding, and provide suggestions for writing and submitting applications. Some of the topics covered in the workshop include the pros and cons of grant-supported work, strategies for identifying sources of funding, the components and preparation of grant proposals, and the peer review process. Additional topics related to assembling a research or program team, constructing a project budget, grants management, and tips for effective writing also will be covered. The workshop is intended primarily as an introduction to grant writing, and will be most useful for new or relatively inexperienced grant writers. Workshop participants are invited to bring their own “works in progress” for comment and sharing. There will be limited opportunities for hands-on work and practice during the workshop. At its conclusion, workshop participants should be well positioned to read and evaluate grant applications, as well as to assist with the preparation of applications and to prepare and submit their own applications in support of education, research, or program planning and development activities.

Questions regarding this workshop may be addressed to Allen.Omoto@cgu.edu.

Theory-Driven Evaluation Science: Finding the Sweet Spot between Rigor & Relevance in Evaluation Practice
Stewart I. Donaldson

This workshop is designed to provide participants with an opportunity to increase their understanding of theory-driven evaluation science, and to learn how to use this approach to improve the rigor and usefulness of evaluations conducted in complex “real world” settings. We will examine the history and foundations of theory-driven evaluation science, theories of change based on stakeholder and social science theories, how these theories of change can be used to frame and tailor evaluations, and how to gather credible and actionable evidence to improve evaluation accuracy and usefulness. Lecture, exercises, discussions, and a wide range of practical examples from evaluation practice will be provided to illustrate main points and key take-home messages, and to help participants to integrate these concepts into their own work immediately.

Recommended background and overview readings:

  • Leeuw, F. L., & Donaldson, S. I. (2015). Theories in evaluation: Reducing confusion and encouraging debate. Evaluation: The International Journal of Theory, Research, & Practice, 21(4), 467–480.
  • Donaldson, S. I., & Lipsey, M. W. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. Shaw, J. C. Greene, & M. M. Mark (Eds.), The Handbook of Evaluation: Policies, Programs, and Practices (pp. 56–75). London: Sage.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Structural Equation Modeling: An Introduction
Andrew Conway

The purpose of this workshop is to introduce students to structural equation modeling (SEM). This is an advanced statistics topic and so it is assumed that students have completed courses on basic statistics and multiple regression. In this workshop we will cover various forms of SEM, including path analysis, confirmatory factor analysis, and structural regression models. All examples will be presented using the lavaan package in R.
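
For readers unfamiliar with lavaan, the following is a minimal illustrative sketch (not workshop material) of how a structural regression model can be specified and fit in R. It uses the HolzingerSwineford1939 demonstration dataset that ships with the lavaan package; the factor and indicator names are simply those of that dataset.

  # Minimal sketch of lavaan model syntax; assumes the lavaan package is installed.
  library(lavaan)

  model <- '
    # measurement model: two latent factors, each with three observed indicators
    visual  =~ x1 + x2 + x3
    textual =~ x4 + x5 + x6

    # structural regression: one latent factor predicting the other
    textual ~ visual
  '

  # fit the model to the demonstration data and inspect fit indices
  fit <- sem(model, data = HolzingerSwineford1939)
  summary(fit, fit.measures = TRUE, standardized = TRUE)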

Questions regarding this workshop may be addressed to Andrew.Conway@cgu.edu.

Evaluability Assessment and Program Complexity
Mike Trevisan and Tamara M. Walser

Level: Advanced beginner; participants should have prior familiarity with evaluation terminology.

The purpose of this workshop is to illustrate the use of evaluability assessment (EA) to address program complexity. EA addresses program complexity through its reliance on stakeholder involvement; investigation of program characteristics, context, implementation, and logic; and inclusion of both research and practitioner knowledge in program theory. This workshop will include background on current EA theory and practice, alignment of EA with systems thinking, features of program complexity and why complexity is often difficult to evaluate, and strategies for using EA when evaluating complex programs. Case examples will be used to apply the principles and ideas learned.

You will learn:

  • The essential elements of EA and how they are incorporated into the EA model presented.
  • What makes programs complex and why complexity is often difficult to evaluate with traditional evaluation approaches.
  • How to implement the EA model presented, particularly as it pertains to addressing program complexity.

Questions regarding this workshop may be addressed to Michael Trevisan.

Sunday, Aug. 20

Evaluating Capacity Building: Theory to Practice
Huey T. Chen

Level: Beginner or intermediate-level professionals; familiarity with capacity-building activities is helpful.

This workshop will discuss three major questions related to capacity building:

  1. What is capacity building and how can it be evaluated?
  2. How can evaluators assist stakeholders in planning capacity-building initiatives?
  3. How can the process and outcomes of capacity building be assessed?

The workshop will discuss two evaluation tools: logic models, a popular planning tool, and the action model/change model schema, a complementary model that identifies not only the major components of programs but also the underlying causal mechanisms and contextual factors essential to producing desired outcomes. Applications and exercises throughout the workshop will give participants practice with these tools and strengthen their competency in using them.

With this background, the workshop will provide a step-by-step approach for evaluating capacity building. These steps include:

  1. Engaging key stakeholders in discussion about planning the capacity-building initiative in a way that allows for evaluation.
  2. Applying the logic model to the initiative to identify and describe the major elements of capacity building.
  3. Deciding whether or not further clarification of the initiative is necessary through the application of the action model/change model schema.
  4. Determining the type and scope of evaluation.
  5. Determining performance measures.
  6. Collecting and analyzing data.
  7. Writing and disseminating the report.

Participants will learn:

  • How to define and articulate areas of capacity building.
  • How capacity-building outcomes differ from public health/social betterment outcomes.
  • How to apply logic models and the action model/change model schema to identify major components of a capacity-building program/initiative.
  • How to apply evaluation approaches and methods for evaluating capacity building.

Questions regarding this workshop may be addressed to Huey Chen.

Data Visualization
Tarek Azzam

The careful planning of visual tools will be the focus of this workshop. Part of our responsibility as evaluators is to turn information into knowledge. Data complexity can often obscure main findings or hinder a true understanding of program impact. So how do we make information more accessible to stakeholders? Often this is done by visually displaying data and information, but this approach, if not done carefully, can also lead to confusion. We will explore the underlying principles behind effective information displays, principles that can be applied in almost any area of evaluation, drawing on the work of Edward Tufte, Stephen Few, and Jonathan Koomey to illustrate the breadth and depth of their applications. In addition to providing tips to improve most data displays, we will examine the core factors that make them effective. We will discuss the use of common graphical tools, and delve deeper into other graphical displays that allow the user to visually interact with the data.

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

Learning from Success: Incorporating Appreciative Inquiry in Evaluation
Tessie Catsambas

In her blog, “Value-for-Money: Value-for-Whom?” Caroline Heider, Director General of the Independent Evaluation Group of the World Bank, pushes evaluators to make sure that “the questions we ask in our evaluations hone in on specifics that deepen the understanding of results and past experience,” and to ask ourselves what difference our recommendations will make once implemented, and what added value they will create.

Applying Appreciative Inquiry to evaluation provides a way to drive an evaluation by vision and intended use, builds trust to get more accurate answers to evaluation questions, and offers an avenue to increase inclusion and deepen understanding by incorporating the systematic study of successful experiences in the evaluation.

Appreciative evaluation is just as serious and systematic as problem analysis and problem solving; and it is probably more difficult for the evaluator, because it requires continuous reframing of familiar problem-focused language.

In this one-day workshop, participants will be introduced to Appreciative Evaluation and will explore ways in which it may be applied in their own evaluation work. Participants will use appreciative interviews to focus an evaluation, to structure and conduct interviews, and to develop indicators. Participants will practice “reframing” and then reflect on the power of appreciative and generative questions. Through real-world case examples, practice case studies, exercises, discussion and short lectures, participants will learn how to incorporate AI into their evaluation contexts.

Workshop Agenda

  • Introduction: Theoretical Framework of Appreciative Inquiry (lecturette)
  • Logic and Theory of Appreciative Inquiry (lecturette)
  • Imagine phase: Visions (case study: small-group work)
  • Lunch
  • Reframing deficits into assets (skills building)
  • Good questions exercise (skills building)
  • Innovate: Provocative propositions/possibility statements, links to developing indicators (case study: small group work)
  • Applications of AI—tying things together (lecturette and discussion)
  • Questions and Answers
  • Evaluation

Questions regarding this workshop may be addressed to Tcatsambas@encompassworld.com.

Sustainable Development Goals (SDG) Evaluation: What we need and why we don’t have it yet!
Deborah L. Rugg

Level: Advanced beginner; appropriate for a broad range of participants and backgrounds

The adoption of the 2030 Agenda for Sustainable Development, and the important and visible role it assigns to evaluation in the follow-up and review process, will require efforts to further strengthen evaluation practices in developed and developing countries alike.

The aspirational nature and interconnectedness of many of the agenda’s targets will require those conducting evaluation, as well as those commissioning evaluation, to have a thorough understanding of the agenda and its goals, targets and indicator framework.

This workshop will describe the background and development of the Sustainable Development Goals, why they are unprecedented, some current approaches to “SDG-responsive evaluation,” and why we still have a very long way to go.

Consideration of a rights-based approach to performance-based versus systems-based approaches will be highlighted. We will conclude with a discussion of the current status of evaluation as a strategic tool to help achieve the Sustainable Development Goals, and what we now need to do better.

You will specifically learn the following:

  1. What are the SDGs, how did they develop, why are they unprecedented, how did evaluation come to be included, and what are the current challenges?
  2. How are the SDGs being monitored and evaluated nationally and globally? What are the SDG targets and indicators and how do they relate to evaluation?
  3. What is “SDG-responsive evaluation” and what are some of the different approaches, such as performance-based versus systems-based approaches?
  4. What is the status of evaluation in the current global context? What is the 2020 Global Agenda for Evaluation? What is needed now? How do we learn more and become involved?

Questions regarding this workshop may be addressed to Deborah Rugg.

Monday, Aug. 21

Introduction to Positive Organizational Psychology: Theory, Research, and Applications
Stewart I. Donaldson and Jeffrey Yip

Since its formal introduction at the American Psychological Association Convention in 1998, the positive psychology movement has blossomed, giving birth to a vibrant community of scholars and practitioners interested in improving various aspects of society.

Positive organizational psychology has been defined as the scientific study of positive subjective experiences and traits in the workplace and of positive organizations, and the application of that knowledge to improving effectiveness and quality of life in organizations. The purpose of this workshop is to introduce participants to cutting-edge theory, research, and applications of positive psychology in the workplace.

In this full-day workshop, you will be provided with an overview of the positive psychology movement, and its relationship with positive organizational psychology, behavior, and scholarship. Through lectures, small group discussions, exercises, and cases, you will learn how to apply positive psychology and design thinking principles to improve the quality of work life, well-being of all organizational stakeholders, and organizational effectiveness.

Topics such as positive leadership, exemplary teamwork, flow in the workplace, positive career development and mentoring, appreciative inquiry, and positive organizational development will be explored.

Recommended reading:

  • Donaldson, S. I., Dollwet, M., & Rao, M. (2015). “Happiness, excellence, and optimal human functioning revisited: Examining the peer-reviewed literature linked to positive psychology.” Journal of Positive Psychology, 9(6), 1–11.
  • Donaldson, S. I., & Dollwet, M. (2013). “Taming the waves and wild horses of positive organizational psychology.” Advances in Positive Organizational Psychology, 1, 1–21.
  • Donaldson, S. I., & Ko, I. (2010). “Positive organizational psychology, behavior, and scholarship: A review of the emerging literature and evidence base.” Journal of Positive Psychology, 5(3), 177–191.

Please contact Stewart.Donaldson@cgu.edu or Jeffrey.Yip@cgu.edu if you have questions.

Principles-Focused Evaluation
Michael Quinn Patton

Evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program’s interventions. It is useful to distinguish three types of evidence-based conclusions:

  1. Single evidence-based program. Rigorous and credible summative evaluation of a single program provides evidence for the effectiveness of that program and only that program.
  2. Evidence-based model. Systematic meta-analysis (statistical aggregation) of the results of several programs all implementing the same model in a high-fidelity, standardized, and replicable manner, and evaluated with randomized controlled trials (ideally), to determine overall effectiveness of the model. This is the basis for claims that a model is a “best practice.”
  3. Evidence-based principles. Synthesis of case studies, including both processes and outcomes, of a group of diverse programs or interventions all adhering to the same principles but each adapting those principles to its own particular target population within its own context. If the findings show that the principles have been implemented systematically, and analysis connects implementation of the principles with desired outcomes through detailed and in-depth contribution analysis, the conclusion can be drawn that the practitioners are following effective evidence-based principles.

Principles-focused evaluation treats principles as the intervention and unit of analysis, and designs an evaluation to assess both implementation and consequences of principles.

Principles-focused evaluation is a specific application of developmental evaluation because principles are the appropriate way to take action in complex dynamic systems.  This workshop will be the worldwide premiere of principles-focused evaluation training.  Specific examples and methods will be part of the training.

Participants will learn:

  • What constitutes a principle that can be evaluated.
  • How and why principles should be evaluated.
  • Different kinds of principles-focused evaluation.
  • The relationship between complexity and principles.
  • The particular challenges, strengths, and weaknesses of principles-focused evaluation.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

Empowerment Evaluation
David Fetterman

This workshop will introduce colleagues to the theory, concepts, principles, and steps of empowerment evaluation.  Theories will include process use and theories of use and action. Concepts covered will include: critical friend, cycles of reflection and action, and a community of learners.

This workshop will also present the steps to plan and conduct an empowerment evaluation, including:

  1. Establishing a mission or unifying purpose of a group or program.
  2. Taking stock—prioritizing evaluation activities and rating performance.
  3. Planning for the future—establishing goals and strategies to achieve objectives, as well as credible evidence to monitor change.

We will also highlight the use of basic self-monitoring tools such as establishing a baseline, creating goals, specifying benchmarks, and comparing goals and benchmarks with actual performance.

The workshop will be experiential. Lectures and discussion will be combined with hands-on exercises. The workshop will also cover how to select appropriate user-friendly technological tools to facilitate an empowerment evaluation, aligned with empowerment evaluation principles.

David Fetterman is president and CEO of Fetterman & Associates, an international evaluation consulting firm. He has 25 years’ experience at Stanford University in administration, the School of Education, and the School of Medicine. He is the founder of empowerment evaluation and the author of more than 16 books, including Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (Sage) and Empowerment Evaluation Principles in Practice (Guilford) with his collaborators Abraham Wandersman and Shakeh Kaftarian. Fetterman is a past president of the American Evaluation Association and a recipient of the Lazarsfeld and Myrdal Evaluation Awards. He is currently co-chair (with Liliana Rodriguez-Campos) of the AEA Collaborative, Participatory and Empowerment Evaluation Topical Interest Group.

Questions regarding this workshop may be addressed to fettermanassociates@gmail.com.

Introduction to Qualitative Research Methods
Kendall Cotton Bronk

This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Kendall.Bronk@cgu.edu.

Tuesday, Aug. 22

Evaluating to Improve Educational Outcomes Among Students
Tiffany Berry and Rebecca M. Eddy

Level: Advanced beginner; participants should have prior familiarity with evaluation terminology and some experience in educational settings.

Evaluation has the potential to improve educational outcomes among students. However, evaluators need both the strategies and skills to facilitate this improvement. How can educational outcomes be strategically and intentionally improved across diverse educational settings? What can the evaluator do to ensure students receive the maximum benefit from their learning environments? And, given the current accountability and policy environment, how can contemporary educational evaluators be successful in this diverse milieu? This workshop is designed to answer these questions and more.

As practicing educational evaluators and academics for more than 15 years, we will share our experiences in the trenches as well as explore key issues that are important for contemporary educational evaluators to know. Using lecture, interactive activities, and shared discussion, participants will learn:

  • The current educational policy and accountability landscape.
  • How to use logic models to structure strong educational initiatives.
  • Which educational strategies have been shown to improve student learning outcomes.
  • How to measure if educational strategies are implemented well enough to produce measurable changes in student outcomes.
  • How to measure educational outcomes beyond traditional academic indicators, including social-emotional learning and college/career readiness.

These concepts will be explored using fun, interactive activities. The workshop is designed to engage the audience, so be ready to participate and add your voice to the mix! We will also supply a reading list to any participant who wants more evaluation resources about these concepts.

Questions regarding this workshop may be addressed to tiffany.berry@cgu.edu.

Occupational Health Psychology
Norbert K. Semmer

Level: Advanced beginner; participants should have a basic knowledge of psychology and a basic understanding of research methods.

Stress at work is a major issue both for individuals and for organizations, which have an interest in a healthy and productive workforce. This workshop will provide an overview of major theories in occupational health psychology and of research on stressors and resources and their relationship to health. It will also cover ways of preventing and handling stress, both at the personal level (stress management) and at the organizational level (job and organizational design; organization-level interventions).

You will learn:

  • What is stress?
  • What are typical ways of coping with stress?
  • What are individual factors that make it easier or more difficult to deal with stress?
  • What are major factors at work that induce stress (stressors)?
  • What are major factors at work that prevent stress or make it easier to deal with it (resources)?
  • What are short-term and long-term consequences of stress?
  • What can individuals do to deal with stress?
  • What can organizations do to avoid levels of stress that are potentially harmful?

Recommended reading: Sonnentag, S., & Frese, M. (2013). “Stress in Organizations.” In N. W. Schmitt & S. Highhouse (Eds.), Industrial and Organizational Psychology (pp. 560–592). Hoboken, NJ: Wiley.

Questions regarding this workshop may be addressed to norbert.semmer@psy.unibe.ch.

Blue Marble Evaluation for Global Systems Change Initiatives
Michael Quinn Patton

Level: Intermediate

There are three Blue Marble emphases for evaluating global systems change:

  • Thinking beyond interventions and indicators at the nation/state level; instead, thinking, analyzing, acting, and evaluating globally.
  • Thinking beyond silos; instead, connecting and interrelating interventions, breaking down silos, examining integration, alignment, and coherence across sectoral specializations and across the Sustainable Development Goals (SDGs).
  • Connecting the local with the global, and the global with the local.

Global Systems Change Evaluation includes attention to and analysis of the interconnection of top-down globalization processes and bottom-up processes that incorporate local knowledge and indigenous wisdom. In that regard, the course will examine the intersection between top-down and bottom-up processes and their interactions: the meeting point between globalization and local contextual dynamics captured in the admonition to “Think globally, act locally.” Case exemplars of global systems change initiatives and Blue Marble Evaluations will be reviewed and discussed.

The course will look at trends in globalization, pushbacks against globalization (America First), and perceptions of globalization and its effects on people and governments. This course will treat globalization as a reality, not an ideology, and will make it clear that Global Systems Change Evaluation is not about being for or against globalization, but rather recognizing that initiatives taking on global problems need a global evaluation framework to examine the effectiveness of those initiatives.

You will learn:

  • The contributions that Blue Marble Evaluators can make by being at the table as global systems change initiatives are formulated, strategies are determined, and evaluation criteria for effectiveness and impact are established.
  • Blue Marble Evaluation principles that position Blue Marble Evaluation as a specialized niche within the larger panorama and transdiscipline of professional program evaluation worldwide.
  • Competencies needed for Blue Marble Evaluation, including what it takes to be a skilled and effective Blue Marble Evaluator.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

An Introduction to Social Impact Measurement: The Role of Evaluation in Private-Sector Organizations with Social Missions
John Gargani

Over the past five years, there has been a rapid increase in the number and scale of private-sector companies working to advance the public good. They may be known as social enterprises, impact investors, or green corporations, depending on their business model, impact theory, and the sector in which they operate. To evaluate non-financial performance, they engage in social impact measurement, a form of evaluation that incorporates tools, approaches, and theories adapted from the private sector. In this workshop, you will be introduced to social impact measurement and its methods. Our discussion will be grounded in theories and frameworks that will help you understand the role of evaluation in this emerging area of practice.

Over a day, you will learn about:

  • The considerable variety of private-sector actors with social missions, and the contexts in which they work.
  • The concept of impact, how it varies, and its relationship to evaluation methods.
  • Common standards and frameworks for measuring impact.
  • Methods that combine financial analysis and impact measurement, such as social return on investment.
  • Corporate impact reports and how to interpret them.

The workshop is intended as an introduction and overview. It is best suited for individuals with prior evaluation training and/or professional experience. It will not be webcast.

Questions about this workshop may be addressed to john@gcoinc.com.

Contact Us

Claremont Evaluation Center

Claremont Graduate University
175 East 12th Street
Claremont, CA 91711
(909) 607-2475
sherry.nissen@cgu.edu