Daily Schedule

8:00 am–9:00 am: Check-In and Continental Breakfast
9:00 am–9:15 am: Morning Welcome and Introductions
9:15 am: Workshops Begin
10:45 am–11:00 am: Break
12:00 pm–1:30 pm: Lunch Break
3:00 pm–3:15 pm: Break
4:45 pm: Workshops Conclude

For those attending webcasts, please note that all times listed are in the Pacific time zone. The Morning Welcome will be webcast; webcasts begin at 9:00 am, but webcast attendees are advised to log in by 8:45 am to test their connections.

Each workshop lasts one full day, from 9:00 am to 4:45 pm. On days with multiple workshops, you can attend only one in person, but you may sign up for the concurrent workshops online and view them afterward.

To register in advance, or to attend workshops online, visit the registration page. On-site registration is also available for walk-in attendees.

2018 Workshops

Wednesday, August 15
  • Basics of Evaluation & Applied Research Methods

Thursday, August 16
  • Quasi-Experimental Design
  • Introduction to Qualitative Research Methods
  • Grant Writing
  • Evaluation Capacity Building

Friday, August 17
  • Theory-Driven Evaluation Science: Finding the Sweet Spot between Rigor and Relevance in Evaluation Practice
  • Applications of Correlation and Multiple Regression: Mediation, Moderation, and More
  • Evaluating to Improve Educational Outcomes Among Students

Saturday, August 18
  • Survey Research Methods
  • Increasing the Usefulness of Formative Evaluation: Integrating Leading and Lagging Indicators
  • Learning from Success: Incorporating Appreciative Inquiry in Evaluation

Sunday, August 19
  • Introduction to Positive Organizational Psychology
  • Data Visualization
  • Providing Credible Evidence in Evaluation: Theory and Practice

Monday, August 20
  • Leadership in Sustainable Development Goals (SDG) Evaluation: What We Need and Why We Don’t Have It Yet!
  • Culturally Responsive Evaluation
  • Expanding Pathways to Leadership

Tuesday, August 21
  • Blue Marble Evaluation for Global Systems Change Initiatives

Wednesday, August 22
  • Successful Evaluation Consulting: How to Build and Sustain Your Business

Workshop Descriptions

Wednesday, August 15

Basics of Evaluation & Applied Research Methods
Stewart I. Donaldson and Christina A. Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research, basics of validity and design sensitivity, strengths and weaknesses of a variety of common applied research methods, and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth-generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series.

Recommended background readings include:

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Thursday, August 16

Quasi-Experimental Design
William D. Crano

Conducting, interpreting, and evaluating research are important aspects of the social scientist’s job description. To that end, many good educational programs provide opportunities for training and experience in conducting and evaluating true experiments (or randomized controlled trials—RCTs—as they sometimes are called). In applied contexts, the opportunity to conduct RCTs is often quite limited, despite the strong demands on the researcher/evaluator to render “causal” explanations of results, as such explanations lead to more precise understanding and control of outcomes. In such restricted contexts, which are far more common than those supporting RCTs, quasi-experimental designs sometimes are employed. Though they usually do not support causal explanations (with some noteworthy exceptions), they sometimes provide evidence that helps reduce the range of plausible alternative explanations of results and thus can prove to be of real value. This workshop is designed to impart an understanding of quasi-experimental designs. After some introductory foundational discussion focused on “true” experiments, we will consider quasi-experimental designs that may be useful across a range of settings that do not readily lend themselves to experimentation. These designs will include time series and interrupted time series methods, nonrandomized designs with and without control groups, case control (or ex post facto) designs, regression-discontinuity analysis, and other esoterica. Participants are encouraged to bring to the workshop design issues they are facing in real-world contexts.
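For readers who want a concrete flavor of one such design before attending, the sketch below fits a simple interrupted time series with segmented regression on simulated data; the intervention month, variable names, and effect sizes are all hypothetical, and the code is illustrative rather than part of the workshop materials.

```python
# Illustrative sketch only (not workshop material): an interrupted time series
# analyzed with segmented regression on simulated monthly data, assuming a
# hypothetical intervention that begins at month 24.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
months = np.arange(48)
post = (months >= 24).astype(int)                 # 1 after the intervention starts
time_since = np.where(post == 1, months - 24, 0)  # months elapsed since intervention

# Simulated outcome: baseline trend, a level shift, and a slope change.
outcome = 50 + 0.3 * months + 5 * post + 0.8 * time_since + rng.normal(0, 2, 48)
df = pd.DataFrame({"outcome": outcome, "time": months,
                   "post": post, "time_since": time_since})

# "post" estimates the immediate level change; "time_since" the change in slope.
model = smf.ols("outcome ~ time + post + time_since", data=df).fit()
print(model.summary().tables[1])
```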

Questions regarding this workshop may be addressed to William.Crano@cgu.edu.

Introduction to Qualitative Research Methods
Kendall Cotton Bronk

This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Kendall.Bronk@cgu.edu.

Grant Writing
Allen M. Omoto

This workshop covers some of the essential skills and strategies needed to prepare successful grant applications for education, research, and/or program funding. It will provide participants with tools to help them conceptualize and plan research or program grants, offer ideas about where to seek funding, and provide suggestions for writing and submitting applications. Some of the topics covered in the workshop include the pros and cons of grant-supported work, strategies for identifying sources of funding, the components and preparation of grant proposals, and the peer review process. Additional topics related to assembling a research or program team, constructing a project budget, grants management, and tips for effective writing also will be covered. The workshop is intended primarily as an introduction to grant writing, and will be most useful for new or relatively inexperienced grant writers. Workshop participants are invited to bring their own “works in progress” for comment and sharing. There will be limited opportunities for hands-on work and practice during the workshop. At its conclusion, workshop participants should be well positioned to read and evaluate grant applications, as well as to assist with the preparation of applications and to prepare and submit their own applications in support of education, research, or program planning and development activities.

Questions regarding this workshop may be addressed to Allen.Omoto@cgu.edu.

Evaluation Capacity Building
Leslie Fierro

Evaluation capacity building (ECB) involves implementing one or more interventions to build individual, group, or organizational capacities to engage in and sustain the act of evaluation, including, but not limited to, commissioning, planning, implementing, and using findings from evaluations. Participants will be introduced to the fundamentals of ECB; intended outcomes of ECB interventions; a range of ECB interventions; and important considerations in measuring evaluation capacity in organizations. This course provides practitioners with an opportunity to consider what types of evaluation capacity they want to build within their organization, why, and what needs to change, and among whom, to attain these goals.

Friday, August 17

Theory-Driven Evaluation Science: Finding the Sweet Spot between Rigor and Relevance in Evaluation Practice
Stewart I. Donaldson

This workshop is designed to provide participants with an opportunity to increase their understanding of theory-driven evaluation science, and to learn how to use this approach to improve the rigor and usefulness of evaluations conducted in complex “real world” settings. We will examine the history and foundations of theory-driven evaluation science, theories of change based on stakeholder and social science theories, how these theories of change can be used to frame and tailor evaluations, and how to gather credible and actionable evidence to improve evaluation accuracy and usefulness. Lecture, exercises, discussions, and a wide range of practical examples from evaluation practice will be provided to illustrate main points and key take-home messages, and to help participants to integrate these concepts into their own work immediately.

Recommended background readings include:

Suggested overview readings:

  • Leeuw, F. L., & Donaldson, S. I. (2015). Theories in evaluation: Reducing confusion and encouraging debate. Evaluation: The International Journal of Theory, Research, & Practice, 21(4), 467–480.
  • Donaldson, S. I., & Lipsey, M. W. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. Shaw, J. C. Greene, & M. M. Mark (Eds.), The Handbook of Evaluation: Policies, Programs, and Practices (pp. 56–75). London: Sage.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Applications of Correlation and Multiple Regression: Mediation, Moderation, and More
Dale E. Berger

Multiple regression is a powerful and flexible tool with wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model complex relationships. In this workshop we’ll explore preparing data for analysis, selecting models that are appropriate to your data and research questions, running analyses including mediation and moderation, interpreting results, and presenting findings to a nontechnical audience. The presenter will demonstrate applications from start to finish with SPSS and Excel. Because it is difficult to remember everything in a presentation, participants will be given detailed handouts with explanations and examples that can be used later to guide similar applications.
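As a rough preview of the kind of mediation model mentioned above, the sketch below fits the two regressions that define a simple mediation analysis, using simulated data and hypothetical variable names (x, m, y); the workshop itself demonstrates these analyses in SPSS and Excel, not in code.

```python
# Illustrative sketch only: a basic mediation analysis via two OLS regressions
# on simulated data with hypothetical variables x (predictor), m (mediator),
# and y (outcome).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                       # predictor
m = 0.5 * x + rng.normal(size=n)             # mediator, partly driven by x
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome, driven by both
df = pd.DataFrame({"x": x, "m": m, "y": y})

path_a = smf.ols("m ~ x", data=df).fit()      # a path: x -> m
path_b = smf.ols("y ~ x + m", data=df).fit()  # b path plus direct effect c'
indirect = path_a.params["x"] * path_b.params["m"]
print(f"a = {path_a.params['x']:.2f}, b = {path_b.params['m']:.2f}, "
      f"indirect a*b = {indirect:.2f}, direct c' = {path_b.params['x']:.2f}")
```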

Level: Intermediate; participants should have some familiarity with correlation and statistical analyses.

Questions about this workshop may be addressed to Dale.Berger@cgu.edu.

Evaluating to Improve Educational Outcomes Among Students
Tiffany Berry and Rebecca M. Eddy

Level: Advanced beginner; participants should have prior familiarity with evaluation terminology and some experience in educational settings.

Evaluation has the potential to improve educational outcomes among students. However, evaluators need both the strategies and skills to facilitate this improvement. How can educational outcomes be strategically and intentionally improved across diverse educational settings? What can the evaluator do to ensure students receive the maximum benefit from their learning environments? And, given the current accountability and policy environment, how can contemporary educational evaluators be successful in this diverse milieu? This workshop is designed to answer these questions and more.

As practicing educational evaluators and academics for more than 15 years, we will share our experiences in the trenches as well as explore key issues that are important for contemporary educational evaluators to know. Using lecture, interactive activities, and shared discussion, participants will learn:

  • The current educational policy and accountability landscape.
  • How to use logic models to structure strong educational initiatives.
  • Which educational strategies have been shown to improve student learning outcomes.
  • How to measure whether educational strategies are implemented well enough to produce measurable changes in student outcomes.
  • How to measure educational outcomes beyond traditional academic indicators, including social-emotional learning and college/career readiness.

These concepts will be explored using fun, interactive activities. The workshop is designed to engage the audience, so be ready to participate and add your voice to the mix! We will also supply a reading list to any participant who wants more evaluation resources about these concepts.

Questions regarding this workshop may be addressed to tiffany.berry@cgu.edu.

Saturday, August 18

Survey Research Methods
Jason T. Siegel

The focus of this hands-on workshop is to instruct attendees in how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create. Creating an effective survey requires a complete understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only with adequate training can a good survey be distinguished from a bad one. The daylong workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman’s (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz’s (1996) cognitive approach, attendees will then be informed of the various ways question order can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the keys to optimal survey design will be presented. As practice, the design of national surveys will be critiqued. Attendees will then work with the survey items created, and properly ordered, in class and propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

Increasing the Usefulness of Formative Evaluation: Integrating Leading and Lagging Indicators
Brad C. Phillips and Jordan E. Horowitz

Michael Scriven notes that formative evaluations help to form programs. However, the findings from formative evaluations often examine metrics that are not actionable by program administrators and staff. These evaluations focus on lagging indicators: the big program impact outcomes, such as graduation rates in education or recidivism rates in mental health. These indicators are largely out of the control of the program being evaluated because they cannot be directly influenced; they are measured on the target population upon completion.

In this workshop, participants will be introduced to a new data use model from the workshop leaders’ recent book from Harvard Education Press. Participants will learn to identify leading indicators—metrics that are actionable by program staff and lead to the intended outcomes—and how to incorporate leading and lagging indicators successfully in their evaluations. The workshop includes engaging activities, and participants are encouraged to bring examples from their own practice for customized coaching. Additionally, the workshop leaders will present strategies, grounded in recent research in psychology, neuroscience, and behavioral economics, for presenting data in ways that influence human judgment, decision making, and organizational habits. The result will be evaluations that are more client-centered, with findings that are useful, usable, and actionable.

Recommended reading:
Creating a Data-Informed Culture in Community Colleges: A New Model for Educators, by Brad C. Phillips and Jordan E. Horowitz.

Questions about this workshop may be addressed to Jordan E. Horowitz.

Learning from Success: Incorporating Appreciative Inquiry in Evaluation
Tessie Catsambas

In her blog, “Value-for-Money: Value-for-Whom?” Caroline Heider, Director General of the Independent Evaluation Group of the World Bank, pushes evaluators to make sure that “the questions we ask in our evaluations hone in on specifics that deepen the understanding of results and past experience,” and to ask ourselves what difference our recommendations will make once implemented, and what added value they will create.

Applying Appreciative Inquiry to evaluation provides a way to drive an evaluation by vision and intended use, builds trust to get more accurate answers to evaluation questions, and offers an avenue to increase inclusion and deepen understanding by incorporating the systematic study of successful experiences in the evaluation.

Appreciative evaluation is just as serious and systematic as problem analysis and problem solving; and it is probably more difficult for the evaluator, because it requires continuous reframing of familiar problem-focused language.

In this one-day workshop, participants will be introduced to Appreciative Evaluation and will explore ways in which it may be applied in their own evaluation work. Participants will use appreciative interviews to focus an evaluation, to structure and conduct interviews, and to develop indicators. Participants will practice “reframing” and then reflect on the power of appreciative and generative questions. Through real-world case examples, practice case studies, exercises, discussion, and short lectures, participants will learn how to incorporate AI into their evaluation contexts.

Workshop Agenda

  • Introduction: Theoretical Framework of Appreciative Inquiry (lecturette)
  • Logic and Theory of Appreciative Inquiry (lecturette)
  • Imagine phase: Visions (case study: small-group work)
  • Lunch
  • Reframing deficits into assets (skills building)
  • Good questions exercise (skills building)
  • Innovate: Provocative propositions/possibility statements, links to developing indicators (case study: small group work)
  • Applications of AI—tying things together (lecturette and discussion)
  • Questions and Answers
  • Evaluation

Questions regarding this workshop may be addressed to Tcatsambas@encompassworld.com.

Sunday, August 19

Introduction to Positive Organizational Psychology
Stewart I. Donaldson and Jeffrey Yip

Since its formal introduction at the American Psychological Association Convention in 1998, the positive psychology movement has blossomed, giving birth to a vibrant community of scholars and practitioners interested in improving various aspects of society.

Positive organizational psychology has been defined as the scientific study of positive subjective experiences and traits in the workplace and positive organizations, and the application of that knowledge to improving the effectiveness and quality of life in organizations. The purpose of this workshop is to introduce participants to cutting-edge theory, research, and applications of positive psychology in the workplace.

In this full-day workshop, you will be provided with an overview of the positive psychology movement, and its relationship with positive organizational psychology, behavior, and scholarship. Through lectures, small group discussions, exercises, and cases, you will learn how to apply positive psychology and design thinking principles to improve the quality of work life, well-being of all organizational stakeholders, and organizational effectiveness.

Topics such as positive leadership, exemplary teamwork, flow in the workplace, positive career development and mentoring, appreciative inquiry, and positive organizational development will be explored.

Recommended reading:

  • Donaldson, S. I., Dollwet, M., & Rao, M. (2015). Happiness, excellence, and optimal human functioning revisited: Examining the peer-reviewed literature linked to positive psychology. Journal of Positive Psychology, 9(6), 1–11.
  • Donaldson, S. I., & Dollwet, M. (2013). Taming the waves and wild horses of positive organizational psychology. Advances in Positive Organizational Psychology, 1, 1–21.
  • Donaldson, S. I., & Ko, I. (2010). Positive organizational psychology, behavior, and scholarship: A review of the emerging literature and evidence base. Journal of Positive Psychology, 5(3), 177–191.

Please contact Stewart.Donaldson@cgu.edu or Jeffrey.Yip@cgu.edu if you have questions.

Data Visualization
Tarek Azzam

The careful planning of visual tools will be the focus of this workshop. Part of our responsibility as evaluators is to turn information into knowledge. Data complexity can often obscure main findings or hinder a true understanding of program impact. So how do we make information more accessible to stakeholders? Often this is done by visually displaying data and information, but this approach, if not done carefully, can also lead to confusion. We will explore the underlying principles behind effective information displays. These principles can be applied in almost any area of evaluation, and we will draw on the work of Edward Tufte, Stephen Few, and Jonathan Koomey to illustrate the breadth and depth of their applications. In addition to providing tips to improve most data displays, we will examine the core factors that make them effective. We will discuss the use of common graphical tools, and delve deeper into other graphical displays that allow the user to visually interact with the data.
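As a small, hedged illustration of the decluttering principles these authors advocate (not an excerpt from the workshop), the sketch below pares a bar chart down to direct labels and a single emphasized color, using made-up numbers.

```python
# Illustrative sketch only, with made-up numbers: paring a bar chart down to
# the essentials (direct labels, no gridlines, one emphasized color).
import matplotlib.pyplot as plt

programs = ["Program A", "Program B", "Program C"]
completion = [72, 58, 85]                        # hypothetical completion rates (%)

fig, ax = plt.subplots(figsize=(5, 3))
bars = ax.barh(programs, completion, color="#9ecae1")
bars[2].set_color("#3182bd")                     # emphasize the key finding only
ax.bar_label(bars, fmt="%d%%")                   # direct labels replace an axis
ax.set_xticks([])
for spine in ("top", "right", "bottom"):
    ax.spines[spine].set_visible(False)
ax.set_title("Completion rates by program (hypothetical data)", loc="left")
fig.tight_layout()
plt.show()
```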

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

Providing Credible Evidence in Evaluation: Theory and Practice
Huey T. Chen

The workshop will discuss two alternative perspectives for providing credible evidence in evaluation. The workshop will start with a review of the classic experimentation paradigm’s theory and approach to credible evidence as discussed by Campbell and his associates (Campbell & Stanley, 1963; Cook & Campbell, 1979; Shadish, Cook, & Campbell, 2002). According to the paradigm, credible evidence means rigor, or maximum possible internal validity, in evaluation methodology. To obtain rigor, evaluators have to apply randomized controlled trials (RCTs) or quasi-experimental methods to rule out as many confounding factors as possible and obtain the intervention’s pure independent effects. The workshop will provide a review of the paradigm’s strong influence on how outcome evaluation is conducted and of the heated ongoing debates on whether RCTs are the best evaluation methods (Chen, Donaldson, & Mark, 2011).

Many evaluators have concerns about the feasibility or usefulness of the paradigm for real-world programs; however, they have had to follow its principles and methods in the absence of an alternative methodology. Recently, Goodman and associates (2018) brought attention to the lack of alternatives and the need to develop a more inclusive conceptualization of credible evidence. The workshop will introduce the application of an integrated evaluation perspective that provides an inclusive conceptual framework of credible evidence and realistic approaches for obtaining it.

The integrated evaluation perspective introduces three types of cogency: effectuality, viability, and transferability. The workshop will explain that social betterment interventions are just one piece of the puzzle for solving a community problem. Accordingly, evaluating a real-world program is equivalent to assessing the holistic effect of the intervention, along with the contextual factors (effectuality) rather than attempting to artificially isolate its pure independent effects (efficacy). The integrated evaluation perspective asserts that real-world evaluation should also assess how well an intervention integrates with other pieces of the puzzle (viability), and whether this arrangement can be transferred to another community (transferability).

Workshop participants will learn how to systematically provide credible evidence based on both the experimentation paradigm and the integrated evaluation perspective. Additionally, participants will learn how to select the perspective that fits the circumstances of a particular program evaluation and how to justify their selection.

Questions about this workshop may be addressed to Huey T. Chen.

Monday, August 20

Leadership in Sustainable Development Goals (SDG) Evaluation: What We Need and Why We Don’t Have It Yet!
Deborah L. Rugg

Level: Advanced beginner; appropriate for a broad range of participants and backgrounds

The adoption of the 2030 Agenda for Sustainable Development, and the important and visible role the agenda assigns to evaluation in its follow-up and review process, will require efforts to further strengthen evaluation practices in developed and developing countries alike.

The aspirational nature and interconnectedness of many of the agenda’s targets will require those conducting evaluation, as well as those commissioning evaluation, to have a thorough understanding of the agenda and its goals, targets and indicator framework.

This workshop will describe the background and development of the Sustainable Development Goals, why the agenda is unprecedented, some current approaches to “SDG-responsive evaluation,” and why we still have a very long way to go.

Consideration of a rights-based approach, as well as performance-based versus systems-based approaches, will be highlighted. We will conclude with a discussion of the current status of evaluation as a strategic tool to help achieve the Sustainable Development Goals, and what we now need to do better.

You will specifically learn the following:

  1. What are the SDGs, how did they develop, why are they unprecedented, how did evaluation come to be included, and what are the current challenges?
  2. How are the SDGs being monitored and evaluated nationally and globally? What are the SDG targets and indicators and how do they relate to evaluation?
  3. What is “SDG-responsive evaluation” and what are some of the different approaches, such as performance-based versus systems-based approaches?
  4. What is the status of evaluation in the current global context? What is the 2020 Global Agenda for Evaluation? What is needed now? How do we learn more and become involved?

Questions regarding this workshop may be addressed to Deborah Rugg.

Culturally Responsive Evaluation
Katrina L. Bledsoe

The beauty of the field of evaluation lies in its potential responsiveness to the myriad contexts in which people—and programs, policies, and the like—exist. As the meaning and construction of the word “community” expands, the manner in which evaluation is conducted must parallel that expansion. Evaluations must be less about a community and more situated and focused within the community, thereby increasing their responsiveness to the uniqueness of the setting or system. To do this, however, requires an expanded denotative and connotative meaning of community. Moreover, it requires us to think innovatively about how we construct and conduct evaluations, and to broadly consider the kinds of data that will be credible to stakeholders and consumers. The goal of this workshop is to engage attendees in thinking innovatively about what evaluation looks like within a community, rather than simply about a community. We will engage in a process called “design thinking” (inspired by the design and innovation consultancy IDEO and Stanford’s Design School) to help us consider how we might creatively design responsive and credible community-based evaluations. This interactive course includes some necessary foundation-laying, plenty of discussion, and, of course, opportunities to think broadly about how to construct evaluations with the community as the focal point.

Questions regarding this workshop may be addressed to Katrina.Bledsoe@gmail.com.

Expanding Pathways to Leadership
Michelle Bligh

There is no question that leadership profoundly affects our lives through our roles in various types of organizations. Throughout history, the successes and failures of individuals, groups, organizations, and societies have been attributed to leadership. However, leadership is more than just a collection of tools and tips, or even skills and competencies; the essence of an individual’s leadership is fundamentally shaped by her or his values, philosophies, and beliefs. In addition, pathways to organizational leadership are complicated by the various challenges and opportunities rooted in gender, race, ethnicity, age, class, citizenship, ability, and experience.

Through the metaphor of the labyrinth, we will explore the following questions: What is effective leadership, and how can we encourage more individuals to identify both as leaders and as proactive followers? How can we develop more inclusive leadership programs that allow diverse leaders to rise to the new challenges and demands of a global world? We will examine what successful 21st-century leadership looks like, drawing on theories of philosophy and ethics, charismatic and transformational leadership, and followership. Using research, cases, and exercises, we will examine constructs critical to practicing inclusive leadership in modern organizations, including empowerment, authenticity, accountability, courage, influence, and humility.

Questions regarding this workshop may be addressed to Michelle.Bligh@cgu.edu.

Tuesday, August 21

Blue Marble Evaluation for Global Systems Change Initiatives
Michael Quinn Patton

Level: Intermediate

There are three Blue Marble emphases for evaluating global systems change:

  • Thinking beyond interventions and indicators at the nation/state level; instead, thinking, analyzing, acting, and evaluating globally.
  • Thinking beyond silos; instead, connecting and interrelating interventions, breaking down silos, and examining integration, alignment, and coherence across sectoral specializations and across the Sustainable Development Goals (SDGs).
  • Connecting the local with the global, and the global with the local.

Global Systems Change Evaluation includes attention to and analysis of the interconnection of top-down globalization processes and bottom-up processes that incorporate local knowledge and indigenous wisdom. In that regard, the course will examine the intersection between top-down and bottom-up processes and their interactions: the meeting point between globalization and local contextual dynamics captured in the admonition to “Think globally, act locally.” Case exemplars of global systems change initiatives and Blue Marble Evaluations will be reviewed and discussed.

The course will look at trends in globalization, pushbacks against globalization (America First), and perceptions of globalization and its effects on people and governments. This course will treat globalization as a reality, not an ideology, and will make it clear that Global Systems Change Evaluation is not about being for or against globalization, but rather recognizing that initiatives taking on global problems need a global evaluation framework to examine the effectiveness of those initiatives.

You will learn:

  • The contributions that Blue Marble Evaluators can make by being at the table as global systems change initiatives are formulated, strategies are determined, and evaluation criteria for effectiveness and impact are established.
  • Blue Marble Evaluation principles that position Blue Marble Evaluation as a specialized niche within the larger panorama and transdiscipline of professional program evaluation worldwide.
  • Competencies needed for Blue Marble Evaluation, including what it takes to be a skilled and effective Blue Marble Evaluator.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

Wednesday, August 22

Successful Evaluation Consulting: How to Build and Sustain Your Business
Michael Quinn Patton

Consulting is a niche business. Success begins with identifying and understanding your niche; marketing and building the practice follow from that. This course features an indirect marketing strategy that builds and deepens demand by knowing and meeting the needs of clients in your consulting niche. It is a value-added strategy.

Issues addressed include: What does it take to establish an independent consulting practice? How do you find your consulting niche? How do you attract clients, determine how much to charge, create collaborations, and generate return business? Included will be discussion on such topics as marketing, pricing, bidding on contracts, managing projects, resolving conflicts, professional ethics, and client satisfaction. Participants will be invited to share their own experiences and seek advice on situations they’ve encountered. The course is highly interactive and participant-focused.

This class offers the opportunity for participants to learn from someone who has been a successful evaluation consultant for 45 years.

Course book: Facilitating Evaluation: Principles in Practice, by Michael Quinn Patton (Sage, 2018).

Questions about this workshop may be addressed to mqpatton@prodigy.net.

Contact Us

Claremont Evaluation Center

Claremont Graduate University
175 E. 12th Street
Claremont, CA 91711
909-607-9013
omara.turner@cgu.edu