2016 Workshops from
August 18th through August 23rd

Registration for 2016 Workshops now CLOSED!

With financial support from The Rockefeller Foundation,
Scholarships are Available for Evaluators in Developing Countries

The Claremont Evaluation Center is pleased to bring you this longstanding series, which provides practical and theoretical training in evaluation and applied research through one-day workshops taught by academics and practitioners from across the globe.

Previous years have consistently brought hundreds of participants to Claremont, representing an exciting cross-section of the private and public sectors. In recent years, we have expanded our offerings to include extensive online participation, making these workshops even more accessible. This year, almost all of the workshops will be offered online as well as in person!

The workshops will take place at Claremont Graduate University in the Ron W. Burkle Family Building. The Burkle Building is located at 1021 N. Dartmouth Ave, Claremont, CA 91711. Each morning begins with a brief group session in room Burkle 16, adjacent to the Registration Booth.

Parking is free in the parking lots located on College Ave & 8th St, Dartmouth Ave & 9th St, and between College & Dartmouth on 11th St. Street parking is also available during the workshops.

Online Workshops

For those unable to join us in person in Southern California this summer, almost all 2016 workshops will be webcast live. Additionally, recordings of the workshops will be made available online for two weeks after the workshops have concluded.

Daily Schedule

Each workshop will last one full day.
For those attending webcasts, please note that all times listed are in the Pacific time zone.

8:00 a.m. – 9:00 a.m. Check in & Continental Breakfast
9:00 a.m. – 9:15 a.m. Morning Welcome & Introductions
9:15 a.m. – Workshop Begins
10:45 a.m. – 11:00 a.m. Break
12:00 p.m. – 1:30 p.m. Lunch Break
3:00 p.m. – 3:15 p.m. Break
4:45 p.m. Workshop Concludes

2016 Registration Rates

Generous gifts to Claremont Graduate University made it possible for us to once again realize our mission of making high-quality evaluation education more accessible to a diverse group of working professionals and students.  In response to economic concerns across the globe, this year’s registration rates have been reduced to make the workshops accessible to the public.

Regular Registration Rates for On-Site Workshops:

  • Professionals: $125 per full-day workshop
  • Students: $75 per full-day workshop

Rates for Virtual Classroom Workshops:

  • Professionals: $125 per full-day workshop
  • Students: $75 per full-day workshop

Scholarships

Thank you for your interest in the 2016 Professional Development Workshop Series. We are no longer accepting applications for the Rockefeller Scholarship.

A limited number of financial scholarships are available for U.S.-based individuals experiencing financial challenges. To apply, please complete and send your application to Richard Dowlat at Richard.Dowlat2@cgu.edu.

For all Claremont Graduate University students, incoming students, and staff, we are offering one free workshop! Please email Richard.Dowlat2@cgu.edu to register. For more information on how to apply, or for questions about workshop registration, please contact Marcella Camberos at Marcella.Camberos@cgu.edu.

For those admitted into the Certificate of Advanced Study in Evaluation, or our M.A. and Ph.D. Programs, some of these workshops meet various program requirements.

A certificate of participation is available upon request for all participants. For more information about our Ph.D., M.A., and other programs in Evaluation, click here.

Contact us at dbos@cgu.edu or (909) 607-9013 with any inquiries about group registrations.

Dr. Huey T. Chen presents on theory-driven program evaluation.

2016 Workshop Descriptions

Thursday, August 18

Basics of Evaluation and Applied Research Methods

Stewart I. Donaldson & Tina Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research, basics of validity and design sensitivity, strengths and weaknesses of a variety of common applied research methods, and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series.

Recommended background readings include:

Copies are available from Amazon.com by following the links above and are also available from the Claremont Evaluation Center for $20 each.  Checks should be made out to Claremont Graduate University and addressed to: Marcella Camberos, Claremont Graduate University/SSSPE, 123 E. 8th Street, Claremont CA 91711.
Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Friday, August 19

Theory-driven Evaluation Science: Finding the Sweet Spot between Rigor and Relevance in Evaluation Practice

Stewart. I. Donaldson

This workshop is designed to provide participants with an opportunity to increase their understanding of theory-driven evaluation science, and to learn how to use this approach to improve the rigor and usefulness of evaluations conducted in complex “real world” settings.   We will examine the history and foundations of theory-driven evaluation science, theories of change based on stakeholder and social science theories, how these theories of change can be used to frame and tailor evaluations, and how to gather credible and actionable evidence to improve evaluation accuracy and usefulness.  Lecture, exercises, discussions, and a wide range of practical examples from evaluation practice will be provided to illustrate main points and key take-home messages, and to help participants to integrate these concepts into their own work immediately.

Recommended background readings include:

Copies are available from Amazon.com by following the links above and are also available from the Claremont Evaluation Center for $20 each.  Checks should be made out to Claremont Graduate University and addressed to: Marcella Camberos, Claremont Graduate University/SSSPE, 123 E. 8th Street, Claremont CA 91711.

Suggested overview readings:

  • Leeuw, F. L., & Donaldson, S. I. (2015). Theories in evaluation: Reducing confusion and encouraging debate.  Evaluation: The International Journal of Theory, Research, & Practice, 21(4), 467-480. 
  • Donaldson, S.I., & Lipsey, M.W. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. Shaw, J.C. Greene, & M.M. Mark (Eds.), The Handbook of Evaluation: Policies, Programs, and Practices (pp. 56-75). London: Sage.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

Assessment of Cognitive Abilities: The Science, Philosophy, and Art of Latent Variable Models

Andrew Conway

A latent variable model is a statistical model that relates observed (manifest) variables to a set of unobserved (latent) variables. A common example is confirmatory factor analysis. In psychology, the standard approach is to assume that latent variables, or factors, are reflective. That is, we assume that there is something out there, represented by the factor, and that the manifest variables are indicators of this something. For example, in the study of cognitive abilities, it is often assumed that a general factor, g, causes the outcomes on manifest variables. An alternative approach is to assume that the latent variables are formative. In formative models, the chain of causation runs in the opposite direction: the latent variable emerges from the manifest variables, not the other way around. For example, in the case of cognitive abilities, g is the result, rather than the cause, of the correlations among manifest variables. Other formative latent variables include socioeconomic status (SES) and general health, each of which taps common variance among measures but does not explain it. I will discuss how latent variable models are specified and evaluated and then discuss the implications for interpretation when factors are assumed to be formative rather than reflective.
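
To make the contrast concrete (an illustrative sketch added here, not part of the workshop materials), the two model families can be written with the manifest variables x_i and the latent variable η as follows:

    % Reflective (common-factor) model: the latent variable causes each indicator
    x_i = \lambda_i \eta + \varepsilon_i

    % Formative model: the latent variable is a weighted composite of its indicators
    \eta = \sum_i \gamma_i x_i + \zeta

In the reflective form the indicators are interchangeable reflections of η, so their correlations are explained by it; in the formative form η is defined by its indicators, so dropping an indicator changes what η means.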

Questions regarding this workshop may be addressed to Andrew.Conway@cgu.edu.

Introduction to Educational Evaluation

Tiffany Berry & Rebecca Eddy

This workshop is designed to provide participants with an overview of the key concepts, issues, and current trends in contemporary educational program evaluation. Educational evaluation is a broad and diverse field, covering topics such as curriculum evaluation; K-12 teaching and learning; institutional research and assessment in higher education; teacher education; science, technology, engineering, and mathematics (STEM) education; out-of-school time (OST); and early childhood education. To operate within these varied fields, educational evaluators need an in-depth understanding of the educational environment as well as the ability to implement appropriate evaluation methods, procedures, and practices. Using lecture, interactive activities, and discussion, we will provide an overview of key issues that contemporary educational evaluators should know, such as (1) differentiating between assessment, evaluation, and other related practices; (2) understanding Common Core standards and associated assessment systems; (3) emerging research on predictors of student achievement; and (4) developing logic models and identifying program activities, processes, and outcomes. Case studies of recent educational evaluations, with a focus on K-12, will be used to introduce and discuss these issues.

Questions regarding this workshop may be addressed to Tiffany.Berry@cgu.edu.

Occupational Health Psychology: Concepts and Findings on Stress and Resources at Work

Norbert Semmer

Stress and resources at work are important for individual health as well as organizational productivity. The workshop is intended to give an overview of stress and resources at work, how they relate to important outcomes (most notably health and well-being), and what challenges we face when doing research on these issues and when trying to foster health and well-being at the individual and organizational levels.
The workshop will have three parts: a) basic mechanisms, b) stressors and resources at work and their effects, and c) interventions.
a) Basic Mechanisms

  • What is stress, what are resources, and what basic mechanisms (e.g., appraisal, coping, action tendencies, physiological activation) are involved

b) Stressors and Resources at work

  • What factors in the (working) environment and in the person are likely to induce stress
  • What factors are likely to prevent or attenuate stress or its consequences (resources)
  • How can all these factors be measured, and what methodological problems have to be dealt with
  • What are major models of stress and resources at work
  • Research designs employed in occupational health psychology
  • Issues of time: How do stressors and resources affect well-being, health, and productivity in the short term (e.g., over days and weeks), and under what circumstances do they affect health, well-being, and productivity in the long run.
  • What are the effects of new economic and technological developments (e.g., new communication technologies)

c) Interventions

  • What methods can be used to support people in dealing with stress in a better way, and what do we know about their effectiveness (e.g., relaxation, physical activity; cognitive-behavioral methods)
  • What approaches exist to help organizations prevent undue amounts of stress and to promote individual and organizational functioning, relating to job design, the work environment, social and organizational factors
  • What do we know about the effectiveness of these approaches, which factors support, or impede, success of these approaches, and what methodological problems are involved

The workshop is intended to combine lecturing with group discussions and exercises. Active participation and an active attempt to connect theory and research to everyday problems and experiences are expected.

Suggested readings:

Stress at work: Overview
Sonnentag, S., & Frese, M. (2013). Stress in Organizations. In N. W. Schmitt & S. Highhouse (Eds.), Industrial and Organizational Psychology (pp. 560-592). Hoboken, NJ: Wiley.
Warr, P. (2005). Work, well-being, and mental health. In J. Barling, E. Kevin-Kelloway, & M.R. Frone (Eds.), Handbook of work stress (pp. 547-573). Thousand Oaks, CA: Sage.

Basic psychological mechanisms
Semmer, N.K., McGrath, J.E., & Beehr, T.A. (2005). Conceptual issues in research on stress and health. In C.L. Cooper (Ed.), Handbook of Stress and Health (2nd ed., pp. 1-43). New York: CRC Press.

Individual differences
Semmer, N. K., & Meier, L. L. (2009). Individual differences, work stress, and health. In C. L. Cooper, J. Campbell Quick, & M. J. Schabracq (Eds.), International handbook of work and health psychology (3rd ed., pp. 99-121). Chichester, UK: Wiley.

Intervention
Murphy, L.R. (2003). Stress management at work: Secondary prevention of stress. In M.J. Schabracq, J.A. Winnubst, & C.L. Cooper (Eds.), Handbook of Work and Health Psychology (2nd ed., pp. 533-548). Chichester: Wiley.
Semmer, N. K. (2006). Job stress interventions and the organization of work. Scandinavian Journal of Work, Environment and Health, 32, 515-527.
Questions regarding this workshop may be addressed to Norbert.Semmer@psy.unibe.ch.

Introduction to Qualitative Research Methods
Kendall Bronk

This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Kendall.Bronk@cgu.edu.

Every Picture Tells a Story: Flow Charts, Logic Models, Log Frames
Thomas Chapel

Level: Advanced Beginner
Description: A host of visual aids are in use in planning and evaluation. This session will introduce you to some of the most popular ones, with an emphasis on flow charts, logic models, project network diagrams, and LogFrames. We will review the content and format of each tool and then compare and contrast their uses so that you can better match specific tools to specific program needs. We will review simple ways to construct each type of tool and work through simple cases, both as illustrations and as a way for you to practice the principles presented in the session.

You will learn:

  • How to identify the proper visual aid tools for program needs
  • How to construct each type of tool

Audience:  Participants should have prior familiarity with evaluation terminology and some experience in constructing logic models.

Thomas Chapel is the first Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC in 2001, Tom was Vice-President of the Atlanta office of Macro International (now ICF International) where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.  In 2013, he was the winner of AEA’s Myrdal Award for Government Evaluation and currently serves on AEA’s Board of Directors.

Questions regarding this workshop may be addressed to Tkc4@cdc.gov.

Survey Research Methods
Jason Siegel

The focus of this hands-on workshop is to teach attendees how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create; creating an effective survey requires a complete understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only through adequate training can a good survey be distinguished from a bad one. The day-long workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman’s (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz’s (1996) cognitive approach, attendees will then be informed of the various ways question order can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the keys to optimal survey design will be presented. As practice, the design of national surveys will be critiqued. Attendees will then work with the survey items created, and properly ordered, in class and propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

Culturally Responsive Evaluation
Katrina Bledsoe

The beauty of the field of evaluation is in its potential responsiveness to the myriad contexts in which people—and programs, policies, and the like—exist. As the meaning and construction of the word community expands, the manner in which evaluation is conducted must parallel that expansion. Evaluations must be less about a community and more situated and focused within the community, thereby increasing their responsiveness to the uniqueness of the setting/system. Doing this, however, requires an expanded denotative and connotative meaning of community. Moreover, it requires us to think innovatively about how we construct and conduct evaluations, and to consider broadly the kinds of data that will be credible to stakeholders and consumers. The goal of this workshop is to engage the attendee in thinking innovatively about what evaluation looks like within a community, rather than simply about a community. We will engage in a process called “design thinking” (inspired by the design innovation consultants IDEO and Stanford’s Design School) to help us consider how we might creatively design responsive and credible community-based evaluations. This interactive course includes some necessary foundation-laying, plenty of discussion, and, of course, opportunities to think broadly about how to construct evaluations with the community as the focal point.
Questions regarding this workshop may be addressed to Katrina.Bledsoe@gmail.com.

A New Introduction to Evaluation
Michael Scriven

This approach to evaluation develops it from commonsense and simple methods covered in any introductory social science course. The idea is that, instead of seeing evaluation as either hopelessly unscientific or as a collection of precariously marginal esoterica, the student comes to see it as simply an enlargement of the domain of science into new areas, where the usual scientific standards apply and are used to conquer areas that were previously thought to be beyond its reach, just as happened with the invention of the calculus, statistics, psychology, information theory, and artificial intelligence. The elementary scientific components on which this approach builds are mainly covered by: the kind of elementary experimental design used to check causal claims, checklist methodology (as in ‘laundry lists’), and simple measurement concepts such as the difference between ordinal and interval scales. The student thus comes to see how evaluation’s foundations are impeccable, and how the common dismissal of it as essentially subjective, while true of aesthetic evaluation, is simply irrelevant to the serious business of professional program, product, personnel, and policy evaluation.
Questions regarding this workshop may be addressed to Edgepress@gmail.com.

Sunday, August 21

Designing Real-World Evaluation: From Efficacy Evaluation Theory to Effectiveness Evaluation Theory
Huey-Tsyh Chen

The literature identifies two types of outcome evaluation: efficacy evaluation, conducted in controlled settings, and effectiveness evaluation, conducted in real-world settings. The Campbellian validity typology has served as a grand theory, contributing generously to evaluation theory and successfully guiding practice for both evaluation types. However, the grand theory mainly addresses internal validity issues and thus better fits efficacy evaluation; shortcomings of its application to real-world programs have been noted in the literature. Effectiveness evaluation must address external as well as internal validity, which the grand theory does not sufficiently cover. A theory of effectiveness evaluation has been developed to overcome this one-size-fits-all problem. This workshop will introduce the theory and its conceptual framework, principles, and approaches, as well as provide examples of applications. Workshop participants will learn how to systematically balance internal and external validity when evaluating real-world programs.

Questions regarding this workshop may be addressed to hueychen9@gmail.com.

Enhancing Evaluation Capacity – A Systematic and Comprehensive Approach
Leslie Fierro

Evaluation capacity building (ECB) has gained attention in the discipline of evaluation over the past decade as the field has struggled to meet the high demand for evaluation services. Although many definitions of ECB currently exist, the most widely recognized is “…the intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its uses routine” (Stockdill et al., 2002, p. 14). In this workshop we will examine what it really means to build evaluation capacity in organizations and broader systems. The workshop will include an orientation to ECB (existing definitions, frameworks, approaches) as well as ample time for highly interactive sessions in which attendees will work together to consider how to build evaluation capacity within their own organizations. Attendees will walk away from this training with an understanding of how to build the knowledge, skills, and attitudes individuals need to do, use, and promote evaluation, as well as organizational strategies for creating an infrastructure that can support ongoing healthy evaluation practices (e.g., organizational evaluation policies).

Questions regarding this workshop may be addressed to Leslie.Fierro@cgu.edu.

Applied Multiple Regression: Mediation, Moderation, and More
Dale Berger

Multiple regression is a powerful and flexible tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include preparing data for analysis, selecting models that are appropriate to your data and research questions, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel. Detailed handouts include explanations and examples that can be used at home to guide similar applications.

You will learn:

  • Concepts important for understanding regression
  • Procedures for conducting computer analysis, including SPSS code
  • How to conduct mediation and moderation analyses
  • How to interpret SPSS REGRESSION output
  • How to present regression findings in useful ways
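
The workshop itself works in SPSS and Excel; purely as an illustrative sketch of what moderation and mediation analyses look like in code, the following hypothetical example uses Python's statsmodels formula interface (the file name and the variable names y, x, w, and med are invented for illustration):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: outcome y, predictor x, moderator w, mediator med
    df = pd.read_csv("example_data.csv")

    # Moderation: the x:w interaction term tests whether the effect of x on y depends on w
    moderation = smf.ols("y ~ x * w", data=df).fit()
    print(moderation.summary())

    # Mediation (classic two-regression sketch): x -> med -> y
    path_a = smf.ols("med ~ x", data=df).fit()      # a path: x predicts the mediator
    path_b = smf.ols("y ~ x + med", data=df).fit()  # b path: mediator predicts y, controlling for x
    indirect = path_a.params["x"] * path_b.params["med"]
    print("Estimated indirect effect (a*b):", indirect)

In practice the indirect effect is usually reported with a bootstrap confidence interval rather than relying on the point estimate alone.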

Questions regarding this workshop may be addressed to Dale.Berger@cgu.edu.

Learning from Success: Incorporating Appreciative Inquiry in Evaluation
Tessie Catsambas

In her blog, “Value-for-Money: Value-for-Whom?” Caroline Heider, Director General of the Independent Evaluation Group of the World Bank, pushes evaluators to make sure that “the questions we ask in our evaluations hone in on specifics that deepen the understanding of results and past experience,” and ask ourselves what difference our recommendations will make once implemented, and what value-added they will create. Applying Appreciative Inquiry to evaluation provides a way to drive an evaluation by vision and intended use, builds trust to get more accurate answers to evaluation questions, and offers an avenue to increase inclusion and deepen understanding by incorporating the systematic study of successful experiences in the evaluation.
Appreciative evaluation is just as serious and systematic as problem analysis and problem solving, and it is probably more difficult for the evaluator because it requires continuous reframing of familiar problem-focused language.
In this one-day workshop, participants will be introduced to Appreciative Evaluation and will explore ways in which it may be applied in their own evaluation work. Participants will use appreciative interviews to focus an evaluation, to structure and conduct interviews, and to develop indicators. Participants will practice “reframing” and then reflect on the power of appreciative and generative questions. Through real-world case examples, practice case studies, exercises, discussion and short lectures, participants will learn how to incorporate AI into their evaluation contexts.

Workshop Agenda

  • Introduction: Theoretical Framework of Appreciative Inquiry (lecturette)
  • Logic and Theory of Appreciative Inquiry (lecturette)
  • Imagine phase: Visions (case study: small-group work)
  • Lunch
  • Reframing deficits into assets (skills building)
  • Good questions exercise (skills building)
  • Innovate: Provocative propositions/possibility statements, links to developing indicators (case study: small group work)
  • Applications of AI—tying things together (lecturette and discussion)
  • Questions and Answers
  • Evaluation

Questions regarding this workshop may be addressed to Tcatsambas@encompassworld.com.

Monday, August 22

Quasi Experimental Methods
William Crano

Conducting, interpreting, and evaluating research are important aspects of the social scientist’s job description. To that end, many good educational programs provide opportunities for training and experience in conducting and evaluating true experiments (or randomized controlled trials [RCTs], as they are sometimes called). In applied contexts, the opportunity to conduct RCTs is often quite limited, despite strong demands on the researcher/evaluator to render “causal” explanations of results, as such explanations lead to more precise understanding and control of outcomes. In such restricted contexts, which are far more common than those supporting RCTs, quasi-experimental designs are sometimes employed. Though they usually do not support causal explanations (with some noteworthy exceptions), they sometimes provide evidence that helps reduce the range of plausible alternative explanations of results and thus can prove to be of real value. This workshop is designed to impart an understanding of quasi-experimental designs. After some introductory foundational discussion focused on “true” experiments, we will consider quasi-experimental designs that may be useful across a range of settings that do not readily permit experimentation. These designs will include time series and interrupted time series methods, nonrandomized designs with and without control groups, case-control (or ex post facto) designs, regression-discontinuity analysis, and other esoterica. Participants are encouraged to bring to the workshop design issues they are facing in real-world contexts.
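
As a small, hypothetical sketch of one of the designs mentioned above, an interrupted time series can be analyzed with a segmented regression that separates the pre-existing trend, the immediate level change at the intervention, and the change in trend afterwards (the file name, column names, and start month below are invented for illustration):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical monthly series with columns "time" (1, 2, 3, ...) and "outcome"
    df = pd.read_csv("monthly_outcomes.csv")
    start = 24  # assumed month in which the program began

    df["intervention"] = (df["time"] >= start).astype(int)  # level shift once the program starts
    df["time_since"] = (df["time"] - start).clip(lower=0)   # post-intervention trend change

    # Segmented regression: baseline trend + immediate level change + change in slope
    its_model = smf.ols("outcome ~ time + intervention + time_since", data=df).fit()
    print(its_model.summary())

In practice the residuals of such a model are usually checked for autocorrelation before the coefficients are interpreted.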

Questions regarding this workshop may be addressed to William.Crano@cgu.edu.

Data Visualization
Tarek Azzam

The careful planning of visual tools will be the focus of this workshop. Part of our responsibility as evaluators is to turn information into knowledge. Data complexity can often obscure main findings or hinder a true understanding of program impact. So how do we make information more accessible to stakeholders? Often this is done by visually displaying data and information, but this approach, if not done carefully, can also lead to confusion. We will explore the underlying principles behind effective information displays. These principles can be applied in almost any area of evaluation, and we will draw on the work of Edward Tufte, Stephen Few, and Jonathan Koomey to illustrate the breadth and depth of their applications. In addition to providing tips to improve most data displays, we will examine the core factors that make them effective. We will discuss the use of common graphical tools and delve deeper into other graphical displays that allow the user to visually interact with the data.

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

Program Design + Evaluation: More than the Sum of Its Parts
John Gargani

In this workshop, we will explore the intersection of design and evaluation, the theme of the 2016 conference of the American Evaluation Association. We will consider three areas of evaluation practice—program design, evaluation design, and information design—and how they may be improved by design thinking, design process, and design aesthetics. Participants and the instructor will work as a team to design the workshop, suggest content, and evaluate its success. The workshop itself will be the prototype that we use to investigate how and under what conditions collaborative design leads to better programs. The format will be playful, interactive, and responsive—like design—and systematic, empirical, and practical—like evaluation. Come have some fun.

Questions regarding this workshop may be addressed to John@gcoinc.com.

Leadership Assessment
Becky Reichard

Leadership Assessment PDW
Becky Reichard, PhD
August 22, 2016

Leadership assessment is commonly used by organizations and consultants to inform selection, promotion, and development of leaders. This experiential workshop will provide participants with an overview of the three main methods of leadership assessment – self-assessment, 360-degree assessment, and assessment centers. An assessment center is a method of evaluating leaders’ behaviors during simulated scenarios, or various life-like situations that leaders encounter. Leadership assessments discussed will include leadership competency models, personality, strengths, and skills. In the process of this workshop, participants will be provided with feedback on their leadership strengths, skills, and styles. To receive feedback, please complete the following in advance of the session.

(1)    Complete the StrengthsFinder assessment
Purchase and review the Strengths-Based Leadership book by Rath & Conchie (2008); the book includes an access code. To complete the StrengthsFinder assessment, visit strengths.gallup.com and enter your access code. The StrengthsFinder assessment is based on 35 years of Gallup research and has been taken by 4 million people in 17 languages. The assessment consists of 180 forced-choice questions that you must respond to within 20 seconds each; in total, you should allot 40 minutes to complete the StrengthsFinder. When you complete this online self-assessment, Gallup will automatically email you a detailed feedback report, which you should review carefully. The report will contain your top five strengths themes organized across Gallup’s four domains of leadership strength. To learn more about how to develop your strengths as a leader and about your specific strengths, read parts 1-3 (pages 1-95) of the Strengths-Based Leadership book (Rath & Conchie) along with the subsequent chapters associated with your strengths. Bring a printout of your Strengths report to the workshop. (Make sure you buy a new copy of the book that includes a code for completing the StrengthsFinder survey.)

(2)    Complete the LeAD360 Assessment
The purpose of the LeAD360 assessment and feedback is to provide a 360-degree view of how others perceive your leadership. In the LeAD360, you and up to 20 of your former/current supervisor(s), peers, and followers that you nominate will complete an online assessment of your leadership based on the 6PLeadership framework. During the workshop in Claremont, you will be provided a personalized feedback report created from aggregated responses (anonymous raters) detailing how others perceive your leadership behaviors. To get started with the LeAD360, please complete the following steps:

STEP 1: REGISTER FOR THE LEAD360.
Register to take the LeAD360 by following the Eventbrite link.

After registering, you will receive an email directing you to the LeAD360 user account registration website. To create a user account, you will enter the Eventbrite order number and email address associated with your Eventbrite order, along with a password of your choosing. Write down this information so you can return to your survey if you are unable to complete it in one session. After creating a LeAD360 user profile, you will receive a second email containing a link to the LeAD360 access portal, which will give you access to the self-assessment and nomination form.
STEP 2: COMPLETE YOUR SELF-ASSESSMENT.

Access the self-assessment from the LeAD360 access portal by clicking the button ‘LEAD360 Self-Assessment’. This survey will take approximately 30 minutes and will ask you questions about your leadership. It is important that you are as honest as possible in your responses so that we can provide you the best available feedback. Only the LeAD Labs 360 administrator, your coach, and you will have access to your responses and associated report. Please complete the self-assessment no later than August 9.

STEP 3: NOMINATE OTHERS TO COMPLETE YOUR LeAD360.

Access the nomination form from the LeAD360 access portal by clicking the button ‘Nominate Other Raters’. You will be asked to nominate other individuals to rate your leadership behaviors through an online ‘roster.’ The roster is where you can input the names and email addresses of up to 20 individuals in your work circle (i.e., peers, supervisor(s), subordinates) who will complete your leadership assessment. Please be thoughtful when considering whom to nominate and be sure to identify the appropriate email address for each person. Please nominate others who have had an opportunity to observe your leadership in action, and avoid nominating people with whom you have had only casual or brief interactions. Please complete the roster no later than August 9 to allow time for us to collect your nominees’ assessments.

STEP 4: EMAIL THOSE RATING YOU.

It is helpful to gain approval from those you would like to rate your leadership and to let them know to expect an email request from LeAD Labs. This is essential to ensure that all nominees’ ratings of your leadership are included in your feedback report. We have provided an email template (see below) for your convenience to send directly to your nominees. Using this template speeds responses, which is essential to ensure you receive your feedback during your time in Claremont. Be sure to send this email to your nominees no later than August 9; they will be emailed a survey link on the following day. In order for us to compile your LeAD360 feedback report, your nominees’ assessments of your leadership are due by August 20.

Email Template

Dear [RATER NAME],
I am writing to ask for your assistance in assessing my leadership behavior by participating in the LeAD360. This assessment is part of my participation in the Claremont Evaluation Center’s PDW on Leadership Assessment. I have invited several co-workers to help me complete this assessment and all responses will be presented anonymously in a report that averages scores across respondents. The assessment will be a valuable source of feedback about strengths and gaps in my leadership behavior, and will be used to assist me in creating a personal leadership development plan.

The assessment will involve completing an online survey that will take about 30 minutes. The link to the survey will be sent out in a separate email. Because survey links are sometimes flagged as “spam” in email filters, please check your spam folder for an email with the subject line containing ‘LeAD360’ if you do not receive the survey link by August 10. You can also try adding ‘noreply@qemailserver.com’ to your contact or accepted email list. The assessment deadline is August 20.

Please respond to let me know if you are willing to participate in the LeAD360. I appreciate your help and genuinely thank you for your time.
Sincerely,
[Your NAME]

Questions regarding this workshop may be addressed to Becky.Reichard@cgu.edu.

Grant Writing Workshop
Allen Omoto

This workshop covers some of the essential skills and strategies needed to prepare successful grant applications for education, research, and/or program funding. It will provide participants with tools to help them conceptualize and plan research or program grants, offer ideas about where to seek funding, and provide suggestions for writing and submitting applications. Some of the topics covered in the workshop include the pros and cons of grant-supported work, strategies for identifying sources of funding, the components and preparation of grant proposals, and the peer review process. Additional topics related to assembling a research or program team, constructing a project budget, grants management, and tips for effective writing also will be covered. The workshop is intended primarily as an introduction to grant writing, and will be most useful for new or relatively inexperienced grant writers. Workshop participants are invited to bring their own “works in progress” for comment and sharing. There will be limited opportunities for hands on work and practice during the workshop. At its conclusion, workshop participants should be well positioned to read and evaluate grant applications, as well as to assist with the preparation of applications and to prepare and submit their own applications in support of education, research, or program planning and development activities.

Questions regarding this workshop may be addressed to Allen.Omoto@cgu.edu.

Tuesday, August 23

Principles-Focused Evaluation
Michael Quinn Patton

Online Workshop
Establishing evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program’s interventions. It is useful to distinguish three types of evidence-based conclusions:

  1. Single evidence-based program. Rigorous and credible summative evaluation of a single program provides evidence for the effectiveness of that program and only that program.
  2. Evidence-based model. Systematic meta-analysis (statistical aggregation) of the results of several programs all implementing the same model in a high-fidelity, standardized, and replicable manner, and evaluated with randomized controlled trials (ideally), to determine overall effectiveness of the model. This is the basis for claims that a model is a “best practice.”
  3. Evidence-based principles. Synthesis of case studies, including both processes and outcomes, of a group of diverse programs or interventions all adhering to the same principles but each adapting those principles to its own particular target population within its own context. If the findings show that the principles have been implemented systematically, and analysis connects implementation of the principles with desired outcomes through detailed and in-depth contribution analysis, the conclusion can be drawn that the practitioners are following effective evidence-based principles.

Principles-focused evaluation treats principles as the intervention and unit of analysis, and designs an evaluation to assess both the implementation and the consequences of principles. Principles-focused evaluation is a specific application of developmental evaluation because principles are the appropriate way to take action in complex dynamic systems. This workshop will be the worldwide premiere of principles-focused evaluation training. Specific examples and methods will be part of the training.

Participants will learn:

  • What constitutes a principle that can be evaluated
  • How and why principles should be evaluated
  • Different kinds of principles-focused evaluation
  • The relationship between complexity and principles
  • The particular challenges, strengths, and weaknesses of principles-focused evaluation.

Questions about this workshop may be addressed to mqpatton@prodigy.net.

Empowerment Evaluation: A Tool to Build Evaluation Capacity and Sustainability
David Fetterman

This workshop will introduce colleagues to the theory, concepts, principles, and steps of empowerment evaluation. Theories will include process use and theories of use and action. Concepts covered will include: critical friend, cycles of reflection and action, and a community of learners. This workshop will also present the steps to plan and conduct an empowerment evaluation, including: 1) establishing a mission or unifying purpose of a group or program; 2) taking stock – prioritizing evaluation activities and rating performance; and 3) planning for the future – establishing goals and strategies to achieve objectives, as well as credible evidence to monitor change. We will also highlight the use of basic self-monitoring tools, such as establishing a baseline, creating goals, specifying benchmarks, and comparing goals and benchmarks with actual performance.

The workshop will be experiential.  Lectures and discussion will be combined with hands-on exercises. The workshop will also cover how to select appropriate user-friendly technological tools to facilitate an empowerment evaluation, aligned with empowerment evaluation principles.

Dr. David Fetterman is president and CEO of Fetterman & Associates, an international evaluation consulting firm. He has 25 years of experience at Stanford University in administration, the School of Education, and the School of Medicine. He is the founder of empowerment evaluation and the author of over 16 books, including Empowerment Evaluation: Knowledge and Tools for Self-assessment, Evaluation Capacity Building, and Accountability (Sage) and Empowerment Evaluation Principles in Practice (Guilford), written with his collaborators Drs. Abraham Wandersman and Shakeh Kaftarian. Dr. Fetterman is a past president of the American Evaluation Association and a recipient of the Lazarsfeld and Myrdal Evaluation Awards. He is currently co-chair (with Dr. Liliana Rodriguez-Campos) of the AEA Collaborative, Participatory and Empowerment Evaluation Topical Interest Group. He is a highly experienced and sought-after speaker, facilitator, and evaluator.

Questions regarding this workshop may be addressed to fettermanassociates@gmail.com.

Positive Organizational Psychology and Social Entrepreneurship: The Theory and Design of Social Enterprises
Stewart I. Donaldson & John Gargani

In this full-day workshop, you will be introduced to the leading theories of positive psychology and how they can be applied in organizations to improve the quality of work life, the well-being of customers, the happiness of a community, and the health of the environment. From this theoretical perspective, you will explore the growing field of social entrepreneurship, in which leaders leverage innovation, creativity, and design to manage organizations in ways that create positive social impact. Through short lectures and a series of hands-on design activities, you will be challenged to develop a plan for a new or existing organization to solve an important societal problem. You will be guided by a process for problem solving and criteria for evaluating your solution, tools that you can apply again in other settings to help organizations create positive impacts.

Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

The New Science of Persuasion: Neuromarketing and Neuromanagement
Paul Zak & Jorge Barraza

This workshop will give attendees a brief background in the behavioral neuroscience of decision-making (neuroeconomics), how to apply this knowledge to improve employee motivation and organizational performance (neuromanagement), and to create and test persuasive communications and advertising (neuromarketing). The workshop leaders helped found these transdisciplinary fields and will present findings from a variety of studies they have done. The workshop also includes a live demonstration of a number of neuroscience measurement technologies so that attendees will learn how applied neuroscience is done. Those who attend the workshop will be more informed consumers of neuroscience research when the day concludes.

Questions regarding this workshop may be addressed to Paul.Zak@cgu.edu.

Last updated: 5/10/2016