What Constitutes Credible Evidence in Evaluation and Applied Research?
August 19, 2006
Highlights from this event are available below. An in-depth text based on this event is now available from Sage Publications.
Summary of the Event
The zeitgeist of accountability and evidence-based practice is now widespread across the globe. Organizations of all types and sizes are being asked to evaluate their practices, programs, and policies at an increasing rate. While there seems to be much support for the notion of using evidence to continually improve efficiency and effectiveness, there appears to be growing disagreement and confusion about what constitutes sound evidence. These disagreements have far-reaching implications for evaluation and applied research practice and for funding competitions, as well as for how best to conduct and use evaluation and applied research to promote human betterment.
In light of this debate, we assembled an illustrious group of experts working in various areas of evaluation and applied research to share their diverse perspectives on the question “What Constitutes Credible Evidence?” This illuminating and action-packed day in Claremont was hosted on Saturday, August 19, 2006. More than 200 attendees from all backgrounds (academics, researchers, private consultants, students, and professionals from many fields) enjoyed a day of intense discussion and varied perspectives.
The highly anticipated symposium united an all-star cast of internationally recognized scholars to debate, sometimes contentiously, the realities of using randomized controlled trials (RCTs) and the need for serious academic scholarship to inform evaluation. A 5-minute highlights reel is available.
The day began with a cultural analysis by CGU President Robert Klitgaard of how we derive meaning from evidence. Foreshadowing a later talk by Dr. Gary T. Henry, President Klitgaard cautioned the audience to recognize their own presuppositions before attempting to understand other cultures. School of Social Science, Policy & Evaluation Dean Stewart Donaldson followed with a description of the current state of evaluation, and the rise of evidence-based practice in the face of the undeniable fact that good will and dollars are no longer considered enough to solve real-world social problems. Funding, standards of practice, and indeed all decision-making now depend on how policy makers—hopefully relying on the work of evaluators—decide what counts as evidence.
Following these introductory comments, the morning was devoted to exploring experimental approaches for building credible evidence. Dr. Russell Gersten began this exploration with an account of how the What Works Clearinghouse (WWC) deals with the issues at hand. The WWC, which conducts and publishes reviews of the effectiveness of educational interventions on an ongoing basis, gives strong preference to evidence based on RCTs, effectively making them the “gold standard” for scientific evidence. Drs. Leonard Bickman and Gary T. Henry made it crystal clear why rigorous RCTs are critical for making high-stakes decisions, and cautioned the audience about the well-known pitfalls of non-experimental evidence.
A lively reaction panel kicked off the second portion of the program, which focused on non-experimental approaches for building credible evidence. The first speaker of the afternoon was Dr. Michael Scriven, formerly of CGU and one of the staunchest opponents of the movement to privilege RCTs. Scriven’s assertion that RCTs are neither necessary nor often appropriate for determining causation was backed up by Dr. Jennifer Greene, who framed the debate about evidence as political, not methodological. Complexity, she claimed, resists “methodological fundamentalism” and calls for approaches that honor and respect the wondrous diversity of the human species. Examples of non-experimental attempts to deal with complexity were provided by Dr. Sharon Rallis, who focused on qualitative research, and Dr. Sandra Mathison, who addressed image-based research as a means to establish context and improve understanding. Finally, Dr. Thomas Schwandt reviewed many of the pitfalls that can corrupt the interpretation of evidence, even when the data itself is allegedly foolproof.
The day concluded with an even more animated discussion of the topic; video excerpts of this discussion are available.
Finally, Dr. Mel Mark integrated the viewpoints expressed during the day in a talk cautioning all researchers and evaluators to contextualize their designs and their findings. He drove home the point that, for true understanding, quantitative and qualitative methods cannot exist without one another. The “gold standard” depends on the context: just as gold is only the standard in dollar value, not in clockworks or bullet-proof vests, credibility can be found amid criteria such as validity, relevance, feasibility, and precision. Differences in qualitative and quantitative preferences may result from different default assumptions about appropriateness itself.
At the end of the day, speakers and audience mingled over wine and cheese. The symposium served as the basis for an in-depth volume on the nature of credible evidence.
Symposium Schedule & Streaming Video
Highlights From the Day
Videos on this page play best in Apple Quicktime.
Welcome
9-9:10am, President Robert Klitgaard, Claremont Graduate University
Introduction
9:10-9:40am, Stewart I. Donaldson, Claremont Graduate University
“Thriving in the Global Zeitgeist of Accountability and Evidence-based Practice”
Click on any talk title below for streaming video.
Experimental Approaches as the Route to Credible Evidence
Christina A. Christie, Chair, Claremont Graduate University
9:40-10:10am, Russell Gersten, University of Oregon
“The What Works Clearinghouse Criteria: Underlying Logic”*
10:10-10:40am, Leonard Bickman, Vanderbilt University
“Science, Values and Feasibility: Standards for Credible Evidence”
10:40-11:10am, Gary T. Henry, Georgia State University
“When Credibility is Not Enough: What to do When Getting it Right Matters”
11:10-11:30am, Morning Break
11:30-12:15pm, Reaction Panel – “Limitations of Experimental Approaches”
Michael Scriven, Western Michigan University
Jennifer Greene, University of Illinois at Urbana-Champaign
Sharon F. Rallis, University of Massachusetts Amherst
Sandra Mathison, University of British Columbia, Vancouver
Thomas Schwandt, University of Illinois at Urbana-Champaign
Brief Responses by Presenters
Audience Reaction & Participation
12:15-1:30pm, Lunch
Non-Experimental Approaches for Building Credible Evidence
Hallie Preskill, Chair, Claremont Graduate University
1:30-2:00pm, Michael Scriven, Western Michigan University
“Gunfight at the Causation Corral: Let’s Run those Clantons out of Tombstone!”
2:00-2:30pm, Jennifer Greene, University of Illinois at Urbana-Champaign
“Evidence as ‘Proof’ and Evidence as ‘Inkling'”
2:30-3:00pm, Sharon F. Rallis, University of Massachusetts Amherst
“Considering Rigor and Probity: Qualitative Pathways to Credible Evidence”
3:00-3:30pm, Afternoon Break
3:30-4:00pm, Sandra Mathison, University of British Columbia, Vancouver
“Seeing is Believing: The Credibility of Visual Imagery”
4:00-4:30pm, Thomas Schwandt, University of Illinois at Urbana-Champaign
“Bases of Substantiation: Hierarchies of Evidence or Methodological Appropriateness?”
4:30-5:15pm, Reaction Panel – “Limitations of Non-Experimental Approaches”
Leonard Bickman, Vanderbilt University
Gary T. Henry, Georgia State University
Brief Responses by Presenters
Audience Reaction & Participation
Integration
5:15-5:45pm, Melvin Mark, The Pennsylvania State University
“Credibility and Other Criteria for ‘Good Evidence’: Integrating Diverse Perspectives, Toward Better Judgements About Evaluation and Applied Research”
5:45-6pm, Final Audience Reaction & Participation
6pm, Wine & Cheese Reception
Trouble Viewing the Videos?
Most videos are 20-30 minutes in length and may take several minutes to load, depending on your computer and connection speed. Even the shorter Highlights Reel may take several minutes to load on a slower system.
Viewers who intend to use these videos in a classroom or training setting may download and save the videos on their computer by right-clicking and choosing “Save As…”
Some AOL users have had difficulty viewing these videos. This may be solved by the following procedure: Save the video to your hard drive. Right-click on the file. Choose “Open With…” on the drop-down menu. Select Internet Explorer, Apple Quicktime, Windows Media Player, or another program of your choosing.
For further help or to give feedback, please contact us by sending an e-mail to paul.thomas@cgu.edu or calling (909) 607-9016.
*Note on the footage of Dr. Gersten’s Talk
Due to technical difficulties, approximately 2 minutes of Dr. Gersten’s talk have been omitted. This omission begins at 15 minutes, 44 seconds into the video.