Definition of evaluation: the process of determining the worth or merit of something; if the "something" is a program, then it is "program evaluation." Other types of evaluation include: product evaluation (the most widely practiced, e.g., Consumer Reports); personnel evaluation; research evaluation; policy studies; and art, movie, play, and book reviews.
Program evaluation is NOT the same as research, although they share many characteristics. Both:
- Start with questions
- Use similar methods
- Provide similar information
Program evaluation, however, focuses on decisions. Research focuses on answering questions about phenomena to discover new knowledge and test theories/hypotheses. Research is aimed at truth; evaluation is aimed at action.
Purposes of Program Evaluation (based on decisions to be made):
- Program Improvement - making specific decisions about how to improve the program; done during the "formative" stages of the program. Information needed includes how well the program is being conducted, whether the program is practical and useful (What purpose does the program serve?), whether the program has its intended effects, and what unintended effects it produces.
- Continuation and/or Dissemination - making an overall decision about whether to continue the program and/or whether to disseminate it at other sites. Information needed includes program outcomes and effects, what it takes to implement the program, and the extent to which the program serves a need at a reasonable cost.
- Accountability - providing information the funding agent needs to decide whether to continue or adjust funding, intervene in program management, or make policy decisions. Information needed includes how well the program is meeting its intended goals and its effects on participants. This purpose overlaps with the purposes of program improvement and continuation/dissemination.
Methodology: There are benefits and drawbacks to any method used in research/evaluation. The choice of method is based on the questions to be answered, the costs and benefits of the method, and the purpose of the evaluation (see attached).
"Best practice" in evaluation involves multiple methods and multiple perspectives.
Standards: Most program evaluators abide by a set of standards such as The Program Evaluation Standards developed by The Joint Committee on Standards for Educational Evaluation.
Standards ensure:
- Anonymity of respondents - respects respondents' rights and welfare;
- Neutrality (internal vs. external evaluators) - addresses conflict of interest;
- Concerns of multiple respondents are addressed (e.g., funders, program staff, teachers, students, principals, parents);
- Methods and instrumentation are reliable and valid (this applies especially to more quantitative data--see the methodology continuum);
- Intended outcomes (e.g., goals) as well as unintended outcomes are addressed;
- Assessment of generalizability (if for purpose of dissemination); and
- Fair and honest reporting - including strengths and weaknesses.
Reliable - consistency of measurement (e.g., several observers using the same instrument should report similar results - "interrater reliability"; see the short sketch below).
Valid - measuring what the instrument purports to measure (e.g., many new instruments have been developed based on new science standards, so what they measure is adherence to those standards).
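As an illustration only (not part of the original notes), the short Python sketch below shows one common way interrater reliability is quantified: percent agreement and Cohen's kappa computed from two hypothetical observers' ratings of the same set of classroom observations. The ratings are invented for the example.

```python
# Minimal sketch: interrater reliability for two hypothetical observers
# rating the same ten observations on a three-point scale.
from collections import Counter

rater_a = ["high", "med", "med", "low", "high", "low", "med", "high", "low", "med"]
rater_b = ["high", "med", "low", "low", "high", "low", "med", "high", "med", "med"]

n = len(rater_a)

# Percent agreement: proportion of observations on which the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal category frequencies.
freq_a = Counter(rater_a)
freq_b = Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

# Cohen's kappa: agreement corrected for chance (1.0 = perfect, 0 = chance level).
kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

In practice an evaluator would compute such statistics across all observers and items before trusting observational data for the purposes described above.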
Case Study Bibliography
Bogdan, Robert C., and Biklen, Sari Knopp, Qualitative Research for Education: An Introduction to Theory and Methods, Allyn and Bacon, Inc., 1982.
Morgan, David L., Focus Groups as Qualitative Research, Qualitative Research Methods Series 16, Sage Publications, 1988.
Stake, Robert E., The Art of Case Study Research, Sage Publications, 1995.
Yin, Robert K., Case Study Research: Design and Methods, Applied Social Research Methods Series, Vol. 5, Sage Publications, 1989.