
Chapter 17 Evaluation Research



p. 541: #1(Describe the major uses of educational evaluation)

 

  1. prepare position papers for people in decision-making roles
  2. create advocacy for particular legislation and budget appropriations
  3. determine whether programs are producing benefits that justify their cost
  4. hold managers accountable for producing results
  5. help managers make sound decisions related to program design, personnel, and budget

 

 

#2(State the major differences between evaluation research and other types of educational research)

 

  1. an evaluation study usually is initiated by someone's need for a decision to be made concerning policy, management, or political strategy
  2. an evaluation typically is done for a specific purpose and thus is not generalizable
  3. evaluation studies are designed to yield data concerning the worth, merit, or value of educational phenomena. This is a value judgment, not a statistical analysis.

 

#3(Describe procedures for clarifying the reasons for doing a particular evaluation study and identifying stakeholders)

 

"An evaluation study can be initiated because of the of evaluator's personal interest in doing it or because some person or agency requested it"

 

-If the evaluation study is done to answer questions primarily of interest to you, you need only clarify for yourself why the study is being done.

-When an evaluation is requested, the evaluator should consider probing to determine all the reasons for the evaluation request, for example:

 

  • for accreditation
  • to "shape up" the behavior of program staff (a watchdog function)
  • to justify an already-made decision to terminate the program or reduce its funding (the evaluator as "hired gun")
  • to reflect unfavorably on certain members of the program staff

 

Identifying stakeholders

"A stakeholder is anyone who is involved in the program being evaluated or might be affected by or interested in the findings of the evaluation. Some stakeholders may wish simply to be kept informed, whereas others may want to influence the questions that guide the study in the evaluation design."

 

#6(Identify several factors that are involved in creating an evaluation design, but not a research design) (i.e., the four standards, pp. 553-554)

Utility

  • stakeholder identification
  • evaluator credibility
  • information scope and selection
  • values identification
  • report clarity
  • report timeliness and dissemination
  • evaluation impact

 

Feasibility

  • practical procedures
  • political viability
  • cost effectiveness

 

Propriety

  • service orientation
  • formal agreements
  • rights of human subjects
  • human interactions
  • complete and fair assessment
  • disclosure of findings
  • conflict of interest
  • fiscal responsibility

 

Accuracy

  • program documentation
  • context analysis
  • described purposes and procedures
  • defensible information sources
  • valid information
  • reliable information
  • systematic information
  • analysis of quantitative information
  • analysis of qualitative information
  • justified conclusions
  • impartial reporting
  • meta-evaluation

 

"The Joint Committee concluded that a good evaluation study satisfies four important criteria: utility, feasibility, propriety, and accuracy. An evaluation has utility if it is informative, timely, and useful to the affected persons. Feasibility means, first, that the evaluation design is appropriate to the setting in which the study is to be conducted, and second, that the design is cost-effective. An evaluation has propriety if it is conducted legally and ethically. Finally, accuracy refers to the extent to which an evaluation study has produced valid, reliable, and comprehensive information for making judgments of a program's worth". Pg. 553

 

#9(State the major quantitative approaches to evaluation, and describe the primary characteristics of each)

 

"The quantitative evaluative approaches described below rely primarily on positivist methods of inquiry. They emphasize objective measurement, representative sampling, experimental control, and the use of statistical techniques to analyze data". Pg. 555

 

  • evaluation of the individual-focuses on the measurement of individual differences; judgments are made by comparing the individual with a criterion or a set of norms.
  • objectives-based evaluation-uses the measurement of explicit objectives as the basis for determining an educational program's merit.

Discrepancy evaluation

Cost analysis

Behavioral objectives

Goal-free evaluation

  • needs assessment-a discrepancy between existing set of conditions and a desired set of conditions. Provides a basis for setting objectives for curriculum or program development.

  • CIPP model

Context evaluation

Input evaluation

Process evaluation

Product evaluation

Each type of evaluation is tied to a different set of decisions that must be made in planning and operating a program. Context evaluation involves the identification of problems and needs that occur in a specific educational setting. Input evaluation concerns judgments about the resources and strategies needed to accomplish program goals and objectives. Process evaluation involves the collection of evaluative data once the program has been designed and put into operation. Product evaluation determines the extent to which the goals of the program have been achieved.

 

"The politics of evaluation also are not given serious attention in most quantitative approaches. Various groups have a stake in the outcome of an evaluation study, and they may try to influence the evaluation process accordingly". Pg. 562

 

#10(State the major qualitative approaches to evaluation, and describe the primary characteristics of each)

 

  • responsive evaluation-focuses on addressing the concerns and issues of stakeholders.
  • quasi-legal models of evaluation-derived from the field of law.
  • adversary evaluation-distinguished by the use of a wide array of data; the hearing of testimony; and, most importantly, an adversarial approach, meaning that the two sides present positive and negative judgments, respectively, about the program being evaluated.
  • judicial evaluation-simulates the use of legal procedures for the purpose of promoting broad understanding of the program, clarifying the subtle and complex nature of the educational issues it raises, and producing recommendations and policy guidelines that lead to institutional growth and/or improved practice.
  • expertise-based evaluation-the use of experts to make judgments about the worth of an educational program.

 

"Qualitative approaches take the position that the worth of an educational program depends heavily on the values and perspectives of those doing the judging. Therefore, the selection of the individuals and groups to be involved in the evaluation is critical". Pg. 562

 
