Chapter 12  Experimental Research: Designs, Part 1

Page history last edited by PBworks 17 years, 10 months ago

Chapter 12 Complete

Remember the mnemonics ED S SMITH and DR ED.

ED S SMITH (threats to internal validity):

Experimental mortality

Differential selection

Statistical regression

Selection-maturation interaction

Maturation

Instrumentation

Testing

History

DR ED (additional threats to internal validity):

Diffusion of treatments

Rivalry by respondents receiving less desirable treatments

Equalization of treatments

Demoralization of respondents receiving less desirable treatments

Home Page

Previous Chapter 11 Nonexperimental Research: Correlational Designs

 

p. 365: #1(Critically evaluate possible threats to the internal validity of an experiment)

 

p. 370

Threats to Internal Validity

· ED S SMITH

1. E — Experimental mortality: attrition, or the loss of subjects over time.

2. D — Differential selection: different sampling criteria are used for the experimental groups and the control group.

o To prevent this, use random sampling.

o Pretest to identify differences between the groups.

3. S — Statistical regression: the tendency to score closer to the mean on a retest.

4. S — Selection-maturation interaction: the control group is older, physically or psychologically, than the experimental group.

o Use random sampling to control for this.

o Collect demographic information.

5. M — Maturation: physical or psychological changes in the participants during the course of the experiment.

6. I — Instrumentation: the nature of the instrument has changed. For example, observers may give a more favorable rating the second time because they expect improvement.

o Use a reliable and valid instrument.

o Use a pretest without knowing the results.

7. T — Testing: the procedures used to measure change. If the pretest and posttest are similar, participants may become test-wise.

o To avoid this, use standardized procedures.

o Use different versions of the instrument.

8. H — History: events occurring during the experiment that could affect the participants.

o Make sure the control and experimental groups experience the same historical events.

o Look for and document any outside events that occur.

 

 

#2(Critically evaluate possible threats to the external validity of an experiment )

 

- Explicit description of the experimental treatment – the treatment must be described in sufficient detail to allow reproducibility.

- Multiple-treatment interference – participants are exposed to more than one treatment; one cannot conclude which treatment causes the change.

- Hawthorne effect – performance improves because participants are aware that they are participating in an experiment.

- Novelty and disruption effects – changes are due to the fact that a treatment is introduced.

- Experimenter effect – the treatment is effective or ineffective because of the experimenter.

- Pretest sensitization – the pretest interacts with the treatment and affects the research results.

- Posttest sensitization – the posttest is a learning experience in its own right. When the treatment is applied without a posttest, its effectiveness is diminished.

- Interaction of history and treatment effects – the effectiveness of a treatment is limited to the time it is administered and should not be generalized to other time periods.

- Measurement of the dependent variable – the effectiveness of the treatment is related to the way it is measured.

- Interaction of time of measurement and treatment effects – a posttest administered immediately after treatment may yield higher scores than a test administered at a later time.

 

 

#3(Describe procedures for increasing the generalizability of findings from experiments)

 

 

(p. 378-379)

Practice representative design – Plan the experiment so that it accurately reflects real-life environments in which learning occurs and the natural characteristics of the learner.

1. When appropriate, conduct the research in an actual educational setting.

2. Incorporate several environmental variations into the design of the experiment.

3. Observe what students actually are doing during the experiment.

4. Observe the social context in which the experiment is being conducted.

5. Prepare participants for the experiment.

6. Incorporate a control treatment that allows participants to use their customary approaches to learning.

 

#4(Explain how experimenter bias and treatment fidelity can affect the outcome of an experiment)

 

Experimenter Bias -- researcher’s expectations about the outcome that are unintentionally transmitted to participants so that their subsequent behavior is affected. Not intentional.

Treatment Fidelity - - experimenter fails to follow the exact procedures specified for the experiment (failure to follow protocol).

 

(p. 379-383)

 

Experimenter bias refers to researchers’ expectations about the outcome of an experiment that are unintentionally transmitted to participants so that their subsequent behavior is affected. This does not refer to intentional manipulation or falsification. The attitudes and beliefs of the experimenter, teacher, or trainer who delivers the treatment may influence the effectiveness of the treatment. (379-380)

 

Treatment fidelity is the extent to which the treatment conditions, as implemented, conform to the researcher’s specifications. If treatments are not delivered in a specific, clearly defined manner, the effectiveness of the treatment may be influenced. (381)

 

 

 

#5(Describe obstacles to maintaining equivalent treatment groups in experiments and how these obstacles can be avoided or overcome) (e.g., strong vs. weak treatments)

(p. 384-388)

 

- Withholding the treatment from the control group – If a treatment is viewed as desirable, members of the control group may want access. This can be avoided by telling members of the control group that they can have access to the treatment after the experiment. (386)

- Faulty randomization process – A defect in an experimenter's randomization process can create nonequivalent groups. If obviously nonequivalent groups occur, you can attempt to repeat the process, or you can stratify the groups based on the factors on which you desire equivalence. (387)

- Small sample size – Attempt to increase the sample size, use matching procedures, or consider eliminating one or more treatment groups. (387)

- Intact groups – Attempt to increase the number of groups and randomly assign them to the control and experimental conditions. (388)

 

 

 

#6(Describe the commonly used experimental designs, including the procedures used in random assignment, formation of experimental and control groups, and pretesting and posttesting procedures)

 

. (389 -398)

 

R = random assignment

X = experimental treatment

O = observation pretest/posttest

 

Single-group designs

One-shot case study (p. 389) X O

One group pretest-posttest design (p. 389) O X O

Time-series design (p. 391) O O O O X O O O O

 

Control-group designs

Pretest-posttest Control-group design (p.392) R O X O

R O O

 

Posttest-only Control-group design (p.395) R X O

R O

 

Solomon four-group design (table p. 385) R O X O

R O O

R X O

R O

 

 

 

 

 

Quasi-Experimental Research Design: A type of experiment in which research participants are not randomly assigned to the experimental and control groups, because random assignment of subjects is not possible.

 

Static Group Comparison Design: A type of experiment in which research participants are not randomly assigned to the two treatment groups, and in which each group takes a posttest, but no pretest.

 

Nonequivalent Control Group Design: A type of experiment in which research participants are not randomly assigned to the experimental and control groups, and in which each group takes a pretest and a posttest. A problem with the nonequivalent control-group design is that a difference on the posttest may be due to preexisting group differences rather than the treatment. ANCOVA, with the pretest as a covariate, addresses this problem.

 

Factorial Designs

 

A Factorial experiment: A type of experiment in which the researcher studies how two or more treatment variables (called factors in this context) affect a dependent variable either independently or in interaction with each other.

 

Single Case Designs

 

A single-case experiment (a.k.a. a single-subject experiment or a time-series experiment): a treatment is administered to one individual or a small group, with the individual's behavior under nontreatment conditions serving as the control for comparison purposes. A single-case experiment involves the in-depth study of one individual, or of more than one individual treated as a single group. The authors cite Thomas Kratochwill, who explains that single-case designs "involve the intense analysis of behavior in single organisms" (1992, p. 11, as cited in Gall et al., 2003, p. 416).

 

A type of experiment in which a particular behavior of an individual or a group is measured at periodic intervals and the experimental treatment is administered one or more times between those intervals. An example of an appropriate situation for a single case experiment would be to conduct research on behavior modification.
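The logic above can be illustrated with a short sketch. This is a hypothetical A-B-A-B (baseline-treatment-baseline-treatment) behavior-modification example with invented session counts; the phase names and numbers are illustrative assumptions, not data from the textbook.

```python
# Hedged sketch: summarizing a hypothetical A-B-A-B single-case experiment.
# Each phase lists counts of a target behavior per session (invented data);
# the baseline (A) phases serve as the participant's own control.
phases = {
    "A1 (baseline)":  [12, 14, 13, 15],
    "B1 (treatment)": [8, 7, 6, 6],
    "A2 (baseline)":  [11, 12, 13, 12],
    "B2 (treatment)": [5, 6, 5, 4],
}

for name, counts in phases.items():
    mean = sum(counts) / len(counts)
    print(f"{name}: mean = {mean:.1f}")
# The behavior drops during each treatment phase and returns toward baseline
# when the treatment is withdrawn -- the pattern an A-B-A-B design looks for.
```

Comparing phase means (often alongside a visual plot of the sessions) is one common way such data are summarized.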

 

 

 

#7(State specific threats to the internal and external validity of common experimental designs)

(Table p. 385)

 

Design | Threats to Internal Validity | Threats to External Validity

One-shot case study | History, maturation, selection, and mortality | Interaction of selection and X

One-group pretest-posttest design | History, maturation, testing, instrumentation, interaction of selection and other factors | Interaction of testing and X; interaction of selection and X

Time-series design | History | Interaction of testing and X

Pretest-posttest control-group design | None (mortality according to Dr. Bass) | Interaction of testing and X

Posttest-only control-group design | Mortality | None

Solomon four-group design | None (mortality according to Dr. Bass) | None

 

#8(Describe the statistical techniques that typically are used to analyze data yielded by experiments) (e.g., ANCOVA) (p. 389-398)

Measurement of Change

 

  • Nearly all experiments are attempts to determine the effect of one or more independent variables on one or more dependent variables. Educational research usually involves an independent variable (the treatment), which is often a new educational practice or product, and a dependent variable (the observed outcome), which is often measured in terms of student achievement, attitude, or self-concept.

 

 

  • Gain score (also called a change or difference score): posttest score minus pretest score; an individual's score on a test administered at one point in time minus that individual's score on a test administered at an earlier time. For example, if a student's initial score on a measure of achievement was 90 and the student's score rose to 100 after administration of the experimental treatment, the gain score would be 10.

 

  • Ceiling Effect: occurs when the range of difficulty of the test items is limited and therefore scores at the higher end of the possible score continuum are artificially restricted.

 

  • Regression toward the mean (a.k.a. statistical regression): a statistical phenomenon describing the tendency for research participants who score either very high or very low on a measure to score nearer the mean when the measure is re-administered.
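The phenomenon is easy to demonstrate with a simulation. The sketch below assumes a simple measurement model (observed score = stable true score + random error on each administration); the numbers are invented for illustration.

```python
# Illustrative simulation of regression toward the mean (assumption:
# observed score = stable true score + independent random error each time).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_score = rng.normal(50, 10, n)          # stable ability, mean 50
test1 = true_score + rng.normal(0, 5, n)    # first administration
test2 = true_score + rng.normal(0, 5, n)    # retest with fresh error

# Select participants who scored very high the first time.
high = test1 > 65
print(round(test1[high].mean(), 1))  # well above the overall mean of 50
print(round(test2[high].mean(), 1))  # closer to 50 on the retest
```

Because part of an extreme score is measurement error that does not repeat, the high scorers' retest mean falls back toward 50 even though nothing about the participants changed.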

 

  • Analysis of variance (ANOVA): a procedure for determining whether the difference between the mean scores of two or more groups on a dependent variable is statistically significant; the F value generated by ANOVA tests the likelihood that the differences between the mean scores occurred by chance.

 

Design | Statistics

One-shot case study | None; nothing to compare.

One-group pretest-posttest design | Comparison of means by chi-square or t test as appropriate. If pretest or posttest scores show marked deviation from a normal distribution, use a nonparametric test of statistical significance.

Pretest-posttest control-group design | Compute descriptive statistics. The preferred method of comparison is ANCOVA.

Posttest-only control-group design | t test

Solomon four-group design | Compute descriptive statistics. The preferred method of comparison is ANCOVA.
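For the pretest-posttest control-group design, ANCOVA compares adjusted posttest means using the pretest as a covariate. The sketch below uses the statsmodels formula API with invented scores; the column names and data are illustrative assumptions.

```python
# Hedged sketch of ANCOVA for a pretest-posttest control-group design:
# the posttest is modeled with the pretest as a covariate, so group means
# are compared after adjusting for initial differences. Data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["treatment"] * 5 + ["control"] * 5,
    "pretest":  [70, 75, 80, 85, 90, 72, 74, 79, 86, 88],
    "posttest": [80, 84, 88, 93, 97, 73, 76, 80, 87, 90],
})

model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.summary())
# The coefficient on C(group)[T.treatment] estimates the treatment
# effect adjusted for pretest differences between the groups.
```

A simple t test on gain scores is an alternative, but ANCOVA generally has more statistical power when the pretest and posttest are strongly correlated.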

 

 

Pearls of Wisdom

 

"The key problem in experimentation is establishing suitable controls so that any change in the posttest can be attributed only to the experimental treatment that was manipulated by the researcher". Pg. 367

 

Representative Design: "When appropriate, conduct the research in the actual educational setting or other environment to which you wish to generalize your findings". Pg. 379

 

"The approach of testing cases that offer the best chance of supporting one's hypotheses is a positive test strategy. They demonstrated that under certain conditions, including those in which the purpose of the research is to test hypotheses about new educational programs and methods, researchers should seek instances that support their hypotheses rather than instances that refute them". Pg. 380

 

"Treatment fidelity is the extent to which the treatment conditions, as implemented, conform to the researcher's specifications for the treatment". Pg. 381

 

"Treatment fidelity can be maximized by careful training of the individuals - often teachers - who are to implement the treatment". Pg. 381

 

Strong vs. Weak Experimental Treatments -- "One of the major challenges in experimental research is to administer a treatment that is strong enough to have a significant effect on the dependent variable". Pg. 383

[GMB -- It is not easy to change people, Never underestimate the challenge in changing long established habits or patterns of human behavior. Remember the Harvard Law of Change: "Under the most rigorously controlled conditions of pressure, temperature, volume, humidity, and other variables, the organism will do as it damn well pleases."]

 

 

Obstacles to forming and maintaining equivalent treatment groups in field experiments:

 

Withholding the treatment from the control group. Solution - "tell the control-group participants that they will receive the treatment after the experiment is concluded".

Faulty randomization procedures. "Participants may not believe the researcher's statement that the assignment to a treatment group was random". Solution - "Have a credible witness observe the random assignment process."

Small sample size. "There are several solutions to the problem of randomly assigning a small sample to two or more treatment groups. One obvious solution is to attempt to increase the sample size. Another solution is to use matching procedures. The third solution is to consider whether one or more treatment groups can be eliminated".

Intact groups. "An intact group is a set of individuals who must be treated as members of an administratively defined group rather than as individuals". Solution - "Increase the number of classrooms in the sample and institute one treatment condition per classroom." Pg. 386-388

 

Self-Check Test from textbook * correct answer

 

1. A posttest in an experiment is sometimes the

 

A. dependent variable *

B. experimental treatment

C. experimental variable

D. treatment variable

 

 

2. An experiment in which extraneous variables are controlled is said to be

 

A. internally reliable

B. internally valid *

C. externally valid

D. externally reliable

 

 

3. If students' scores tend to move toward the mean upon retesting, ____ is said to have occurred.

 

A. experimental mortality

B. statistical regression *

C. maturation

D. reactive effect of pretesting

 

 

4. If the experimental treatment is affected by the administration of the pretest, the ____ of the experiment would be weakened.

 

A. internal validity

B. internal reliability

C. external validity *

D. external reliability

 

 

5. Representative design of experiments assumes that

 

A. the learning environment is a complex, interrelated ecology

B. the human learner is an active processor of information

C. the intended effects of an experimental intervention may radiate out to affect other aspects of performance

D. all of the above *

 

 

6. Researchers giving teachers special attention that is not part of the experiment may cause change. This is called the

 

A. Hawthorne effect *

B. John Henry effect

C. effect of multiple-treatment interference

D. reactive effect of experimentation

 

 

7. One way to minimize the effects of experimenter bias upon the outcome of an experiment is for the researcher to

 

A. train naive experimenters to collect the data from research participants *

B. select experimenters who have prior experience in doing research on the problem being investigated

C. fully disclose the purpose of the study to the experimenters

D. all of the above

 

 

 

 

 

 

 

 

Home Page

Next Chapter 13 Experimental Research: Designs, Part 2
