Evaluation Designs
Adrienne DiTommaso, MPA, CNCS Office of Research and Evaluation

Learning objectives
By the end of this presentation, you will be able to:
- Explain evaluation design
- Describe the differences between types of evaluation designs
- Identify the key elements of each type of evaluation design
- Understand the key considerations in selecting a design for conducting an evaluation of your AmeriCorps program

What is evaluation design?
Evaluation design is the structure that provides the information needed to answer each of your evaluation questions. Your intended evaluation design should be based on and aligned with the following:
- Your program's theory of change and logic model
- Primary purpose of the evaluation and key research questions
- Resources available for the evaluation
- Funder's evaluation requirements

Evaluation designs and CNCS requirements
Evaluation Study Designs | Meets requirements: Large Grantees | Meets requirements: Small Grantees/EAP Programs
Process Design (Non-Experimental Design Studies) | No | Yes
Outcome Design (Non-Experimental Design Studies) | No | Yes
Outcome (Impact) Design (Quasi-Experimental* or Experimental Design Studies) | Yes | Yes
*Fulfills the CNCS evaluation design requirement for large, recompeting grantees if a reasonable comparison group is identified and appropriate matching/propensity scoring is used in the analysis.

Basic types of evaluation designs
The two sides of a program's logic model align with the two types of evaluation designs: process and outcome.

Process evaluation
Goals:
- Documents what the program is doing
- Documents to what extent and how consistently the program has been implemented as intended
- Informs changes or improvements in the program's operations
Common features:
- Does not require a comparison group
- Includes qualitative and quantitative data collection
- Does not require advanced statistical methods

Process evaluation designs
Common methods include:
- Review of program documents and records
- Review of administrative data
- Interviews and focus groups
- Direct observation
Types of analysis:
- Thematic identification
- Confirmation of findings across sources (triangulation)

Facilitated example: Process evaluation
Evaluation Design Crosswalk: Process Evaluation
Research questions: What kinds of clients are seeking financial education services? How are clients accessing the program?
Evaluation design: Process evaluation
Methods:
- Client interviews (25)
- Document review: client intake forms, member activity logs
- Partner focus groups (4)
Data to be collected, when, and by whom:
- Interviews: the evaluator will conduct interviews when clients begin the program
- Documents: reviewed quarterly
- Focus groups: the evaluator will hold focus groups quarterly
Analysis plan:
- Interviews: thematic analysis of interview transcripts using NVivo
- Documents: coding and thematic analysis
- Focus groups: thematic analysis of transcripts using NVivo

Outcome evaluation
Goals:
- Identifies the results or effects of a program
- Measures program beneficiaries' changes in knowledge, attitude(s), and/or behavior(s) that result from a program
Common features:
- Typically requires quantitative data
- Often requires advanced statistical methods
- May include a comparison group (impact evaluation)

What is a comparison or control group?
- A group of individuals not participating in the program or receiving the intervention
- Necessary to determine if the program, rather than some other factor, is causing observed changes
- A comparison group is associated with a quasi-experimental design; a control group is associated with an experimental design

Outcome evaluation designs
Non-experimental designs:
- Outcomes are only tracked for the intervention group
- There are several variations within the category of non-experimental outcome designs, differing only in the number and timing of outcome measurement points (X = intervention is administered, 0 = measurement is taken):
  a) Single group post-test:          X  0
  b) Single group pre- and post-test: 0  X  0
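To make the single group pre- and post-test notation concrete, here is a minimal analysis sketch in Python. It is not part of the CNCS materials; the scores and variable names are hypothetical, and it assumes paired pre- and post-test scores for the same participants.

```python
from scipy import stats

# Hypothetical pre- and post-test scores for the same five participants
# (a real study would pull these from its survey records).
pre_scores = [52, 61, 48, 70, 55]
post_scores = [60, 66, 55, 78, 57]

# Average change from the pre-test (0) through the intervention (X) to the post-test (0).
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
print("Mean change:", sum(changes) / len(changes))

# Paired t-test: is the average pre/post change different from zero?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because there is no comparison group, a change detected this way cannot be attributed to the program alone, which is the limitation the designs below address.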

Quasi-experimental designs
- Defined by collecting data on two or more study groups: an intervention group and a comparison group
- The intervention and comparison groups are identified from pre-existing or self-selected groups and are not formed through a random assignment process
Design notation (X = intervention is administered, 0 = measurement is taken):
  Intervention group: 0  X  0
  Comparison group:   0     0
- Pre-existing differences between the intervention and comparison groups at the outset of the intervention may lead to inaccurate estimates of the program's effects

Types of quasi-experimental designs
- Regression discontinuity
- Difference-in-differences
- Comparative interrupted time series
- Pre/post-test with matched comparison group, with the comparison group constructed using propensity score matching, case matching, or an instrumental variable (see the matching sketch after this list)
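The asterisked footnote earlier notes that a quasi-experimental design satisfies the CNCS requirement only when appropriate matching or propensity scoring is used. As an illustration only, and not a CNCS-prescribed procedure, the sketch below shows one common approach: estimate each client's propensity to participate from observed characteristics with logistic regression, then match each participant to the nearest non-participant on that score. The data set, column names, and the two covariates are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical client records: participated = 1 for program clients, 0 otherwise.
df = pd.DataFrame({
    "participated": [1, 1, 1, 0, 0, 0, 0, 0],
    "age": [34, 45, 29, 31, 52, 40, 27, 48],
    "baseline_knowledge": [40, 55, 38, 42, 60, 50, 35, 58],
})

# Step 1: model the probability of participation from observed characteristics.
covariates = df[["age", "baseline_knowledge"]]
model = LogisticRegression().fit(covariates, df["participated"])
df["pscore"] = model.predict_proba(covariates)[:, 1]

# Step 2: nearest-neighbor matching on the propensity score (with replacement):
# for each participant, find the non-participant with the closest score.
treated = df[df["participated"] == 1]
comparison_pool = df[df["participated"] == 0]
matches = {
    idx: (comparison_pool["pscore"] - row["pscore"]).abs().idxmin()
    for idx, row in treated.iterrows()
}
print(matches)  # participant row index -> matched comparison row index
```

A real evaluation would match on many more covariates, check balance between the matched groups, and only then compare outcomes.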

Experimental designs
- Defined by collecting data on two or more study groups: an intervention group and a control group
Design notation (X = intervention is administered, 0 = measurement is taken):
  Intervention group (randomly assigned): 0  X  0
  Control group (randomly assigned):      0     0
- Random assignment techniques (e.g., a lottery draw) are used by the evaluator to assign study participants to either the intervention or the control group
- Random assignment ensures the study groups are equivalent prior to the intervention, so experimental designs are often considered the most credible design for showing impact
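Random assignment itself can be as simple as a documented lottery draw. The sketch below is a minimal illustration in Python rather than the procedure used in the facilitated example that follows; the applicant IDs, fixed seed, and 50/50 split are assumptions.

```python
import random

# Hypothetical applicant IDs captured at the time of application.
applicants = [f"client_{n:03d}" for n in range(1, 21)]

# Reproducible lottery draw: shuffle the list, then split it into two study groups.
random.seed(2024)  # fixed seed so the assignment can be documented and audited
random.shuffle(applicants)
midpoint = len(applicants) // 2
intervention_group = applicants[:midpoint]
control_group = applicants[midpoint:]

print("Intervention group:", intervention_group)
print("Control group:", control_group)
```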

Facilitated example: Outcome evaluation
Evaluation Design Crosswalk: Outcome Evaluation
Research question: Do clients exit the program with increased knowledge of personal finance concepts relevant to their needs?
Evaluation design: Outcome evaluation
Methods: Randomized control trial; clients will be randomly assigned to treatment at the time of application to the program. Control group individuals are deferred for 6 months, then become eligible to participate in the program.
Data to be collected, when, and by whom: Client and control group knowledge of personal finance concepts. Pre-test: during application. Post-test: for the treatment group, upon completion of the program; for the control group, at 6 months post-deferment. Collected by the evaluator via paper-and-pencil and online surveys.
Analysis plan: Statistical analysis: descriptive statistics and a between-groups t-test using Stata software.
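The analysis plan above specifies descriptive statistics and a between-groups t-test in Stata. As a rough equivalent for illustration, not the evaluator's actual code, the Python sketch below runs the same kind of comparison on hypothetical post-test knowledge scores.

```python
from statistics import mean, stdev
from scipy import stats

# Hypothetical post-test knowledge scores for the two study groups.
treatment_scores = [78, 85, 74, 90, 82, 79, 88]
control_scores = [70, 72, 68, 75, 71, 69, 74]

# Descriptive statistics for each group.
for name, scores in [("Treatment", treatment_scores), ("Control", control_scores)]:
    print(f"{name}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")

# Between-groups (independent samples) t-test; Welch's version does not assume
# that the two groups have equal variances.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```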

Evaluation designs and CNCS requirements
Evaluation Study Designs | Meets requirements: Large Grantees | Meets requirements: Small Grantees/EAP Programs
Process Design (Non-Experimental Design Studies) | No | Yes
Outcome Design (Non-Experimental Design Studies) | No | Yes
Outcome (Impact) Design (Quasi-Experimental* or Experimental Design Studies) | Yes | Yes
*Fulfills the CNCS evaluation design requirement for large, recompeting grantees if a reasonable comparison group is identified and appropriate matching/propensity scoring is used in the analysis.

Questions?
