Comparing Process and Outcome Evaluations

February 22, 2016 | Caroline Glesmann


Evaluation is an important part of managing a successful program. Stakeholders—including program directors, community members, and funders—want to know if a particular program is working as expected and making a difference for its intended participants.

Two common ways to assess a program are process evaluation and outcome evaluation. Each serves a different but complementary purpose, and ideally the two work hand in hand. In short, process evaluation answers the question “How does the program work?” while outcome evaluation answers the question “Does the program work?”

As an example, let’s take a program that aims to prevent young people from becoming involved in the juvenile justice system. This program is designed for youth ages 12–17 who are at risk for system involvement due to issues with substance use, school attendance, conflicts with others, and other risk factors. The program includes substance use education and treatment, help with academics, and counseling.

Process evaluation helps you develop and document a comprehensive picture of how a program functions. It also helps to answer the question of whether the program is operating as intended, which is known as “fidelity.” To explore how this program operates, you will likely gather information about who the program serves, how youth are deemed eligible and selected for the program, what services they participate in, and how services are delivered. You will also gather information on the program’s theory and design, characteristics of the local system and community, and reasons the program was implemented.

To collect and organize process evaluation data, NCCD often uses a model that examines the program’s context, participant identification and selection, program interventions or services, program goals, and relationships or linkages that affect the program. Data collection methods include conducting individual interviews or focus groups with program staff, participants, and other stakeholders; observing program activities; reviewing program documents such as attendance records and case files; and compiling data on the population served and services delivered.
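As a rough illustration, the Python sketch below shows one way data could be organized along those five dimensions, with a simple fidelity indicator. The structure and field names are hypothetical, chosen for this example; this is not an NCCD instrument.

    from dataclasses import dataclass

    @dataclass
    class ProcessEvaluationRecord:
        """Hypothetical structure mirroring the five dimensions above."""
        context: str            # local system and community characteristics
        identification: str     # how youth are deemed eligible and selected
        interventions: list     # services delivered to participants
        goals: list             # the program's stated goals
        linkages: list          # relationships that affect the program

    def fidelity_rate(planned: int, delivered: int) -> float:
        """One simple fidelity indicator: share of planned sessions delivered."""
        return delivered / planned if planned else 0.0

    record = ProcessEvaluationRecord(
        context="Mid-sized county; referrals mostly from schools",
        identification="Risk screening at intake; ages 12-17",
        interventions=["substance use education", "academic support", "counseling"],
        goals=["prevent juvenile justice system involvement"],
        linkages=["school district", "probation department"],
    )
    # e.g., 20 of 24 planned sessions actually held
    print(f"Session fidelity: {fidelity_rate(24, 20):.0%}")

In practice, a single quantitative indicator like this would sit alongside the qualitative sources named above, such as interviews, observations, and document review.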

Outcome evaluation assesses whether a program was successful in achieving its intended results. For the prevention program described here, outcome evaluation will tell us if—during a particular period of time—participation led to changes in young people’s skills, knowledge, attitudes, or behaviors and whether those changes were associated with reduced juvenile justice involvement. As with process evaluation, both qualitative and quantitative data are collected and analyzed, but in outcome evaluation, conclusions are based on the statistical analysis of quantitative measures.

For the program described here, outcomes assessed would include juvenile justice involvement as well as changes in drug use, school attendance, conflict resolution skills, and any other factors addressed by the program. The evaluation plan will determine how these factors will be measured and analyzed. For example, juvenile justice involvement could be measured in various ways, such as by tracking a participant’s number of arrests, referrals to the probation department, or adjudications (convictions), while attendance might be measured as the number of school days tardy or missed.
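To make the measurement step concrete, here is a short Python sketch that tabulates two of the measures named above from participant records. The data and field names are fabricated for illustration only.

    from statistics import mean

    # Each dict is one participant's measures over the follow-up period
    # (hypothetical values, not real program data).
    participants = [
        {"arrests": 0, "referrals": 1, "days_missed_pre": 14, "days_missed_post": 6},
        {"arrests": 1, "referrals": 0, "days_missed_pre": 20, "days_missed_post": 12},
        {"arrests": 0, "referrals": 0, "days_missed_pre": 9,  "days_missed_post": 4},
    ]

    # Average arrests during follow-up, and average change in school days
    # missed (a negative change indicates improved attendance).
    avg_arrests = mean(p["arrests"] for p in participants)
    avg_attendance_change = mean(
        p["days_missed_post"] - p["days_missed_pre"] for p in participants
    )

    print(f"Average arrests during follow-up: {avg_arrests:.2f}")
    print(f"Average change in days missed: {avg_attendance_change:+.1f}")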

Outcome evaluations usually have either an experimental or a quasi-experimental design. Experimental research methodology uses random assignment, in which an otherwise equivalent pool of participants is randomly assigned either to a group that receives the intervention (the “treatment” group) or to a group that does not receive it (the “control” group). The results of the two groups are then compared. Experimental design allows for the strongest conclusions to be drawn because it goes the furthest to ensure that the only difference between the two groups is the intervention itself. Quasi-experimental methods, which are often more feasible for programs with limited resources or where random assignment is not possible, compare results for the treatment group to the best available comparison group.
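The logic of random assignment can be shown in a minimal Python sketch. The pool size and outcome values below are fabricated for illustration; a real evaluation would also test whether any difference between groups is statistically significant rather than due to chance.

    import random
    from statistics import mean

    random.seed(42)  # reproducible assignment for this example

    # An eligible pool is split at random into treatment and control groups.
    eligible_youth = [f"participant_{i}" for i in range(40)]
    random.shuffle(eligible_youth)
    treatment = eligible_youth[:20]   # receives the intervention
    control = eligible_youth[20:]     # does not

    # After the follow-up period, each youth has an outcome measure,
    # e.g., referrals to probation (hypothetical values here).
    outcomes = {youth: random.randint(0, 3) for youth in eligible_youth}

    treatment_mean = mean(outcomes[y] for y in treatment)
    control_mean = mean(outcomes[y] for y in control)
    print(f"Treatment group mean referrals: {treatment_mean:.2f}")
    print(f"Control group mean referrals: {control_mean:.2f}")
    print(f"Difference (treatment - control): {treatment_mean - control_mean:+.2f}")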

Of course, both process and outcome evaluations have limitations. A process evaluation will not tell you whether your program is producing change in participants. An outcome evaluation will not provide detailed information about how a program operates. Funders and other stakeholders may prioritize outcome evaluation for providing information about a program’s effects on participants, while program managers may find process evaluation useful to determine areas for program improvement. Often a process evaluation can set the stage for an outcome evaluation. Ultimately, process and outcome evaluations have a range of practical purposes, including providing data to appropriately modify an existing program and producing evidence of promising practices.
