Saturday, September 19, 2009

Evaluating the ECS Programming for Children with Severe Disabilities

ASSIGNMENT 2: WHAT MODEL IS APPROPRIATE TO EVALUATE THIS PROGRAM AND WHY?

Based on the limited information provided, the ECS Programming for Children with Severe Disabilities appears to have the goal of improving the functioning of children with severe disabilities in a learning environment, either at home or at a center (e.g., daycare, kindergarten). Considering that Individual Program Plans, with clear objectives and goals, are designed for each participant based on his/her needs, the desired outcome of each program may differ. However, one overall goal that should be apparent in each individual program would be to make some sort of gains in the child’s development and/or functioning; in other words, there should be benefits to participating in the program, otherwise the program cannot be justified.

Before finalizing the decision of which approach or model to use to evaluate the ECS Program, a consultation with the stakeholders would have to occur. Stakeholders, or those who are interested in having the evaluation done, should have some input into the direction that the evaluation will take. There is likely a reason why the stakeholders have requested that an evaluation be done (such as conflict or complaints from participants, budget requirements, the desire to replicate the program elsewhere, etc.), and part of my job as an evaluator would be to help answer the questions and address the needs of those stakeholders. Designing the evaluation should be a collaborative effort between the evaluator and stakeholders.

POSSIBLE MODELS/APPROACHES TO THE EVALUATION

The model that I would recommend to evaluate the ECS Program would depend on what question(s) stakeholders want answered through my evaluation. However, the following describes potential models for evaluating the ECS Program and the situations under which each model may be more appropriate to apply.

Summative Goals-Based Evaluation: The Goal is to Determine Whether or Not the Program is Working as Intended

A potential goal of the evaluation may be to determine whether or not the intended objectives and goals of the Individual Program Plans are actually being achieved. There could be numerous motivations for performing a summative goals-based evaluation of the ECS Program. A few examples include identifying areas of the program that should be retained, cut, or improved in the future; determining whether, and to what extent, the goals of the program are being reached; and assessing the costs associated with achieving those goals. If the purpose of the evaluation is to examine the extent to which the goals of the program are being achieved, the primary evaluation model that I would recommend would be a goals-based evaluation.

Outcomes-Based Evaluation: The Goal is to Determine if Participants are Benefiting from the Program

Perhaps for publicity reasons the stakeholders would like to examine the efficacy of the programs being applied to each participant. Or, the program may be working so well (or so poorly) that an assessment of participant outcomes is required to make decisions about the future implementation of the program. In these instances, an outcomes-based evaluation may be best suited; the evaluation would aim to assess the benefits and drawbacks of completing the program. In other words, an outcomes-based evaluation would help determine whether participants improve as a result of the program and whether those improvements outweigh the costs associated with it. Such an evaluation would help stakeholders decide whether to keep or eliminate the program (or certain aspects of it).

Process Evaluation: The Goal is to Replicate the Program Elsewhere or to Inform Stakeholders about the Costs and Benefits of the Program

Perhaps the program has been previously evaluated and has been shown to be quite successful in achieving its goals, and there is now a desire to replicate the program elsewhere. Maybe there have been complaints regarding the program, or changes to the program have been made over the years and the expenditure of resources needs to be assessed. In these instances, I would recommend a process evaluation, which provides a detailed description of the program’s process, or how the program actually works or runs. The results obtained from a process evaluation could facilitate the implementation of the program at different locations, reveal how resources (including staff, money, etc.) are being utilized, and identify potential inefficiencies in the program. If the process of the program is to be evaluated, a process evaluation would be best suited.

Formative Evaluation: The Program is Relatively New and the Goal is to Obtain Feedback regarding Program Improvement

It may be the case that the stakeholders want feedback regarding areas of the program to improve, or want to test-run the implementation of the program. This would be more likely to occur if the program is still in its infancy and being developed. The goal of a formative evaluation of the ECS Program would be to identify areas of the program that could be developed or improved, and to help guide the program towards that improvement.

An Eclectic Approach: There are Numerous Questions to be Answered

Depending upon the purpose of the evaluation and the questions that stakeholders want answered, one model may prove to be better suited than the others. However, it may also be that a more eclectic approach, or a mixture of the models, would be preferred. For example, the stakeholders may want an evaluation of participants’ outcomes (outcomes-based) as well as to test-run new materials (formative) while attempting to replicate the program at another location (process). Strictly following one model while disregarding the others would not permit answering all of these questions; rather, a mixture of the models and methods may be the best vehicle for arriving at the desired answers.

Depending on the questions that stakeholders want answered and the resources available to complete the evaluation, the direction and scope of the evaluation may change. My job as an evaluator of the ECS Program would be to first identify the purpose of the evaluation through consultation with stakeholders and to then decide on the most appropriate model or models to follow in order to provide the stakeholders with the information they need to make their decisions.

Friday, September 11, 2009

Evaluating an Evaluation

Quayhagen, M. P., Quayhagen, M., Corbeil, R. R., Hendrix, R. C., Jackson, J. E., Snyder, L., & Bower, D. (2000). Coping with dementia: Evaluation of four nonpharmacologic interventions. International Psychogeriatrics, 12(2), 249-265.

Quayhagen et al. (2000) performed an outcomes-based (summative) evaluation of four nonpharmacological interventions directed at individuals diagnosed with dementia and their caregiving spouses. Using an experimental design, Quayhagen et al. randomly assigned patients with specific forms of dementia to one of four intervention groups or a wait-list control group. Although valuable data were collected through the pre-test/post-test experimental design of the evaluation, numerous limitations were revealed.

The pre-test/post-test design of the evaluation enabled Quayhagen et al. (2000) to statistically examine the outcomes of each of the interventions by comparing post-test results both to individual baselines (pre-test) and to a no-intervention control group. Employing a pre-test helps to eliminate idiosyncratic differences in the results, while comparisons between the control group and the interventions provide greater statistical power to detect differences in outcomes due to the interventions as a whole. Because participants who completed the interventions had more education than the general population, and were primarily white (93%) adults with specific forms of dementia (e.g., 70% were diagnosed with Alzheimer’s dementia), the generalizability of the results is restricted; however, some effort to account for these differences was made. For example, participants were first randomly assigned to either the control or intervention group; next, those assigned to the intervention group were randomly assigned further to one of the four interventions being evaluated. Random assignment of participants facilitates the balance of pre-existing characteristics across all groups; in other words, random assignment in experimental designs grants a greater likelihood that groups of participants will be ‘equal’ than if a quasi-experimental design is used. Indeed, preliminary analyses indicated that the treatment groups did not differ on age or education, and ethnicities were distributed randomly among the groups. Further strengths of the evaluation include an adequately trained team using a variety of training methods (e.g., role playing, videotaping) and a pre- and post-test assessment team that was blind to the assigned treatments. Thorough training helps ensure that participants in the same group receive identical treatments even though different individuals may administer them. Using a blind assessment team helps to eliminate potential biases that assessors may have regarding specific treatments (e.g., one may believe that cognitive, rather than support-system, interventions are more beneficial, which could bias the results of the assessment).
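
To make the balancing effect of this two-stage random assignment concrete, here is a minimal sketch in Python. The data, group sizes, and covariate are entirely hypothetical (this is not Quayhagen et al.’s code or sample); it simply shows how randomly assigning dyads first to control versus intervention, and then to one of four treatments, tends to equalize a baseline characteristic such as education across groups.

```python
import random
import statistics

# Hypothetical illustration (simulated data, not Quayhagen et al.'s code or
# sample): two-stage random assignment of patient/caregiver dyads, followed
# by a covariate balance check like the preliminary analyses described above.

random.seed(42)

# Each dyad gets a simulated baseline covariate (years of education).
dyads = [{"id": i, "education": random.gauss(14, 2)} for i in range(100)]

# Stage 1: randomly split dyads into control vs. intervention.
random.shuffle(dyads)
control, intervention = dyads[:20], dyads[20:]

# Stage 2: randomly assign the intervention dyads to one of four treatments.
groups = {"control": control}
for k in range(4):
    groups[f"treatment_{k + 1}"] = intervention[k::4]

# Random assignment tends to balance pre-existing characteristics, so the
# group means on the covariate should come out similar.
for name, members in groups.items():
    mean_edu = statistics.mean(d["education"] for d in members)
    print(f"{name}: n={len(members)}, mean education = {mean_edu:.1f}")
```

Running the sketch shows group means clustering around the population mean, which is the kind of balance the preliminary analyses in the evaluation confirmed for age, education, and ethnicity.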

Various weaknesses of the evaluation are also apparent, indicating a need for future research in this area. For example, only 15 patient/caregiver dyads agreed to be wait-listed for treatment, potentially limiting the statistical power of analyses involving this group. Additionally, participants who agreed to be part of the wait-list control may have sought out other community- or medical-based interventions/treatments during the eight-week wait between their pre- and post-tests; no efforts were made to determine whether or not this occurred. Thus, the control group may not be a no-treatment control group but rather a group who engaged in other treatments outside of the four being evaluated. There was also a high attrition rate for one of the interventions, which limits the conclusions that can be drawn. For example, those who withdrew from the treatment may not have found it beneficial, while the select few who remained may have. Thus, the apparent efficacy of the treatment may be more positively biased than it would have been had those who withdrew completed the treatment and been included in the analyses. The eight-week treatment period may also not have been long enough to fully appreciate the benefits of the treatments; perhaps greater gains would have occurred had the interventions and evaluations been more longitudinal in nature. Lastly, although one of the goals of the evaluation was to determine the impact of the interventions on both the patient and caregiver, a number of caregivers did not return the final, qualitative questionnaire. This questionnaire was the primary indicator of the caregivers’ evaluation of the treatments; however, no results were reported regarding whether caregivers from specific treatments were more or less apt to return it. Thus, the reader is left wondering whether the caregivers who failed to return the questionnaire felt the assigned treatment was in fact beneficial.
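
As a rough illustration of the attrition problem described above, consider the following sketch (entirely simulated numbers, not data from the study): if the dyads who improve least are the most likely to withdraw before post-testing, then an analysis restricted to completers will overstate the treatment’s effect.

```python
import random
import statistics

# Simulated numbers only (not data from the study): demonstrate how
# attrition among the least-improved participants biases a
# completers-only analysis upward.

random.seed(0)

# True pre-to-post improvement scores for everyone who began the treatment.
enrolled = [random.gauss(2.0, 3.0) for _ in range(40)]

# Suppose dyads improving the least are the most likely to withdraw:
# those below a threshold remain in the study with only 30% probability.
completers = [x for x in enrolled if x > -1.0 or random.random() < 0.3]

print(f"mean improvement, all enrolled: {statistics.mean(enrolled):.2f}")
print(f"mean improvement, completers:   {statistics.mean(completers):.2f}")
# The completers-only mean overstates the treatment's true effect.
```

The completers-only mean comes out higher than the mean for everyone who enrolled, which is precisely why the high attrition in one of the interventions limits the conclusions that can be drawn about its efficacy.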