Evaluability assessment

From Wikipedia, the free encyclopedia

An evaluability assessment (EA) is a qualitative investigation employed before a programme is evaluated.

Description

Evaluability assessments (EAs) provide information on whether a programme can be evaluated.[1] They are also used to describe the objectives, logic and activities of the programme, with the aim of investigating its credibility, feasibility, sustainability and acceptability.[2] EAs address the likelihood of the programme achieving its anticipated outcomes, the changes required for optimal management, whether an evaluation can improve the programme's performance, and the interests of stakeholders in the evaluation and how its findings will be used.[3] According to Jung and Schubert (1983), an EA needs to address the following aspects[2] of a programme:

  • Obtaining the perspectives of all involved in the programme, e.g. programme staff and stakeholders
  • Investigating the programme's objectives and the methods to measure whether they were achieved
  • Providing information on how the programme is actually functioning
  • Identifying ways to improve the programme
  • Relaying important information about the programme

Procedure

Evaluability assessors form a workgroup[3] comprising an evaluation representative and a managerial member of the programme. The workgroup investigates official programme records and documentation. In addition, interviews and observations are conducted to understand and describe how the programme is perceived by the programme staff and relevant stakeholders.[1] The workgroup is supervised by a policy group, which oversees managerial decisions concerning the impending evaluation of the programme.[3] Jung and Schubert (1983) identify six areas which ought to be addressed in an EA:[2]

  • Identifying the programme's objectives
  • Identifying the intended activities to achieve the programme objectives
  • Identifying incongruities between the programme's objectives and its intended activities
  • Investigating field operations
  • Comparing actual field operations to the programme's intended activities
  • Providing management and evaluation options

Potential limitations

An effective EA requires that workgroups and policy groups collaboratively engage with programme staff and stakeholders. This can be problematic when differences or tensions arise between programme staff and stakeholders, posing a challenge to the EA.[2] Thurston, Graham, and Hatfield (2003) state that EAs are beneficial because they facilitate programme revisions. It is often found that after an EA, programmes require some modification, as there may be inconsistencies in the logic model or issues concerning implementation.[4]

A tension may arise when a poorly designed programme produces positive unintended outcomes. How do the evaluability assessors, programme staff and stakeholders manage this difficulty? Is it permissible to allow the problematic programme to continue operating because of the benefits derived from it, or should the programme be put on hold and reformulated, ending those positive unintended outcomes?[1]

Conclusion

EAs are complex and require cooperation from the relevant individuals involved in a programme. They are subjective assessments of how credible a programme is. Despite their limitations, EAs are valuable tools for promptly addressing issues that arise in a programme. If an EA is efficiently conducted on a poorly designed programme, it has the potential to save programme staff and stakeholders time and funding that would otherwise be wasted if the programme were to continue operating unaltered.

References

  1. ^ a b c Rossi, P., Lipsey, M. W., & Freeman, H. E. (2004). Expressing and assessing program theory. In P. Rossi, M. W. Lipsey, & H. E. Freeman, Evaluation. A systematic approach (pp. 133-168). Thousand Oaks, CA: Sage.
  2. ^ a b c d e Jung, S. M., & Schubert, J. G. (1983). Evaluability assessment: a two-year retrospective. Educational Evaluation and Policy Analysis, 5(4), 435-444.
  3. ^ a b c Strosberg, M. A., & Wholey, J. S. (1983). Evaluability assessment: from theory to practice in the department of health and human services. Public Administration Review, 43(1), 66-71.
  4. ^ Thurston, W. E., Graham, J., & Hatfield, J. (2003). Evaluability assessment: a catalyst for program change and improvement. Evaluation & The Health Professions, 26(2), 206-221.