Tool 46 Designing The Evaluation

  • What is evaluation?

"Evaluation goes beyond an assessment of what has happened; it considers why something has occurred (…) and, if possible, how much has changed as a consequence. It should look at the wider perspective and provide an independent and objective judgement of the situation based on the evidence available.

Evaluation looks for evidence of causality – i.e. did the intervention (help) bring about the expected changes or were there other unintended or unexpected changes? Beyond listing outputs and describing changes, evaluations should investigate any links between the observed changes and the [policy measure]. Generally, evaluations should be carried out only after sufficient time has passed to allow for changes to be identified and/or measured.

An evaluation should also assess the strength of the evidence obtained, and the implications for the robustness of the conclusions reached. Although there are many useful activities which may cover some of the elements of an evaluation (e.g. reports, implementing reports, monitoring exercises, audits, and studies including cumulative cost assessments) it is unlikely that any of these sources will on their own address all of the necessary issues in order to qualify as an evaluation."

Source: European Commission's Better Regulation Toolbox, Chapter VI Evaluations and fitness checks, Tool #43 "What is an evaluation and when is it required?" (see p.3 of the pdf: https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-evaluation-fitness-checks.pdf )

"Evaluation is an objective process of understanding how a policy or other intervention was implemented, what effects it had, for whom, how and why.

Evaluations need to be tailored to the type of policy being considered, and the types of questions it is hoped to answer. The earlier an evaluation is considered in the policy development cycle, the more likely it will be that the most appropriate type of evaluation can be identified and adopted.

Good-quality evaluations generate reliable results which can be used and quoted with confidence. They enable policies to be improved, or can justify reinvestment or resource savings. They can show whether or not policies are delivering as planned and resources being effectively used.

Good-quality evaluations can play important roles in setting and delivering on government priorities and objectives, demonstrating accountability, and providing defensible evidence to independent scrutiny processes. They also contribute valuable knowledge to the policy evidence base, feeding into future policy development and occupying a crucial role in the policy cycle.

Not evaluating, or evaluating poorly, will mean that policy makers will not be able to provide meaningful evidence in support of any claims they might wish to make about a policy's effectiveness. Any such claims will be effectively unfounded."

Source: UK Magenta Book ("recommended central government guidance on evaluation that sets out best practice for departments to follow"), chapter 1 "Key issues in policy evaluation" (see p.11 of pdf: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/220542/magenta_book_combined.pdf )

For a general introduction to evaluation of energy efficiency policies, see also the first webinar of the series IEPPEC Evaluation Academy: http://www.leonardo-energy.org/resources/965/introduction-to-evaluation-and-the-role-of-ieppec-58738a5661e31

For more details and examples about why evaluating a policy measure, see [add link to the section "Why evaluate?" of the sub-gateway "Process of evaluation"]

  • What type of evaluation for what evaluation questions / objectives?

"Evaluations can be designed to answer a broad range of questions on topics such as how the policy was delivered, what difference it made, whether it could be improved and whether the benefits justified the costs.

Broadly, these questions can be answered by three main types of evaluation. Process evaluations assess whether a policy is being implemented as intended and what, in practice, is felt to be working more or less well, and why. Impact evaluations attempt to provide an objective test of what changes have occurred, and the extent to which these can be attributed to the policy. Economic evaluations, in simple terms, compare the benefits of the policy with its costs.

Understanding why an intervention operated in a certain way and had the effect it had generally involves combining the information and analytical approaches of the different types of evaluation and they should, therefore, be designed and planned at the same time.

The choice of evaluation approach should be based on a statement of the policy's underlying theory or logic and stated objectives – how the policy was supposed to have its effect on its various target outcomes. The more complex the underlying logic, the more important it will be to account for other factors which might affect the outcome.

Having a clear idea about the questions that need to be addressed and the required type(s) of evaluation at an early stage will help inform the design of the evaluation and the expertise required."

Source: UK Magenta Book ("recommended central government guidance on evaluation that sets out best practice for departments to follow"), chapter 2 "Identifying the right evaluation for the policy" (see p.21 of pdf: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/220542/magenta_book_combined.pdf )

In the evaluation literature, economic evaluations are also commonly referred to as Cost-Benefit Analysis (CBA).
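
To make this concrete, below is a minimal cost-benefit sketch in Python for a hypothetical energy efficiency programme. All figures (programme cost, value of energy savings, discount rate) are invented for illustration; none come from the sources quoted above.

    # Minimal cost-benefit sketch for a hypothetical energy efficiency
    # programme. All figures are illustrative assumptions.

    def present_value(cash_flows, discount_rate):
        """Discount a list of yearly cash flows (year 0 = first element)."""
        return sum(cf / (1 + discount_rate) ** year
                   for year, cf in enumerate(cash_flows))

    # Assumed programme: 1 M EUR spent up front, energy savings worth
    # 250 k EUR per year over the following 5 years.
    costs = [1_000_000, 0, 0, 0, 0, 0]
    benefits = [0, 250_000, 250_000, 250_000, 250_000, 250_000]
    rate = 0.04  # assumed social discount rate

    pv_costs = present_value(costs, rate)
    pv_benefits = present_value(benefits, rate)

    print(f"Net present value:  {pv_benefits - pv_costs:,.0f} EUR")
    print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")

A benefit-cost ratio above 1 (equivalently, a positive net present value) means the discounted benefits exceed the discounted costs; real cost-benefit analyses also have to decide whose costs and benefits count (participants, society, public budgets) and how to value non-energy benefits.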

About the concept of intervention logic, see the next point.

The EPATEE toolbox is primarily focused on impact evaluations. For more details about process and economic evaluations, see [add link to the section "Evaluation approaches complementary to impact evaluations"]

  • Key concept 1: the policy theory (or intervention logic)

"The starting point for an evaluation is to consider how the intervention [or policy measure] was expected to work. This requires identification of the different steps and actors involved in the intervention which in turn allows identification of the expected cause and effect relationships. There are many different ways to do this [see example in the figure below]. The intervention logic is an important tool for both communication and analysis. The following considerations are linked to the intervention logic:

  • Evaluations should try to establish to what extent the intervention is responsible for the observed changes.
  • Cause and effect relationships are challenging to prove, particularly when evaluating policies which operate in a complex environment influenced by a wide range of factors falling both within and outside the scope of the intervention.
  • When evaluating [a policy measure], it is particularly difficult to identify a robust counter-factual situation (i.e. what the situation would be if [the policy measure] had not been [implemented])."

Source: European Commission's Better Regulation Toolbox, Chapter VI Evaluations and fitness checks, Tool #43 "What is an evaluation and when is it required?" (see p.8 of the pdf: https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-evaluation-fitness-checks.pdf )

In the evaluation literature, the intervention logic is also commonly referred to as the policy theory.

Likewise, the counterfactual situation is also commonly referred to as the baseline.
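
As a hypothetical numerical illustration of why the counterfactual (baseline) matters: the net impact of a policy is the difference between the observed situation and the counterfactual situation, not the simple before/after change. All figures below are invented.

    # Hypothetical illustration of: net impact = observed situation compared
    # against the counterfactual (baseline), not just "before vs after".

    consumption_before = 120.0  # GWh/year before the policy (invented)
    consumption_after = 100.0   # GWh/year after the policy (invented)

    # Counterfactual: consumption is estimated to have fallen anyway
    # (autonomous efficiency gains, energy prices, weather...), e.g. to 112.
    counterfactual_after = 112.0  # GWh/year without the policy (invented)

    gross_change = consumption_before - consumption_after          # 20 GWh/year
    autonomous_change = consumption_before - counterfactual_after  # 8 GWh/year
    net_impact = counterfactual_after - consumption_after          # 12 GWh/year

    print(f"Observed (gross) change: {gross_change} GWh/year")
    print(f"Expected change without the policy: {autonomous_change} GWh/year")
    print(f"Net impact attributable to the policy: {net_impact} GWh/year")

A naive before/after comparison would credit the policy with the full 20 GWh/year, overstating its net impact by two thirds in this invented example.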

The intervention logic (or policy theory) is also commonly linked to several levels of objectives, as represented in the figure below.
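
As a rough sketch of such a chain (the ordering follows the common inputs → activities → outputs → outcomes → impacts convention; the example content, for a hypothetical renovation subsidy scheme, is invented):

    # Schematic sketch of an intervention logic (policy theory) for a
    # hypothetical subsidy scheme for building renovation. Each step is an
    # expected cause-and-effect link; the content is invented for illustration.

    intervention_logic = [
        ("inputs",     "public budget, administrative staff"),
        ("activities", "grant applications processed, subsidies paid out"),
        ("outputs",    "number of dwellings renovated"),
        ("outcomes",   "energy savings achieved by participants"),
        ("impacts",    "lower national energy consumption and CO2 emissions"),
    ]

    for step, example in intervention_logic:
        print(f"{step:>10}: {example}")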

For more details about the policy theory and theory-based evaluations (or process evaluations), see [add link to the section "Evaluation approaches complementary to impact evaluations"]

  • Key concept 2: evaluation criteria

Evaluation criteria can be used to specify the focus or scope of an evaluation. The European Commission's Better Regulation Toolbox highlights four common evaluation criteria (plus one criterion specific to EU policies: the added value of EU policies compared to national policies):

  • Effectiveness: "Effectiveness analysis considers how successful [a policy measure] has been in achieving or progressing towards its objectives."
  • Efficiency: "Efficiency considers the relationship between the resources used by an intervention and the changes generated by the intervention (which may be positive or negative)."
  • Relevance: "Relevance looks at the relationship between the needs and problems in society and the objectives of the intervention and hence touches on aspects of design."
  • Coherence: "The evaluation of coherence involves looking at how well (or not) different [policy measures] work together. It may highlight areas where there are synergies which improve overall performance (…); or it may point to tensions e.g. objectives which are potentially contradictory, or approaches which are causing inefficiencies."

Source: European Commission's Better Regulation Toolbox, Chapter VI Evaluations and fitness checks, Tool #47 "Evaluation criteria and questions" (for more details about each criterion and the corresponding usual evaluation questions, see: https://ec.europa.eu/info/sites/info/files/file_import/better-regulation-toolbox-47_en_0.pdf )
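
As a hypothetical numerical illustration of the first two criteria (all figures invented): effectiveness can be read as the share of the objective achieved, and efficiency as the changes generated per unit of resources used.

    # Hypothetical illustration of the effectiveness and efficiency criteria
    # for an energy efficiency policy. All figures are invented.

    target_savings = 500.0    # GWh/year the policy aimed to save
    achieved_savings = 400.0  # GWh/year actually attributed to the policy
    budget = 80_000_000       # EUR of public resources used

    effectiveness = achieved_savings / target_savings  # share of objective met
    efficiency = achieved_savings / (budget / 1e6)     # GWh/year per M EUR

    print(f"Effectiveness: {effectiveness:.0%} of the savings target achieved")
    print(f"Efficiency: {efficiency:.1f} GWh/year saved per million EUR")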

Other evaluation criteria can also be used (e.g. viability, utility), as represented in the figure below linking intervention logic, objectives and evaluation criteria.

  • Examples of general guidelines for evaluation

European Commission's guidelines for the evaluation of EU policies, programmes and legislation (chapter VI of the Better Regulation toolbox): https://ec.europa.eu/info/better-regulation-toolbox_en

UK Magenta Book ("recommended central government guidance on evaluation that sets out best practice for departments to follow"): https://www.gov.uk/government/publications/the-magenta-book

French Evaluation Society's charter (in French): http://www.sfe-asso.fr/evaluation/charte-evaluation

German Evaluation Society's guidelines (in German): https://www.degeval.org/images/stories/Publikationen/DeGEval_-_Standards.pdf

For other examples of guidelines or standards, see European Evaluation Society's website: http://europeanevaluation.org/resources/evaluation-standards/national-and-regional-evaluation-societies

Source: https://www.epatee-toolbox.eu/evaluation-principles-and-methods/general-principles/about-evaluation/