
4. Manage the implementation of the evaluation


4.1. Collect the data

The manager of an internal or external evaluation team will need to ensure that data collection follows the methodology and timeframes set out in section 5 of the work plan (see section 2.5. Evaluation methodology). It is the responsibility of the manager to respond to any issues that arise and to communicate early and often with the team.[1]

The evaluation team should have expertise in data collection and be able to provide advice on issues including data quality. It is important to ensure data quality checking is undertaken as the data is collected, such as checking datasets for completeness, spot-checking survey responses and running random checks on online systems.[2]
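Routine checks of this kind can be automated. The following is a minimal sketch, assuming survey responses have been exported to a CSV file; the file name and the use of the pandas library are illustrative assumptions, not part of the toolkit:

```python
# Minimal data quality checks on a hypothetical export of survey responses.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Completeness: count missing values in each column.
print("Missing values per column:")
print(df.isna().sum())

# Consistency: flag exact duplicate records.
print("Duplicate rows:", df.duplicated().sum())

# Spot check: draw a small random sample of responses for manual review.
print(df.sample(n=10, random_state=1))
```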

Collecting baseline data will allow for meaningful comparisons. Ideally, baseline data will be collected prior to program implementation. For programs that have already commenced, baseline data will need to be collected retrospectively. Where baseline data is not available, it may be appropriate to benchmark against best practice research or similar existing programs.
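As a simple illustration of why a baseline matters, the sketch below compares a hypothetical indicator before and after program implementation; the indicator and figures are invented for the example only:

```python
# Hypothetical pre- and post-program measurements of a single indicator.
baseline = [42, 38, 45, 40, 44]
follow_up = [48, 47, 50, 46, 49]

baseline_mean = sum(baseline) / len(baseline)
follow_up_mean = sum(follow_up) / len(follow_up)

# Change expressed relative to the baseline.
change_pct = (follow_up_mean - baseline_mean) / baseline_mean * 100
print(f"Baseline mean: {baseline_mean:.1f}")
print(f"Follow-up mean: {follow_up_mean:.1f}")
print(f"Change from baseline: {change_pct:.1f}%")
```

Without the baseline figures, the follow-up measurements alone could not show whether anything had changed.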


[1] BetterEvaluation: Manager's guide to evaluation – 7. Manage implementation of the evaluation

[2] NSW Department of Premier & Cabinet: Evaluation toolkit – 6. Manage implementation of the workplan, including production of report(s)

4.2. Analyse the data

Data analysis is a crucial step in the evaluation process. It involves sorting the data and looking for patterns to create insights. Quality analyses use the methods most appropriate for the purpose and present the data in a meaningful way. Further information is available on the BetterEvaluation website.
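A common starting point is to group and summarise the data to surface patterns. The sketch below uses pandas on an invented participant dataset; the column names and figures are illustrative assumptions only:

```python
# Grouping and summarising a hypothetical participant dataset to look for patterns.
import pandas as pd

data = pd.DataFrame({
    "region":    ["North", "North", "South", "South", "East", "East"],
    "completed": [1, 0, 1, 1, 0, 1],
    "score":     [72, 55, 81, 78, 60, 69],
})

# Summarise outcomes by region and sort to highlight differences between groups.
summary = data.groupby("region").agg(
    completion_rate=("completed", "mean"),
    average_score=("score", "mean"),
).sort_values("average_score", ascending=False)

print(summary)
```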

Analysing both qualitative and quantitative data strengthens the evaluation by balancing the limitations of each data type. It is important to combine the two so that the qualitative data complements and helps explain the quantitative data.

Data analysis should look for evidence of causation, rather than mere correlation, between the program and its impacts. There are various tools and approaches for checking causal attribution, including contribution analysis, which offers managers and evaluators a step-by-step approach to drawing conclusions about whether the program has contributed to particular outcomes. In addition, impact evaluations should rule out possible alternative explanations. Refer to section 2.5.2: Evaluation questions for further guidance on answering evaluation questions.

Box 2: Data visualisation

Data visualisation is the process of representing data in a way that is clear and easy to understand. The most appropriate type of graph or visualisation will depend on the nature of the variables, for example, relational, comparative or time-based (a simple charting sketch follows this box). For further information on data visualisation options, see Visualise data on the BetterEvaluation website.

It is the responsibility of the evaluation manager to ensure data is presented in a user-friendly way. It can be useful to ask the evaluation team to present preliminary findings to a broader team to highlight any inconsistencies or errors.[1]
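The sketch below illustrates matching chart type to variable type, as described in Box 2. It uses matplotlib, and the indicators, figures and labels are invented for the example:

```python
# Matching chart type to variable type, using hypothetical program figures.
import matplotlib.pyplot as plt

years = [2017, 2018, 2019, 2020]
participants = [120, 150, 180, 210]            # time-based variable
regions = ["North", "South", "East", "West"]
completions = [45, 60, 30, 75]                 # comparative variable

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Time-based data: a line chart shows change over time.
ax1.plot(years, participants, marker="o")
ax1.set_xticks(years)
ax1.set_title("Participants over time")
ax1.set_xlabel("Year")
ax1.set_ylabel("Participants")

# Comparative data: a bar chart compares categories.
ax2.bar(regions, completions)
ax2.set_title("Completions by region")
ax2.set_xlabel("Region")
ax2.set_ylabel("Completions")

plt.tight_layout()
plt.show()
```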

4.2.1. Evidence synthesis

The evaluator should present an overall conclusion by synthesising the data and placing a value judgement on the results. There are various approaches for summarising evidence, as outlined in Table 19; a worked sketch of the numeric weighting technique follows the table.

Table 19: Common techniques in evidence synthesis[2]
Technique | Description
Cost benefit analysis | Compares costs to benefits in monetary units.
Cost effectiveness analysis | Compares costs to outcomes in terms of a standardised unit (for example, additional years of schooling, years of life saved).
Cost utility analysis | A particular type of cost effectiveness analysis that expresses benefits in terms of a standard unit such as Quality Adjusted Life Years.
Multi-criteria analysis | A systematic process which considers monetary impacts, material costs, time savings and project sustainability, as well as the social and environmental impacts that may be quantified but not so easily valued.
Numeric weighting | Numeric scales used to rate performance against each evaluation criterion, resulting in a total score.
Qualitative weight and sum | Qualitative ratings (such as symbols) used to identify performance in terms of essential, important and unimportant criteria.
Rubrics | A descriptive scale for rating performance that incorporates performance across a number of criteria (can be qualitative and quantitative).
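As a worked illustration of the numeric weighting technique in Table 19, the sketch below rates performance against a set of criteria, weights each criterion and sums the results; the criteria, ratings and weights are purely illustrative:

```python
# Numeric weighting: weighted ratings against evaluation criteria summed to a total score.
criteria = {
    # criterion: (rating out of 5, weight)
    "Effectiveness":   (4, 0.40),
    "Efficiency":      (3, 0.25),
    "Appropriateness": (5, 0.20),
    "Sustainability":  (2, 0.15),  # weights sum to 1.0
}

total = sum(rating * weight for rating, weight in criteria.values())
print(f"Weighted total score: {total:.2f} out of 5")  # 3.65 with these figures
```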

Evaluation findings often involve some form of extrapolation or generalisation of the data. This may involve making generalisations about the future, for example, whether a program found to be working well on current data is likely to remain successful.

Other evaluations, such as those of pilot programs, require recommendations about scaling up to a wider population or scope. The findings need to clearly outline the situation to which the results will be generalised.


[1] NSW Department of Premier & Cabinet: Evaluation toolkit – 6. Manage implementation of the workplan, including production of report(s)

[2] Adapted from BetterEvaluation: Methods and processes – Synthesise | data from one or more evaluations: Synthesise data from a single evaluation

