Evaluating strategically

Monitoring and evaluation require the commitment of resources. If an evaluation does not provide decision-makers with meaningful information, it reduces the resources available for program implementation.[1] It is therefore necessary to balance the cost of evaluation against the risk of not evaluating, noting that sometimes monitoring alone will be sufficient. While outcome and impact evaluations are important, well-designed data collection, program monitoring and process evaluations can help refine programs over time at minimal cost.[2]

Agencies and program managers will need to take a strategic approach to determining appropriate evaluation scopes, designs and resourcing requirements. For some programs, evaluation could simply involve routine assessment of activities and outputs built into program reporting, while for others it will need to be more comprehensive and assess whether the program is appropriate, effective and efficient.[3]

Although it is not feasible, cost-effective or appropriate to fully evaluate all Territory Government programs, some level of monitoring and review should be considered for all programs.

For whole of government programs, or programs with multiple components, it may be necessary to evaluate components separately as well as collectively, considering questions such as:

  • which program initiatives are providing the greatest impact
  • which elements of program delivery are most effective in generating desired outcomes
  • is greater impact achieved when specific strategies are combined into a package of initiatives
  • in what contexts are mechanisms of change triggered to achieve desired outcomes?[3]

Evaluations should aim to achieve the highest rigour for the lowest cost by:

  • incorporating evaluation planning at the initial program design stage
  • collecting the required data for monitoring and evaluation throughout program implementation and aligning this to existing data collections where possible
  • using a tiered approach that prioritises evaluative effort (see Prioritising evaluations for the rolling schedule for further information).

Developing the annual program master list

To balance evaluative effort against the potential benefit, agencies need to review their existing stock of programs and prioritise evaluations.

A good practice starting point is a program master list, which is designed to capture all current Territory Government-funded programs to help prioritise evaluations. It identifies the extent to which existing programs have been evaluated and the proposed timing of any future evaluation.[2, 4] Agencies are asked to complete the Program master list template annually as part of the Budget development process. Integrating the program master list into the Budget development process ensures new Cabinet submissions are considered within the context of existing government programs and the available evidence base from completed evaluations.
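
The framework does not prescribe a file format for the master list, but its contents map naturally onto a simple tabular record. The Python sketch below is illustrative only: the field names and the example program are hypothetical, and agencies should follow the fields set out in the Program master list template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MasterListEntry:
    """One row of a hypothetical program master list (illustrative fields only)."""
    program_name: str
    agency: str
    tier: int                               # 1-4, per Table 1
    linked_government_priority: str
    previously_evaluated: bool
    last_evaluation_type: Optional[str]     # e.g. "Process", "Outcomes", "Impact"
    proposed_evaluation_year: Optional[int]

# Example entry for a hypothetical program.
entry = MasterListEntry(
    program_name="Example Early Intervention Program",
    agency="Example Agency",
    tier=3,
    linked_government_priority="Example priority area",
    previously_evaluated=False,
    last_evaluation_type=None,
    proposed_evaluation_year=2022,
)
```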

In 2016, the New South Wales Auditor-General undertook a performance audit of the NSW Government's program evaluation initiative. The audit set out the good practice model expected of each agency: preparing an evaluation schedule that includes a master list of all current agency programs, with each program's tier ranking and linkage to government priorities. NSW Auditor-General's Report to Parliament, Implementation of the NSW Government's program evaluation initiative, 2016, accessed October 2020.

Agency activities that are not captured as part of the program master list will still be scrutinised as part of broader organisational reviews.

Prioritising evaluations for the rolling schedule

To help manage and prioritise evaluations, agencies are required to prepare multi-year rolling evaluation schedules that are reviewed annually by the Budget Review Subcommittee of Cabinet. In addition to evaluating new programs in accordance with the approved evaluation overview, the schedules will be expected to include a list of existing programs planned for evaluation, including the tier and expected evaluation timeframe.

Evaluating existing programs can be complex and expensive, particularly where the data required to answer basic evaluation questions has not been collected. The section ‘Getting existing programs ready for evaluation’ has further guidance.

The evaluation schedule for each agency should be aligned to agency corporate planning cycles and internal decision-making processes and should be developed in consultation with DTF and CMC.

A whole of government evaluation schedule will be compiled by DTF and submitted to the Budget Review Subcommittee of Cabinet for approval along with an annual summary of evaluation findings for the previous year. An example of an evaluation schedule is available from the Commonwealth Department of Industry, Innovation and Science.

Table 1 provides a guide to prioritising programs for the rolling schedule of evaluation. A best-fit approach should be used to categorise programs (that is, a program does not need to satisfy every characteristic to fall into a particular tier). A short illustrative sketch of this best-fit approach follows the table.

Table 1. A guide to program tiers, evaluation types and timing[5]

Tier 4
  • Priority: strategic priority for government
  • Program accountability: Cabinet or Cabinet subcommittee
  • Funding: significant government/agency funding
  • Risk: high risk (either to government or the community)
  • Scope: multiple government agencies and/or multiple external delivery partners
  • Other factors: lack of evidence base, major external reporting requirements (for example, Commonwealth), innovative approach
  • Evaluation type and timing: Process (1 year), Outcomes (2 years), Impact (3–5 years)

Tier 3
  • Priority: strategic priority for agency
  • Program accountability: portfolio Minister(s)
  • Funding: significant agency funding
  • Risk: moderate to high risk
  • Scope: multiple government agencies and/or external delivery partners
  • Other factors: lack of evidence base, internal reporting and evaluation requirement
  • Evaluation type and timing: Process (1 year), Outcomes (2 years), Impact or outcomes (3–5 years)

Tier 2
  • Priority: named in department or agency strategic plan
  • Program accountability: agency chief executive
  • Funding: moderate agency funding
  • Risk: low to moderate
  • Scope: responsibility of single agency, may involve external delivery partners
  • Other factors: limited evidence base, internal reporting and evaluation requirement
  • Evaluation type and timing: Process (1 year), Outcomes (3–5 years)

Tier 1
  • Priority: low or emerging strategic priority for agency
  • Program accountability: business unit within agency
  • Funding: limited agency funding
  • Risk: low
  • Scope: single agency, may involve external delivery partners
  • Other factors: local delivery similar to other successful programs
  • Evaluation type and timing: Process (1 year), Process (3–5 years)
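
As a rough illustration of the best-fit approach, the sketch below reduces the Table 1 characteristics to boolean flags and assigns the tier with the most matches, breaking ties towards the higher tier. All flag names are hypothetical simplifications; in practice, tiering is a qualitative judgement against the full table.

```python
# Hypothetical flags reducing the Table 1 characteristics to booleans.
TIER_CRITERIA = {
    4: {"government_strategic_priority", "cabinet_accountability",
        "significant_funding", "high_risk", "multi_agency_scope"},
    3: {"agency_strategic_priority", "ministerial_accountability",
        "significant_funding", "moderate_to_high_risk", "multi_agency_scope"},
    2: {"named_in_strategic_plan", "chief_executive_accountability",
        "moderate_funding", "low_to_moderate_risk", "single_agency_scope"},
    1: {"emerging_priority", "business_unit_accountability",
        "limited_funding", "low_risk", "single_agency_scope"},
}

def best_fit_tier(program_flags: set) -> int:
    """Return the tier whose characteristics the program matches most closely.

    A program does not need to satisfy every characteristic of a tier;
    ties are broken towards the higher tier so borderline programs
    receive more, not less, evaluative attention.
    """
    scores = {tier: len(program_flags & criteria)
              for tier, criteria in TIER_CRITERIA.items()}
    return max(scores, key=lambda tier: (scores[tier], tier))

# Hypothetical program: an agency priority with significant funding,
# delivered across multiple agencies.
flags = {"agency_strategic_priority", "significant_funding", "multi_agency_scope"}
print(best_fit_tier(flags))  # -> 3
```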

The appropriate evaluation types and timing will need to be determined on a case-by-case basis to ensure the overall evaluation approach is fit-for-purpose.

When prioritising evaluations, agencies should give priority to:

  • tier 3 and tier 4 programs (as per the program tiering in Table 1)
  • programs that have not previously been evaluated
  • programs for which evaluation is required by Cabinet (for example, in line with an evaluation overview approved by Cabinet).

Tier 3 and tier 4 programs should be prioritised for evaluation and would usually be expected to go through process, outcome and/or impact evaluations over the program lifecycle.

The prioritisation of tier 1 and tier 2 programs is at the discretion of agencies but should be influenced by how they fit into higher tier programs (if applicable). In particular, agencies should consider evaluating small programs if they will be used to inform decisions about whether to roll out the program to a wider area and/or client group (such as a pilot or trial), or will be used as evidence of another program's effectiveness.[3]
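
One way to operationalise this ordering when drafting a rolling schedule is a simple sort key, sketched below using the same hypothetical fields as the earlier master list example (with an added cabinet_required flag). This is one reasonable reading of the guidance, not a prescribed algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A program being considered for the rolling schedule (hypothetical fields)."""
    name: str
    tier: int                   # 1-4, per Table 1
    previously_evaluated: bool
    cabinet_required: bool      # evaluation required by Cabinet

def schedule_priority(p: Candidate) -> tuple:
    """Sort key: Cabinet-required evaluations first, then higher tiers,
    then programs that have never been evaluated."""
    return (not p.cabinet_required, -p.tier, p.previously_evaluated)

candidates = [
    Candidate("Program A", tier=2, previously_evaluated=False, cabinet_required=False),
    Candidate("Program B", tier=4, previously_evaluated=True, cabinet_required=False),
    Candidate("Program C", tier=3, previously_evaluated=False, cabinet_required=True),
]
for p in sorted(candidates, key=schedule_priority):
    print(p.name)  # Program C, then Program B, then Program A
```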

Getting existing programs ready for evaluation

The Program evaluation framework emphasises the importance of planning for evaluation and data capture at the program design stage. However, existing programs without an evaluation work plan should also be periodically reviewed because:

  • the bulk of government spending relates to legacy programs
  • the nature and outcomes of these programs may have evolved or drifted away from their initial rationale or purpose over time
  • legacy programs have the potential to become embedded or institutionalised by the participants or community in ways that may have significantly affected their outcomes.

Evaluating programs that were not designed with evaluation in mind can be complex and expensive.[6] Completing an evaluation work plan (see section 2. Complete the evaluation work plan) can help get programs 'evaluation ready'. An important first step is clarifying what the program aims to achieve and how it tries to achieve this by developing a program logic (further information in section 2.5.1. Program logic).[7]

Developing a program logic for an existing program can be an uncomfortable process. Stakeholders may disagree about how a program works or even what it is aiming to achieve, and the program logic may reveal that the program is not well formulated or rests on dubious assumptions. To genuinely support learning and improvement, developing a program logic should not be an exercise in rationalising past program decisions; it should be an opportunity to question, debate and learn. This process can help agencies identify unnecessary activities and make space for more important ones.[7]

The program stage will also have implications for the evaluation design; see Table 2, adapted from the BetterEvaluation website.[8]

Table 2. Evaluation design at different program stages

Stage: Not yet started
  • Consequence: Can set up data collection from the beginning of implementation. Implication: Possible to gather baseline data as a point of comparison and to establish comparison or control groups from the beginning; opportunity to build some data collection into administrative systems to reduce costs and increase coverage.
  • Consequence: Period of data collection will be long. Implication: Need to develop robust data collection systems, including quality control and storage.

Stage: Part way through implementation
  • Consequence: Cannot get baseline data unless this has already been set up. Implication: Will need to construct retrospective baseline data to estimate changes that have occurred.
  • Consequence: Might be able to identify 'bright spots' where there seems to be more success, and cases with less success. Implication: Scope to do purposeful sampling and learn from particular successes as well as cases that have failed to make much progress.

Stage: Almost completed
  • Consequence: Cannot get baseline data unless this has already been set up. Implication: Will need to construct retrospective baseline data to estimate changes that have occurred.
  • Consequence: Depending on timeframes, some outcomes and impacts might already be evidenced. Implication: Opportunity to gather evidence of outcomes and impacts.

Stage: Completed
  • Consequence: Cannot get baseline data unless this has already been set up. Implication: Will need to construct retrospective baseline data to estimate changes that have occurred.
  • Consequence: Depending on timeframes, some outcomes and impacts might already be evidenced. Implication: Opportunity to gather evidence of outcomes and impacts.
  • Consequence: Cannot directly observe implementation. Implication: Will depend on existing data or retrospective recollections about implementation.

[1] M. K. Gugerty, D. Karlan, The Goldilocks Challenge: Right Fit Evidence for the Social Sector, New York, Oxford University Press, 2018.

[2] APS Review, Evaluation and learning from failure and success.

[3] Queensland Government Program Evaluation Guidelines.

[4] 2016 NSW Auditor-General’s Report to Parliament, Implementation of the NSW Government’s program evaluation initiative.

[5] Adapted from the DIIS Evaluation Strategy 2017–2021 and the NSW and WA guidelines.

[6] C. Althaus, P. Bridgman and G. Davis, The Australian Policy Handbook: A practical guide to the policy making process, 6th edition, Sydney, Allen and Unwin, 2018.

[7] M. K. Gugerty, D. Karlan, The Goldilocks Challenge: Right Fit Evidence for the Social Sector, New York, Oxford University Press, 2018

[8] BetterEvaluation

Last updated: 19 January 2021
