A whole of government approach to evaluation

The Territory Government is focused on improving evidence-based decision-making as part of A plan for budget repair. This will be supported by a whole of government approach to evaluation in the Territory to help drive a culture of continuous improvement across government.

Program evaluation aims to improve government services to achieve better outcomes for Territorians. This is important because programs do not always work as intended in practice and may even cause unintended harm. Program managers need to know whether their programs are helping people and whether changes could help more people within the same budget. Without evaluation, there is a risk that poorly performing programs continue unchanged, slowing progress towards desired outcomes and potentially wasting taxpayers’ money.

Example

Scared Straight and similar programs involve organised visits to prison by juvenile delinquents or children at risk of criminal behaviour. The programs are designed to deter participants from future offending through firsthand observation of prison life and interaction with adult inmates. A recent Cochrane review found that these programs fail to deter crime and actually lead to more offending behaviour. See Petrosino et al., 2013, ‘Scared Straight and other juvenile awareness programs for preventing juvenile delinquency’, accessed October 2020.

When an evaluation shows a program is not working well, managers can use the evaluation findings to improve the program by either modifying the existing program or taking a new approach. Each evaluation is an opportunity to learn by either demonstrating what works well or what does not. Over time, evaluations build an evidence base of what works in the Territory and foster a culture of continuous improvement.

Central oversight is critical to developing a strategic whole of government approach to evaluation and strengthening evaluation culture.[1] A centralised approach to program evaluation supports:

  • a consistent standard of evaluation across agencies
  • an ability to identify systemic issues across government
  • capacity to set strategic priorities for and identify gaps in evaluation
  • accountability for multi-agency and whole of government programs
  • coordinated capability building, resourcing, data collection, reporting and evaluative effort
  • a centralised repository of evaluations to enhance continuous learning and quality improvement.

Under the Territory’s approach, evaluation activity will continue to be undertaken primarily by the agency delivering the program (this may include using external experts commissioned by the agency). This is necessary to maintain a close link between the evaluation and the program area with relevant subject matter knowledge and experience.

Evaluation activity will be overseen, coordinated and supported by the Program Evaluation Unit (PEU) within the Department of Treasury and Finance (DTF), with support from the Department of the Chief Minister and Cabinet (CMC), the Office of the Commissioner for Public Employment (OCPE) and the Department of Corporate and Digital Development (DCDD).

Program evaluation framework

The Program evaluation framework integrates evaluation into the government’s policy and budget development processes. The framework aims to improve transparency and accountability, and encourage better use of Territory Government funds by:

  • ensuring new programs and extensions to existing programs have identified goals and objectives that are achievable and measurable, or include actions to develop measurement as part of the program
  • ensuring new programs and extensions to existing programs have an evaluation strategy
  • applying sunset provisions to new programs (or extensions to existing programs), where the decision for further funding is informed by evaluation outcomes
  • establishing a rolling schedule of evaluations to ensure existing programs are evaluated over time
  • providing a clear mandate for agencies to evaluate their programs and target their investments
  • outlining expected evaluation principles and standards
  • providing government with clear advice about the costs and benefits of evaluation (including data collection and analysis) to help inform evaluation decisions
  • establishing a protocol for policy and program officers to plan for evaluation across the program lifecycle (with a step-by-step guide in the program evaluation toolkit)
  • establishing a tiered system of evaluations to ensure evaluation is proportionate to the cost, risk and complexity of a program
  • describing how the Territory Government can build evaluation capability within the Northern Territory Public Sector and foster a culture of continuous improvement
  • outlining how the Territory Government will measure progress in implementing the framework.

Territory Government agencies must use the framework and toolkit to help plan, commission and use evaluations. The framework and toolkit may also provide useful guidance for Territory Government service delivery partners and external evaluators of Territory Government programs.

The Program evaluation framework is underpinned by 10 best practice evaluation principles:[2]

  1. Build evaluation into program design – plan the evaluation as part of program design to ensure clearly defined objectives and measurable outcomes prior to commencement.
  2. Base the evaluation on sound methodology – adopt a best practice evaluation methodology that is commensurate with the program’s size, significance and risk.
  3. Allocate resources and time to evaluate – include provision for the required evaluation resources and timeframes when planning and budgeting for a program. Ensure evaluation findings are available when needed to support key decision points.
  4. Use the right mix of expertise and independence – use evaluators who are experienced and independent from program managers, but include program managers in evaluation planning.
  5. Ensure robust governance and oversight – establish governance processes to ensure programs are designed and evaluated in accordance with this framework, including meeting reporting requirements.
  6. Be ethical in design and conduct – carefully consider the ethical implications of any evaluation activity, particularly collecting and using personal data, and any potential impacts on vulnerable groups.[3]
  7. Be informed and guided by relevant stakeholders – listen to stakeholders, including program participants, government and non-government staff involved in managing and delivering the program, and senior decision makers.
  8. Consider and use evaluation data meaningfully – include clear statements of findings, recommendations or key messages for consideration in evaluation reports. Use reports to inform decisions about program changes.
  9. Be transparent and open to scrutiny – disseminate key information to relevant stakeholders, including methodologies, assumptions, analyses and findings.
  10. Promote equity and inclusivity – harness the perspectives of vulnerable groups during evaluations to enable fair and socially just outcomes.

Treasurer’s Direction on Performance and Accountability

Treasurer’s Directions are mandatory requirements that specify the practices and procedures that must be observed by Accountable Officers in the financial management of their agencies (Financial Management Act 1995). DTF is currently developing a Performance and Accountability Treasurer’s Direction that will set out the minimum requirements for all Territory Government agencies for:

  • planning objectives and actions
  • managing or delivering services
  • performance reporting
  • reviewing and evaluating outcomes.

Guidance on performance and accountability will be provided to agencies to assist them in complying with the requirements of the Performance and Accountability Treasurer’s Direction.

Program evaluation, organisational reviews and audits – what is the difference?

For the purposes of the Program evaluation framework:

Evaluation is:

“A systematic and objective process to make judgements about the merit or worth of one or more programs, usually in relation to their effectiveness, efficiency and appropriateness.”[4] This definition applies more to outcomes and impact evaluations than to process evaluations, which tend to focus more on monitoring.

Monitoring is:

“A management process to periodically report against planned targets or key performance indicators that, for the most part, is not concerned with questions about the purpose, merit or relevance of the program.”[4]

While there are a number of different approaches to evaluation,[5] the Program evaluation framework is based on three types,[6] linked to a program’s lifecycle:

  1. Process evaluation – considers program design and initial implementation.
  2. Outcomes evaluation – considers program implementation and short to medium term outcomes.
  3. Impact evaluation – considers medium to long term outcomes, including whether the program contributed to those outcomes and represented value for money.

The different types of evaluation are covered in more detail in section 2.5.3 Types of evaluation.

Program evaluation is most effective when it is complemented by other activities that collect information and assess performance, including:

  • organisational reviews – consider an agency’s entire budget to ensure expenditure is aligned to government priorities and services are being provided efficiently. Implementation of a rolling schedule of organisational reviews was a recommendation in A plan for budget repair. DTF is currently developing an Agency Organisational Review Framework.
  • program reviews – typically quick, operational assessments of a program to inform continuous improvement.[7]
  • research – closely related to evaluation, but can ask different types of questions that may not be related to the merit or worth of a program.[7]
  • external audits – undertaken by an independent auditor to review records supporting financial statements.[8]
  • internal audits – undertaken by agencies to review governance, risk management and control processes according to a risk-based need.[8]
  • Performance Management System (PMS) audits – undertaken by the Northern Territory Auditor-General to consider whether appropriate systems exist and are effective in enabling agencies to manage their outputs.[9]
  • performance audits – undertaken by an auditor-general, but not currently within the scope of the Northern Territory Auditor-General, to examine the economy, efficiency and effectiveness of government programs and organisations.[10], [11]
  • Independent Commissioner Against Corruption (ICAC) audits or reviews – investigate practices, policies or procedures of a public body or public officer to identify whether improper conduct has occurred, is occurring or is at risk of occurring.

Further definitions are in the Glossary.


[1] Evaluation and learning from failure and success, ANZSOG, 2019.

[2] Adapted from the NSW Government Program Evaluation Guidelines.

[3] In some circumstances, formal review and approval from an ethics committee certified by the National Health and Medical Research Council may be required. See Ethical considerations for further information.

[4] NSW Government Evaluation Framework August 2013.

[5] For information about other evaluation types, please see the BetterEvaluation website.

[6] Further information on these three evaluation types are in sections 3.2.1 to 3.2.3.

[7] NSW Evaluation toolkit.

[8] The Institute of Internal Auditors Australia, Internal Audit Essentials, 2018.

[9] Northern Territory Auditor-General’s Office Annual Report 2017-18.

[10] Black, M., 2018, Strategic Review of the Northern Territory Auditor-General’s Office.

[11] A Guide to Conducting Performance Audits, Australian National Audit Office, 2017.
