Templates, acronyms and glossary



Acronym – Full form
CMC – Department of the Chief Minister and Cabinet
DTF – Department of Treasury and Finance
PEU – Program Evaluation Unit
NHMRC – National Health and Medical Research Council
OCPE – Office of the Commissioner for Public Employment


Activities – The day-to-day tasks an organisation must undertake in order to provide a product or service.[1]
Appropriateness – The extent to which a program is suitable for achieving stated objectives in a given context.[2]
Assumptions – The conditions that have to hold for a certain part of a program to work as expected.[3]
Baseline – Information collected before or at the start of a program that provides a basis for planning and/or assessing subsequent program progress and outcomes.[4]
Benchmark – A reference point or standard against which performance can be assessed.
Cost benefit analysis – Compares the costs and benefits of a program in monetary terms. The difference between the present value of benefits and the present value of costs is referred to as the net present value. The program option with the highest net present value represents the most economically viable option.[5]
Cost effectiveness analysis – Compares the quantifiable relative costs (in dollars) and outcomes (effects) of two or more courses of action. Cost effectiveness analysis is distinct from cost benefit analysis and should be used when the benefits of a program cannot easily be expressed in monetary terms. Costs are compared with outcomes measured in natural units – for example, cost per life saved or per year of life gained – to identify the lowest cost means of achieving that outcome.[4]
Counterfactual – How individuals or communities would have fared had a program or policy not occurred (or occurred differently).[3]
Effectiveness – The extent to which a program achieves its objectives.
Efficiency – The extent to which a program is delivered at the lowest possible cost (technical efficiency), to the areas of greatest need (allocative efficiency) and/or continues to improve over time by finding better or lower cost ways to deliver outcomes (dynamic efficiency).[2]
Equity – The extent to which a program meets the individual needs of participants. It can be distinguished from equality, in which all participants are treated the same.[2]
Evaluation – A systematic and objective process to make judgements about the merit or worth of one or more programs, usually in relation to their effectiveness, efficiency and appropriateness.[6]
Impact – The change in outcomes for those affected by a program compared to the outcomes they would have experienced had the program not existed.[3] May also refer to longer term outcomes.
Impact evaluation – Assesses the longer term results and impact of a program – positive or negative, intended and unintended, direct and indirect.[7]
Inputs – The resources (funds, expertise, time) required to deliver the activities that achieve outputs.
Meta-analysis – A technique for comparing and combining the results of many studies.
Meta-evaluation – The evaluation of an evaluation to judge its quality.
Monitoring – Tracks whether a program is being implemented as intended and whether participants are using the program as anticipated.
Objectives – Clear, measurable statements of what the program or evaluation aims to achieve.[2]
Outcomes – The intended (and unintended) results of program outputs.[3]
Outcome evaluation – Assesses progress on the early and medium-term results that the program is aiming to achieve.[8]
Outputs – The products or services generated by program activities – the deliverables. The provision of outputs is typically under the control of the program and relates to the quantity and quality of program implementation.[1]
Process evaluation – Investigates whether the program was implemented according to plan.[8]
Program – A set of activities managed together over a sustained period of time that aims to deliver an outcome for a client or client group.[9]
Program logic – Describes how the program contributes to a chain of results, visually representing how inputs and activities link to intended outcomes.
Qualitative data – Provides an understanding of social situations and of people’s values, perceptions and motivations. Generally presented in narrative, descriptive form.
Quantitative data – Data measured on a numerical scale (for example, how many, how much or how often). Generally presented using tables, charts and graphs.
Research – Closely related to evaluation, but can ask different types of questions that may not relate to the merit or worth of a program.[10]
Sustainability – The capacity of a program to continue to deliver results into the future. Considers the social, economic, political, institutional and other conditions surrounding a program.[4]
Value for money – Achieved when the maximum benefit is obtained from a program within the resources available to the agency. It does not always mean the ‘highest quality’ program is selected; a lower cost option may be appropriate when the agency has limited funds.[4]
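The cost benefit and cost effectiveness comparisons defined above can be sketched in a few lines of code. All figures below – cash flows, the 5% discount rate, and the outcome counts – are hypothetical illustrations, not values from this guideline.

```python
# Illustrative sketch of the comparisons defined in the glossary.
# All figures (cash flows, discount rate, outcomes) are hypothetical.

def npv(cash_flows, rate):
    """Net present value: the sum of discounted (benefit - cost) per year.

    cash_flows is a list of (benefit, cost) pairs; year 0 is undiscounted.
    """
    return sum((b - c) / (1 + rate) ** t for t, (b, c) in enumerate(cash_flows))

# Cost benefit analysis: the option with the highest NPV is preferred.
option_a = [(0, 100), (70, 10), (70, 10)]  # (benefit, cost) per year, $'000
option_b = [(0, 80), (50, 5), (50, 5)]
best = "A" if npv(option_a, 0.05) > npv(option_b, 0.05) else "B"

# Cost effectiveness analysis: when benefits cannot be monetised, compare
# cost per unit of outcome in natural units (e.g. per year of life gained).
def cost_per_outcome(total_cost, outcome_units):
    return total_cost / outcome_units

print(best, cost_per_outcome(120_000, 40))  # → A 3000.0
```

The two techniques answer different questions: NPV ranks options whose benefits are in dollars, while cost per outcome identifies the cheapest way to buy one unit of a non-monetary result.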

[1] Innovations for Poverty Action: Guiding your program to build a theory of change

[2] Queensland Treasury: Queensland Government Program Evaluation Guidelines

[3] M. K. Gugerty, D. Karlan, The Goldilocks Challenge: Right Fit Evidence for the Social Sector, New York, Oxford University Press, 2018.

[4] NSW Treasury: Guidelines – Program Evaluation

[5] NSW Government Program Evaluation Guidelines

[6] NSW Government: NSW Government Evaluation Framework August 2013. This definition applies more to outcome and impact evaluations than to process evaluations, which tend to focus more on monitoring.

[7] BetterEvaluation: Types of evaluation – Impact evaluation

[8] BetterEvaluation: Manager's guide to evaluation – 2. Scope the evaluation: Develop agreed key evaluation questions

[9] NSW Government: NSW Government Evaluation Framework August 2013

[10] NSW Department of Premier & Cabinet: Evaluation Toolkit

Last updated: 24 August 2022
