Posted on February 14, 2022 by Ruth Kelly
Setting targets for carbon emissions is a crucial part of government’s plans to tackle climate change. To do this, government uses its UK TIMES model, a model of the whole UK energy system, to provide important evidence supporting decisions like the net zero target. That’s just one example of the hundreds of models that government relies on.
Models are used for activities like estimating costs, distributing funding within organisations, and testing policy options – and they underpin decisions that affect people’s lives. In recent years departments have used models to plan NHS test and trace services, set allocations for teacher training places, and estimate the cost of the financial settlement when leaving the EU. So it’s really important that people who depend on outputs from models can feel confident in the quality and robustness of these models.
How the NAO uses models
At the NAO, part of our financial audit work involves scrutinising the models that underpin significant estimates in departments’ accounts. Our expert Modelling Team looks for innovative ways to do this, and to support departments in improving the way they produce and use models.
Building an independent copy or reproduction of a model is one of the most comprehensive ways of quality assuring a model. We applied this ‘gold standard’ approach to one of the most technically complex and inherently uncertain models that we audit – HMRC’s Oil and Gas Decommissioning model. This is a micro simulation model of oil and gas activity in the North Sea, which generates an estimate of the total provision relating to Petroleum Revenue Tax and Ring Fence Corporation Tax.
The complexity of micro simulation models makes traditional approaches to auditing models challenging and increases the scope for errors that are easy to miss. To help us audit the estimate, we built an independent reproduction of the model in the R software language. Running the reproduction separately allows us to produce an independent estimate and helps us to identify and investigate any discrepancies with the original model. This has enhanced confidence in the outputs for key stakeholders.
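The core of this assurance step is comparing the original model’s output with the independent reproduction’s output and investigating any material discrepancy. As a minimal sketch of that idea (in Python for illustration – the NAO’s actual reproduction was built in R, and the function name, figures and tolerance here are hypothetical, not HMRC’s):

```python
# Hypothetical sketch: compare an original model's estimate against an
# independent reproduction and flag discrepancies beyond a tolerance.
# All names and figures are illustrative placeholders.

def compare_estimates(original: float, reproduction: float,
                      tolerance: float = 0.01) -> dict:
    """Report the discrepancy between two estimates, relative to the original."""
    discrepancy = reproduction - original
    relative = abs(discrepancy) / abs(original)
    return {
        "discrepancy": discrepancy,
        "relative": relative,
        "within_tolerance": relative <= tolerance,
    }

# Illustrative figures in GBP billions, not actual audit values.
result = compare_estimates(original=24.0, reproduction=24.1)
print(result["within_tolerance"])
```

In practice the comparison would run at a more granular level too (for example, field by field or tax year by tax year), so that the source of any discrepancy can be traced rather than just detected.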
How we managed uncertainty
Modelled outputs are inherently uncertain. As well as checking that the central estimate is reasonable, we also wanted to understand the full range of plausible outcomes. We built fully automated uncertainty analysis into our reproduction, which lets us stress test the estimate under extreme scenarios. It also lets us test what happens to the estimate when several inputs change at the same time, by running thousands of simulations to generate a likely range of outcomes. This is not carried out in many of the models we audit, and is an area where our independent model assurance can provide additional value. It gives us confidence that the estimate will not be materially wrong, even when economic shocks are considered.
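Varying several uncertain inputs at once over thousands of simulations is a standard Monte Carlo technique. The sketch below illustrates the shape of such an analysis; the toy model, input distributions and figures are entirely hypothetical stand-ins, not the actual HMRC model or its assumptions:

```python
import random

# Hypothetical Monte Carlo sketch: two uncertain inputs (an oil price and a
# decommissioning cost) are drawn together across thousands of simulations,
# and the resulting outcomes give a likely range rather than a single figure.

def toy_model(oil_price: float, decom_cost: float) -> float:
    # Toy stand-in for a provision calculation; coefficients are illustrative.
    return decom_cost * 0.4 - oil_price * 0.1

def simulate(n: int = 10_000, seed: int = 42) -> list:
    rng = random.Random(seed)  # seeded so the analysis is reproducible
    outcomes = []
    for _ in range(n):
        oil_price = rng.gauss(80, 20)   # assumed $/bbl distribution
        decom_cost = rng.gauss(50, 10)  # assumed GBP bn distribution
        outcomes.append(toy_model(oil_price, decom_cost))
    return outcomes

outcomes = sorted(simulate())
low = outcomes[int(0.05 * len(outcomes))]
high = outcomes[int(0.95 * len(outcomes))]
print(f"90% range: {low:.1f} to {high:.1f}")
```

Because both inputs move in every simulation, the resulting range captures interactions between them, which one-at-a-time sensitivity analysis would miss.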
This fully working model reproduction has transformed the way we audit the estimate and is a great example of what is possible in terms of model quality assurance. It’s enhanced the quality of our work: quality assurance checks are automated, including more advanced sensitivity analysis. And it’s helped us to be more efficient: the quality assurance checks in the reproduction are quicker to produce, freeing up our analysts to focus on creating greater insights.
We think there are opportunities to replicate this approach across the portfolio of models that we audit and help enhance our quality assurance work. We want our audit work to help build confidence in the quality of government’s models and support government in making plans that don’t place value for money at risk.
Our recently published report on Financial Modelling in government looks at how government produces and uses models and identifies the systemic issues that can lead to value for money risks.
To find out more about the way we audit models, see our Framework to Review Models. The framework is aimed at people commissioning, carrying out or assuring analysis. It provides a structured approach to reviewing models, which organisations can use to determine whether the modelling outputs they produce are reasonable and robust.
How do you think this framework could help you or your organisation? Tell us in the comment section below.
About the author
Ruth Kelly is our Chief Analyst and has wide experience of applying economics and other analytical approaches to support policy evaluation, investment decisions and risk management. Prior to joining the NAO, she held business evaluation and risk management roles for a global resources company, and advised clients on carbon and energy issues for a Big 4 economic consultancy practice.