Analysis

  • Mapping the Justice landscape

  • Posted on May 3, 2022

    James Gjertsen explains how the NAO uses spatial analytics to bring audit teams closer to the data and visualise the complex geographic relationships at work behind the scenes in the justice system 

    From public transport to healthcare, we use services and infrastructure provided by central and local government all the time. Multiple government departments have to coordinate long-term decisions about the provision of services and land use – and these all come together in a place. Bringing a place-based focus to decisions is key to maximising the positive impact government can have on our lives. This is why the NAO has an increasing interest in the power of spatial analysis to provide insight.

    Even within a single policy area, such as health or justice, one department or ministry can be responsible for managing multiple different types of organisations that need to work together to deliver a coordinated service for users. One area we recently looked at was the criminal justice system.  

    This involves a range of organisations, from police forces to courts and probation. Mapping the spatial relationships between these organisations helps us answer deeper questions about the co-ordination and delivery of those services. We can ask more insightful questions about where backlogs are occurring or how the user experiences their linked-up journey through the system, and spot good practice. But all this starts with how these organisations relate to one another spatially. 

    To explore this, we developed an interactive tool visualising how all these various organisations interact with each other. You can use it to see at a glance where those interactions are simple and where they may be more complicated, along with summaries for each body.  

    This quickly shows that the number of organisations interacting with each other can add up fast: among police forces, the Metropolitan Police tops the list with 63 local interacting organisations! This includes one prison region, one HM Courts and Tribunals Service cluster, one Youth Justice Board, two Crown Prosecution Service regions, nine local justice areas, eighteen probation delivery units and thirty-one youth offending teams.

    The tool can help raise questions about the efficiency of the criminal justice system by providing geographic context around the interactions between the various bodies.  

    Under the hood 

    The tool uses local authority boundaries as the building blocks for each organisation’s map layer. For example, for probation delivery units, Cardiff, Newport and the Vale of Glamorgan are listed under the same unit, “Cardiff and the Vale”. The tool joins these local authorities’ boundaries together to produce a single boundary representing the probation delivery unit.

    Converting local authority boundaries to probation delivery units in Wales 
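    The joining step is a standard ‘dissolve’ operation in spatial software. Below is a minimal sketch of how it might look in R with the sf package; the object and column names (local_authorities, lad_name, pdu_name) are illustrative assumptions rather than the NAO’s actual code.

```r
library(sf)
library(dplyr)

# Hypothetical lookup from local authority to probation delivery unit (PDU)
lookup <- data.frame(
  lad_name = c("Cardiff", "Newport", "Vale of Glamorgan"),
  pdu_name = "Cardiff and the Vale"
)

# local_authorities is assumed to be an sf data frame of local authority
# polygons with a lad_name column (e.g. loaded with st_read())
pdu_boundaries <- local_authorities %>%
  inner_join(lookup, by = "lad_name") %>%
  group_by(pdu_name) %>%
  summarise(geometry = st_union(geometry))  # dissolve internal borders into one boundary
```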

    There were benefits to using this approach. Firstly, a few of the organisations didn’t have a set of predetermined boundaries to provide us, so this approach let us construct them from scratch.

    Secondly, using the same building blocks for each layer makes comparing them much easier. Boundaries are often simplified to shrink the size of the files being dealt with. Imagine trying to record the borders of a town: if you took measurements every 500 metres you would need far fewer measurements than if you took them every ten centimetres, and for some purposes the result would be just as useful.

    However, it can be hard to compare boundaries that may have been recorded differently. Using the same base boundaries to build all the layers simplifies things: they will all be recorded in the same way, and any differences that are flagged will be genuine, with no extra work needed.
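    To make the simplification trade-off concrete, here is a hedged sketch in R using the sf package. The file name and the 500-metre tolerance are illustrative assumptions; the tolerance is expressed in the units of the coordinate reference system (metres for British National Grid).

```r
library(sf)

# Illustrative only: read a detailed boundary file and simplify it so that
# vertices are kept to within a 500-metre tolerance, shrinking the file considerably
detailed   <- st_read("local_authority_boundaries.gpkg")  # assumed input file
simplified <- st_simplify(detailed, dTolerance = 500, preserveTopology = TRUE)

object.size(detailed)    # compare in-memory size before...
object.size(simplified)  # ...and after simplification
```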

    We’ve been building our capacity at the NAO to use data and spatial analytics to derive deeper insights around the topics we audit. This tool is just one of the ways we are supporting our current and future work, bringing teams closer to the data and clearly articulating the complex geographic relationships at work behind the scenes. Why not see for yourself?

    About the author

    James Gjertsen

    James Gjertsen is a Senior Analyst in the Analysis Hub at the NAO. He joined the NAO in 2018 and leads the Analytical Insights Team, which aims to support the NAO’s value for money (VFM) work programme by drawing deeper insights from the data it collects.

  • What makes a super model? Using innovative approaches to audit departments’ models

  • Posted on February 14, 2022

    Setting targets for carbon emissions is a crucial part of government’s plans to tackle climate change. To do this, government uses its UK TIMES model, a model of the whole UK energy system, to provide important evidence supporting decisions like the net zero target. That’s just one example of the hundreds of models that government relies on.

    Models are used for activities like estimating costs, distributing funding within organisations, and testing policy options – and they underpin decisions that affect people’s lives. In recent years departments have used models to plan NHS test and trace services, set allocations for teacher training places, and estimate the cost of the financial settlement when leaving the EU. So it’s really important that people who depend on outputs from models can feel confident in the quality and robustness of these models.  

    How the NAO uses models 

    At the NAO, part of our financial audit work involves scrutinising the models that underpin significant estimates in departments’ accounts. Our expert Modelling Team looks for innovative ways to do this, and to support departments in improving the way they produce and use models.  

    Building an independent copy or reproduction of a model is one of the most comprehensive ways of quality assuring a model. We applied this ‘gold standard’ approach to one of the most technically complex and inherently uncertain models that we audit – HMRC’s Oil and Gas Decommissioning model. This is a microsimulation model of oil and gas activity in the North Sea, which generates an estimate of the total provision of revenue from the Petroleum Revenue Tax and Ring Fence Corporation Tax.

    The complexity of microsimulation models makes traditional approaches to auditing models challenging and amplifies the potential for errors that are easy to miss. To help us audit the estimate, we built an independent reproduction of the model in the R software language. Running the reproduction separately allows us to produce an independent estimate and helps us to identify and investigate any discrepancies with the original model. This has enhanced confidence in the outputs for key stakeholders.
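    A minimal sketch of what such a discrepancy check might look like is below. The function run_reproduction(), the input file name and the 1% threshold are hypothetical stand-ins; the real reproduction is considerably more involved.

```r
# Hypothetical sketch of a discrepancy check between the department's estimate
# and the estimate produced by an independent R reproduction of the model
original_estimate <- read.csv("department_estimate.csv")$provision_total  # department's figure
our_estimate      <- run_reproduction(model_inputs)                       # independent estimate

pct_difference <- 100 * (our_estimate - original_estimate) / original_estimate

# Flag discrepancies above a chosen materiality threshold for investigation
if (abs(pct_difference) > 1) {
  message(sprintf("Investigate: reproduction differs from the original by %.2f%%", pct_difference))
}
```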

    How we managed uncertainty 

    Modelled outputs are inherently uncertain. As well as checking that the central estimate is reasonable, we also wanted to understand the full range of plausible outcomes. We built fully automated uncertainty analysis into our reproduction, which lets us stress-test the estimate under extreme scenarios. It also lets us test what happens to the estimate when several inputs change at the same time, by running thousands of simulations to generate a likely range of outcomes. This is not carried out in many of the models we audit and is an area where our independent model assurance can provide additional value. It gives us confidence that the estimate will not be materially wrong, even when economic shocks are considered.
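    As an illustration of the idea (not the actual model), the sketch below runs a hypothetical reproduction thousands of times with inputs drawn from assumed distributions and reads off a plausible range. run_reproduction() and all of the distributions and parameters are invented for the example.

```r
set.seed(2022)
n_sims <- 10000

# Invented input distributions, purely for illustration
scenarios <- data.frame(
  oil_price  = rlnorm(n_sims, meanlog = log(70), sdlog = 0.3),  # price assumption
  cost_index = rnorm(n_sims, mean = 1.0, sd = 0.15),            # decommissioning cost index
  discount   = runif(n_sims, min = 0.01, max = 0.05)            # discount rate
)

# run_reproduction() stands in for the independent model reproduction
estimates <- mapply(run_reproduction,
                    oil_price  = scenarios$oil_price,
                    cost_index = scenarios$cost_index,
                    discount   = scenarios$discount)

# A plausible range for the estimate rather than a single central figure
quantile(estimates, probs = c(0.05, 0.5, 0.95))
```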

    This fully working model reproduction has transformed the way we audit the estimate and is a great example of what is possible in terms of model quality assurance.  It’s enhanced the quality of our work: quality assurance checks are automated, including more advanced sensitivity analysis. And it’s helped us to be more efficient: the quality assurance checks in the reproduction are quicker to produce, freeing up our analysts to focus on creating greater insights. 

    What next? 

    We think there are opportunities to replicate this approach across the portfolio of models that we audit and help enhance our quality assurance work. We want our audit work to help build confidence in the quality of government’s models and support government in making plans that don’t place value for money at risk. 

    Our recently published report on Financial Modelling in government looks at how government produces and uses models and identifies the systemic issues that can lead to value for money risks. 

    To find out more about the way we audit models, see our Framework to Review Models. The framework is aimed at people commissioning, carrying out or assuring analysis. It provides a structured approach to reviewing models, which organisations can use to determine whether the modelling outputs they produce are reasonable and robust.

    How do you think this framework could help you or your organisation? Tell us in the comment section below.  

    About the author

    Ruth Kelly


    Ruth Kelly is our Chief Analyst and has wide experience of applying economics and other analytical approaches to support policy evaluation, investment decisions and risk management. Prior to joining the NAO, she held business evaluation and risk management roles for a global resources company, and advised clients on carbon and energy issues for a Big 4 economic consultancy practice.

  • Let’s get down to business

  • Posted on July 7, 2021


    At the National Audit Office, we come across many business cases when looking at government programmes. A strong business case is vital for effective decision making and for successfully delivering intended outcomes.

    The foundation of a business case is a clear understanding of what success will look like for a programme – the strategic case. But when it’s not clear what a programme is trying to achieve, it’s hard for decision makers to know if this programme is the right thing to do, or to plan and focus resources. It creates the risk that different stakeholders have different expectations about what will be achieved. It makes it harder to spot where other programmes may contribute to similar goals or where there may be adverse impacts. And for the public, parliament and us as auditors, it makes it hard to understand if the programme has delivered good value for money.

    Promoting the strategic case

    The November 2020 update to HM Treasury’s Green Book (its guidance on how to appraise and evaluate spending proposals) introduces a stronger requirement to establish clear objectives up front. Proposals should be assessed primarily on how well they deliver policy objectives, and cannot be considered value for money without delivering these.

    But for proposals to be assessed this way, the strategic case needs to be robust. Therefore, when auditing major programmes, we ask the seemingly simple question – is it clear what objective the programme is intended to achieve?

    Our recent learning from COVID-19 re-emphasised the importance of government being clear and transparent about what it is trying to achieve, so that it can assess if it is making a difference. For example, HM Revenue & Customs agreed clear principles for its employment support schemes. Although the Bounce Back Loans Scheme achieved its initial objective of quickly supporting small businesses, a lack of more detailed scheme-specific objectives will make it difficult to measure its ultimate success.

    The government’s commitment to ‘levelling up’, and uncertainty over what this means, may make it difficult for programmes to set out what they will achieve – and they will need that clarity to produce a business case. Levelling up could be interpreted as giving everyone access to the same opportunities, or at least to the same minimum standards – say of health outcomes or broadband access – which prioritises spreading prosperity to deprived areas. However, it could also be framed as addressing gaps in potential by, for example, investing where an area should be showing higher productivity, which prioritises value for money investments. As these different goals require different policy solutions, it can be challenging to set out how an intervention will achieve ‘levelling up’. Later this year, government will publish a levelling up White Paper setting out how new policies will improve opportunity and livelihoods across the UK.

    Whilst defending the economic case…

    A strong strategic case alone does not mean an intervention is justified. There might be other ways to meet an objective which could be better value for money. We often see business cases that seem to justify a pre-selected solution, rather than exploring a range of options for meeting the objectives – what the Green Book calls ‘the long list’.

    Our report on Hinkley Point C found that alternative ways of the government providing support for the planned nuclear power station could have resulted in lower costs to consumers over the life of the project, but weren’t considered. We have also seen departments not considering different options when thinking about how to deliver policy – nine out of the 24 business cases we reviewed as part of our report on arm’s length bodies did not consider a long-list of options.

    The economic case is important in setting out value for money, often through formal modelling, the results of which will need to be considered alongside the strategic case.  Our early work on High Speed 2 found that the relationship between savings (with the Department for Transport putting a high emphasis on journey time savings) and the strategic reasons for doing the programme, such as rebalancing regional economies, was unclear.

    So, what do we expect from strategic cases?

    Throughout a programme, the strategic case needs to help ensure effective decision-making. As well as specifying what should be achieved (with a clear, logical set of assumptions) it needs to:

    • Be easily understandable so effective trade-offs can be made. Our lessons from major programmes describes how objectives need to be clear enough to be translated into a programme scope (what will be required and when). For example, government has been considering which objectives to prioritise for the roll-out of gigabit-capable broadband. In our report, we found that prioritising the speed of programme delivery over other objectives posed a risk to value for money. 
    • Help prioritise cross-government objectives. We see cases where objectives are neither coherent when taken together, nor clearly prioritised when tensions emerge between them. In November 2020 we considered progress in achieving government’s long-term environmental goals. The government set out 10 overarching goals but did not provide a clear and coherent set of objectives, with, for example, varying and often unclear timescales. 
    • Be measurable (where possible). The strategic case will capture those assumptions that cannot be given a monetary value. The easier assumptions are to quantify, the easier it will be to assess progress. Our early High Speed 2 review found the strategic case should have been more developed. For example, it included limited evidence on forecast passenger demand, which provided a weak foundation for securing and demonstrating success. The Department was working to strengthen its analysis. Also, our Hinkley Point C report found the Department put more weight on the wider, unquantified strategic case when the economic case weakened, but had little control over these benefits and, at the time of our report, no plan to realise them.

    Government plans to invest heavily in programmes, with £100 billion expected investment in 2021-22 alone. For government to secure best value from this it must set out clearly and logically what it wants, how to best deliver this and how it will show what has been achieved for the investment. 


    Authors: Ruth Kelly and Emma Willson

    Emma Willson


    Emma Willson leads our Major Projects hub. She has worked at the NAO for almost 20 years, auditing a wide range of government programmes, from welfare reform to large-scale defence equipment projects. She is a qualified chartered accountant and holds an International Association for Contract and Commercial Management (IACCM) qualification.

    Ruth Kelly


    Ruth Kelly is our Chief Analyst and has wide experience of applying economics and other analytical approaches to support policy evaluation, investment decisions and risk management. Prior to joining the NAO, she held business evaluation and risk management roles for a global resources company, and advised clients on carbon and energy issues for a Big 4 economic consultancy practice.

  • Using process mining to support the NAO’s investigation into the Windrush compensation scheme

  • Posted on June 10, 2021

    At the NAO, our central analysis team supports value for money studies by applying specialist analysis techniques to government data in order to generate new insights. For our Investigation into the Windrush Compensation Scheme, we used process mining to help the study team understand the Home Office’s operation of the scheme. 
     
    Process mining is an exciting technique that allows you to gain a detailed understanding of a process purely by analysing data that users of the process generate, which can be found in automated event logs. It allows us to understand the flow of cases through a system, including the time taken for different activities, how resources are used and where bottlenecks occur, and as such can be vital in assessing the performance of the overall process.  

    To use process mining, the minimum data requirement is an event log containing identification codes (IDs) that are unique to each case, consistent labels relating to the actions made on a case, and a timestamp for each action. For our work on the Windrush Compensation Scheme, we analysed logs generated by the Home Office’s case management system for dealing with compensation claims, using the open-source programming language R, specifically the bupaR collection of packages.
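    As a hedged illustration of that minimum requirement, the sketch below builds a tiny bupaR event log from invented records; the case IDs, stage names and timestamps are made up and are not Home Office data.

```r
library(bupaR)

# Invented extract with the three minimum fields: case ID, activity label, timestamp
events <- data.frame(
  case_id   = c("WCS-001", "WCS-001", "WCS-001", "WCS-002", "WCS-002"),
  activity  = c("Registration", "Eligibility", "Casework",
                "Registration", "Eligibility"),
  timestamp = as.POSIXct(c("2020-04-01 09:00", "2020-04-20 14:30",
                           "2020-06-02 11:15", "2020-05-11 10:00",
                           "2020-07-08 16:45"))
)

# Convert to a bupaR event log using just those three columns
claims_log <- simple_eventlog(events,
                              case_id     = "case_id",
                              activity_id = "activity",
                              timestamp   = "timestamp")
```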
     
    Our first step using process mining was to create a process map. This output provided a powerful visual representation of the different stages of the case management system and the way in which cases move between stages – beginning with registration and ending ultimately in either claim payment or rejection. 

    A simplified version of the process map was included in the report and is repeated below. The different nodes represent the different stages in the process, while the numbers show how many times a compensation scheme case has entered or re-entered a stage (boxes) or moved between stages (connecting lines).

    Process map showing how cases registered after 13 March 2020 moved through the Home Office’s Windrush Compensation Scheme system – data as at 31 March 2021. Refer to page 39, figure 15, for the report figure (see also footnote 1).
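    Below is a minimal sketch of producing such a map with the processmapR package (part of the bupaR ecosystem), using the illustrative event log built above; the frequency annotation counts instances of stages and movements rather than unique cases, as noted in the report footnotes.

```r
library(processmapR)

# Frequency-annotated process map: boxes are stages, connecting lines are
# movements between stages, and the numbers are counts of instances
process_map(claims_log, type = frequency("absolute"))
```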
     

    Next, we used process mining to create an animated version of the process map. This showed all individual compensation scheme cases dynamically moving through the system between stages. It proved very effective in illuminating the rate at which cases progress through the system and the stages where that progress is slower. A short illustrative GIF of this output is shown below (see also footnote 2).

    Animated process map showing how cases registered after 13 March 2020 moved through the Home Office’s Windrush Compensation Scheme system over time – data as at 31 March 2021. Circles represent the flow of individual cases. The animated process map is based on the process map used in the report; refer to page 39, figure 15, for the report figure, and to the footnotes below. The animated map is truncated for illustrative purposes.
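    One way to produce such an animation within the same ecosystem is the processanimateR package, sketched here with the same illustrative log; naming this package is an assumption for the example rather than a statement of the tool actually used.

```r
library(processanimateR)

# Replay cases over time: tokens (circles) flow along the edges of the process map
animate_process(claims_log)
```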

    Visualising the Windrush Compensation Scheme in these ways enabled the study team to better understand its operation and to draw out insights from the client’s case management systems. As a result, our auditors were able to define complex hypotheses and explore these with the client, using process mining analysis to support their findings.  

    For example, our analysis showed that, of the cases that were subject to a quality assurance check, half needed to return to a caseworker, indicating a significant level of rework. Process mining allowed us to combine this with the observed data and quantify the exact rate at which this occurred. In another example, our study team used process mining to compare the Department’s initial estimates of the average number of hours needed to do everything required on a case, end to end, with the actual number of hours recorded in the Department’s data for cases up to 31 March 2021.
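    A hedged sketch of how such a rework rate could be computed from an event log is below; the stage labels (‘Quality assurance’, ‘Casework’) are illustrative, and the real analysis used the Home Office’s own labels.

```r
library(dplyr)

# events: an event-log data frame as above, assumed here to also include
# quality assurance stages. For each case, look at the activity that
# immediately follows each stage, then work out the share of QA-checked
# cases that went back to a caseworker.
rework_by_case <- events %>%
  arrange(case_id, timestamp) %>%
  group_by(case_id) %>%
  mutate(next_activity = lead(activity)) %>%
  summarise(
    had_qa   = any(activity == "Quality assurance"),
    returned = any(activity == "Quality assurance" & next_activity == "Casework",
                   na.rm = TRUE)
  )

mean(rework_by_case$returned[rework_by_case$had_qa])  # share of QA-checked cases reworked
```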

    Using advanced analytics – in this case process mining – our study team enhanced their understanding of the Windrush Compensation Scheme and strengthened the report’s findings.

    You can read more about the Investigation here: Windrush Compensation Scheme, and if you are interested in knowing more about Analysis at the NAO, visit our Analysis page on the main NAO site.

    Authors: Mohit Motwani and Ben Coleman

    Ben and Mohit work in the NAO’s analysis hub, helping support value for money studies by providing complex data analysis techniques to study teams. They both undertook process mining work in relation to the NAO’s Investigation into the Windrush Compensation Scheme.


    Footnotes

    Note 1

    1. Data relate to 1,033 cases for which a registration stage was created after 13 March 2020 and show the observed movement of these cases through the system until 31 March 2021.
    2. The numbers shown are a count of instances of cases reaching a stage or moving between stages, rather than a count of the number of unique cases.
    3. Some intermediate stages such as offer and payment approvals and payment preparation have been omitted for clarity.
    4. Some movements between stages have also been omitted for clarity, including:
      1. Cases moving back following successful applicant appeal;
      2. Cases moving back to registration or eligibility following casework;
      3. Cases moving back to casework following payment offer.

      Source: National Audit Office’s analysis of Home Office applications data

    Note 2

    The full version of this covered the period March 2020 to March 2021, which was the period for which we had access to full case management records in the event log. 

  • How to audit artificial intelligence models

  • Posted on June 1, 2021

    In our increasingly digital and automated world, certain buzzwords are taking centre stage in the public sector. One of them is “artificial intelligence”. While the concept, and development, of artificial intelligence is not new (it was first recognised as a formal discipline in the mid-1950s), it is a term that has been thrown around more freely in the public sector in recent years, and sometimes carelessly.

    Traditional algorithms vs machine learning models 

    These days, data scientists normally associate artificial intelligence with systems that are based on machine learning models. Machine learning models deploy methods that develop rules from input data to achieve a given goal.1 This differs from what you might call traditional algorithms: traditional algorithms don’t learn from data, they simply churn out results based on the rules built into them.
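    To make the distinction concrete, here is a small, invented R illustration (the data, the income threshold and the eligibility setting are entirely made up): a traditional algorithm applies a rule someone has written down, while a machine learning model derives its rule from historical data.

```r
# Traditional algorithm: the decision rule is fixed and written by a person
eligible_rule <- function(income) income < 20000

# Machine learning: a logistic regression learns a rule from past decisions
history <- data.frame(
  income   = c(12000, 15000, 18000, 22000, 26000, 30000, 35000, 40000),
  eligible = c(1, 1, 0, 1, 1, 0, 0, 0)
)
model <- glm(eligible ~ income, data = history, family = binomial)

# Both can be applied to a new case, but only the second rule was learned from data
new_case <- data.frame(income = 21000)
eligible_rule(new_case$income)
predict(model, newdata = new_case, type = "response") > 0.5
```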

    Traditional algorithms have been used in the public sector for some time to make decisions. The latest example to make the headlines was the model determining A-level exam results last summer. From an auditing perspective, as the basis of these algorithms is usually transparent, auditing them is something we as a public audit institution are used to.2

    But artificial intelligence that is based on machine learning is different –  it has only been (cautiously) employed in the public sector in recent years. 

    It is different because, firstly, for a machine learning model to learn it needs good-quality data – and often a lot of it. Our report on the challenges of using data across government has shown that this condition is not always met.

    Secondly, it can be quite costly to develop and deploy machine learning models. Moreover, the benefits are not always guaranteed or immediately realisable. In a public sector context with tight budgets, the willingness to put money behind them may not always be there.

    The reason for this is related to a third point: it is not always certain from the outside what the machine will learn and therefore what decision-making rules it will generate. This makes it hard to state the immediate benefits. Much of the progress in machine learning has been in models that learn decision-making rules that are difficult to understand or interrogate.

    Lastly, many decisions affecting people’s lives that artificial intelligence models would support pertain to personal circumstances and involve personal data, such as health, benefit or tax data. Whilst the personal data protection landscape has strengthened in recent years, the organisational regulatory structures and relevant accountabilities for the use of personal data in machine learning models are not always in place.3 Public sector organisations are therefore at risk of inadvertently falling foul of developing data protection standards and expectations.

    How to audit public sector machine learning models 

    Given all these challenges, it may not be surprising that in our public audit work we are not coming across many examples of machine learning models being used in decision-making. But there are examples4 and we foresee that their use may grow in the future.

    We have therefore teamed up with other public audit organisations in Norway, the Netherlands, Finland and Germany, and produced a white paper and audit catalogue on how to audit machine learning models. You can find it here: Auditing machine learning algorithms (auditingalgorithms.net)

    As the paper outlines in more detail, we identified the following key problem areas and risk factors: 

    • Developers of machine learning models often focus on optimising specific numeric performance metrics. This can lead them to neglect other requirements, most importantly around compliance, transparency and fairness. 
    • The developers of machine learning models are almost always not the same people who own the model within the decision-making process. But the ‘product owners’ may not communicate their requirements to the developers – which can lead to machine learning models that increase costs and make routine tasks more, rather than less, time-consuming.
    • Often public sector organisations lack the resources and/or competence to develop machine learning applications internally and therefore rely on external commercial support. As a result they may take on a model without understanding how to maintain it and how to ensure it is compliant with relevant regulations. 

    We also highlighted what auditors need in order to meaningfully audit artificial intelligence applications:

    • They need a good understanding of the high-level principles of machine learning models 
    • They need to understand common coding languages and model implementations, and be able to use appropriate software tools 
    • Due to the high demand on computing power, the IT infrastructure supporting machine learning usually includes cloud-based solutions. Auditors therefore also need a basic understanding of cloud services to properly perform their audit work.

    Our audit catalogue sets out a series of questions that we suggest auditors should use when auditing machine learning models. We believe it will also be of interest to the public sector bodies we audit that employ machine learning models, helping them understand what to focus on when developing or running such models. As a minimum, it gives fair warning of what we as auditors will be looking for when we come to audit your models!

    Footnotes

    1 In fact there are two main classes of machine learning models. Supervised machine learning models attempt to learn from known data to make predictions; unsupervised machine learning models try to find patterns within datasets in order to group or cluster them. 

    2 See, for example, our Framework to review models – National Audit Office (NAO) Report to understand more about what we look out for when auditing traditional models and algorithms. We currently have some work in progress that aims to take stock of current practices and identify the systemic issues in government modelling which can lead to value for money risks.

    3 In the UK, the Information Commissioner’s Office has published guidance on the use of personal data in artificial intelligence: Guidance on AI and data protection | ICO

    4 For some UK examples see: https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector

    About the author: 

    Daniel Lambauer joined the NAO in 2009 as a performance measurement expert and helped to establish our local government value for money (performance audit) team. He is the Executive Director with responsibility for Strategy and Resources. As part of his portfolio, he oversees our international work at executive and Board level and has represented the NAO internationally at a range of international congresses. He is also the NAO’s Chief Information Officer and Senior Information Responsible Owner (SIRO). Before joining the NAO, Daniel worked in a range of sectors in several countries, including academia, management consultancy and the civil service.

  • How data analytics can help with audits

  • Posted on July 20, 2020

    Data analytics has become an integral part of the audit process. Where a few years ago we were exploring and creating tactical solutions to problems, today the NAO, like many organisations, has developed a strong capability and is making significant progress towards the widespread adoption of data analytics across all elements of our audit work.



  • About the NAO blog

    Our experts share their views about issues and common challenges facing government, what public sector leaders should look out for and how organisations have addressed issues. Our posts draw together threads from across our reports, share secrets spilled in events and reveal our experts’ expectations for the future.

    We encourage comments that support the exchange of ideas for improvement, but ask that those posting are respectful of others.
