
Latest posts

    Climate change risk: are we doing enough, fast enough?

  • Posted on September 13, 2021 by

    “The world is now living through climate change, not watching it draw near” is the stark warning delivered by the IPCC (Intergovernmental Panel on Climate Change), in its sixth assessment report (AR6).

    In risk speak, high-impact, low-likelihood events will become more likely with higher temperatures.

    With COP 26 fast approaching and extreme weather events becoming an uncomfortable ‘new normal’ across the world, not a week goes by without media coverage of the physical risks of climate change, whether it be the scorching heat in Canada, wildfires in the Americas, or devastating floods in Germany, India and China.

    So, are we acting fast enough?

    The verdict from the Climate Change Committee’s June progress report is “with every month of inaction, it is harder for the UK to get on track” with its climate ambitions.

    To gauge the level of climate change risk maturity in government we surveyed Chairs of Audit and Risk Assurance Committees (ARACs). While four out of five ARAC Chairs considered climate risks to be relevant to their organisation, over half noted that their organisation did not have a climate or sustainability risk policy or a dedicated employee accountable for either. Additionally, seven in ten Chairs said that climate change risks had either never been discussed at an ARAC meeting or had been discussed less than annually.

    Against this backdrop, we intend to help government organisations start the conversation around climate change risk.

    What is climate change risk?

    As risk professionals we tend to think in terms of “what could go wrong?” and “how can we manage these risks?”.

    Government organisations have a huge challenge in trying to balance short, medium and long-term risks. The UK, and indeed the rest of the world, are still recovering from the COVID-19 pandemic, which showed how crucial it is for organisations to have the resilience to respond to high-impact, low-likelihood events. It is important that a true assessment of long-term risks is considered.

    Our good practice guide intends to help with this. In setting out the wide variety of potential risks that climate change can bring about, it will help all organisations across government – not just those responsible for leading on climate policy – identify and effectively manage them. These risks stretch beyond the physical risks, such as the impact of rising temperatures. They also include the risks posed by the transition to net zero and risks specifically posed to government organisations.

    How to support and challenge management

    Our guide further allows audit and risk assurance committees to constructively challenge management’s approach to climate change risk.

    This can be done across the whole risk management cycle: from initial identification and assessment, to treatment and monitoring, through to risk reporting and continual improvement.

    For many organisations effectively managing climate change risk will be a long journey. Our challenge questions are a great tool to help you do this.

    Example questions for the risk management principle ‘Governance and leadership’

    Key takeaways of the Good Practice Guide can be found here. We hope you find it helpful.

    Authors: Mfon Akpan, Chris Coyne, Courtnay Ip Tat Kuen


        

    About the authors           

    Mfon Akpan

    Mfon leads our Financial and Risk Management Hub. She is a Big Four trained multi-disciplinary Risk, Assurance and Governance professional with over two decades of cross-border leadership experience across the financial services industry and beyond.

    Mfon has held roles at a number of blue-chip institutions, including the World Bank Group as a Risk Management Specialist, Standard Bank Group where she was the Chief Risk Officer and Regional Head of Risk for its operations in Nigeria and across West Africa, respectively, and Barclays Group Plc where she was an Audit Director.

    Follow Mfon on LinkedIn

    Chris Coyne

    Chris manages our work on financial and risk management. He has been with the NAO since he joined as a graduate trainee in 2008, and has significant experience managing financial audits across a variety of government organisations.   

    Follow Chris on LinkedIn 


  • Efficiency – who’s judging?

  • Posted on August 26, 2021 by

    I stumbled across the television programme ‘Undercover Boss’ last week. I’d never seen it, but the premise is simple. The chief executive of a company dons a disguise and spends a week working undercover in their own organisation to see what it’s really like. It reminded me of ‘Troubleshooter’, a similar show in the 1990s about helping businesses in trouble – minus the disguises. It often focused on boardroom tensions over the company’s strategic direction, or on great ideas from the ‘shop floor’ being ignored. Inevitably, the advice centred on opportunities for efficiency and better products.

    Government has more complex challenges, but a spending review is looming. It will undoubtedly need to address the cost of the pandemic as well as provide existing and new services. HM Treasury has asked government departments to make plans for achieving substantial efficiency savings by 2024-25. But what might this mean in practice?

    It’s tempting to look at efficiency in government through a cost lens. To reduce costs by having fewer people or stopping activities altogether. But what next after one-off savings? How do you know there will be no impact on the outcomes achieved? How do you ensure that what makes sense from a department’s viewpoint doesn’t have unintended consequences and push problems, demand and costs elsewhere – as happened in the past when HMRC’s tax services moved online but demand for telephone advice stayed the same and service deteriorated?

    Perhaps the more searching question to answer is: efficiency in whose eyes? What does efficiency look like through the lived experience of those impacted? Efficiency is achieving more with the same. Or achieving the same, or more, with less. Government needs to judge the outcome side of the equation to know if it’s making a difference.

    Looking at efficiency through the lens of the user can help to ensure services aren’t adversely affected. Understanding service users and what they value helps predict how they will react when services change. You can make the right improvements if things aren’t working and take out unnecessary activity. Get this right and efficiency will focus on what’s important to the people using the services.

    I’m lucky in my role. I’ve spent 10 years being able to play a version of the undercover boss seeing how government departments and services work. That’s shown me plenty of untapped potential for improvement and efficiency.

    Our good practice guide on improving operational delivery points to three underlying questions to focus efficiency on outcomes:

    Why: is it clear what our priorities are?

    This will help government align on and inform cross-cutting purpose, objectives and investment. Departments can make consistent trade-offs where priorities seem to conflict. (See how government has used the public value framework to inform priorities and outcomes in spending reviews)

    What: is real life experience informing our chosen approach to achieving our priorities?

    The experience of people impacted needs to inform and continue to challenge the chosen approach. (See Are you making a bid for design? for how government is encouraging such an approach)

    How: is there a better way to do our work?

    It’s rare for new ways of working to be perfect. But people doing the work understand what is and isn’t working in the services they are providing. Supporting them with the capability and time to identify opportunities, innovate and solve problems will improve services. (An ambition set out in the Declaration on Government Reform).

    All three questions matter. Efficiency is about getting better at how we do our work. But that raises the risk of ‘doing the wrong thing righter’ – perfecting work that isn’t important. Stepping back and questioning why we do something, and our chosen approach, is trickier but vital. That challenges our long-held views, assumptions and the status quo.

    Our recent report on efficiency in government has lessons on identifying, planning and embedding efficiency. It’s the first in a series, outlining how government can use the outcomes that matter as the basis for longer-term decisions, rather than just seeing efficiency as a short-term numbers game. Combining this with good operational management will provide the adaptability that government needs to cope with changing whole-system demands.

    About the author:

    Alec Steel


    Alec has led our operations management specialism since 2010, and supports government thinking across the UK, Europe, USA and Australia. He has authored reports on subjects ranging from major project assurance to the use of consultants, and his assessment of operations management capability across central government in 2015 drew on learning from 32 organisations and 86 operational services.


  • Six reasons why digital transformation is still a problem for government

  • Posted on August 4, 2021 by

    It’s revealing to look at the timeline of digital transformation initiatives over the last 25 years. Government’s ambition for ‘world class’ services using joined-up systems and data goes back to the mid 1990s, from where we can trace a steady stream of policies and initiatives right through to last autumn’s National Data Strategy. Most of these cover similar ground, which shows how hard genuine transformation is.

    Repeated cycles of vision for radical digital change have been accompanied by perhaps an overly simplistic view of the ease of implementation. Government is not a greenfield site where brand new systems can be created at will. New ways of doing business and services need to fit into a government landscape still dominated by legacy systems and data. As a result, well-intentioned initiatives have petered out, falling short of achieving their intended outcomes.

    It’s important not to see this report as just another commentary on project and programme management failures. In business transformation initiatives with significant digital elements, the intangible nature and use of novel technology introduces many more ‘unknown unknowns’. Contrast this with infrastructure projects, where people can visualise the end product within the laws of physics. This allows a clearer sense from the outset of what is realistically feasible.

    Digital leaders bring experience and understand the challenges well. But they often struggle to get the attention, understanding and support they need from other senior decision-makers. This is borne out by a recent government review into Organising for digital delivery which identified a significant challenge of low technical fluency across the civil service leadership generally. This contrasts with the commercial world where technology is increasingly seen as a critical delivery lever and senior leaders are expected to have a clear understanding of how to deploy it effectively.

    Six reasons why

    We wanted to shine a light on the systemic issues that need to be tackled before a programme even gets underway, using our past reports as illustrations. When implementing digital business change programmes, here are six things to get right at the outset.

    1. Understand your aims, ambition and risk by:
    • Avoiding unrealistic ambition with unknown levels of risk
    • Ensuring the business problem is fully understood before implementing a solution
    • Planning realistic timescales for delivery, which are appropriate to the scope and risk of the programme.
    2. Engage with commercial partners through:
    • Spending enough time and money exploring requirements with commercial partners at an early stage
    • Adopting a more flexible contracting process that recognises scope and requirements may change
    • Working towards a partnership model based on collaboration with commercial suppliers.
    3. Develop a better approach to legacy systems and data through:
    • Planning better for the replacement of legacy systems and ensuring these plans are appropriately funded
    • Recognising that the move to the cloud will not solve all the challenges of legacy
    • Addressing data issues in a planned and incremental way, to reduce the need for costly manual exercises.
    4. Use the right mix of capacity by making sure you:
    • Are clear about what skills government wants to develop and retain, and what skills are more efficient to contract out
    • Better align political announcements, policy design and programme teams’ ability to deliver, through closer working between policy, operational and technical colleagues.
    5. Consider the choice of delivery method by:
    • Recognising that agile methods are not appropriate for all programmes and teams
    • Ensuring, when using agile methods, that strong governance, effective coordination of activities and robust progress reporting are in place.
    6. Develop effective funding mechanisms by:
    • Ensuring that requirements for both capital and resource funding are understood and can be provided for.
    • Seeing technology as part of a service that involves people, processes and systems in order to better consider the economic case for investment.

    We recognise that addressing the challenges around digital business change programmes is difficult but using these six lessons will support practical improvements. If you want to find out more, our report The challenges in implementing digital change looks into why large scale government programmes repeatedly run into difficulties.


    About the author: 

    Yvonne Gallagher


    Yvonne is our digital transformation expert, focused on assessing the value for money of the implementation of digital change programmes. Yvonne has over 25 years’ experience in IT, business change, digital services and cyber and information assurance, including as CIO in two government departments and senior roles in private sector organisations, including the Prudential and Network Rail.



  • Supporting better value in public procurement

  • Posted on July 20, 2021 by

    The NAO’s good practice guide for managing the commercial lifecycle

    The government’s response to the COVID-19 global pandemic has drawn renewed, and possibly unprecedented, attention to public procurement and commercial practice in government.

    Our aim at the NAO is to provide an independent and evidence-based perspective on how public authorities can achieve better outcomes and value for money throughout their commercial activities.

    Through our value for money studies programme we have reported on procurement during the pandemic, the supply of PPE, availability of ventilators and preparations for COVID-19 vaccines. We have reported twice on Test and Trace – in an interim report and a progress update – as well as on the procurement of the free school meals voucher scheme. We have also kept track of the estimated cost of government measures in response to the pandemic, via the COVID-19 cost tracker which currently registers £372 billion.

    Given that over £200 billion of UK taxpayers’ money is spent every year on the purchase of goods and services (and this excludes capital expenditure), it is right that attention is focused consistently on commercial activity. The diagram below shows the distribution of this expenditure by department in one financial year.

    Annual expenditure on goods and services by government department (£ billion):

    Figures are for 2018-2019

    Enhancing our good practice guidance
    Over the last couple of months we have engaged with government departments, local authorities, professional bodies, commercial experts and our colleagues to consider how to improve the NAO’s existing commercial guidance and we have listened carefully to helpful advice and suggestions. We have also reviewed the revised guidance issued by the government commercial function and discussed the government’s plans for amendments to domestic legislation.

    Our conclusions and recommendations to government that support improved performance in procurement have featured in some 209 reports examining 350 contractual arrangements published over the last 20 years. Alongside our good practice guide we have also published a complete list of past reports, noting that many of the findings are applicable to the wider public sector.

    Our good practice guide draws on all these valuable sources of insight and practice, and is intended to help public bodies prepare proactively for future procurements. It will help practitioners understand and effectively manage all elements of the commercial lifecycle, and will support senior leaders with governance responsibilities in asking the right questions about commercial activity in their organisations.

    We have presented it in 10 sections as illustrated in the diagram below.

    • Four strategic sections highlight the elements that support good commercial outcomes: commercial strategy, capability, accountability & governance, transparency & data.
    • Six procedural stages address the end-to-end commercial lifecycle, starting with the identification of a requirement; choice of sourcing approach; market monitoring; process and agreement; contract management; and review, transition and exit.

    Each of the ten sections contains a description, lists our expectations of good practice, supported by our extensive evidence base, and highlights what needs to improve. In addition, the guide provides a useful way to find a wide range of relevant guidance issued by the government commercial function and professional bodies.

    Good practice guide for managing the commercial lifecycle

    Our good practice guide also contains 20 case studies summarising the NAO’s findings from published reports. In the ‘commercial capability’ section we refer to our report on Managing the HMRC estate and outline how government negotiated improved cost transparency with its supplier. In this and other case studies in the guide we aim to bring our expectation of good practice to life.

    This guidance is essential reading for policy and commercial staff involved at all levels of public procurement and commercial activities, including senior leaders and non-executive board members of public authorities. It complements our continuing programme of reviews to scrutinise public procurement and commercial activity across government.

    In the coming months we will publish a series of blogs focussing on individual stages of the commercial lifecycle, and drawing out our findings and expectations for good practice. We hope to engage with as many of you as possible as we discuss this renewed interest in procurement and collaborate with colleagues across government and other organisations to embed good practice in procurement that will drive good outcomes for taxpayers and the public as a whole.

    The good practice guide for managing the commercial lifecycle is published on the NAO website and can be accessed via this link.


    About the authors: 

    Matthew Rees FCA, CPFA, FCSI: leads the NAO’s Commercial specialism. He also represents public sector issues as a co-opted member of the ICAEW Council. His commercial experience spans Big-Four audit and valuations, global investment banking and non-executive director and audit and risk committee chair in an energy sector consultancy. Matthew’s public sector experience includes merger and market investigations of a wide range of sectors at the CMA, economic regulation in the telecoms, water and the aerospace and defence sectors and value for money studies in relation to the UK government’s corporate finance activities. 

    Lilian Ndianefo is a qualified accountant with many years of experience working across disciplines at the NAO, including audits of public sector companies and government departments, and supporting the C&AG’s responsibilities for setting the Local Audit Code and developing Auditor Guidance for local government and NHS bodies. Prior to the NAO, Lilian worked in public practice, with years of experience of managing a large portfolio of corporate multinationals with the Big Four.

    Iain Forrester is a qualified accountant with long experience of working on the NAO’s commercial and contracting related work. This has included cross-government work on grants, shared services, EU Exit, and the government’s response to COVID-19. He also worked on the commercial and contract management insights guide published in 2016. 

     

     


  • Let’s get down to business

  • Posted on July 7, 2021 by


    At the National Audit Office, we come across many business cases when looking at government programmes. A strong business case is vital for effective decision making and for successfully delivering intended outcomes.

    The foundation of a business case is a clear understanding of what success will look like for a programme – the strategic case. But when it’s not clear what a programme is trying to achieve, it’s hard for decision makers to know if this programme is the right thing to do, or to plan and focus resources. It creates the risk that different stakeholders have different expectations about what will be achieved. It makes it harder to spot where other programmes may contribute to similar goals or where there may be adverse impacts. And for the public, parliament and us as auditors, it makes it hard to understand if the programme has delivered good value for money.

    Promoting the strategic case

    The November 2020 update to HM Treasury’s Green Book (its guidance on how to appraise and evaluate spending proposals) introduces a stronger requirement to establish clear objectives up front. Proposals should be assessed primarily on how well they deliver policy objectives, and cannot be considered value for money without delivering these.

    But for proposals to be assessed this way, the strategic case needs to be robust. Therefore, when auditing major programmes, we ask the seemingly simple question – is it clear what objective the programme is intended to achieve?

    Our recent learning from COVID-19 re-emphasised the importance of government being clear and transparent about what it is trying to achieve, so that it can assess if it is making a difference. For example, HM Revenue & Customs agreed clear principles for its employment support schemes. Although the Bounce Back Loans Scheme achieved its initial objective of quickly supporting small businesses, a lack of more detailed scheme-specific objectives will make it difficult to measure its ultimate success.

    The government’s commitment to ‘levelling up’, and uncertainty over what this means, may make it difficult for programmes to set out what they will achieve; they will need clarity to produce a business case. It could be interpreted as giving everyone access to the same opportunities, or at least to the same minimum standards – say of health outcomes or broadband access. This prioritises spreading prosperity to deprived areas. However, it could also be framed as addressing gaps in potential by, for example, investing where an area should be showing higher productivity. This prioritises value for money investments. As these different goals require different policy solutions, it can be challenging to set out how an intervention will achieve ‘levelling up’. Later this year, government will publish a levelling up White Paper setting out how new policies will improve opportunity and livelihoods across the UK.

    Whilst defending the economic case…

    A strong strategic case alone does not mean an intervention is justified. There might be other ways to meet an objective which could be better value for money. We often see business cases that seem to justify a pre-selected solution, rather than exploring a range of options for meeting the objectives – what the Green Book calls ‘the long list’.

    Our report on Hinkley Point C found that alternative ways of the government providing support for the planned nuclear power station could have resulted in lower costs to consumers over the life of the project, but weren’t considered. We have also seen departments not considering different options when thinking about how to deliver policy – nine out of the 24 business cases we reviewed as part of our report on arm’s length bodies did not consider a long-list of options.

    The economic case is important in setting out value for money, often through formal modelling, the results of which will need to be considered alongside the strategic case.  Our early work on High Speed 2 found that the relationship between savings (with the Department for Transport putting a high emphasis on journey time savings) and the strategic reasons for doing the programme, such as rebalancing regional economies, was unclear.

    So, what do we expect from strategic cases?

    Throughout a programme, the strategic case needs to help ensure effective decision-making. As well as specifying what should be achieved (with a clear, logical set of assumptions) it needs to:

    • Be easily understandable so effective trade-offs can be made. Our lessons from major programmes describes how objectives need to be clear enough to be translated into a programme scope (what will be required and when). For example, government has been considering which objectives to prioritise for the roll-out of gigabit-capable broadband. In our report, we found that prioritising the speed of programme delivery over other objectives posed a risk to value for money. 
    • Help prioritise cross-government objectives. We see cases where objectives are neither coherent when taken together, nor clearly prioritised when tensions emerge between them. In November 2020 we considered progress in achieving government’s long-term environmental goals. The government set out 10 overarching goals but did not provide a clear and coherent set of objectives, with, for example, varying and often unclear timescales. 
    • Be measurable (where possible). The strategic case will capture those assumptions that cannot be equated to a monetary equivalent. And, the easier assumptions are to quantify, the easier it will be to assess progress. Our early High Speed 2 review found the strategic case should have been more developed. For example, it included limited evidence on forecast passenger demand which provided a weak foundation for securing and demonstrating success. The Department was working to strengthen its analysis. Also, our Hinkley Point C report found the Department put more weight on the wider, unquantified strategic case when the economic case weakened but had little control over these benefits and at the time of our report no plan to realise them. 

    Government plans to invest heavily in programmes, with £100 billion expected investment in 2021-22 alone. For government to secure best value from this it must set out clearly and logically what it wants, how to best deliver this and how it will show what has been achieved for the investment. 


    Authors: Ruth Kelly and Emma Willson

    Emma Willson


    Emma Willson leads our Major Projects hub. She has worked at the NAO for almost 20 years, auditing a wide range of government programmes, from welfare reform to large-scale defence equipment projects. She is a qualified chartered accountant and holds an International Association for Contract and Commercial Management (IACCM) qualification.

    Ruth Kelly


    Ruth Kelly is our Chief Analyst and has wide experience of applying economics and other analytical approaches to support policy evaluation, investment decisions and risk management. Prior to joining the NAO, she held business evaluation and risk management roles for a global resources company, and advised clients on carbon and energy issues for a Big 4 economic consultancy practice.


  • Using process mining to support the NAO’s investigation into the Windrush compensation scheme

  • Posted on June 10, 2021 by

    At the NAO, our central analysis team supports value for money studies by applying specialist analysis techniques to government data in order to generate new insights. For our Investigation into the Windrush Compensation Scheme, we used process mining to help the study team understand the Home Office’s operation of the scheme. 
     
    Process mining is an exciting technique that allows you to gain a detailed understanding of a process purely by analysing data that users of the process generate, which can be found in automated event logs. It allows us to understand the flow of cases through a system, including the time taken for different activities, how resources are used and where bottlenecks occur, and as such can be vital in assessing the performance of the overall process.  

    To use process mining, the minimum data requirement is an event log containing Identification Codes (IDs) that are unique to each case, consistent labels relating to actions made on a case, and a timestamp for each action. For our work on the Windrush Compensation Scheme, we analysed logs generated by the Home Office’s case management system for dealing with compensation claims using the open source programming language R, specifically the bupaR collection of packages. 
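    To make this concrete, here is a minimal sketch of how an event log with those three fields can be wrapped up for analysis with bupaR. The column names, stage labels, case IDs and dates below are invented for illustration; they are not the Home Office’s actual case management fields.

```r
# Illustrative sketch only: the stages, case IDs and timestamps are invented.
library(bupaR)

events <- data.frame(
  case_id   = c("C1", "C1", "C1", "C2", "C2"),
  stage     = c("Registration", "Eligibility", "Casework",
                "Registration", "Eligibility"),
  timestamp = as.POSIXct(c("2020-04-01 09:00", "2020-04-03 14:00",
                           "2020-04-20 11:00", "2020-05-02 10:00",
                           "2020-05-06 16:30"))
)

# simple_eventlog() only needs the three minimum fields: a case ID,
# an activity (stage) label and a timestamp for each action
log <- simple_eventlog(
  events,
  case_id     = "case_id",
  activity_id = "stage",
  timestamp   = "timestamp"
)

summary(log)  # number of cases, activities, traces, time range, etc.
```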
     
    Our first step using process mining was to create a process map. This output provided a powerful visual representation of the different stages of the case management system and the way in which cases move between stages – beginning with registration and ending ultimately in either claim payment or rejection. 

    A simplified version of the process map was included in the report and is repeated below. The different nodes shown here represent the different stages in the process, while the numbers show how many times a compensation scheme case has entered or re-entered a stage (boxes) or how many times cases have moved between stages (connecting lines).

    Process map showing how cases registered after 13 March 2020 moved through the Home Office’s Windrush Compensation Scheme system – data as at 31 March 2021. Refer to page 39, figure 15, for the report figure (see also footnote 1).
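    As a rough illustration of how such a map can be produced, the sketch below continues the invented event log above. The processmapR package (part of the bupaR family) draws the map directly from the log, with node and edge counts like those described.

```r
# Continues the invented event log from the sketch above
library(processmapR)

# Frequency map: nodes show how often each stage was entered or re-entered,
# edges show how often cases moved between stages
process_map(log)

# The same map can instead be annotated with performance information,
# e.g. the median time taken to move between stages
process_map(log, performance(median, "days"))
```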
     

    Next, we used process mining to create an animated version of the process map. This showed all individual compensation scheme cases dynamically moving through the system between stages. It proved very effective in illuminating the rate at which cases progress through the system and the stages where that progress is slower. A short illustrative GIF of this output is shown below (see also footnote 2).

    Animated process map showing how cases registered after 13 March 2020 moved through the Home Office’s Windrush Compensation Scheme system over time – data as at 31 March 2021. Circles represent the flow of individual cases. The animated process map is based on the process map used in the report (refer to page 39, figure 15, and the footnotes below). The animated map is truncated for illustrative purposes.
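    The animation itself can be generated in much the same way; a minimal sketch, again using the invented log above, is:

```r
# Continues the invented event log from the sketches above
library(processanimateR)

# Replays the log over the process map: each moving token is a case
animate_process(log)
```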

    Visualising the Windrush Compensation Scheme in these ways enabled the study team to better understand its operation and to draw out insights from the client’s case management systems. As a result, our auditors were able to define complex hypotheses and explore these with the client, using process mining analysis to support their findings.  

    For example, our analysis showed that, of cases that were subject to a quality assurance check, half needed to return to a caseworker, indicating a significant level of rework. We were able to use process mining to combine this with observed data and quantify the exact rate at which this occurred. In another example, our study team were able to use process mining to compare the Department’s initial estimates for the average number of hours to do everything required on a case, end to end, with the actual number of hours recorded in the Department’s data for cases up to 31 March 2021.  
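    A rough sketch of how such a rework rate could be computed from an event log is shown below. The stage names (‘QA check’, ‘Casework’) are invented for illustration and assume an events data frame like the one in the earlier sketch; the actual stage labels in the Home Office data differ.

```r
# Illustrative only: assumes an `events` data frame as in the earlier sketch,
# with invented stage names including "QA check" and "Casework"
library(dplyr)

rework_rate <- events %>%
  arrange(case_id, timestamp) %>%
  group_by(case_id) %>%
  summarise(
    qa_checked = any(stage == "QA check"),
    # rework = the case returned to casework after its first QA check
    reworked = qa_checked &&
      any(stage == "Casework" &
          timestamp > min(timestamp[stage == "QA check"]))
  ) %>%
  filter(qa_checked) %>%
  summarise(share_reworked = mean(reworked))

rework_rate
```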

    Using advanced analytics – in this case process mining – our study team enhanced their understanding of the Windrush Compensation Scheme and strengthened the report’s findings.

    You can read more about the Investigation here: Windrush Compensation Scheme, and if you are interested in knowing more about Analysis at the NAO, visit our Analysis page on the main NAO site.

    Authors: Mohit Motwani and Ben Coleman

    Ben and Mohit work in the NAO’s analysis hub, helping support value for money studies by providing complex data analysis techniques to study teams. They both undertook process mining work in relation to the NAO’s Investigation into the Windrush Compensation Scheme.


    Footnotes

    Note 1

    1. Data relate to 1,033 cases for which a registration stage was created after 13 March 2020 and show the observed movement of these cases through the system until 31 March 2021.
    2. The numbers shown are a count of instances of cases reaching a stage or moving between stages, rather than a count of the number of unique cases.
    3. Some intermediate stages such as offer and payment approvals and payment preparation have been omitted for clarity.
    4. Some movements between stages have also been omitted for clarity, including:
      1. Cases moving back following successful applicant appeal;
      2. Cases moving back to registration or eligibility following casework;
      3. Cases moving back to casework following payment offer.

      Source: National Audit Office’s analysis of Home Office applications data

    Note 2

    The full version of this covered the period March 2020 to March 2021, which was the period for which we had access to full case management records in the event log. 


  • How to audit artificial intelligence models

  • Posted on June 1, 2021 by

    In our increasingly digital and automated world, certain buzzwords are taking centre stage in the public sector. One of them is “artificial intelligence”. While the concept, and development, of artificial intelligence is not new (it was first recognised as a formal discipline in the mid-1950s), it is a term that has been thrown around more casually – and sometimes carelessly – in the public sector in recent years.

    Traditional algorithms vs machine learning models 

    These days, data scientists normally associate artificial intelligence with systems that are based on machine learning models. Machine learning models deploy methods that develop rules from input data to achieve a given goal.1 This differs from what you might call traditional algorithms. Traditional algorithms don’t need data to learn; they simply churn out results based on the rules built into them.
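    A toy sketch of the distinction, using an invented eligibility scenario and invented data, might look like this:

```r
# Toy illustration only: the scenario, threshold and data are invented.

# A traditional algorithm: the decision rule is written down by a person
# and stays the same regardless of how much data it is shown
eligible_rule <- function(income) {
  income < 20000  # fixed, transparent rule
}

# A machine learning model: the decision rule is learned from example data
set.seed(1)
history <- data.frame(income = runif(200, 5000, 60000))
history$eligible <- rbinom(200, 1, plogis(3 - history$income / 10000))

model <- glm(eligible ~ income, data = history, family = binomial)

# The learned rule lives in the fitted coefficients rather than in
# hand-written code, so it has to be interrogated, not just read
eligible_rule(15000)
predict(model, data.frame(income = 15000), type = "response")
```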

    Traditional algorithms have been used in the public sector for some time to make decisions. The latest example to make the headlines was the model that determined A level exam results last summer. From an auditing perspective, as the basis of these algorithms is usually transparent, auditing them is something we as a public audit institution are used to.2

    But artificial intelligence that is based on machine learning is different –  it has only been (cautiously) employed in the public sector in recent years. 

    It is different because, firstly, for a machine learning model to learn it needs good-quality data – and often a lot of it. Our report on the challenges of using data across government has shown that this condition is not always met.

    Secondly, machine learning models can be quite costly to develop and deploy. Moreover, the benefits are not always guaranteed or immediately realisable. In a public sector context with tight budgets, the willingness to put money behind them may not always be there.

    The reason for this is related to a third point. It is not always certain from the outside what the machine will learn and therefore what decision-making rules it will generate. This makes it hard to state the immediate benefits. Much of the progress in machine learning has been in models that learn decision-making rules that are difficult to understand or interrogate.  

    Lastly, many decisions affecting people’s lives that artificial intelligence models would support pertain to personal circumstances and involve personal data, such as health, benefit or tax data. Whilst the personal data protection landscape has strengthened in recent years, the organisational and regulatory structures and relevant accountabilities for the use of personal data in machine learning models are not always in place.3 Public sector organisations are therefore at risk of inadvertently falling foul of developing data protection standards and expectations.

    How to audit public sector machine learning models 

    Given all these challenges, it may not be surprising that in our public audit work we are not coming across many examples of machine learning models being used in decision-making. But there are examples4 and we foresee that their number may grow in the future.

    We have therefore teamed up with other public audit organisations in Norway, the Netherlands, Finland and Germany, and produced a white paper and audit catalogue on how to audit machine learning models. You can find it here: Auditing machine learning algorithms (auditingalgorithms.net)

    As the paper outlines in more detail, we identified the following key problem areas and risk factors: 

    • Developers of machine learning models often focus on optimising specific numeric performance metrics. This can lead them to neglect other requirements, most importantly around compliance, transparency and fairness. 
    • The developers of the machine learning models are almost always not the same people who own the model within the decision-making process. But the ‘product owners’ may not communicate their requirements to the developers – which can lead to machine learning models that increase costs and make routine tasks more, rather than less, time-consuming. 
    • Often public sector organisations lack the resources and/or competence to develop machine learning applications internally and therefore rely on external commercial support. As a result they may take on a model without understanding how to maintain it and how to ensure it is compliant with relevant regulations. 

    We also highlighted what auditors need in order to audit artificial intelligence applications meaningfully: 

    • They need a good understanding of the high-level principles of machine learning models 
    • They need to understand common coding languages and model implementations, and be able to use appropriate software tools 
    • Due to the high demand on computing power, machine learning supporting IT infrastructure usually includes cloud-based solutions. Auditors therefore also need a basic understanding of cloud services to properly perform their audit work. 

    Our audit catalogue sets out a series of questions that we suggest auditors should use when auditing machine learning models. We believe it will also be of interest to the public sector bodies we audit that employ machine learning models. It will help them understand what to focus on when developing or running machine learning models. As a minimum, it gives fair warning what we as auditors will be looking for when we are coming to audit your models! 

    Footnotes

    1 In fact there are two main classes of machine learning models. Supervised machine learning models attempt to learn from known data to make predictions; unsupervised machine learning models try to find patterns within datasets in order to group or cluster them. 

    2 See for example our Framework to review models – National Audit Office (NAO) report to understand more about what we look out for when auditing traditional models and algorithms. We currently have some work in progress that aims to take stock of current practices and identify the systemic issues in government modelling which can lead to value for money risks. 

    3 In the UK the Information Commissioner’s Office has published guidance on the use of personal data in artificial intelligence: Guidance on AI and data protection | ICO 

    4 For some UK examples see: https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector 

    About the author: 

    Daniel Lambauer joined the NAO in 2009 as a performance measurement expert and helped to establish our local government value for money (performance audit) team. He is the Executive Director with responsibility for Strategy and Resources. As part of his portfolio, he oversees our international work at executive and Board level and has represented the NAO internationally at a range of international congresses. He is also the NAO’s Chief Information Officer and Senior Information Responsible Owner (SIRO). Before joining the NAO, Daniel worked in a range of sectors in several countries, including academia, management consultancy and the civil service.


  • When regulation affects everything from education to transport, how do we make regulation effective?

  • Posted on May 25, 2021 by

    Regulation impacts all our lives in many ways. Where we live and work, how we travel and communicate, the food we eat, the gadgets we buy, the banks we use, the water and energy we run our homes with. All of these places, goods and services, and more, are regulated. 

    We don’t generally notice the business of regulation happening in our daily lives. But when regulation fails, it can have serious consequences for our finances or safety, the economy as a whole, or the environment. Some high-profile disasters in recent years are often described as regulatory failures: the explosion at the Buncefield oil terminal in 2005, the financial and banking crisis in 2008, the Deepwater Horizon pollution disaster in 2010, the Grenfell fire in 2017, and the list goes on.  

    As well as minimising the risk of failures like these, good regulation can be used to achieve a range of different aims and opportunities. It can support innovation, make workplaces safer, or help to keep essential services affordable. In a modern, mixed economy, like the UK, regulation is used in most areas of public policy, from education, healthcare and charities to transport, food, communications, utilities and employment.  

    What does good regulation involve? 

    It’s one thing to say what regulation is supposed to achieve, and quite another to make it work in practice. Being in the NAO’s Regulation team, we’re quite often asked: “what does good regulation actually look like?”. This question comes from regulators and policymakers, but it also comes from other people and organisations interested in making regulation work well, such as charities and trade bodies. And it’s not a simple question to answer, as much as we’d like it to be, because regulation can take different forms and exist for different purposes. Regulatory interventions vary, and most regulators and government departments will use a variety of approaches. 

    At one end of the spectrum are essentially unregulated, free markets where primarily the courts are the arbiter of any disputes. At the other end are areas with particular risks, such as the nuclear and pharmaceutical sectors, that have more prescriptive, rules-based systems of regulation. Between these lies a rich landscape of more principles-based approaches, varying from providing guidance and reputational incentives (for example, performance league tables), through various forms of self-regulation and codes of practice, to licensing regimes and the regulation of prices.  

    Despite this variety our work on regulation in the past decade has identified common themes and challenges that come up time and time again, such as the use of data to identify problems, how regulation is funded, or how regulators know whether they’re actually doing a good job or not. Based on this experience, we’ve published a good practice guide setting out broad principles of effective regulation, illustrated by case studies or further guidance for each principle. Our aim is to support policymakers, regulators and other stakeholders to design and implement regulation in a way that is effective at achieving what it is supposed to, whether this is protecting people and businesses, supporting economic growth, adapting to changes from EU Exit and technological developments, or safeguarding the environment and pursuing the priorities and challenges of the government’s net zero agenda.  

    The learning cycle 

    At the heart of our guide is a ‘learning cycle’ for assessing how well regulators and policymakers are applying the principles. Regulation is rarely a single programme of work with a simple beginning, middle and end, but tends to be an ongoing process of designing a regulatory framework, analysing what is needed, intervening, and then learning from experience in order to do things better in the future. If any one of these elements is overlooked there’s a risk that it can undermine the purpose and effectiveness of the regulatory framework.  

    When creating or making changes to a regulatory system, all aspects should be considered upfront – for example, if you don’t plan how you will measure the impact of changes, you probably won’t be collecting the data you’ll need later on. But each of the four stages has its own focus: 

    • The Design stage principles – such as objectives, powers, funding and accountability – are the most crucial to get right from the start. They help translate the policy intent into the design of a regulatory regime, and can be costly or disruptive to change later if they require new legislation.  
    • The Analyse stage is about identifying areas that present risks or opportunities, engaging with stakeholders to understand needs and priorities, and establishing what capacity is needed to respond appropriately. For example, the way data and intelligence are analysed is essential in assessing risks, identifying problems and targeting activities and resources. 
    • The Intervene stage principles are intended to help regulators intervene effectively by understanding what impact their actions might have, prioritising issues, and considering how best to respond in a proportionate, consistent and timely way.  
    • Finally, the Learn stage is about regulators, policymakers and others working collaboratively to measure progress, understand the real-world impact of interventions, and learn from experience to maximise effectiveness in the future.  

    We’ll continue to use our work across government to share principles, lessons and good practice, and we welcome any comments you may have. 

    About the authors: 

    Rich Sullivan-Jones manages our work on regulation, consumers and competition. His recent work has included reports on gambling regulation, problem debt, vulnerable consumers, regulatory performance measurement, and public service markets in higher and further education.  

    Peter Langham

    Peter Langham is a senior member of our regulation, consumer and competition team. He leads our work on public service markets, and has extensive experience of assessing the effectiveness of regulators and the UK competition regime.



  • Achieving excellence in Public Sector reporting

  • Posted on May 20, 2021 by

    Good reporting in the public sector should allow the public and Parliament to easily understand an organisation’s strategy and the risks it faces, how much taxpayers’ money has been spent and on what, and what has been achieved as a result. Following the challenges of the last year, most notably COVID-19, clear and transparent reporting is hugely important.

    Transparency and accountability are central to strong financial and risk management in government, and both are supported by clear and understandable reporting. With that in mind, we’re delighted to share a recent National Audit Office report which cuts to the heart of this: the Good Practice in Annual Reporting Guide. Our guide sets out good-practice principles around a number of key areas to help public sector organisations to compile their Annual Reports. Those principles are:

    • Accountability  
    • Transparency 
    • Accessibility 
    • Understandability  

    Building on these principles, our guide provides some excellent examples from public sector organisations that we think are leading the way. Below we have picked out a few key takeaways for organisations to consider as part of their preparations for 2020-21 Annual Reports.  

    Risk and Governance: There should be an increased focus on the risks and challenges of recent events and how these are managed, including: 

    • Frank and honest analysis of how COVID-19 (and other risks) have impacted operations and how taxpayers’ money has been spent and managed. 
    • Clear depiction of the governance and risk management framework to demonstrate an organisation’s processes to identify, monitor and mitigate risk. 
    • Transparent reporting of complex technical judgements and decisions. We anticipate, given the challenges brought about by the pandemic, spending reviews and EU Exit, that organisations may enter complex transactions or arrangements. These transactions should be disclosed transparently and in a way that is understandable to the users. 

    Strategy and Operations: There should be a clear articulation of purpose and objectives, and how an organisation’s operations support their objectives. In particular:  

    • Clarity around an organisation’s purpose, strategic objectives and values and how these feed into the performance of the organisation and any related risks, with reference to its external environment. 
    • Celebration of Diversity and Inclusion within annual reports. Employee costs make up most of central government expenditure, and people are undoubtedly an organisation’s most precious asset. Organisations should consider what their employee data says about them and whether reporting could be improved in this area. 

    Measures of Success and Financial Performance: There should be a balanced assessment of goals achieved and performance against targets, and financial performance should be understandable and consistent with the underlying financial statements. For instance:  

    • Better trend reporting. Trend analysis over time is a strong indicator of performance and achievements and a good way for the reader to hold organisations to account. Organisations should consider what trend data is being published and what story they are trying to tell.  
    • Better, more accessible information on non-financial metrics affecting organisations, such as sustainability reporting. Organisations should seek to portray non-financial data in simple terms to help tell a story and show clearly how it is linked to their operations.  

    Lastly, the NAO co-sponsors the Public Sector Reporting category of the annual Building Public Trust Awards with PwC, to give credit to organisations that demonstrate excellence in government financial reporting. If you believe your organisation’s 2020-21 annual report and accounts is an example of excellent reporting, you can nominate it for the Building Public Trust Awards – Public Sector Reporting Award by emailing Building.Public.Trust@nao.org.uk by 30 June 2021.

    Authors: Chris Coyne, Rachel Nugent, Catriona Sheil and Courtnay Ip Tat Kuen.

    Chris Coyne

    Chris manages our work on financial and risk management. He has been with the NAO since he joined as a graduate trainee in 2008, and has significant experience managing financial audits across a variety of government organisations.   

    Follow Chris on LinkedIn 

    This article was first published on OneFinance (login required) 



  • Better data means better services – so how can government get there?

  • Posted on April 29, 2021 by

    The shielding programme was a swift, government-wide response to identify and protect clinically extremely vulnerable (CEV) people against COVID-19.

    Our recent report on Protecting and supporting the clinically extremely vulnerable during lockdown, shows how government quickly recognised the need to provide food, medicines and basic care to those CEV people shielding. This had to be pulled together rapidly as there were no detailed contingency plans.

    But there was a problem.  In order to do this, government was faced with the urgent task of identifying the people who needed support based on existing, disparate data sources.

    Difficulties in extracting and combining data

    The urgency of this exercise was recognised by all involved, but difficulties in extracting, matching and validating data from across many different systems meant that it took time for people to be identified as CEV.

    At the start of the pandemic, there was no mechanism to allow a fast ‘sweep’ across all patients to identify, in real time, those who fell within a defined clinical category.

    It was a major challenge to identify and communicate with 1.3 million people by extracting usable data from a myriad of different NHS and GP IT systems all holding data differently.

    This lack of joined-up data systems meant NHS Digital had to undertake the task of accessing and extracting GP patient data, stored in different ways in each practice and holding specific details about people’s medical conditions to merge with their own databases. It took a huge effort by the team to complete this task in three weeks.

    Data issues were not resolved by the time of the second lockdown

    Following the first iteration of shielding (March – August 2020), government had identified that systems were not capable of ‘speaking’ to each other across hospital, primary care, specialist and adult social care services, and sought to apply these lessons to the second lockdown towards the end of 2020. However, our report highlighted that resolving the data issues was not an area where significant progress had been, or could be, made.

    This reflects the wider issues of data across government

    These challenges are examples of broader issues that we have previously highlighted in our report on Challenges in using data across government. People often talk about better use of data as if this is a simple undertaking. But there are significant blockers and constraints that require sustained effort to overcome, which apply to all areas of government trying to use and share data other than for the single purpose it was originally created for.

    The basic issues are widely known and acknowledged:

    • Huge variability in the quality and format of data across government organisations
    • Lack of standardisation within departmental families and across organisational boundaries making it difficult for systems to interoperate
    • The extent of legacy IT systems across government further compounding the difficulties
    • Ownership and accountability aren’t easily agreed where a shared dataset of personal data is brought together and has equal value to different services.

    It’s unclear to us how calls to establish and enforce data standards are going to work in practice if existing systems can’t be modified to support them and there is no firm timetable, road map or funding commitment for replacing them.

    In our report Digital transformation in the NHS, we reported that 22% of trusts did not consider that their digital records were reliable, based on a self-assessment undertaken in 2017. The average replacement cycle for a patient records system is something in the region of once every 15 years, so this change isn’t going to happen overnight.

    Our aim is to support government in tackling these issues, and not to be critical of past failings, because we recognise that it is hard. We set out a number of recommendations in our data report and they are summarised in our accompanying data blog.

    Some are aimed at the centre of government and others are steps that individual organisations can take. Our cross-government recommendations were primarily around accountabilities, governance, funding and developing rules and common ways of doing things.

    Our recommendations for individual organisations are:

    • Put in place governance for data, including improving the executive team’s understanding of the issues associated with the underlying data and the benefits of improving that data
    • Set out data requirements in business cases. This should include an assessment of the current state of the data, and the improvements or new data that are necessary. These assessments should have an explicit consideration of ethics and safe use
    • Implement guidance for front-line staff for handling data, including standardisation, data ethics and quality.

    Organisations that need a cohesive view of their citizen/patient data must address this issue in a managed and incremental way, rather than having to resort to costly one-off exercises which have to be repeated when the next need arises. This will require sustained effort and perseverance.

    Unfortunately, there are no easy shortcuts, but with a will to put in the necessary effort progress can be made one step at a time.


    Yvonne Gallagher


    Yvonne is our digital transformation expert, focused on assessing the value for money of the implementation of digital change programmes. Yvonne has over 25 years’ experience in IT, business change, digital services and cyber and information assurance, including as CIO in two government departments and senior roles in private sector organisations, including the Prudential and Network Rail.




  • About the NAO blog

    Our experts share their views about issues and common challenges facing government, what public sector leaders should look out for and how organisations have addressed issues. Our posts draw together threads from across our reports, share secrets spilled in events and reveal our experts’ expectations for the future.

    We encourage comments that support the exchange of ideas for improvement, but ask that those posting are respectful of others.
