March 28, 2013
The NAO has published ‘Digital Britain 2: Putting users at the heart of government’s digital services’. This report is about the government’s strategy for moving public services to ‘digital by default’, published in November 2012. To give the Committee of Public Accounts assurance about the digital strategy, and about whether the government’s approach to assisting those who are offline to use digital services is based on sound assumptions about the preferences, capabilities and needs of users in England, we commissioned independent research from Ipsos MORI. This included a face-to-face survey of over 3,000 people. The views expressed in Digital Britain 2 are those of the National Audit Office.
The data from this survey underpin the research carried out in this report. We are making more detailed breakdowns of the face-to-face survey data publicly available on our website, in a series of Excel tables.
The tables do not contain any personal information and no individuals can be identified from these results.
For each analysis, both the un-weighted and weighted bases are given. The tables also show the (weighted) numbers and percentages of respondents answering in a particular response category. The weighting ensures that the sample profile matches that for the English population aged 15 and older.
We welcome your feedback on this data set or any other data you think is relevant to our work in this area. Enquiries or suggestions can be sent to the ICT and Systems Analysis Team.
Frequently Asked Questions
How was the data collected?
To support our review, we appointed Ipsos MORI to conduct a face-to-face survey of a representative sample of English adults aged 15 and over (in October and November 2012), resulting in 3,072 respondents. The survey used quota sampling, so that fixed numbers were recruited from different groups, such as age bands and genders. Sample data were also weighted in our analysis to ensure that these and other characteristics, such as region and social grade, matched the larger National Readership Survey sample (around 36,000 interviews a year).
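The idea behind weighting a quota sample can be sketched as follows. This is a minimal cell-weighting illustration with made-up figures, not the actual procedure or profile Ipsos MORI used (which matched several characteristics at once against the National Readership Survey):

```python
from collections import Counter

def post_stratification_weights(sample_groups, target_shares):
    """Weight = target population share / achieved sample share, per group.

    sample_groups: one group label per respondent (e.g. an age band).
    target_shares: dict mapping group label -> population share (sums to 1).
    """
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return [target_shares[g] / (counts[g] / n) for g in sample_groups]

# Made-up sample: men over-represented (60%) against a 50/50 target.
sample = ["M"] * 6 + ["F"] * 4
weights = post_stratification_weights(sample, {"M": 0.5, "F": 0.5})
# Men are down-weighted (5/6), women up-weighted (1.25);
# the weights sum back to the sample size.
```

Each respondent from an over-represented group counts for slightly less than one person in the analysis, and each respondent from an under-represented group for slightly more, so the weighted profile matches the population profile.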
We selected 20 online public services as prompts when asking people about their use, and awareness, of online public services. The 20 services comprised the following:
Education and learning:
• Applied for a student loan
• Applied for a school place
Benefits:
• Applied for Jobseeker’s Allowance
• Applied for Disability Living Allowance
Housing and local services:
• Applied for planning permission
• Applied for housing benefit
Money and tax:
• Filed a tax return (self-assessment)
• Applied or paid for a TV licence
Crime, justice and the law:
• Paid a court fine
Driving, transport and travel:
• Applied for, renewed, or updated a driving licence
• Booked a practical driving test
• Booked a theory driving test
• Applied for a tax disc
• Applied for, renewed, or updated a passport
Working, jobs and pensions:
• Searched for a job through a government service
• Claimed a state pension
Births, deaths, marriages, parenting and care:
• Ordered a copy of a birth, death or marriage certificate
Business and self-employed:
• Filed company accounts and tax returns
• Paid PAYE tax
• Applied for a Criminal Records Bureau check
What is the difference between the weighted and un-weighted bases?
The un-weighted base is the actual number of respondents included in each analysis (i.e. those who answered the question being analysed and, if applicable, belong to the relevant sub-group).
In common with many surveys, the sample data have been weighted in analysis in order to be representative of the English population on key characteristics such as age, sex and region.
What does each table contain?
Each table shows the profile of answers to a particular question, broken down for different sub-groups. The first two rows (un-weighted and weighted bases) indicate the number of respondents included in each analysis. The cells in the main body of the tables show the numbers and percentages of respondents answering in a particular response category – these have been weighted to ensure the sample profile is representative of the English adult population as a whole.
For example, in Table S2, Column C indicates that 56% of the whole sample used the internet several times a day, 16% around once a day, 3% 4-5 times a week, and so on. Rows 13 and 14 show that 56% of the sample overall used the internet several times a day; the corresponding figures were 61% for men and 51% for women in the sample.
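The weighted bases and percentages in each table can be reproduced from respondent-level data along these lines. This is a minimal sketch with made-up answers and weights, not the actual survey file:

```python
def weighted_percentages(answers, weights):
    """Return the un-weighted base, the weighted base, and the weighted
    percentage falling in each response category."""
    unweighted_base = len(answers)
    weighted_base = sum(weights)
    totals = {}
    for answer, w in zip(answers, weights):
        totals[answer] = totals.get(answer, 0.0) + w
    pcts = {a: 100.0 * t / weighted_base for a, t in totals.items()}
    return unweighted_base, weighted_base, pcts

# Made-up respondents: a response category plus an analysis weight each.
answers = ["Several times a day", "Several times a day",
           "Around once a day", "Never"]
weights = [1.2, 0.8, 1.0, 1.0]
base, wbase, pcts = weighted_percentages(answers, weights)
# Un-weighted base 4, weighted base 4.0; "Several times a day"
# accounts for 50% of the weighted total (1.2 + 0.8 out of 4.0).
```

The un-weighted base counts people, while every percentage is computed from the weight totals, which is why the two bases are reported separately at the top of each table.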
Statistical significance – What do the letters (a/b/c etc.) included in some of the cells mean?
When looking at results for different groups, we carried out tests of statistical significance to check that any observed differences were likely to reflect genuine differences in the population rather than chance fluctuations. The testing we used compares pairs of groups: the letters give an indication of which pairs were found to be significantly different to each other.
For example, in Table S1, row 14, 92 per cent of daily internet users owned a PC. Levels of PC ownership in this group were significantly higher than those amongst people using the internet 1-5 times a week (87%) or 1-3 times a month (77%). The proportion of PC owners in each of these three internet user groups was also significantly higher than the proportion of PC owners amongst non-internet users (20%). Note that no testing was carried out for people using the internet less than once a month due to the small number of respondents (15) in this group.
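A pairwise comparison of this kind can be sketched with a standard two-proportion z-test on independent groups. The group sizes below are illustrative assumptions, not figures from the published tables:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent sample
    proportions, using a pooled standard error; |z| > 1.96 indicates
    significance at the 5 per cent level."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative: 92% PC ownership among 1,700 daily internet users
# versus 87% among 600 weekly users (sizes assumed for the example).
z = two_proportion_z(0.92, 1700, 0.87, 600)
# |z| exceeds 1.96, so a gap of this size on these bases would be
# significant at the 5 per cent level.
```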
The tables contain the following notes: ‘Overlap formulae used’; ‘* small base’; and ‘** very small base (under 30) ineligible for sig testing’. What do these mean?
Overlap formulae used: in testing for differences between groups, many standard statistical tests assume that groups are mutually exclusive, i.e. respondents can belong to one group only. They cannot be applied when respondents could belong to more than one of the groups (e.g. owners of different devices, where an individual could own both a PC and a smartphone). The overlap formulae adjust the test statistic so that it can be applied in this situation.
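One common form of such an adjustment can be sketched as follows, assuming both groups answer the same yes/no item. This illustrates the general technique, not necessarily the exact formula applied in these tables: respondents in the overlap contribute identically to both estimates, creating a covariance that is subtracted from the variance of the difference.

```python
import math

def overlap_z(p1, n1, p2, n2, n12, p12):
    """z statistic for the difference between proportions from two
    overlapping groups answering the same yes/no item.

    n12 is the number of respondents in both groups; p12 is the
    proportion of that overlap answering 'yes'. The covariance of the
    two estimates is n12 * p12 * (1 - p12) / (n1 * n2).
    """
    variance = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
                - 2 * n12 * p12 * (1 - p12) / (n1 * n2))
    return (p1 - p2) / math.sqrt(variance)

# Illustrative figures (not from the survey): daily internet use among
# 2,000 PC owners (70%) vs 1,500 smartphone owners (85%), with 1,200
# people owning both devices, 88% of whom use the internet daily.
z = overlap_z(0.70, 2000, 0.85, 1500, 1200, 0.88)
```

Setting n12 = 0 recovers the usual independent-samples formula; the overlap term makes the standard error smaller, so genuine differences between overlapping groups are detected correctly rather than being over- or under-stated.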
* small base: the number of respondents/responses was between 30 and 100, depending on the question. Estimates based on such groups should be treated as indicative of the wider population only: 95% confidence intervals (see below) may be as wide as +/- 20 percentage points.
** very small base (under 30) ineligible for sig testing: the number of respondents/responses was less than 30. Estimates based on such groups should not be applied to the wider population, and they have not been included in the statistical significance testing used to check differences between groups.
What is the confidence interval?
As with any survey, each result we report is subject to a certain level of uncertainty. The degree of uncertainty is indicated by the 95 per cent confidence interval: broadly speaking, we are 95 per cent certain that the stated confidence interval range contains the value for the population. For percentage estimates based on an overall sample of 3,000, we would anticipate confidence intervals in the range of +/- 1–2 percentage points. Confidence intervals for subgroups will be wider.
For example, for a sub-sample of 500, the equivalent figures would be in the range +/- 2–4 percentage points. The inherent uncertainty in sample estimates must also be taken into account when comparing findings for different groups: we can use tests of statistical significance to establish how likely it is that an observed difference is simply due to chance fluctuation. In this report, where we comment on differences between groups, these are always statistically significant, that is, unlikely to be due to chance (using a t test, at the 5 per cent level).
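The half-widths quoted above follow from the textbook formula for a proportion estimated from a simple random sample. This is a sketch only: the survey’s quota sampling and weighting mean the intervals actually achieved will differ somewhat.

```python
import math

def ci_half_width(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a simple random sample of size n, in percentage
    points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): about +/- 1.8 points for the full sample of
# around 3,000, widening to roughly +/- 4.4 points for a sub-sample
# of 500.
full_sample = ci_half_width(0.5, 3000)
sub_sample = ci_half_width(0.5, 500)
```

The interval narrows as p moves away from 0.5, which is why the ranges quoted in the text (+/- 1–2 points overall, +/- 2–4 points for a sub-sample of 500) are given as ranges rather than single figures.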