Measuring Population and Housing: Practices of UNECE Countries in the 2020 Round of Censuses
This publication reviews the practices followed by member countries of the United Nations Economic Commission for Europe (UNECE) – spanning 56 countries across Europe, Central Asia and North America – in conducting their population and housing censuses of the 2020 round. The aim is to compare the approaches adopted by countries and to assess the extent of their alignment with the Conference of European Statisticians (CES) Recommendations for the 2020 Censuses of Population and Housing.
UNECE
May 2026
Chapter 9 Quality and coverage
Detailed information on census quality and coverage collected in the UNECE survey is available in the Quality, coverage and contents section of the UNECE 2020 Census Round dashboard.
9.1 Managing quality
194. The product of any census of population and housing is data, and confidence in the quality of those data is therefore critical. A quality assurance programme should thus be an essential element of the overall census programme, covering all activities across the different phases of census implementation: from planning and development, through data collection and processing, to the evaluation and dissemination of results.
195. As the diversity of census approaches increases across countries in the UNECE region, and as the number of countries making use of administrative data in their censuses grows, evaluating the impact that different approaches to census taking could have on the comparability of population statistics becomes ever more important. Of particular relevance are the potential differences that may arise between countries undertaking a field-based enumeration and those following a register-based or combined approach.
196. It is generally accepted that, with particular relevance to the census, there are six dimensions of data quality, as set out in section 9.2 of the CES Recommendations: a) relevance; b) accuracy; c) timeliness; d) accessibility; e) interpretability; f) coherence.
197. Some 43 UNECE countries responded to the Quality, coverage and content section of the UNECE survey. The section opened with a question on the management of the six quality dimensions plus comparability (with previous censuses and with other data sources), which is closely associated with the dimension of interpretability. Countries were asked to report the dimensions for which quality management processes were established, and whether or not the findings were (or were to be) published.
198. Table 33 shows the number of countries that managed each dimension, broken down by whether or not they have published (or plan to publish) any findings. On the whole, the large majority of countries managed all quality dimensions, with rare exceptions. Only two countries answered no for all of the dimensions: Czechia and the Kingdom of the Netherlands. This answer can be interpreted as referring either to the absence of formal and explicit processes for managing each dimension (Czechia specified that “Ensuring quality was an integral part of all census stages”) or to the emphasis placed on measuring the quality of the different sources used to produce data in register-based censuses (as is the case for the Kingdom of the Netherlands).
Table 33  Count of countries managing quality dimensions
| Quality dimension | Managed total (published or not) | Managed and published (or planned) | Managed only (not published) | Not managed and not published |
|---|---|---|---|---|
| Relevance | 38 (93%) | 22 (54%) | 16 (39%) | 3 (7%) |
| Accuracy | 38 (88%) | 25 (58%) | 13 (30%) | 5 (12%) |
| Timeliness | 41 (95%) | 30 (70%) | 11 (26%) | 2 (5%) |
| Accessibility | 38 (95%) | 26 (65%) | 12 (30%) | 2 (5%) |
| Comparability | 40 (95%) | 27 (64%) | 13 (31%) | 2 (5%) |
| Interpretability | 37 (88%) | 20 (48%) | 17 (40%) | 5 (12%) |
| Coherence | 39 (93%) | 22 (52%) | 17 (40%) | 3 (7%) |

Note: Percentages are of the total for the corresponding dimension.
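As the note indicates, the percentages in Table 33 are computed over the number of countries that answered for each dimension (managed plus not managed), which varies between 40 and 43 rather than being fixed at 43. A short Python sketch (counts transcribed from the table) reproduces the "managed total" shares:

```python
# Recompute the "managed total" percentages of Table 33. The denominator for
# each dimension is the number of countries answering for that dimension
# (managed + not managed), not the full 43 respondents.
counts = {
    # dimension: (managed_total, not_managed)
    "Relevance":        (38, 3),
    "Accuracy":         (38, 5),
    "Timeliness":       (41, 2),
    "Accessibility":    (38, 2),
    "Comparability":    (40, 2),
    "Interpretability": (37, 5),
    "Coherence":        (39, 3),
}

for dim, (managed, not_managed) in counts.items():
    answered = managed + not_managed          # per-dimension denominator
    pct = round(100 * managed / answered)
    print(f"{dim}: {managed}/{answered} managed = {pct}%")
# e.g. Relevance: 38/41 managed = 93%
```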
199. Interestingly, unlike in the previous round, accuracy is not the most managed dimension, even though the differences across dimensions in the number of managing countries are minimal. Overall, timeliness is the most widely managed dimension (41 countries out of 43), whereas interpretability was managed by the fewest countries (37 out of 43; in the previous round the least-managed dimension was instead relevance).
200. In general, where countries managed a particular dimension, the majority published the related information, unlike in the 2010 round, when countries were almost evenly split between publishing and not publishing. As with overall management, the dimension for which the most countries published information is timeliness (30), followed by comparability (27). Interpretability shows the narrowest gap between countries that published and those that did not: only 20 of the 37 countries that managed it published information.
201. Table 34 shows the distribution of ECE countries by the number of quality dimensions managed. Apart from the few countries that did not establish formal processes for managing quality, all countries managed at least three dimensions. Of these, the large majority (32 countries) managed all seven dimensions (whether or not they had published or would publish their findings), and all but a couple of the remainder managed six of them.
Table 34  Quality dimensions managed by countries (number of countries)
202. In addition to answering the pre-coded questions, a few countries commented on the dimensions managed but not published. Georgia (which conducted its census at the end of 2024, after the UNECE survey) stated that some activities had not yet been decided. France and Spain mentioned that customer satisfaction surveys were conducted among data users with regard to all NSO data and products (i.e. not specifically for the census); Finland did the same with reference to the relevance and interpretability dimensions, whereas information related to accuracy, timeliness, comparability and consistency was published on its institutional website as part of the quality and method descriptions of the respective statistics. Furthermore, France mentioned holding focus groups with strategic users to understand whether the census dissemination policy meets their needs, while Spain added that comparisons of the main indicators with other sources and over time were conducted to ensure the comparability of census data.
203. Furthermore, Sweden added that information on the quality of the census is described in the metadata provided to Eurostat. As this is required under the EU Census Regulation, the same holds for all EU countries participating in the survey, but not all of them interpreted it as equivalent to publishing quality information.
9.2 Managing accuracy
9.2.1 Measuring the accuracy of the population count
204. Given the key importance of the accuracy of population statistics, the first part of the quality section of the UNECE survey focused on the measurement of accuracy.
205. Table 35 breaks down the responses by the various methods used to measure the accuracy of the population count (for the 43 countries completing the quality section), showing whether each method was used to measure under-coverage, over-coverage and/or variance.
Table 35  Methods used to measure the accuracy of the population count

| Statistical method | Countries using method* (out of 43) | Used for under-coverage | Used for over-coverage | Used for variance |
|---|---|---|---|---|
| Independent post-enumeration coverage survey | 11 | 11 | 10 | 5 |
| Other form of coverage survey | 6 | 3 | 3 | 3 |
| Comparison with aggregate administrative datasets | 20 | 20 | 16 | 5 |
| Comparison with unit record administrative datasets | 21 | 16 | 16 | 7 |
| Comparison with existing surveys (LFS etc.) | 17 | 13 | 11 | 9 |
| Analysis of questionnaire return rates | 13 | 11 | 8 | 3 |
| Demographic analysis | 24 | 22 | 19 | 8 |

* More than one method can be used
206. Irrespective of the method used, variance and under-coverage are both measured by 20 countries, whereas 17 countries measure over-coverage.
207. Demographic analysis, comparison with unit record administrative datasets and comparison with aggregate administrative datasets were the methods used by the greatest numbers of countries (24, 21 and 20 countries respectively), while the least used method, reported by 6 countries, was “other form of coverage survey”.
208. An independent post-enumeration coverage survey (PES) was carried out by 11 countries: of these, one country (Austria) used it to measure over-coverage, one (Italy) to measure variance, five to measure both under-coverage and over-coverage, and the remaining four to measure under-coverage, over-coverage and variance.
209. For all but two methods, under-coverage was the purpose reported by the largest number of countries, and variance the least. The exceptions are “comparison with unit record administrative datasets” and “other form of coverage survey”, each used by the same number of countries for measuring under-coverage as for over-coverage (16 and 3 countries respectively).
210. These responses can be further analysed by type of census methodology, as shown in Table 36, since the methods used for measuring accuracy can be expected to depend somewhat on the census type.
211. Indeed, the breakdown by census method shows some differences in the approach to measuring accuracy between countries adopting a register-based approach and other countries, especially concerning the use of a field survey for measuring under-/over-coverage and variance. Essentially, no register-based country used an independent post-enumeration survey (PES) or another form of coverage survey for measuring under-coverage or variance (although two such countries measured over-coverage, one using a PES and one another form of coverage survey). By contrast, the PES was used to measure both under-coverage and over-coverage by 5 (out of 14) countries conducting a field enumeration census and by 5 (out of 16) countries adopting a combined approach; other forms of coverage survey were used for measuring under-coverage by 3 countries with a field enumeration census and for measuring over-coverage by 2 of them, and for measuring variance by 2 field enumeration countries and 1 combined-census country.
212. On the other hand, the other methods specifically identified in the survey for measuring accuracy appear to be used across all census types, though in different proportions.
213. Among countries adopting a field enumeration approach, demographic analysis and comparison with aggregate administrative datasets were used more than any other method for measuring under-coverage. Demographic analysis was also the method most used for measuring over-coverage, with comparison with unit record administrative datasets the second most popular. For measuring variance, the method most used by these countries was instead comparison with existing surveys.
Table 36  Methods used to measure the accuracy of the population count, by type of census

| Statistical method and purpose | Total no. of countries using method* | Field enumeration (14 countries) | Combined (16 countries) | Register-based (13 countries) |
|---|---|---|---|---|
| Used for UNDER-COVERAGE | | | | |
| Independent PES | 9 | | | |
| Other form of coverage survey | | | | |
| Comparison with aggregate administrative datasets | 18 | | | |
| Comparison with unit record administrative datasets | 16 | | | |
| Comparison with existing surveys (LFS etc.) | 12 | | | |
| Analysis of questionnaire return rates | 10 | | | |
| Demographic analysis | 20 | | | |
| Used for OVER-COVERAGE | | | | |
| Independent PES | 10 | | | |
| Other form of coverage survey | 0 | | | |
| Comparison with aggregate administrative datasets | 16 | | | |
| Comparison with unit record administrative datasets | 16 | | | |
| Comparison with existing surveys (LFS etc.) | 11 | | | |
| Analysis of questionnaire return rates | 0 | | | |
| Demographic analysis | 19 | | | |
| Used for VARIANCE | | | | |
| Independent PES | | | | |
| Other form of coverage survey | | | | |
| Comparison with aggregate administrative datasets | | | | |
| Comparison with unit record administrative datasets | | | | |
| Comparison with existing surveys (LFS etc.) | | | | |
| Analysis of questionnaire return rates | 0 | | | |
| Demographic analysis | | | | |

* More than one method can be used
214. Among the countries with a register-based census, demographic analysis and comparison with existing surveys were the methods most used for all three measurements, together with comparison with unit record administrative datasets for the identification of over-coverage.
215. For countries with a combined approach, comparison with aggregate administrative datasets and demographic analysis were the most popular methods for measuring under-coverage and over-coverage. When measuring variance, the most popular methods were comparison with unit record administrative datasets and, again, demographic analysis.
216. Finally, two register-based countries (Belgium and Spain) did not use any of the methods specifically identified in the survey to measure accuracy, but Spain mentioned under “other approach” the “signs of life” method as a way to estimate the population through the integration of many different administrative sources. This was also mentioned by Finland (which also conducted a fully register-based census) and Slovakia (which adopted a combined approach) as an additional method, alongside the others they used, for measuring over-coverage.
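The general principle of a "signs of life" screen can be sketched as follows. The sources, dates and the one-sign-per-reference-year rule below are purely hypothetical illustrations, not the actual rules applied by Spain, Finland or Slovakia:

```python
from datetime import date

# Hypothetical "signs of life" screen: a person on the population register is
# retained in the census population only if at least one administrative source
# shows activity during the reference year. All sources, records and the
# one-sign threshold are invented for illustration.
REFERENCE_YEAR = 2020

# person_id -> list of (source, date of last recorded activity)
admin_signs = {
    "P1": [("tax", date(2020, 5, 3)), ("health", date(2019, 11, 2))],
    "P2": [("tax", date(2017, 2, 1))],      # stale record: possible over-coverage
    "P3": [("education", date(2020, 9, 1))],
    "P4": [],                               # no signs at all
}

def has_sign_of_life(records, year=REFERENCE_YEAR):
    """True if any administrative source shows activity in the reference year."""
    return any(d.year == year for _, d in records)

retained = sorted(p for p, recs in admin_signs.items() if has_sign_of_life(recs))
flagged = sorted(p for p in admin_signs if p not in retained)
print("retained:", retained)   # ['P1', 'P3']
print("flagged as possible over-coverage:", flagged)   # ['P2', 'P4']
```

In practice the decision rules are richer (weighting sources, requiring multiple signs, or modelling residence probability), but the over-coverage use mentioned by the countries above follows the same logic: register entries without recent administrative activity are candidates for removal from the population count.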
217. Figure 5 groups the countries by the number of methods used to measure accuracy. The fewest methods were reported for variance, with a modal value of 0 and just 7 countries using more than one method, whereas 23 and 22 countries used more than one method for under-coverage and over-coverage respectively. The mean number of methods was 2.2 for under-coverage, 2.0 for over-coverage and 0.9 for variance.
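These means can be cross-checked against the purpose columns of Table 35: summing each column and dividing by the 43 responding countries gives 2.2 methods per country for under-coverage and 0.9 for variance, and 1.9 for over-coverage, close to the 2.0 quoted. A sketch (counts transcribed from Table 35):

```python
# Cross-check of the mean number of accuracy-measurement methods per country,
# from the purpose columns of Table 35 (43 responding countries).
table_35 = {  # method: (under-coverage, over-coverage, variance)
    "Independent PES":                         (11, 10, 5),
    "Other form of coverage survey":           (3, 3, 3),
    "Aggregate administrative datasets":       (20, 16, 5),
    "Unit record administrative datasets":     (16, 16, 7),
    "Existing surveys (LFS etc.)":             (13, 11, 9),
    "Questionnaire return rates":              (11, 8, 3),
    "Demographic analysis":                    (22, 19, 8),
}
respondents = 43

for i, purpose in enumerate(("under-coverage", "over-coverage", "variance")):
    uses = sum(row[i] for row in table_35.values())
    print(f"{purpose}: {uses} method-uses, mean {uses / respondents:.1f} per country")
# under-coverage: 96 method-uses, mean 2.2 per country
# over-coverage:  83 method-uses, mean 1.9 per country
# variance:       40 method-uses, mean 0.9 per country
```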
Figure 5  ECE countries by number of methods used to measure accuracy
218. The survey went on to ask whether countries set and published targets for the accuracy of their census statistics (see Table 37 below).

219. Of the 40 countries answering the question, 28 reported that they did not set targets. Of those that did, twice as many countries did not publish their targets (8) as published them (4).
220. Target setting varied considerably between the different census methodologies. Among the 13 register-based countries only one set targets, compared with 43% of those that adopted a field enumeration approach and slightly under one third (31%) of those with a combined approach.
221. A further question asked for more detail on the targets set (“If targets were set, please specify what they were”), but only seven of the 12 countries setting targets answered. The main targets specified were: complete coverage (100%, or “as close to the true population as possible”) or coverage of certain groups; response rates (from either a PES or comparison against other sources); confidence intervals (based on a PES); and relative standard errors depending on the size of the area.
Table 37  Setting and publishing targets for accuracy

| | Total countries (out of 43) | Field enumeration (14 countries) | Combined (16 countries) | Register-based (13 countries) |
|---|---|---|---|---|
| Targets for accuracy were set | 12 | 6 | 5 | 1 |
| – Targets set but not published | 8 | 4 | 3 | 1 |
| – Targets set and published | 4 | 2 | 2 | 0 |
| Targets were NOT set | 28 | 6 | 10 | 12 |
222. It is worth noting that, as in the 2010 round, countries that do not set targets remain the large majority. The comparison with the 2010 round also suggests that targets are more likely to be set by countries undertaking a field enumeration: the number of such countries setting targets is the same as in 2010, while the total number of countries conducting a field enumeration census has almost halved, from 26 to 14 (considering countries that responded to both the 2010 and 2020 surveys).
223. However, not setting targets for accuracy does not mean not managing accuracy. Indeed, only 4 of the countries that reported not setting targets answered that they have no formal processes for managing accuracy (see Table 33 above).
224. Furthermore, although 28 (65%) of the 43 countries do not set accuracy targets, 37 (86%) measure the accuracy of the results in some way, including 25 (89%) of the 28 that do not set targets. More precisely, among field enumeration and combined approaches all the countries that do not set targets measure accuracy, whereas among register-based countries 9 of the 12 that do not set targets measure accuracy (1 response is missing, 2 do not measure accuracy). Conversely, all the countries that set targets measure accuracy in some way.
9.2.1.1 Making adjustments for under- and over-coverage
225. The UNECE survey also included a question on the extent to which countries, after measuring the accuracy of the population count, made adjustments to census counts for under- and over-coverage.
226. Overall, 56% of ECE countries did not make adjustments, but this share is lower among countries conducting a field enumeration census (43%, versus 63% and 62% for combined and register-based censuses respectively; Table 38).
227. Among the countries that adjusted the population counts, the majority (13 out of 16) published only the adjusted statistics. All the countries with field enumeration censuses that adjusted census counts published only the adjusted figures. The PES or other coverage survey was not the main adjustment tool: among the field enumeration censuses with adjusted counts, 3 countries did not use them at all (Georgia specified that imputations could be made before publication of the results, based on comparison with administrative data), while 4 countries (Azerbaijan, Bulgaria, France and the United Kingdom) used them in combination with other methods, likely to deal with over-coverage, the PES usually being used to measure and correct for under-coverage.
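Where a PES is used to measure and correct under-coverage, the underlying logic is typically a dual-system (capture-recapture) estimator. The sketch below uses the textbook independence formula with invented figures; actual country implementations layer matching-error corrections, post-stratification and dependence adjustments on top of it:

```python
# Textbook dual-system (Chandrasekar-Deming) estimator, the standard way a PES
# is combined with the census count to estimate under-coverage.
# All figures below are invented, for illustration only.
census_count = 9_400   # persons enumerated in the census in some area
pes_count    = 1_000   # persons enumerated by the PES in the same area
matched      = 920     # PES persons matched to a census record

# Assuming the census and the PES capture people independently, the estimated
# true population is (census x PES) / matched.
estimated_population = census_count * pes_count / matched
under_coverage_rate = 1 - census_count / estimated_population

print(f"estimated population: {estimated_population:.0f}")   # 10217
print(f"net under-coverage:   {under_coverage_rate:.1%}")    # 8.0%
```

The census count would then be scaled up (usually within post-strata such as age-sex-region groups) by the estimated under-coverage, which is why paragraph 227 notes that the PES mainly supports correction for under-coverage, with other sources handling over-coverage.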
228. Finland specified that “signs of life” are used to correct for over-coverage. Other countries that applied the same method to statistical registers did not make adjustments. Ireland made adjustments by adding persons, primarily from administrative data sources, where information returned from the field operation indicated that dwellings were occupied but no census form was returned. Germany used the measurement of accuracy as an input for quality reporting.
Table 38  Adjustments to the population count

| | Total countries (out of 43) | Field enumeration (14 countries) | Combined (16 countries) | Register-based (13 countries) |
|---|---|---|---|---|
| Population count was adjusted after measuring accuracy, before publication, of which… | 16 | 7 | 5 | 4 |
| – Both adjusted and unadjusted statistics published | 3 | 0 | 2 | 1 |
| – Only adjusted statistics published | 13 | 7 | 3 | 3 |
| Population count was NOT adjusted | 24 | 6 | 10 | 8 |
9.2.2 Measuring the accuracy of the responses
229. Countries were also asked whether they measured the accuracy of the responses (Table 39). Almost half of the responding countries answered that they do measure the accuracy of responses, but again with significant differences across census approaches. Indeed, the share of countries measuring content error is 71% for field enumeration censuses and 63% for combined ones, while only one register-based country (Austria) reported doing so.
Table 39  Countries measuring accuracy of responses

| | Total countries (out of 43) | Field enumeration (14 countries) | Combined (16 countries) | Register-based (13 countries) |
|---|---|---|---|---|
| Accuracy of the responses (content error) was measured | 21 | 10 | 10 | 1 |
| Accuracy of the responses was NOT measured | 17 | 3 | 5 | 9 |
230. Only ten countries provided details on the reasons for not measuring the accuracy of responses or on the method used to measure it. A few countries reported comparing responses against either a separate quality survey (United Kingdom) or a PES (Portugal, Switzerland and the United States). Some countries instead reported methods used to check the consistency of the raw data (such as checks by supervisors on enumerators’ completed questionnaires, or internal consistency checks within records and within households) or the quality of editing procedures (the calculation of item non-response and item imputation rates), rather than methods to measure the accuracy of the responses.
No country mentioned customer satisfaction surveys or focus group interviews. This is not necessarily because no countries use them (indeed, they were mentioned in response to other questions in the section), but more likely because their relevance was not evident from the question posed.