Objective A national priority for disability research in the USA is the standardised identification of people with disabilities in surveillance efforts. Mandated by federal statute, six dichotomous difficulty-focused questions were implemented in national surveys to identify people with disabilities. The aim of this study was to assess the prevalence, demographic characteristics and social factors among people with disabilities based on these six questions using multiple national surveys in the USA.
Setting American Community Survey (ACS), Current Population Survey Annual Social and Economic Supplement (CPS-ASEC), National Health Interview Survey (NHIS) and the Survey of Income and Program Participation (SIPP).
Participants Civilian, non-institutionalised US residents aged 18 and over from the 2009 to 2014 ACS, 2009 to 2014 CPS-ASEC, 2009 to 2014 NHIS and 2008 SIPP waves 3, 7 and 10.
Primary and secondary outcome measures Disability was assessed using six standardised questions asking people about hearing, vision, cognition, ambulatory, self-care and independent living disabilities. Social factors were assessed with questions asking people to report their education, employment status, family size, health and marital status, health insurance and income.
Results Risk ratios and demographic distributions for people with disabilities were consistent across surveys. People with disabilities were at decreased risk of having college education, employment, families with three or more people, excellent or very good self-reported health and a spouse. People with disabilities were also consistently at greater risk of having health insurance and living below the poverty line. Estimates of disability prevalence varied between surveys from 2009 to 2014 (range 11.76%–17.08%).
Conclusion Replicating the existing literature, we found that estimates of the disparities and inequities that people with disabilities experience were consistent across surveys. Although there was a range of prevalence estimates, demographic factors for people with disabilities were consistent across surveys. Variations in prevalence estimates can be explained by survey context effects.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
Strengths of this study include using multiple years of large nationally representative survey samples of the USA to compare estimates using the same disability questions.
Variance estimation techniques including replicate weights specific to each survey were used to generate CIs.
The study used cross-sectional data and estimates over time do not reflect the same people (only the same population).
Varying survey design effects limit the ability to compare estimate differences across surveys.
In order to better understand the disparities and inequities people with disabilities experience, WHO has urged nations to improve collection systems and make health-related and disability-related data more available.1 Over the past two decades, there have been national and cross-national initiatives to develop standardised identifiers to make disability-related research and surveillance more comparable.1–4 Recognising that disability is a complex experience that benefits from a multidisciplinary approach to developing effective interventions and policy, these initiatives have focused on uniform questions to measure economic-related, social-related and health-related phenomena for people with disabilities.
In the USA, the Healthy People initiative has a goal to include standardised questions that identify people with disabilities in population-based data systems.5 As part of this effort, the Affordable Care Act (ACA, 2010) recognised people with disabilities as a minority population at risk of experiencing health disparities. Section 4302 of the ACA mandated that all national population-based health surveys use a standardised set of questions to identify people with disabilities. The National Center for Health Statistics (NCHS), Department of Health and Human Services and Census Bureau have developed questions to identify people with disabilities with goals that included: (1) a high relevance to policy across countries, (2) being small enough to be feasibly included in censuses and (3) remaining comparable across populations.2 3 From this work, the sequence of six dichotomous questions (6QS), developed by the Census Bureau and NCHS for use in the American Community Survey (ACS), was selected to respond to the ACA mandate.3 6–8
The 6QS asks about difficulties related to hearing, vision, cognition, ambulation, self-care and independent living. It is included in other national surveys, such as the Current Population Survey Annual Social and Economic Supplement (CPS-ASEC), National Health Interview Survey (NHIS) and Survey of Income and Program Participation (SIPP). The 6QS implementation represents an opportunity to study the variation of the measurement and disparities for people with disabilities across multiple national surveys.
Recent publications have stressed the need for a multisurvey approach to studying disability, highlighting the implementation of the 6QS.7 9 This work emphasised using WHO’s International Classification of Functioning, Disability and Health Framework to study disability in US national surveys and the utility of having a standardised measurement (such as the 6QS) to estimate prevalence, health disparities and health inequities for people with disabilities. Krahn et al stressed that, by using the 6QS in national surveys, people with disabilities can be recognised as a group within target populations for public health interventions.7 10 Understanding the variation in responses to the 6QS is essential to comparing findings across surveys.11 12 Although there is an existing literature comparing disability statistics across surveys for employment and ageing, there has been very little evaluation of the disability prevalence estimates generated from the 6QS across surveys.13–16
The goal of this study was to investigate the range of disability estimates across US national surveys, provide prevalence estimates for advocates and researchers, and report the magnitude and direction of differences in key demographic characteristics and social factors based on the 6QS. It presents a twofold assessment of people with disabilities using the ACS, CPS-ASEC, NHIS and SIPP: (1) an examination of the range of responses across surveys and (2) an examination of magnitude and direction of risk ratios between people with and without disabilities across surveys.
Data in this study came from adult civilian, non-institutionalised samples of the US population using the 2009–2014 ACS, CPS-ASEC, NHIS and waves 3, 7 and 10 of the 2008 SIPP (wave 3 covers interview months May 2009 to August 2009, wave 7 covers interview months September 2010 to December 2010 and wave 10 covers interview months September 2011 to December 2011). The ACS is a nationally representative sample of individuals living in households and institutionalised and non-institutionalised group quarters intended to capture the demographic and workforce characteristics of the US population for all ages. The CPS-ASEC is a nationally representative sample of housing and non-institutionalised group quarters designed to produce national and state estimates of the labour force characteristics for the civilian non-institutionalised population aged 16 and over. The NHIS sample is a continuous, cross-sectional, in-person household survey nationally representative of the civilian, non-institutionalised population designed to capture health characteristics of the nation for people of all ages. The SIPP is a longitudinal household sample of the nation and states designed to measure change for income and programme participation for people of all ages.
Respondents were excluded if they lived in group quarters, were in the armed forces or were under age 18. The response rates for the household components of the 2009–2014 ACS, CPS-ASEC and NHIS samples ranged from 89.9% to 98.0%, 79.5% to 85.9% and 73.8% to 82.2%, respectively. The cumulative response rates of waves 3, 7 and 10 of the 2008 SIPP ranged from 71.1% to 80.8%. The unweighted counts of adult, civilian, non-institutionalised people self-reporting difficulties for the 2009–2014 ACS, CPS-ASEC, NHIS and waves 3, 7 and 10 of the 2008 SIPP ranged from 355 469 to 398 296, 11 038 to 16 253, 4890 to 7181 and 10 376 to 11 535 people, respectively.
People were identified as having a disability if they responded ‘yes’ to having serious difficulty in at least one of the following four areas: hearing; seeing, even when wearing glasses (vision); concentrating, remembering or making decisions (cognitive); or walking or climbing stairs (mobility); or if they reported any difficulty with dressing or bathing (self-care) or with doing errands alone, such as visiting a doctor’s office or shopping (independent living). Depending on administration in a given survey, either the sample adult respondent or the designated household or family member responded to the disability questions. More than one limitation could be reported.
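The identification rule is an ‘any of six’ flag. As a minimal illustration only (the study’s analyses were conducted in SAS), the logic can be sketched in Python; the item names are hypothetical, and each survey codes these questions with its own variable names and values:

```python
# The 6QS 'any difficulty' rule: a person is flagged as having a disability
# if any of the six items is answered 'yes'. Item names here are invented.
SIX_QS = ["hearing", "vision", "cognitive", "mobility",
          "self_care", "independent_living"]

def has_disability(responses):
    """responses: dict mapping each 6QS item to 'yes'/'no' (None if missing)."""
    return any(responses.get(q) == "yes" for q in SIX_QS)

person = {"hearing": "no", "vision": "no", "cognitive": "yes",
          "mobility": "no", "self_care": "no", "independent_living": "no"}
print(has_disability(person))  # True
```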
All social factors were defined dichotomously. Marital status was defined as being married versus not (divorced, separated, widowed or never married). Employment was defined as employed (‘at work or absent from work’) versus not (‘on lay-off or looking for work’ and ‘retired, disabled or other’). Poverty was defined using the ratio of family income to the low-income level as below (under 100% of the low-income level) or above (at or above 100% of the level). Health insurance was defined as having any or none of the following insurance types: private health insurance, Medicare, Medicaid, State Children’s Health Insurance Program, a state-sponsored health plan, other government programmes or a military health plan (including TRICARE, VA and CHAMP-VA). Family size was the number of people in the family, including related subfamily members, defined as ‘three or more’ or ‘two or fewer.’ Health was defined as self-rated health of ‘excellent or very good’ versus ‘good, fair or poor.’ Education was defined as having a ‘college degree or more’ versus ‘less than a college degree.’
Disability prevalence was estimated using the 2009–2014 ACS, CPS-ASEC and NHIS and waves 3, 7 and 10 of the 2008 SIPP (corresponding to 2009, 2010 and 2011 SIPP data). Subsequent analyses of the age distribution and risk ratios used the 2011 ACS, CPS-ASEC, NHIS and wave 10 of the 2008 SIPP (corresponding to 2011 SIPP data). Survey samples were weighted to account for probability of selection and non-response, and to adjust for age, sex and race/ethnicity. The ACS, CPS-ASEC and SIPP used standard Census Bureau imputation methods based on census data to adjust for item non-response. All ‘refused, don’t know or not ascertained’ responses were treated as missing in the NHIS. Survey-specific replicate weights were used for analyses. Estimation procedures incorporating complex survey design were used to calculate the prevalence, risk ratios and CIs (α=0.05) of the total population, people with disabilities, demographic variables (gender, age, race and ethnicity) and social factors (education, employment, family size, health insurance, self-reported health, marital status and poverty). Variance estimation used replicate weights and Fay’s balanced repeated replication methods for the ACS, CPS-ASEC and SIPP, and Taylor series (linearisation) methods for the NHIS. All tests of significance are based on comparisons of CIs of estimates between specific years of survey data. SAS V.9.4 was used for all analyses.
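To illustrate the replicate-weight approach, the sketch below applies Fay’s balanced repeated replication to toy data. The respondents, weights, number of replicates and Fay coefficient (k=0.5) are all invented for the example; real surveys publish their own replicate weight sets (typically 80 or more) and document the appropriate coefficient.

```python
import math

def fay_brr_se(estimate_fn, full_weights, replicate_weights, k=0.5):
    """Standard error via Fay's balanced repeated replication:
    var = sum((theta_r - theta)^2) / (R * (1 - k)^2), Fay coefficient k."""
    theta = estimate_fn(full_weights)
    R = len(replicate_weights)
    var = sum((estimate_fn(w) - theta) ** 2
              for w in replicate_weights) / (R * (1 - k) ** 2)
    return theta, math.sqrt(var)

# Toy data: disability indicator (1 = reports a difficulty) for 5 respondents.
y = [1, 0, 0, 1, 0]
w_full = [100, 120, 80, 90, 110]
# Four invented replicate weight sets (real surveys supply many more).
w_reps = [[105, 115, 82, 88, 112], [98, 124, 77, 93, 108],
          [102, 118, 84, 86, 114], [97, 121, 79, 95, 106]]

def prevalence(w):
    # Weighted prevalence of the disability indicator.
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

est, se = fay_brr_se(prevalence, w_full, w_reps)
ci = (est - 1.96 * se, est + 1.96 * se)  # 95% CI
```

The same pattern applies to any weighted statistic: re-estimate it under each replicate weight set and combine the squared deviations from the full-sample estimate.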
Table 1 presents the weighted counts of people with disabilities by survey and year. We found significant differences each year between survey estimates (α=0.05). The NHIS consistently had the highest counts and the CPS-ASEC consistently had the lowest counts of people with disabilities. Across years, this was a difference of approximately 10 million people. Count estimates in the ACS, CPS-ASEC, NHIS and SIPP increased in value over time. Estimates in 2014, when compared with 2009, were significantly increased for the ACS, CPS-ASEC and NHIS (as indicated by CIs for those years, α=0.05).
Figure 1 shows the percentage of people with disabilities by survey and year. Consistent with counts, significant differences in the percentage of people with disabilities were found between surveys each year (α=0.05). The NHIS had the highest percentages and the CPS-ASEC had the lowest percentages. NHIS estimates were approximately 50% larger than the CPS-ASEC and 10%–20% larger than the ACS in every year. The percentage of people with disabilities in 2014, compared with 2009, increased significantly in the ACS and non-significantly in the CPS-ASEC and NHIS (as indicated by CIs for those years, α=0.05). The greatest absolute increase was seen in the NHIS.
Table 2 presents the percentage of demographic factors for people reporting difficulties in the ACS, CPS-ASEC, NHIS and SIPP (wave 10) for 2011. Within 10-year intervals, the percentages of people reporting difficulties were consistent. For age groups 18–24, 25–34, 45–54 and 55–64, the proportion of people self-reporting difficulties in each survey was within two percentage points. The greatest variation in the percentage of self-reported difficulties among surveys was for people aged 65 and over. Estimates of gender, race and ethnicity were consistent across surveys. A greater proportion of women reported difficulties across all surveys. The greatest variation was seen among estimates of race, particularly the percentage of white and other people with self-reported difficulties.
Figure 2A,B shows risk ratios for social factors between people with and without self-reported difficulties in the ACS, CPS-ASEC, NHIS and SIPP (wave 10) for 2011. The directions of risk ratios were consistent across surveys. People with self-reported difficulties are at decreased risk of having: college degrees or more, employment, family sizes of three or more, self-reported health that is excellent or very good and a spouse. Conversely, people with self-reported difficulties are at increased risk of having health insurance and living in poverty. The magnitude of risk ratios varied significantly by survey (α=0.05). Risk ratios varied by as much as two-tenths of a point (employed).
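Each reported ratio compares the weighted risk of an outcome among people with self-reported difficulties to the risk among people without. A minimal sketch of the calculation, using invented weighted counts (not study data):

```python
def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk in group A (with difficulties) divided by risk in group B (without)."""
    return (events_a / total_a) / (events_b / total_b)

# Invented weighted counts: employment among adults with and without
# self-reported difficulties (illustration only).
rr = risk_ratio(3.5e6, 10.0e6, 140.0e6, 200.0e6)
print(round(rr, 2))  # 0.5: half the risk of being employed
```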
Overall, we found people self-reporting difficulties experienced social disparities of consistent direction and magnitude across surveys. The underlying demographic composition of people with disabilities was also consistent across surveys. There was substantial cross-survey variation in the total count and percentage of people self-reporting difficulties in each year of estimation. These results replicate the existing literature showing that people with self-reported difficulties (ie, those identified as having a disability) experience social disparities.7 17 They also replicate the limited published literature and reports that have found significant variation in disability prevalence estimates across surveys.13–15 Our ACS and CPS-ASEC 2009–2014 prevalence estimates using public use microdata samples closely reflect estimates generated online by the US Census Bureau (ie, American FactFinder and the CPS Table Generator).18 19
Factors that influence survey response variation and limit the interpretation and cross-survey comparability of our findings broadly include: survey content (survey topics and priming effects); sample design (sampling frame, sample size, mode of data collection, residency rules and reference periods); data imputation and weighting; and survey error (sampling and non-sampling error). These are complicated and well-researched topics within survey research and design. We touch on them briefly as they relate to the surveys in this study.
The ACS, CPS-ASEC, NHIS and SIPP focus on the demographics, employment, health, and income and programme participation of the nation, respectively. The NHIS’s context of health and limitations may prime responses to self-reported difficulty questions, increasing affirmative responses, as found in other studies.20
Although survey samples were restricted to include the civilian, non-institutionalised population, they come from different underlying population universes and residency rules: the resident population, including all group quarters and military personnel (census universe, ACS), compared with the civilian non-institutionalised population plus armed forces living off post or with their families on post (NHIS, SIPP and CPS-ASEC universe). In the ACS, residency is defined by having lived at a location more than 2 months and having no other place to usually stay. In the CPS-ASEC and SIPP, residency is defined by having lived at a location the majority of the time and having no other place to usually stay. Due to these differing residency rules, the ACS considers a college student to be living in their dormitory, while the CPS-ASEC and SIPP consider college students to be temporarily absent from their household.21 This may result in the ACS including a greater number of younger respondents, in college, who are less likely to have disabilities.
The unweighted sample sizes of each survey vary substantially. For example, the 2011 unweighted sample sizes of the ACS, CPS-ASEC, SIPP and NHIS are 2 128 104, 204 983, 79 321 and 50 188, respectively (differences of this magnitude exist for the 2009–2014 samples). The ACS collects data most representative of the USA from the largest number of people and is the only data source that samples from every county equivalent in the USA. The period of the calendar year in which people are asked the 6QS also varies by survey: the ACS and NHIS include the 6QS in every interview of households continuously throughout the year, the CPS-ASEC includes the 6QS in every interview of its supplement conducted February through April of each year and the SIPP periodically includes the 6QS in every interview of households in its recurring topical module conducted at 4-month intervals (waves 3, 7 and 10 of the 2008 SIPP include the 6QS). Consistent with employment rates, the self-report of difficulties (ie, the disability rate) may vary throughout the calendar year depending on other periodic factors.21 Further study is needed to determine if survey disability prevalence reflects this potential periodicity.
Data collection procedures vary by survey as well: the ACS uses four modes of data collection (mail, telephone, internet and in-person interviews), the CPS-ASEC and SIPP use two modes (telephone and in-person) and the NHIS uses one mode (in-person). Each survey uses computer-aided telephone and in-person interviews structured specifically for each survey. Using multiple modes of data collection may result in a more representative sampling of the USA and explain why the ACS provides disability prevalence estimates somewhere between the lowest and highest estimates across surveys (generated by the CPS-ASEC and NHIS, respectively).
In addition, the SIPP is the only longitudinal survey included in the analyses presented. In contrast to cross-sectional surveys, the SIPP is subject to loss to follow-up, and results may be affected by differential attrition of respondents or by altered responses from having heard or answered the questions previously.22 This may explain why the population with disabilities in the SIPP decreases non-significantly over the time period presented. However, these effects are not well studied and it is unclear how they may be affecting our results.
Data imputation and weighting
Surveys have different methods for imputing item non-response. For example, the NHIS, CPS-ASEC and SIPP all impute race in a consistent fashion, providing a recoded ‘other’ category; the ACS does not provide this imputation. Further, the NHIS does not impute missing values for the difficulty questions and instead records responses of ‘don’t know,’ ‘refused’ and ‘not ascertained.’ National surveys are weighted to provide annual estimates for the USA based on census data that do not take self-reported difficulty status into account. Because reporting a difficulty is associated with age, survey estimates of self-reported difficulties may be affected by the underlying age distribution of the unweighted samples. In the context of residency, surveys that include younger subpopulations, such as the ACS, may produce lower disability prevalence estimates than surveys that do not, such as the SIPP.
Sampling and non-sampling error
Sample survey estimates are subject to sampling and non-sampling error. The accuracy of estimates depends on the extent of both types of error. Although more is known about sampling error given the survey design, the extent of non-sampling error is unknown. The population responding affirmatively to difficulty questions is extremely heterogeneous. Disability is a complex experience and there are over 65 federal definitions of disability in the USA.23 Without defining a ‘gold standard’ population with disabilities, the validity and accuracy of estimates cannot be established.24 Surveys with larger unweighted samples will have smaller CIs. It has been suggested that larger samples capture greater numbers of people with less severe difficulties (eg, resulting in higher employment rates).16 However, this does not explain why the NHIS (which had the smallest unweighted sample size) produced the highest percentage of people with self-reported difficulties.
These survey effects make interpreting variations of magnitude and statistically significant differences between surveys difficult. The direction of bias from survey effects is not well researched for disability statistics. It is known that national surveys underestimate the prevalence of specific disability types which suggests that all national estimates under-report the number of people with self-reported difficulties (disabilities) to an unknown extent.25 Further, people are more likely to report difficulties when they have experienced them more recently.26 Without knowing the period of time between when difficulties are experienced and the date of interview it is impossible to adjust for or understand this relationship.
Our results suggest the 6QS may be used to consistently identify and compare the demographic variations, health disparities and inequities among people with disabilities across surveys. They replicate the existing literature showing that (1) people with disabilities experience disparities and inequity and (2) there is a range of disability prevalence estimates across US surveys.7 13 This variation can be explained by both sampling (ie, survey effects) and non-sampling error. The differences in prevalence estimates reflect millions of people and have implications for policy and interventions for people with disabilities. Further research is needed to explore the policy relevance of these findings.
Contributors EAL contributed to the study design, created and coded datasets, conducted all analyses and produced (and edited) the initial manuscript. AJH contributed to the study design, analytic choices, interpretation of results and edited the manuscript.
Funding This research report is a product of the Rehabilitation Research and Training Center on Disability Statistics and Demographics (grant number 90RT5022-02-01), which is funded by the US Department of Health and Human Services’ Administration for Community Living, National Institute on Disability, Independent Living, and Rehabilitation Research under cooperative agreement H133B130015.
Disclaimer This article does not necessarily represent the policy of the US Department of Health and Human Services, and readers should not assume endorsement by the Federal Government (Edgar, 75.620 (b)).
Competing interests None declared.
Patient consent Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement Datasets are publicly available from the US Census Bureau and US Centers for Disease Control and Prevention National Center for Health Statistics websites.