2012 Employer Health Benefits Survey
The Kaiser Family Foundation and the Health Research & Educational Trust (Kaiser/HRET) conduct this annual survey of employer-sponsored health benefits. HRET, a nonprofit research organization, is an affiliate of the American Hospital Association. The Kaiser Family Foundation designs, analyzes, and conducts this survey in partnership with HRET, and also pays for the cost of the survey. HRET subcontracts with researchers at NORC at the University of Chicago (NORC) to work with Foundation and HRET researchers in conducting the study. Kaiser/HRET retained National Research, LLC (NR), a Washington, D.C.-based survey research firm, to conduct telephone interviews with human resource and benefits managers using the Kaiser/HRET survey instrument. From January to May 2012, NR completed full interviews with 2,121 firms.
Survey Topics
As in past years, Kaiser/HRET asked each participating firm as many as 400 questions about its largest health maintenance organization (HMO), preferred provider organization (PPO), point-of-service (POS) plan, and high-deductible health plan with a savings option (HDHP/SO).1 In 2006, Kaiser/HRET began asking employers if they had a health plan that was an exclusive provider organization (EPO). We treat EPOs and HMOs as one plan type and report the information under the banner of “HMO”; if an employer sponsors both an HMO and an EPO, it is asked about the attributes of the plan with the larger enrollment.
As in past years, the survey includes questions on the cost of health insurance, health benefit offer rates, coverage, eligibility, enrollment patterns, premiums,2 employee cost sharing, prescription drug benefits, retiree health benefits, wellness benefits, and employer opinions. New topics in the 2012 survey include the use of biometric screening, domestic partner benefits, and emergency room cost sharing. In addition, many of the questions on health reform included in the 2011 survey were retained, including stop-loss coverage for self-funded plans, cost sharing for preventive care, and plan grandfathering resulting from the Affordable Care Act (ACA).
Response Rate
After determining the required sample from U.S. Census Bureau data, Kaiser/HRET drew its sample from a Survey Sampling Incorporated list (based on an original Dun and Bradstreet list) of the nation’s private employers and from the Census Bureau’s Census of Governments list of public employers with three or more workers. To increase precision, Kaiser/HRET stratified the sample by ten industry categories and six size categories. Kaiser/HRET attempted to repeat interviews with prior years’ survey respondents (with at least ten employees) who participated in either the 2010 or the 2011 survey, or both. As a result, 1,579 of the 2,121 firms that completed the survey also participated in either the 2010 or 2011 surveys, or both.3 The overall response rate is 47%.4
The vast majority of questions are asked only of firms that offer health benefits. A total of 1,930 of the 2,121 responding firms indicated that they offered health benefits. The response rate for firms that offer health benefits is 47%.
We asked one question of all firms in the study with which we made phone contact but which declined to participate in the full survey. The question was, “Does your company offer a health insurance program as a benefit to any of your employees?” A total of 3,326 firms responded to this question (including the 2,121 that responded to the full survey and 1,205 that responded only to this one question). These responses are included in our estimates of the percentage of firms offering health benefits.5 The response rate for this question is 73%. In 2012, the calculation of the response rates was adjusted to be slightly more conservative than in previous years.
Firm Size Categories and Key Definitions
Throughout the report, exhibits categorize data by size of firm, region, and industry. Firm size definitions are as follows: All Small Firms, 3 to 199 workers; and All Large Firms, 200 or more workers. Occasionally, firm size categories will be broken into smaller groups. The All Small Firm group may be categorized by: 3 to 24 workers, and 25 to 199 workers; or 3 to 9 workers, 10 to 24 workers, 25 to 49 workers, and 50 to 199 workers. The All Large Firm group may be categorized by: 200 to 999 workers, 1,000 to 4,999 workers, and 5,000 or more workers. Exhibit M.1 shows selected characteristics of the survey sample.
Exhibit M.2 displays the distribution of the nation’s firms, workers, and covered workers (employees receiving coverage from their employer). Among the over three million firms nationally, approximately 61.1% are firms employing 3 to 9 workers; such firms employ 8.3% of workers, and 4.4% of workers covered by health insurance. In contrast, less than one percent of firms employ 1,000 or more workers; these firms employ 48% of workers and 53% of covered workers. Therefore, the smallest firms dominate any national statistics about what employers in general are doing. For this reason, most statistics about firms are broken out by size categories. In contrast, firms with 1,000 or more workers are the most important employer group in calculating statistics regarding covered workers, since they employ the largest percentage of the nation’s workforce.
Throughout this report, we use the term “in-network” to refer to services received from a preferred provider. Family coverage is defined as health coverage for a family of four.
Each year, the survey asks firms for the percentage of their employees who earn less than a specified amount in order to identify the portion of a firm’s workforce that has relatively low wages. This year, the thresholds are $24,000 per year for low-wage workers and $55,000 per year for high-wage workers. These thresholds are based on the 25th and 75th percentiles of workers’ earnings as reported by the Bureau of Labor Statistics using data from the National Compensation Survey (2010), the most current data available at the time of the survey design.
Rounding and Imputation
Some exhibits in the report do not sum to totals due to rounding effects. In a few cases, numbers from distribution exhibits may not add to the numbers referenced in the text due to rounding effects. Although overall totals and totals for size and industry are statistically valid, some breakdowns may not be available due to limited sample sizes. Where the unweighted sample size is fewer than 30 observations, exhibits include the notation “NSD” (Not Sufficient Data).
To control for item nonresponse bias, Kaiser/HRET imputes values that are missing for most variables in the survey. In general, 3% of observations are imputed for any given variable. All variables are imputed following a hot-deck approach. In 2012, there were nine variables for which the imputation rate exceeded 20%; for these cases, the unimputed variable is compared with the imputed variable. There are a few variables that Kaiser/HRET has decided should not be imputed; these are typically variables where “don’t know” is considered a valid response option (for example, firms’ opinions about the effectiveness of various strategies to control health insurance costs). In addition, there are several variables for which missing data are calculated from respondents’ answers to other questions (for example, a missing employer contribution to premiums is calculated from the respondent’s premium and the ratio of contributions to premiums). In 2012, the method to calculate missing premiums and contributions was revised: if a firm provided a premium for single or family coverage, or a worker contribution for single or family coverage, that information was used in the imputation. For example, if a firm provided a worker contribution for family coverage but no premium information, a ratio between the family premium and family contribution was imputed and then the family premium was calculated. In addition, in cases where premiums or contributions for both family and single coverage were missing, the hot-deck procedure was revised to draw all four responses from a single firm. The change in the imputation method did not have a significant impact on the premium or contribution estimates.
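The ratio-based fill-in for missing premiums can be sketched as follows. This is only an illustration of the logic described above, not the Kaiser/HRET production code; the field names and the random donor selection are hypothetical, and the actual hot-deck draws donors from comparable firm-size and industry cells.

```python
# A hypothetical sketch of the ratio-based fill-in described above, not the
# Kaiser/HRET production code. Field names are illustrative; the real
# hot-deck draws donors from comparable firm-size and industry cells rather
# than at random.
import random

def impute_family_premium(firm, donors):
    """Estimate a firm's missing family premium from its reported contribution."""
    if firm.get("family_premium") is not None:
        return firm["family_premium"]   # nothing to impute

    contribution = firm.get("family_contribution")
    if contribution is None:
        return None                     # handled by a full hot-deck draw instead

    # Donor firms must report both a family premium and a family contribution.
    eligible = [d for d in donors
                if d.get("family_premium") and d.get("family_contribution")]
    donor = random.choice(eligible)

    # Impute the contribution-to-premium ratio from the donor, then scale it
    # to the firm's reported contribution to back out the missing premium.
    ratio = donor["family_contribution"] / donor["family_premium"]
    return contribution / ratio

# Example with made-up values: the donor's ratio (0.25) implies a premium of
# $14,000 for a firm reporting a $3,500 family contribution.
donors = [{"family_premium": 15000, "family_contribution": 3750}]
print(impute_family_premium({"family_contribution": 3500}, donors))  # 14000.0
```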
Sample Design
We determined the sample requirements based on the universe of firms obtained from the U.S. Census Bureau. Prior to the 2009 survey, the sample requirements were based on the total counts provided by Survey Sampling Incorporated (SSI) (which obtains data from Dun and Bradstreet). Over the years, we have found the Dun and Bradstreet frequency counts to be volatile because of duplicate listings of firms or firms that are no longer in business. These inaccuracies vary by firm size and industry. In 2003, we began using the more consistent and accurate counts provided by the Census Bureau’s Statistics of U.S. Businesses and the Census of Governments as the basis for post-stratification, although the sample was still drawn from a Dun and Bradstreet list. To further address this concern at the time of sampling, we have used Census data as the basis for the sample since 2009.
We also defined Education as a separate sampling category, rather than as a subgroup of the Service category. In the past, Education firms were a disproportionately large share of Service firms. Education is controlled for during post-stratification, and adjusting the sampling frame to also control for Education allows for a more accurate representation of both Education and Service industries.
In past years, both private and government firms were sampled from the Dun and Bradstreet database. Beginning in 2009, Government firms were sampled from the 2007 Census of Governments. This change was made to eliminate the overlap of state agencies that were frequently sampled from the Dun and Bradstreet database. The sample of private firms is screened for firms that are related to state/local governments, and if these firms are identified in the Census of Governments, they are reclassified as government firms and a private firm is randomly drawn to replace the reclassified firm. The federal government is not included in the sample frame.
Finally, the data used to determine the 2012 Employer Health Benefits sample frame include the U.S. Census Bureau’s 2008 Statistics of U.S. Businesses and the 2007 Census of Governments. At the time of the sample design (December 2011), these data represented the most current information on the number of public and private firms nationwide with three or more workers. As in the past, the post-stratification is based on the most up-to-date Census data available (the 2008 update to the Statistics of U.S. Businesses was purchased during the survey field period) and the 2007 Census of Governments. The Census of Governments is conducted every five years, and this is the fourth year the data from the 2007 Census of Governments have been available for use.
In 2012, the method for calculating the size of the sample was adjusted. Rather than using a combined response rate for panel and non-panel firms, separate response rates were used to calculate the number of firms to be selected in each stratum. In addition, the mining stratum was collapsed into the agriculture and construction industry grouping. In sum, the changes to the sampling method required that more firms be included in order to provide more balanced power within each stratum, and may have reduced the response rate.
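As a rough illustration of the revised approach, the sketch below inflates each group's targeted completes within a stratum by its own expected response rate. The targets and response rates shown are made up; the survey's actual allocation rules are not spelled out here.

```python
# Illustrative only: with separate expected response rates for panel and
# non-panel firms, each group's draw within a stratum is inflated by its own
# response rate to reach the targeted number of completed interviews. The
# targets and rates below are made up.
import math

def stratum_draw(panel_target, nonpanel_target, panel_rr, nonpanel_rr):
    """Number of firms to select in one stratum, by group."""
    panel_draw = math.ceil(panel_target / panel_rr)
    nonpanel_draw = math.ceil(nonpanel_target / nonpanel_rr)
    return panel_draw, nonpanel_draw

# Example: 40 panel completes wanted at a 65% response rate and 20 new-firm
# completes at 35% imply drawing 62 panel firms and 58 new firms.
print(stratum_draw(40, 20, 0.65, 0.35))  # (62, 58)
```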
Weighting and Statistical Significance
Because Kaiser/HRET selects firms randomly, it is possible through the use of statistical weights to extrapolate the results to national (as well as firm size, regional, and industry) averages. These weights allow Kaiser/HRET to present findings based on the number of workers covered by health plans, the number of total workers, and the number of firms. In general, findings in dollar amounts (such as premiums, worker contributions, and cost sharing) are weighted by covered workers. Other estimates, such as the offer rate, are weighted by firms. Specific weights were created to analyze the HDHP/SO plans that are offered with an HRA or that are HSA-qualified; these weights represent the proportion of employees enrolled in each of these arrangements.
Calculation of the weights follows a common approach. First, the basic weight is determined, followed by a nonresponse adjustment. As part of this nonresponse adjustment, Kaiser/HRET conducted a small follow-up survey of firms with 3 to 49 workers that refused to participate in the full survey. Just as in years past, Kaiser/HRET conducted a McNemar test to verify that the results of the follow-up survey are comparable to the results from the original survey. Starting in 2012, the sample for the nonresponse survey was changed to exclude firms that were considered ineligible during the initial phase of the survey. Next, we trimmed the weights in order to reduce the influence of weight outliers. First, we identified common groups of observations. Within each group, we identified the median and the interquartile range of the weights and calculated the trimming cut point as the median plus six times the interquartile range (M + [6 * IQR]). Weight values larger than this cut point are trimmed to the cut point. In all instances, less than one percent of the weight values were trimmed. Finally, we calibrated the weights to the U.S. Census Bureau’s 2008 Statistics of U.S. Businesses for private-sector firms and to the 2007 Census of Governments for public-sector firms.
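The trimming rule itself is straightforward to express. The sketch below applies the median-plus-six-IQR cut point to the weights in a single trimming group; it is an illustration of the rule described above rather than the production weighting code.

```python
# Sketch of the trimming rule described above: within a trimming group, cap
# each weight at the median plus six times the interquartile range. This is
# an illustration, not the production weighting code.
import numpy as np

def trim_weights(weights, k=6.0):
    """Cap weights at median + k * IQR within a single trimming group."""
    w = np.asarray(weights, dtype=float)
    q1, median, q3 = np.percentile(w, [25, 50, 75])
    cut_point = median + k * (q3 - q1)
    return np.minimum(w, cut_point)

# Example: the outlying weight of 500 is pulled down to the cut point
# (median 15 + 6 * IQR 6 = 51); the other weights are unchanged.
print(trim_weights([10, 12, 15, 18, 500]))  # [10. 12. 15. 18. 51.]
```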
In 2011, we became aware that the way we had been using Census Bureau data for calibration was incorrect and resulted in an over-count of the actual number of firms in the nation. Specifically, firms operating in more than one industry were counted more than once in computing the total firm count by industry, and firms with establishments in more than one state were counted more than once in computing the total firm count by state (which affects the regional count). Because smaller firms are less likely to operate in more than one industry or state, the miscounts occurred largely among larger firm sizes. The error affected only statistics that are weighted by the number of firms (such as the percent of firms offering health benefits). Statistics that are weighted by the number of workers or covered workers (such as average premiums, contributions, or deductibles) were not affected.
We addressed this issue by proportionally distributing the correct national total count of firms within each firm size as provided by the U.S. Census Bureau across industry and states based on the observed distribution of workers. This effectively weights each firm within each category (industry or state) in proportion to its share of workers in that category. The end result is a synthetic count of firms across industry and state that sums to the national totals.
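A minimal sketch of this proportional allocation, with made-up figures, is shown below; the actual adjustment is applied within each firm-size class, separately for industry and for state.

```python
# Sketch of the synthetic-count adjustment described above: the national firm
# total for a given firm-size class is spread across cells (industries or
# states) in proportion to each cell's share of workers, so the cell counts
# sum back to the national total. All figures are made up.
def synthetic_firm_counts(national_firm_total, workers_by_cell):
    """Allocate a national firm count across cells by worker share."""
    total_workers = sum(workers_by_cell.values())
    return {cell: national_firm_total * workers / total_workers
            for cell, workers in workers_by_cell.items()}

# Example: 1,000 firms in a size class, allocated across three industries.
workers = {"Manufacturing": 400_000, "Service": 500_000, "Retail": 100_000}
print(synthetic_firm_counts(1_000, workers))
# {'Manufacturing': 400.0, 'Service': 500.0, 'Retail': 100.0}
```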
Firm-weighted estimates resulting from this change show only small changes from previous estimates, because smaller firms, which were largely unaffected by the miscount, have much more influence on national firm-weighted estimates. For example, the estimate of the percentage of firms offering coverage was reduced by about 0.05 percentage points in each year (in some years no change is evident due to rounding).6 Estimates of the percentage of large firms offering retiree benefits were reduced by a somewhat larger amount (about 2 percentage points). Historical estimates used in the 2011 survey report were updated following this same process. As noted above, worker-weighted estimates from prior years were not affected by the miscount and remained the same.
We continue to ask firms whether or not they offer a conventional health plan and, if so, how many of their covered workers are enrolled in that plan and whether it is self-funded or underwritten by an insurer. However, due to the declining market share of conventional health plans, in 2006 we stopped asking respondents additional questions about the attributes of the conventional plans they offer.7 As of 2009, our primary covered worker weight no longer includes workers with conventional coverage. Therefore, premium and cost-sharing levels are estimated among workers covered by an HMO, PPO, POS plan, or HDHP/SO. Removing workers covered by conventional health insurance from the covered worker weight has little impact on the estimates reported for “All Plans,” such as the average single or family premium. In cases where a firm offers only conventional health plans, no information from that respondent is included in “All Plan” averages; the exception is whether or not the plan is self-funded, for which we have information. For enrollment statistics, we weight the statistics by all covered workers, including those with conventional insurance.

The survey contains a few questions on employee cost sharing that are asked only of firms that indicate in a previous question that they have a certain cost-sharing provision. For example, the copayment amount for prescription drugs is asked only of those that report they have copayments for prescription drugs. Because the composite variables (using data from across all plan types) reflect only those plans with the provision, separate weights for the relevant variables were created to account for the fact that not all covered workers have such provisions.
To account for design effects, the statistical computing package R and the library package “survey” were used to calculate standard errors.8, 9 All statistical tests are performed at the .05 level, unless otherwise noted. For figures with multiple years, statistical tests are conducted for each year against the previous year shown, unless otherwise noted. No statistical tests are conducted for years prior to 1999. In 2012, the method to test the difference between distributions across years was changed to use a Wald test, which accounts for the complex survey design. In general, this method is more conservative than the approach used in prior years. Exhibits such as 7.9, 7.10, and 7.16 are affected by this change.
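To give a sense of the form such a test takes, the sketch below computes a Wald chi-square statistic comparing two categorical distributions. It assumes simple multinomial sampling, whereas the survey's actual test uses design-based variance estimates that account for the complex sample; the counts shown are made up.

```python
# Illustrative Wald chi-square test that two categorical distributions are
# equal, assuming simple multinomial sampling. The survey's actual test uses
# design-based variance estimates for the complex sample.
import numpy as np
from scipy import stats

def wald_test_distributions(counts_year1, counts_year2):
    """Wald test comparing two observed categorical distributions."""
    def prop_and_cov(counts):
        counts = np.asarray(counts, dtype=float)
        n = counts.sum()
        p = counts / n
        p_reduced = p[:-1]  # drop one category so the covariance is nonsingular
        cov = (np.diag(p_reduced) - np.outer(p_reduced, p_reduced)) / n
        return p_reduced, cov

    p1, v1 = prop_and_cov(counts_year1)
    p2, v2 = prop_and_cov(counts_year2)
    diff = p1 - p2
    wald_stat = diff @ np.linalg.inv(v1 + v2) @ diff
    p_value = stats.chi2.sf(wald_stat, df=len(diff))
    return wald_stat, p_value

# Example: plan-type enrollment counts in two adjacent years (made up).
print(wald_test_distributions([220, 510, 90, 180], [200, 480, 110, 210]))
```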
Statistical tests for a given subgroup (firms with 25-49 workers, for instance) are tested against all other firm sizes not included in that subgroup (all firm sizes NOT including firms with 25-49 workers, in this example). Tests are done similarly for region and industry; for example, Northeast is compared to all firms NOT in the Northeast (an aggregate of firms in the Midwest, South, and West). However, statistical tests for estimates compared across plan types (for example, average premiums in PPOs) are tested against the “All Plans” estimate. In some cases, we also test plan-specific estimates against similar estimates for other plan types (for example, single and family premiums for HDHP/SOs against single and family premiums for HMO, PPO, and POS plans); these are noted specifically in the text. The two types of statistical tests performed are the t-test and the Wald test.
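A simplified example of the subgroup-versus-complement comparison is sketched below using an unweighted Welch t-test; the survey's actual tests incorporate the survey weights and design-based standard errors, and the values shown are made up.

```python
# Simplified sketch of the subgroup-versus-complement comparison described
# above (e.g., firms with 25-49 workers vs. all other firm sizes), using an
# unweighted Welch t-test. The survey's actual tests use the survey weights
# and design-based standard errors; the values below are made up.
from scipy import stats

def compare_subgroup(subgroup_values, complement_values, alpha=0.05):
    """Two-sample t-test of a subgroup estimate against its complement."""
    t_stat, p_value = stats.ttest_ind(subgroup_values, complement_values,
                                      equal_var=False)  # Welch's t-test
    return t_stat, p_value, p_value < alpha

# Example: hypothetical single premiums for the subgroup and its complement.
subgroup = [5200, 5400, 5350, 5600]
complement = [5500, 5700, 5650, 5800, 5750]
print(compare_subgroup(subgroup, complement))
```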
The small number of observations for some variables resulted in large variability around the point estimates. These observations sometimes carry large weights, primarily for small firms. The reader should be cautioned that these influential weights may result in large movements in point estimates from year to year; however, often these movements are not statistically significant.
Additional Notes on the 2012 Survey
In 2012, average coinsurance rates for prescription drugs, primary care office visits, specialty office visits, and emergency room visits include firms that have a minimum and/or maximum attached to the rate. In years prior to 2012, we did not ask firms about the structure of their coinsurance rates. For most prescription drug tiers and most services, the average coinsurance rate is not statistically different depending on whether the plan has a minimum or maximum.
Historical Data
Data in this report focus primarily on findings from surveys jointly authored by the Kaiser Family Foundation and the Health Research & Educational Trust, which have been conducted since 1999. Prior to 1999, the survey was conducted by the Health Insurance Association of America (HIAA) and KPMG using a similar survey instrument, but data are not available for all the intervening years. Following the survey’s introduction in 1987, the HIAA conducted the survey through 1990, but some data are not available for analysis. KPMG conducted the survey from 1991-1998. However, in 1991, 1992, 1994, and 1997, only larger firms were sampled. In 1993, 1995, 1996, and 1998, KPMG interviewed both large and small firms. In 1998, KPMG divested itself of its Compensation and Benefits Practice, and part of that divestiture included donating the annual survey of health benefits to HRET.
This report uses historical data from the 1993, 1996, and 1998 KPMG Surveys of Employer-Sponsored Health Benefits and the 1999-2012 Kaiser/HRET Survey of Employer-Sponsored Health Benefits. For a longer-term perspective, we also use the 1988 survey of the nation’s employers conducted by the HIAA, on which the KPMG and Kaiser/HRET surveys are based. The survey designs for the three surveys are similar.