Research Project
Redesigning a survey presents a rare opportunity to consider all aspects of the survey from the perspectives of both the producer and the user. The National Center for Education Statistics (NCES) charged the National Institute of Statistical Sciences (NISS) with convening a panel of technical experts to consider issues in the design of the National Postsecondary Student Aid Study (NPSAS). In particular, the panel was asked to address the sample design strategies currently in use for NPSAS; strategies, including responsive designs, employed in other recent NCES surveys; and applicable innovations, methodological advances, and alternative design approaches from other large-scale federal and governmental agency surveys and assessments. Broad topics to be considered included: i) Sample Design, Sampling Error and Design Effects; ii) Measurement Error; and iii) Imputation and Administrative Data.
The panel met via teleconferences, with an in-person meeting at NCES on 31 August–1 September 2016.
Recommendations – I. A Perspective and a Framework
- In considering redesign of the NPSAS to improve data quality, frame the design from a total error perspective that seeks to minimize the error from the primary error sources within specified cost and schedule constraints (a standard decomposition is given after this list).
- In addition to accuracy of the NPSAS data, consider the so-called user dimensions of total survey quality shown in Figure 1.
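For reference, the total error framing in the first recommendation is conventionally summarized by the mean squared error decomposition; the notation below is standard survey methodology notation, not drawn from the panel report:

$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Bias}(\hat{\theta})^2 + \mathrm{Var}(\hat{\theta}),$$

where the bias term aggregates systematic errors (coverage, nonresponse, measurement) and the variance term reflects sampling error and other variable error sources. The design task is then to minimize this total subject to the cost and schedule constraints.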
Recommendations – II. Sample Design, Sampling Error and Design Effects
- Evaluate precision goals. Determine whether all 1100+ precision targets are necessary to meet the major goals of the survey and whether reduction of the number of targets could lead to important cost savings. Ascertain whether all goals were met for NPSAS:12 and NPSAS:16.
- Evaluate allocation to institutional and student strata. Utilize variance components and update the NPSAS cost model to determine the most efficient allocation of institutions and students to strata (a sketch of such an allocation calculation follows this list).
- Explore use of a dual frame design with a special frame for veterans (using a VA list of veterans who receive educational benefits) to control the veterans’ sample size better and reduce some under-coverage.
- Explore rotation of sample institutions.
- Evaluate composite measures of size (MOS). Assess how well the composite MOSs relate to actual student counts and how often within-institution sampling rates must be adjusted to compensate for inaccurate MOSs.
- Evaluate weighting steps. Devise a study to investigate the contribution of each step to bias reduction, to variance increase or decrease, and to mean squared error for a set of important estimates.
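As an illustration of the allocation exercise recommended above, the sketch below applies a textbook optimum allocation under a linear cost model; the strata, variance components, and per-student costs are hypothetical placeholders, not NPSAS values.

```python
# Illustrative optimum allocation of students to strata under a linear
# cost model.  All inputs are hypothetical placeholders, not NPSAS values.
import math

# Stratum population counts, unit standard deviations (from variance
# components), and per-student data collection costs (dollars).
strata = {
    "public 4-year":  {"N": 50_000, "S": 1.2, "cost": 30.0},
    "private 4-year": {"N": 20_000, "S": 1.5, "cost": 45.0},
    "public 2-year":  {"N": 60_000, "S": 0.9, "cost": 25.0},
    "for-profit":     {"N": 15_000, "S": 1.8, "cost": 50.0},
}
budget = 2_000_000.0  # total variable data-collection budget

# Optimum allocation: n_h proportional to N_h * S_h / sqrt(c_h),
# scaled so that the total cost sum_h n_h * c_h equals the budget.
w = {h: v["N"] * v["S"] / math.sqrt(v["cost"]) for h, v in strata.items()}
scale = budget / sum(w[h] * strata[h]["cost"] for h in strata)
allocation = {h: round(w[h] * scale) for h in strata}

for h, n_h in allocation.items():
    print(f"{h:>15}: n = {n_h:,}")
```

The sketch shows only the single-stage logic; an NPSAS allocation would also bring in institution-level costs and two-stage variance components.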
Recommendations – III. Measurement Error
- Repeat the 1997 measurement error report.
- Evaluate two additional aspects of measurement quality: interviewer effects and mode effects (a standard way to quantify interviewer effects is given after this list).
- Utilize NCES’s excellent assessment of measurement error to improve questions and reduce measurement error going forward. Institute a routine system to review, revise, and/or eliminate questions that are subject to large amounts of measurement error.
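One conventional way to quantify the interviewer effect mentioned above (a textbook formula, not a method prescribed by the panel) is the interviewer design effect

$$\mathrm{deff}_{\mathrm{int}} = 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}},$$

where $\bar{m}$ is the average interviewer workload and $\rho_{\mathrm{int}}$ is the intra-interviewer correlation. Even small correlations matter: with an average workload of 100 completed interviews and $\rho_{\mathrm{int}} = 0.01$, the variance of interviewer-administered items roughly doubles.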
Recommendations – IV. Imputation and Administrative Data
- Implement imputation methods that account for the variance induced by imputation, for example fractional imputation or multiple imputation (a sketch of the multiple-imputation variance calculation follows this list).
- Make the trumping rules explicit (the rules that determine which data source takes precedence when sources disagree) and provide an evaluation of their utility.
- Include in the imputation documentation for NPSAS the list of class covariates used for each variable and the order in which the variables were imputed.
- Assess and document the quality of the various administrative data in NPSAS, specifically addressing what biases, if any, are present.
- To the extent possible, determine whether students who do not match to any administrative data are comparable to those who do match, and evaluate the effect of any differences in variables of interest on imputation and subsequent analyses.
- Examine the variance implications of treating direct substitutions as response data, since differences between the distributions of the administrative data and those of the response data can bias the calculated variance in either direction.
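As a minimal sketch of how an imputation-variance method of the kind recommended above works, the code below applies Rubin's combining rules for multiple imputation; the point estimates and within-imputation variances are invented purely for illustration.

```python
# Rubin's combining rules for M multiply imputed data sets.
# The estimates and within-imputation variances are invented for illustration.
import statistics

estimates = [102.1, 101.4, 103.0, 102.6, 101.9]  # point estimate from each imputed data set
within_vars = [4.2, 4.0, 4.5, 4.1, 4.3]          # complete-data variance from each data set

M = len(estimates)
q_bar = statistics.mean(estimates)      # combined point estimate
w_bar = statistics.mean(within_vars)    # average within-imputation variance
b = statistics.variance(estimates)      # between-imputation variance
total_var = w_bar + (1 + 1 / M) * b     # total variance reflecting imputation uncertainty

print(f"combined estimate: {q_bar:.2f}")
print(f"total variance:    {total_var:.3f}")
```

Fractional imputation reaches the same goal by a different route (replicated fractional weights), but in both cases the point is a variance estimate that reflects imputation uncertainty rather than treating imputed values as observed.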
Further Recommendations
In addition to the specific findings, the panel is concerned that NCES faces a structural conundrum with regard to the statistical elements of study and survey design. The technical statistical expertise needed to assure continued best statistical practice is not currently present in-house, nor is it represented on current survey oversight committees. While contractors do possess this expertise, there is some potential for conflict of interest, since innovation may or may not be in a contractor's own interest. The panel notes that NCES statistical leadership is well aware of this issue and has encouraged innovation and exploration of new statistical methodology for design, imputation, estimation, and all phases of analysis.
Additional specifically focused recommendations are included in each section of this report.
Paul Biemer, Ph.D., Distinguished Fellow, RTI and Associate Director of Survey Research and Development, Odum Institute, University of North Carolina at Chapel Hill
Michael R. Elliott, Ph.D., Professor of Biostatistics, Research Professor of Survey Methodology, University of Michigan
Fred Galloway, Ed.D., Professor, University of San Diego
Benjamin Reist, M.S., Assistant Center Chief, Center for Adaptive Design at U.S. Census Bureau
Richard L. Valliant, Ph.D., Research Professor, University of Michigan & Joint Program for Survey Methodology, University of Maryland
Linda J. Young, Ph.D., Chief Mathematical Statistician & Director of Research and Development, USDA’s National Agricultural Statistics Service
Panel convened by National Institute of Statistical Sciences
Nell Sedransk, Ph.D., Director, National Institute of Statistical Sciences; Statistics Professor, North Carolina State University