The Study Population
Published in Lynne M. Bianchi, Research during Medical Residency, 2022
Lynne M. Bianchi, Luke J. Rosielle
Volunteer and non-responder biases are the two primary forms of participant bias that nearly every study must consider. Volunteer bias reflects the fact that individuals who choose to participate in a study differ in some ways from those who do not volunteer; conversely, those who do not participate (non-responders) differ in some ways from those who do. Non-response bias arises from those who decline to participate, those who are difficult to reach, and those who fail to follow up once enrolled in a study.
Sampling Theory
Published in Marcello Pagano, Kimberlee Gauvreau, Heather Mattie, Principles of Biostatistics, 2022
Marcello Pagano, Kimberlee Gauvreau, Heather Mattie
No matter what the sampling scheme, when we are choosing a sample, selection bias is not the only potential source of error. A second source of bias is nonresponse. In situations where the units of study are people, there are typically individuals who cannot be reached, or who cannot or will not provide the information requested. Bias is present if these nonrespondents differ systematically from the individuals who do respond.
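The systematic difference the authors describe can be made concrete with a small simulation. The sketch below is purely illustrative (the population, outcome, and response mechanism are invented for the example): when the probability of responding rises with the value being measured, the respondent mean overestimates the population mean.

```python
import random

random.seed(0)

# Hypothetical population: outcome values (e.g., weekly exercise hours)
# drawn from a normal distribution with mean 5.
population = [random.gauss(5.0, 2.0) for _ in range(100_000)]

# Assumed response mechanism: willingness to respond increases with the
# outcome itself, so nonrespondents differ systematically from respondents.
def responds(x):
    p = min(1.0, max(0.0, 0.1 + 0.1 * x))  # response probability
    return random.random() < p

respondents = [x for x in population if responds(x)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)
# resp_mean exceeds pop_mean because low-outcome units respond less often.
```

Under this invented mechanism the respondent mean is biased upward; if instead low-outcome units responded more often, the bias would run the other way, which is why the direction of nonresponse bias cannot be assumed in advance.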
Strategies to Handle Missing Data in Meta-Analysis
Published in Ding-Geng (Din) Chen, Karl E. Peace, Applied Meta-Analysis with R and Stata, 2021
In this chapter, we have reviewed available missing data methods for conducting meta-analysis, with a major focus on missing outcomes and missing predictors. It is well known that the observed data are not sufficient to identify the underlying missingness mechanism. Therefore, sensitivity analyses should be performed over various plausible models for the nonresponse mechanism (Little and Rubin, 2019). In general, the stability of conclusions (inferences) across the plausible models indicates their robustness to unverifiable assumptions about the mechanism underlying missingness. Available case analysis and single imputation methods are convenient ways to address missing data in aggregate data or in individual participant data (if available for each study). However, these methods require the assumption of MCAR, which very rarely holds in practice. Currently, MI and ML are widely used methods under ignorable or nonignorable missingness mechanisms. Finally, because the missingness mechanism cannot be verified from the observed data, it is highly recommended to evaluate departures from the assumed mechanism through sensitivity analysis.
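One common way to run the sensitivity analysis the excerpt recommends is a delta-adjustment: impute the missing values under a MAR-style model, then shift the imputations by a range of offsets (deltas) representing nonignorable departures, and check how stable the estimate is. The sketch below is a minimal illustration with invented data and a deliberately simple imputation model, not the procedure from the chapter.

```python
import random
import statistics

random.seed(1)

# Invented example data: 80 observed outcomes, 20 units with missing outcomes.
observed = [random.gauss(10.0, 3.0) for _ in range(80)]
n_missing = 20

def impute_and_estimate(delta, n_draws=50):
    """Draw imputations from the observed distribution shifted by delta,
    repeat n_draws times, and pool the resulting mean estimates."""
    mu, sd = statistics.mean(observed), statistics.stdev(observed)
    estimates = []
    for _ in range(n_draws):
        imputed = [random.gauss(mu + delta, sd) for _ in range(n_missing)]
        estimates.append(statistics.mean(observed + imputed))
    return statistics.mean(estimates)

# Sensitivity analysis over plausible nonresponse mechanisms: delta = 0
# corresponds to MAR; nonzero deltas model nonignorable missingness.
results = {delta: impute_and_estimate(delta)
           for delta in (-2.0, -1.0, 0.0, 1.0, 2.0)}
```

If `results` barely changes across the deltas considered plausible, the conclusion is robust to the unverifiable MAR assumption; large swings signal that the inference hinges on the assumed mechanism.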
Faculty and Staff Perceptions of Mandatory Reporting Policies and Title IX: A National Perspective
Published in Journal of School Violence, 2023
Christina Mancini, Sarah Koon-Magnin
Overall, 125 faculty and staff members nationally agreed to the informed consent at the survey link. Fifteen of these participants did not complete any of the survey questions and were removed from our sample. Of the 110 remaining participants, there was some attrition throughout the survey, such that 17 participants did not complete the final section (demographic and institutional characteristics). We included all responses provided where possible but were unable to include these participants in demographic and institutional comparisons due to nonresponse. Throughout the survey, respondents had the option of skipping any questions they did not wish to answer, so not all items include responses from all 110 participants. Table 1 provides a snapshot of this sample.
Handling high-dimensional data with missing values by modern machine learning techniques
Published in Journal of Applied Statistics, 2023
Missing data are a critical problem in practical research, including sample surveys, epidemiology, economics, and social science. Simply ignoring missing data in statistical analysis may lead to biased results, see Refs. [34,41]. There are two types of missingness in practice: item nonresponse and unit nonresponse. Item nonresponse is often handled by imputation approaches, including hot-deck imputation [1,48], nearest neighbor imputation [8,9,63], predictive mean matching (PMM) imputation [28,40,51,64], multiple imputation (MI) [52–54], and fractional imputation [31,32,62], among others. For a comprehensive review of methods for item nonresponse, see Ref. [12]. Unit nonresponse is often handled by inverse probability weighting techniques, see Refs. [27,29,33] among others. The validity of the above methods depends on the underlying outcome regression model and nonresponse model assumptions. Doubly robust approaches [2,30,50] and multiply robust approaches [10,11,25] have been proposed to improve robustness to model misspecification.
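The inverse probability weighting idea mentioned for unit nonresponse can be sketched in a few lines: each respondent is weighted by the inverse of their response probability so that respondents stand in for similar nonrespondents. The example below is a toy illustration with an invented covariate and a response probability assumed to be known (in practice it would be estimated from a nonresponse model).

```python
import random

random.seed(2)

# Invented setup: covariate z observed for everyone; outcome y depends on z;
# response probability p also rises with z, so respondents over-represent
# high-z (and hence high-y) units.
n = 50_000
data = []
for _ in range(n):
    z = random.random()
    y = 2.0 + 3.0 * z + random.gauss(0.0, 1.0)
    p = 0.2 + 0.6 * z              # assumed-known response probability
    data.append((y, p, random.random() < p))

true_mean = sum(y for y, _, _ in data) / n

# Naive respondent mean is biased upward under this mechanism.
resp = [(y, p) for y, p, r in data if r]
naive_mean = sum(y for y, _ in resp) / len(resp)

# Hajek-style inverse probability weighted estimator: weight each
# respondent by 1/p, then normalize by the sum of weights.
ipw_mean = sum(y / p for y, p in resp) / sum(1.0 / p for _, p in resp)
```

The weighted estimator recovers the population mean because units that respond rarely (small `p`) count proportionally more; its validity, as the excerpt notes, rests on the nonresponse model that produces `p` being correctly specified.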
Visual perceptual deficit screening in stroke survivors: evaluation of current practice in the United Kingdom and Republic of Ireland
Published in Disability and Rehabilitation, 2022
Michael J. Colwell, Nele Demeyere, Kathleen Vancleef
Recommended approaches to minimising both unit and item nonresponse (i.e., response bias introduced when participants fail to respond to the survey as a whole or to individual items) were implemented into the survey design process [33–35], including an initial pilot survey and affirming data anonymity in the research brief. An initial paper-based pilot of the survey was validated among a sample of 11 clinicians, who provided improvement suggestions, including corrections of factual and grammatical inaccuracies in the survey text and positional formatting of survey items. Using the provided feedback, an electronic version of the survey was created using the JISC Online Surveys® platform. This version was debugged for technical issues by internal research staff and tested on multiple browsers (Internet Explorer, Google Chrome, and Safari) and computer devices (mobile phones and desktop/laptop PCs). Debugging allowed us to fix several issues prior to launching the live version, including bugs where item data were entered incorrectly due to formatting issues.