Customer Experience
Published in James William Martin, Operational Excellence, 2021
Nonresponse bias occurs when respondents differ in meaningful ways from non-respondents. A classic example is the 1936 American presidential election, in which Alfred Landon ran against Franklin D. Roosevelt. The Literary Digest mailed out a voter survey, but only a low percentage of sampled voters completed it, and those who responded tended to be Landon supporters while non-respondents tended to be Roosevelt supporters. The survey therefore overestimated support for Landon and led the Literary Digest to predict that he would beat Roosevelt. The survey also suffered from undercoverage, which occurs when some members of the population to be surveyed are not fully represented in the sample; here the undercovered group was low-income voters, who tended to be Democrats. Nonresponse bias must be controlled when using surveys. Another form of bias is voluntary response bias, which occurs when survey respondents are self-selected volunteers. An example is a radio show that asks for call-in participation in surveys on controversial topics (e.g., abortion, affirmative action, gun control). The resulting sample tends to overrepresent individuals who have strong opinions on these issues or whose opinions align with the source of the survey (e.g., a conservative radio show has conservative listeners, so the call-in responses are likely to echo the views presented by the show).
Classical Statistics and Modern Machine Learning
Published in Mark Chang, Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare, 2020
Online survey response rates can be very low, often only a few percent. Beyond outright refusal to participate, terminating the survey partway through, or skipping certain questions, several other non-response patterns are common in online surveys. Response rates can be increased by offering respondents an incentive, by contacting respondents several times (follow-up), and by keeping the questionnaire as easy to complete as possible. Using an incentive to garner responses has a drawback, however: it can itself introduce a bias. Participation bias or non-response bias refers to the potential systematic difference between responders and non-responders. To test for non-response bias, a common technique is to compare the first and fourth quartiles of responses (by arrival time) for differences in demographics and key constructs. If there is no significant difference, this is an indicator that non-response bias may be absent.
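The quartile comparison described above can be sketched as follows. This is a minimal illustration on simulated data: the variable names (`score`, `age`) and the sample are hypothetical, and a real analysis would compare every key construct and demographic collected by the survey.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: responses ordered by arrival time. 'score' stands in
# for a key construct (e.g., a rating-scale item) and 'age' for a demographic.
n = 200
score = rng.normal(3.5, 0.8, n)
age = rng.normal(40.0, 10.0, n)

q = n // 4
first_quartile = slice(0, q)       # earliest 25% of respondents
fourth_quartile = slice(n - q, n)  # latest 25% of respondents

# Welch's t-test between the earliest and latest response quartiles;
# a non-significant p-value is taken as a lack of evidence for bias.
for name, var in [("score", score), ("age", age)]:
    t, p = stats.ttest_ind(var[first_quartile], var[fourth_quartile],
                           equal_var=False)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```

Note that a non-significant result only fails to detect bias on the measured variables; it cannot prove non-respondents resemble respondents.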
Toolkit for Assessing and Monitoring Leadership and Safety Culture
Published in Cindy L. Caldwell, Safety Culture and High-Risk Environments, 2017
The results of a questionnaire are not valid unless they represent the surveyed population. Response rate is a commonly accepted indication of representativeness. Questionnaire response rates are best addressed during the design and data collection phases of the assessment. This can be done by pre-testing the survey, increasing the data collection period, and sending reminders throughout the data collection period. While the survey is being conducted, it is advisable to monitor response rates. Survey research expert Babbie (2007, p. 262) asserts that “a response rate of at least 50 percent is considered adequate for analysis and reporting. A response of 60 percent is good; a response rate of 70 percent is very good.” Many experts agree that below 50%, the data should be evaluated for non-response bias (Babbie, 2007). Non-response bias is the bias that results when respondents differ in meaningful ways from non-respondents. Many factors can drive non-response: groups of people who fail to respond may be reluctant to respond, too busy to respond, or distrustful of how the organization handles survey data. Substantial differences between respondents and non-respondents make it difficult to assume representativeness across the entire population (Dillman, 1999). One method to check for non-response bias is to compare response rates across key subgroups of the target population (Groves, 2006). This may point to subgroups that could be underrepresented or justify the representativeness of the responses across the surveyed population.
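The subgroup check suggested by Groves (2006) can be sketched with a chi-square test of independence on a 2×K table of responded versus did-not-respond counts per subgroup. The subgroup names and counts below are hypothetical; the method only requires knowing, for each subgroup, how many people were invited and how many responded.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical roster: invitations and completed responses per subgroup.
invited   = {"Operations": 120, "Maintenance": 80, "Management": 40}
responded = {"Operations": 72,  "Maintenance": 28, "Management": 30}

# 2 x K contingency table: row 0 = responded, row 1 = did not respond.
table = np.array([
    [responded[g] for g in invited],
    [invited[g] - responded[g] for g in invited],
])

chi2, p, dof, _ = chi2_contingency(table)
for g in invited:
    print(f"{g}: {responded[g] / invited[g]:.0%} response rate")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A significant result flags uneven response rates across subgroups (here, a much lower rate among the hypothetical Maintenance group), which may indicate which subgroups are underrepresented in the achieved sample.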
The Effects of Organizational Structure on MBSE Adoption in Industry: Insights from Practitioners
Published in Engineering Management Journal, 2023
Kaitlin Henderson, Alejandro Salado
Non-response bias occurs when the people from the sample population who responded have different characteristics from those who did not respond, calling into question whether the sample results can be generalized to the true population (Rogelberg & Stanton, 2007). Nonresponse is a common concern in survey distribution, and it appears to be getting worse over time (Rogelberg & Stanton, 2007). A low response rate does not automatically mean the results are biased, but there are several ways non-response can affect results. One is when the survey topic elicits strong opinion-based responses (e.g., gun control) (Wells et al., 2012): people with extreme opinions in either direction may be overrepresented because they feel more compelled to respond. In the example from Wells et al. (2012), the respondents were general college students who tended to be strongly pro- or anti-gun control, and the underrepresented portion of the population was people with mid-level opinions. The topic of this survey, MBSE, is tied to the sample population itself, people who use MBSE; respondents had to have used MBSE in an organization for at least one year. Since the questions in this survey are largely not opinion-based, this type of non-response bias should not be an issue.
Integration of supply chain management and quality management within a quality focused organizational framework
Published in International Journal of Production Research, 2020
Xianghui Peng, Victor Prybutok, Heng Xie
In survey research it is necessary to guard against non-response bias, because such bias can result in a misleading sample. Consistent with common practice, we assess non-response bias by comparing the first 90% of responses received with the last 10% received within the surveying period (Karahanna, Straub, and Chervany 1999). Late respondents are considered more similar to non-respondents than earlier respondents are (Armstrong and Overton 1977). Comparing the early and late responses with an independent-samples t-test (Ketokivi and Schroeder 2004), we found no significant differences. To assess common method bias, we conducted Harman’s one-factor test (Podsakoff and Organ 1986; Podsakoff et al. 2003). The results showed 18 factors with eigenvalues greater than 1, which together explained 79.90% of the variance, with the first factor not accounting for the majority of the variance. These results support the contention that common method bias was not a concern.
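Harman’s one-factor test is commonly operationalized as an unrotated factor extraction over all survey items, checking whether a single factor accounts for the majority of the variance. A minimal sketch on simulated item data follows; the two-factor generating process, item counts, and noise level are assumptions chosen so that no single factor dominates, mirroring the passing result reported above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical item-level data: 200 respondents x 12 survey items,
# generated by two latent factors (6 items each) plus noise, so no
# single factor should explain the majority of variance.
n, k = 200, 12
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
loadings1 = np.r_[np.ones(6), np.zeros(6)]
loadings2 = np.r_[np.zeros(6), np.ones(6)]
items = f1 * loadings1 + f2 * loadings2 + rng.normal(scale=0.7, size=(n, k))

# Unrotated extraction: eigen-decompose the item correlation matrix and
# check the share of total variance carried by the first factor.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals / eigvals.sum()

print(f"first factor explains {explained[0]:.1%} of total variance")
print(f"factors with eigenvalue > 1: {(eigvals > 1).sum()}")
```

If the first factor explained most of the variance, common method bias would be a plausible concern; here, by construction, the variance splits across two factors.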
Impact of digital resale platforms on brand new or second-hand luxury goods purchase intentions among U.S. Gen Z consumers
Published in International Journal of Fashion Design, Technology and Education, 2023
The largest group indicated a discretionary income of less than $100 (n = 196, 43.3%), followed by $100–$199 (n = 109, 24.1%), $200–$299 (n = 54, 11.9%), and $300–$399 (n = 38, 8.4%). Over 95% of the participants reported being single, never married. To detect non-response bias, the researchers compared responses on the research constructs and demographic variables between two groups, early (first 10%) and late (last 10%) respondents, using t-tests and chi-square tests. No significant differences were found between these two groups, so the researchers proceeded with further data analysis.