Stratified simple random sampling
Published in Mark Stamp, Introduction to Machine Learning with Applications in Information Security, 2023
The design effect is defined as the variance of an estimator of the population mean under the sampling design under study, divided by the variance of the π estimator of the mean under simple random sampling of an equal number of units (Section 12.4). The design effect of stratified random sampling is therefore the reciprocal of the stratification effect. For the stratified simple random sample of Figure 4.1, the design effect can be estimated as follows. Function SE extracts the estimated standard error of the estimator of the mean from the output of function svymean. The extracted standard error is then squared to obtain an estimate of the sampling variance of the estimator of the population mean with stratified simple random sampling. Finally, this variance is divided by the variance with simple random sampling of an equal number of units.
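The computation described above uses R's survey package; the same ratio can be sketched in Python. The sketch below is a minimal illustration with hypothetical strata, and it approximates the simple-random-sampling variance from the pooled sample, which is only a rough stand-in for a true SRS estimate:

```python
import numpy as np

def stratified_var_of_mean(strata):
    """Estimated variance of the stratified estimator of the population mean.
    `strata` is a list of (N_h, sample_h) pairs, where N_h is the stratum size."""
    N = sum(N_h for N_h, _ in strata)
    var = 0.0
    for N_h, y in strata:
        y = np.asarray(y, dtype=float)
        n_h = len(y)
        W_h = N_h / N
        # finite-population-corrected contribution of stratum h
        var += W_h**2 * (1 - n_h / N_h) * y.var(ddof=1) / n_h
    return var

def srs_var_of_mean(strata):
    """Approximate variance of the mean under simple random sampling of the
    same total size, using the pooled sample variance (an approximation)."""
    N = sum(N_h for N_h, _ in strata)
    y = np.concatenate([np.asarray(s, dtype=float) for _, s in strata])
    n = len(y)
    return (1 - n / N) * y.var(ddof=1) / n

# hypothetical strata: (stratum size N_h, sampled values)
strata = [(400, [2.1, 2.4, 2.2, 2.6, 2.3]),
          (600, [5.0, 5.4, 4.8, 5.2, 5.6])]
deff = stratified_var_of_mean(strata) / srs_var_of_mean(strata)
print(round(deff, 3))  # well below 1: stratification pays off for these data
```

Because the strata here have very different means, most of the pooled variance is between-stratum variance that stratification removes, so the estimated design effect is far below 1 (equivalently, the stratification effect is large).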
The Prevalence of Lead-Based Paint in Housing: Findings from the National Survey
Published in Joseph J. Breen, Cindy R. Stroup, Lead Poisoning, 2020
R. P. Clickner, V. A. Albright, S. Weitz
A complex survey with geographically clustered sampling and differential probabilities of selection typically has less precision than an unclustered sample with equal selection probabilities for all sampled units. The effect of the design on the precision of the data is called the “design effect.” The design effect is the ratio of the actual sample size to the size of a simple random sample with the same precision. For example, if the sample size is 750 and the design effect is 1.5, then the precision is the same as that of a simple random sample of size 500. The advantages gained by using a complex design (which may be considerable) are obtained at the cost of 250 units in the “effective” sample size. Approximate design effects were calculated for the national survey of lead-based paint in housing and for selected subsets of the sample. The approximate design effect was 1.45 for the overall sample. Thus, overall, confidence interval widths increase by approximately 20% (the square root of the design effect). This analysis does not take XRF measurement error into account, or the effects of within-dwelling-unit sampling.
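The two numerical claims above follow from one line of arithmetic each: the effective sample size is the actual size divided by the design effect, and confidence-interval widths inflate by the square root of the design effect. A quick check:

```python
import math

def effective_sample_size(n, deff):
    """Size of the simple random sample with the same precision."""
    return n / deff

def ci_width_inflation(deff):
    """Factor by which confidence-interval widths grow relative to SRS."""
    return math.sqrt(deff)

print(effective_sample_size(750, 1.5))     # 500.0, as in the example above
print(round(ci_width_inflation(1.45), 2))  # 1.2, i.e. roughly 20% wider intervals
```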
Data collection, processing, and database management
Published in Zongzhi Li, Transportation Asset Management, 2018
Ideally, the selected clusters are representative of the population. However, elements from the same cluster are somewhat similar to one another. As a result, adding one more element from the same cluster in cluster sampling is less informative than adding one more independent element would be. Thus, for the same actual sample size, cluster sampling has a smaller effective sample size than simple random sampling. This is known as the design effect. The design effect is the ratio of the actual variance under the sampling method actually used to the variance computed under the assumption of simple random sampling with the same sample size. For example, a design effect of 3 means that the variance of the estimator under the sampling method used is three times as large as it would be under simple random sampling with the same sample size. Equivalently, only one-third as many sample elements would be needed to estimate the same statistic with the same precision if a simple random sample were used instead of a cluster sample.
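The link between within-cluster similarity and the design effect is commonly summarized by the Kish approximation, deff ≈ 1 + (m − 1)ρ, where m is the average number of sampled elements per cluster and ρ is the intraclass correlation. A minimal sketch, with illustrative values chosen here for the example:

```python
def design_effect_cluster(m, rho):
    """Kish approximation: deff = 1 + (m - 1) * rho, where m is the average
    cluster sample size and rho the intraclass correlation coefficient."""
    return 1 + (m - 1) * rho

# even a modest within-cluster correlation inflates the variance quickly
print(round(design_effect_cluster(m=15, rho=0.05), 2))  # 1.7
print(round(design_effect_cluster(m=15, rho=0.15), 2))  # 3.1, roughly the "deff of 3" case
```

Note that ρ near zero makes clustering nearly costless (deff ≈ 1), while large clusters amplify even a small ρ.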
Measuring assistive technology supply and demand: A scoping review
Published in Assistive Technology, 2021
Jamie Danemayer, Dorothy Boggs, Emma M. Smith, Vinicius Delgado Ramos, Linamara Rizzo Battistella, Cathy Holloway, Sarah Polack
Case study: In 2018, an adaptation of the WHO Assistive Technology Assessment-Needs (ATA-N), a precursor to the rATA, was incorporated into the Bangladesh Rapid Assessment of Disability survey (RAD), in two districts with a subsample selected through a two-stage cluster random sampling process (Pryor et al., 2018). In the first stage, 60 clusters were selected in each of the two districts using a probability proportional to size procedure. In the second stage, approximately 15 households were randomly selected in each cluster using a systematic sampling approach. The survey was adapted for cultural relevance through workshops with key stakeholders before translation and refinement. The adapted ATA-N was administered to adult participants “at risk of disability” who identified functional disabilities, based on questions in the RAD from the Washington Group Short Set of Questions on Disability, which ask participants to self-report on level of difficulty in different functional domains (vision, hearing, mobility, cognition, self-care, communication) (Washington Group Short Set of Disability Questions, 2016). Age- and sex-matched controls without functional limitations were selected for comparison. The survey had a sample size of 4254, of which 31.9% reported at least some functional difficulty in at least one domain. The study generated estimates of self-reported AP use and unmet need, as well as components of demand including facilitators and barriers for AP use. Logistic regression was used to assess the association between different socioeconomic factors, use, and unmet need of AP. Sampling weights and adjustments were used to account for the survey design effect.
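The two-stage design described in the case study, probability-proportional-to-size (PPS) selection of clusters followed by a systematic sample of households, can be sketched as follows. This is an illustrative simplification (PPS with replacement, hypothetical cluster sizes), not the actual RAD sampling procedure:

```python
import random

def pps_select(cluster_sizes, n_clusters, seed=0):
    """Stage 1: select clusters with probability proportional to size
    (with replacement, for simplicity)."""
    rng = random.Random(seed)
    return rng.choices(range(len(cluster_sizes)), weights=cluster_sizes,
                       k=n_clusters)

def systematic_sample(households, k, seed=0):
    """Stage 2: systematic sample of k households from one cluster,
    using a random start and a fixed sampling interval."""
    rng = random.Random(seed)
    step = len(households) / k
    start = rng.uniform(0, step)
    return [households[int(start + i * step)] for i in range(k)]

# hypothetical district: 8 clusters of varying size; pick 3, then 5 households each
sizes = [120, 340, 80, 510, 260, 90, 400, 150]
for c in pps_select(sizes, n_clusters=3):
    households = systematic_sample(list(range(sizes[c])), k=5)
    print(c, households)
```

Large clusters are more likely to be drawn in stage 1, which, combined with a fixed number of households per cluster in stage 2, keeps the overall selection probabilities of households roughly equal across clusters.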