The Monte Carlo Method and Its Applications to Nuclear Science and Engineering
Published in Robert E. Masterson, Introduction to Nuclear Reactor Physics, 2017
Importance sampling is perhaps the most common way to reduce the variance of a statistically based method. In essence, the idea of importance sampling is to modify the simulation so that the variance V is reduced by selecting the results from a distribution other than the one the problem suggests. For example, in radiation shielding calculations, most neutrons produced in the core head toward the reflector, while others stream randomly back into the core, where they are absorbed. See Figure 23.13 for a schematic representation of how this occurs. The neutrons that head back into the core have little, if any, effect on the ones that make it through the reflector. Hence, if we can eliminate the inward-directed neutrons from the statistical sample more quickly, less computer time will be spent tracking those neutrons, and more computer time will be spent on the neutrons that actually make it through the shield. In other words, the initial trajectory of a neutron is one way to infer the effect it will have on the final solution, and this is one way in which the “importance” of the neutron’s contribution can be assessed. If a neutron is moving away from the reflector, we can “adjust” its absorption cross section Σa(E) from its initial value (say 0.10 per centimeter) to a much higher value (like 0.90 per centimeter).
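The idea of biasing a cross section and then correcting for the bias can be illustrated with a minimal sketch (this is not the text's shielding code, and the function names, cross-section values, and slab thickness are illustrative). Here we take the complementary biasing direction: estimating the probability that a neutron's free path exceeds the thickness of a purely absorbing slab, once directly and once by artificially *lowering* Σa so that more histories penetrate, with a likelihood-ratio weight restoring the unbiased answer:

```python
import math
import random

def transmission_direct(sigma, thickness, n):
    # Direct MC: fraction of free paths (exponential, rate sigma) that
    # exceed the slab thickness, i.e. the transmission probability
    hits = sum(1 for _ in range(n) if random.expovariate(sigma) > thickness)
    return hits / n

def transmission_importance(sigma, sigma_b, thickness, n):
    # Biased MC: sample free paths with an artificially reduced cross
    # section sigma_b so more histories penetrate the slab, then restore
    # the correct answer with the likelihood-ratio weight p(x)/q(x)
    total = 0.0
    for _ in range(n):
        x = random.expovariate(sigma_b)
        if x > thickness:
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

# Purely absorbing 10 cm slab with sigma_a = 0.5 cm^-1; the exact
# transmission probability is exp(-5), roughly 6.7e-3
random.seed(1)
estimate = transmission_importance(0.5, 0.1, 10.0, 100_000)
```

With the biased cross section, far more histories reach the tally region, so the same sample size yields a noticeably tighter estimate than the direct version.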
Monte Carlo Simulation
Published in Shyam S. Sablani, M. Shafiur Rahman, Ashim K. Datta, Arun S. Mujumdar, Handbook of Food and Bioprocess Modeling Techniques, 2006
Kevin Cronin, James P. Gleeson
Variance reduction techniques are available that can improve the efficiency of the Monte Carlo method by more than an order of magnitude. There are a number of such approaches, with importance sampling being one of the more prominent.27 The basic idea behind importance sampling is that certain values of the input random variables (or vectors) have a greater impact on the quantities being estimated than others; if these “important” values are sampled more frequently, i.e., sampled from a biased density function, the variance of the estimator can be reduced. The outputs of the simulations are then weighted to correct the bias introduced by sampling from the biased density function. The purpose of importance sampling is to obtain accurate estimates of the output quantities with fewer samples than the direct Monte Carlo method requires. Two major steps are involved. The first is distortion of the original input process: instead of taking samples from the original PDF, samples are taken from another PDF, called the importance density function, so that the “important” regions of the sample space receive more samples. The fundamental issue in implementing the method is the choice of this biased importance density function. The second step is correction of the distortion: the outputs from the different samples (realizations) are averaged using weights related to the distortion, such that the mean of the quantity being estimated is preserved.
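The two steps described above, sampling from a biased density and reweighting to remove the distortion, can be sketched for a classic case where direct Monte Carlo struggles: a normal tail probability. The function name and the choice of a mean-shifted normal as the importance density are illustrative assumptions, not taken from the chapter:

```python
import math
import random

def tail_prob_is(threshold, n, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by importance sampling:
    step 1 draws from the biased (importance) density N(threshold, 1),
    step 2 reweights each sample by the density ratio to undo the bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)   # step 1: biased density
        if x > threshold:
            # step 2: weight = phi(x) / phi(x - threshold)
            total += math.exp(threshold * threshold / 2.0 - threshold * x)
    return total / n

est = tail_prob_is(4.0, 50_000)
# exact value is P(Z > 4) ≈ 3.17e-5
```

A direct Monte Carlo run of the same size would see only one or two exceedances of 4 on average, whereas the shifted density places roughly half its samples in the important region.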
Probabilistic modeling and prediction of out-of-plane unidirectional composite lamina properties
Published in Mechanics of Advanced Materials and Structures, 2021
Jiaxin Zhang, Michael Shields, Stephanie TerMaath
Importance sampling is a variance reduction technique applied to estimate a statistical expectation with respect to a target probability distribution $p(x)$ using samples drawn from an alternative distribution $q(x)$. Specifically, the expected value with respect to $p$ is formulated by
$$\mathbb{E}_p[f(X)] = \int f(x)\,p(x)\,\mathrm{d}x = \int f(x)\,\frac{p(x)}{q(x)}\,q(x)\,\mathrm{d}x = \mathbb{E}_q\!\left[f(X)\,\frac{p(X)}{q(X)}\right],$$
where $\mathbb{E}_q$ denotes expectation with respect to $q$. Defining importance weights $w(x) = p(x)/q(x)$, the importance sampling estimator of $\mathbb{E}_p[f(X)]$ is
$$\hat{\mu}_{\mathrm{IS}} = \frac{1}{N}\sum_{i=1}^{N} f(x_i)\,w(x_i), \qquad x_i \sim q.$$
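The estimator above translates directly into a generic routine; the function name and the exponential-density example below are illustrative choices, not drawn from the article:

```python
import math
import random

def importance_sampling_estimate(f, p, q, sample_q, n, seed=0):
    """Generic IS estimator: E_p[f(X)] ~ (1/n) * sum f(x_i) p(x_i)/q(x_i),
    where the x_i are drawn from the alternative density q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += f(x) * p(x) / q(x)   # importance weight w(x) = p(x)/q(x)
    return total / n

# Example: E_p[X] for target p = Exp(2), sampled through q = Exp(1)
p = lambda x: 2.0 * math.exp(-2.0 * x)
q = lambda x: math.exp(-x)
est = importance_sampling_estimate(lambda x: x, p, q,
                                   lambda rng: rng.expovariate(1.0), 200_000)
# true mean of Exp(2) is 1/2
```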
Optimal budget allocation for stochastic simulation with importance sampling: Exploration vs. replication
Published in IISE Transactions, 2021
When we have a limited budget of simulation runs, we need to optimize the allocation of the budget across both levels to accurately estimate the output of interest. Choe et al. (2015) provided a general framework for resource allocation at both levels when the first level uses importance sampling. The objective of importance sampling is to take more samples from the important input region so as to reduce the estimation variance under a limited budget. Choe et al. (2015) considered the so-called stochastic black box model, where the second-level simulation relies entirely on a complicated black-box computer model, such as a wind turbine simulator. In the two-level simulation framework, they jointly derived the importance sampling density for the first-level simulation and the optimal budget allocation for the second-level simulation; the importance sampling density affects the optimal budget allocation and vice versa. Their approach is called stochastic importance sampling and has been studied extensively in wind energy applications (Choe et al., 2015; Choe et al., 2016; Choe et al., 2018; Cao and Choe, 2019; Pan et al., 2020; Pan et al., 2021).
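The two-level structure can be sketched in miniature: inputs are drawn from an importance density at the first level, each input is pushed through a stochastic black box several times at the second level, and the density ratio removes the sampling bias. This uses a uniform replication budget per input, not the optimal allocation derived by Choe et al. (2015), and the toy black box and density choices are assumptions for illustration only:

```python
import math
import random

def two_level_is(black_box, p, q, sample_q, n_inputs, n_reps, seed=0):
    """Two-level sketch: first level draws inputs from an importance
    density q; second level averages n_reps noisy black-box replications
    per input; the weight p(x)/q(x) corrects the first-level bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_inputs):
        x = sample_q(rng)                                   # level 1
        y_bar = sum(black_box(x, rng) for _ in range(n_reps)) / n_reps  # level 2
        total += y_bar * p(x) / q(x)
    return total / n_inputs

# Toy stand-in for the black box: the true response x plus simulation noise
noisy_response = lambda x, rng: x + rng.gauss(0.0, 0.1)
p = lambda x: 2.0 * math.exp(-2.0 * x)   # target input density, Exp(2)
q = lambda x: math.exp(-x)               # importance density, Exp(1)
est = two_level_is(noisy_response, p, q,
                   lambda rng: rng.expovariate(1.0), 20_000, 5)
# true value is E_p[X] = 0.5
```

The interplay the paper studies arises because spending more replications per input shrinks the second-level noise while leaving fewer distinct first-level samples, and the best trade-off depends on the importance density chosen.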
System reliability evaluation of in-service cable-stayed bridges subjected to cable degradation
Published in Structure and Infrastructure Engineering, 2018
Naiwei Lu, Yang Liu, Michael Beer
In Figure 7, the data processing system (DPS) (Tang & Zhang, 2013) is utilised to generate uniformly distributed samples that will be used for training the SVR models. LIBSVM (Library for Support Vector Machines) (Chang & Lin, 2011) is a MATLAB program package. The MCS (Monte Carlo simulation) can be either a direct MCS or importance sampling. Compared with direct MCS, importance sampling uses a weighting function to concentrate the sampling around the most probable point (MPP) rather than around the mean vector (Dai et al., 2012). In this manner, a large number of sample points fall into the failure domain, making the sampling much more efficient. The β-bound function refers to Equations (4) and (5). At the end of the procedure, the finite-element model is updated, and the component reliability is re-evaluated step by step. Eventually, the system reliability can be evaluated in a series–parallel system.
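Centering the sampling density at the MPP rather than at the mean vector can be sketched on a textbook limit state rather than the bridge model itself; the limit state g(x₁, x₂) = 7 − x₁ − x₂ with standard normal variables, whose MPP lies at (3.5, 3.5) by symmetry, and the function name are illustrative assumptions:

```python
import math
import random

def failure_prob_mpp_is(mpp, n, seed=0):
    """Estimate P(g(X) < 0) for g(x1, x2) = 7 - x1 - x2, X1, X2 iid N(0, 1),
    by sampling from unit normals centered at the MPP instead of the mean
    vector, so that about half the samples land in the failure domain."""
    m1, m2 = mpp
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1 = rng.gauss(m1, 1.0)
        x2 = rng.gauss(m2, 1.0)
        if x1 + x2 > 7.0:   # failure domain, g < 0
            # weight = phi(x1) phi(x2) / (phi(x1 - m1) phi(x2 - m2))
            total += math.exp((m1 * m1 + m2 * m2) / 2.0 - m1 * x1 - m2 * x2)
    return total / n

pf = failure_prob_mpp_is((3.5, 3.5), 100_000)
```

Sampling around the mean vector instead, essentially none of 100,000 points would fall in this failure domain (the exact probability is about 3.7 × 10⁻⁷), which is why the MPP-centered density makes the reliability estimate tractable.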