Optimal Control and Differential Game Modeling of a Systems Engineering Process for Transformation
Published in Kenneth C. Hoffman, Christopher G. Glazner, William J. Bunting, Leonard A. Wojcik, Anne Cady, Enterprise Dynamics Sourcebook, 2013
Leonard A. Wojcik, Kenneth C. Hoffman
Thus, we observe that, with foresight of upcoming hits to an SE program, it may be best to begin cautiously and keep the development relatively limited. Many other hit-and-help scenarios were run with the model. In some scenarios where hits increase in size over time, it can be best to build a small system rapidly, suggesting a build-a-little, test-a-little approach. When helps, rather than hits, dominate the SE program, corresponding to new operational applications of the system, aggressive development is called for. In this way, the model can be used to characterize different strategy regimes for SE management and governance. The model presented here assumes full knowledge of upcoming hits and helps, which will not prevail in real SE programs, although in many cases probabilistic estimates of hit magnitudes and times throughout the program life cycle can be generated based on past experience with similar programs. The model can be used to illustrate outcomes across a range of different strategy regimes and to show a perfect-foresight strategy for any set of hits and helps during the system lifetime. A single scenario, including computation of the optimum strategy, runs in less than one second on a typical recent laptop computer, so many scenarios can be played out. A possible extension of the model, which has not yet been undertaken, would be to include stochastic simulation to account for uncertainty in the number, times, and magnitudes of hits and helps, as well as the risk preferences of decision makers.
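As a rough illustration of that suggested extension (not the authors' optimal control model), the Python sketch below samples the number, times, and magnitudes of hits and helps from assumed distributions and compares two hypothetical strategy regimes across many sampled scenarios; the event process, distributions, and toy outcome function are all illustrative assumptions.

```python
# Minimal sketch (not the authors' model): Monte Carlo sampling of hit/help
# schedules, as in the stochastic extension suggested above. All names,
# distributions, and parameters here are illustrative assumptions.
import random

def sample_events(horizon=10.0, rate=0.5, mean_size=1.0):
    """Sample event times (Poisson process) and magnitudes (exponential).

    Positive magnitudes are 'helps'; negative magnitudes are 'hits'."""
    events, t = [], 0.0
    while True:
        t += random.expovariate(rate)               # time to next event
        if t > horizon:
            break
        size = random.expovariate(1.0 / mean_size)  # event magnitude
        sign = 1 if random.random() < 0.5 else -1   # help or hit
        events.append((t, sign * size))
    return events

def program_outcome(events, build_rate, horizon=10.0):
    """Toy stand-in for the SE program model: capability grows with the
    chosen build rate and is shifted by each hit or help as it occurs."""
    capability = build_rate * horizon
    for _, size in sorted(events):
        capability += size
    return capability

# Compare two hypothetical strategy regimes across many sampled scenarios.
for label, build_rate in [("cautious", 0.3), ("aggressive", 1.0)]:
    outcomes = [program_outcome(sample_events(), build_rate)
                for _ in range(10_000)]
    print(label, sum(outcomes) / len(outcomes))
```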
Machine Learning
Published in Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza, Industrial Applications of Machine Learning, 2019
Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza
Approximate inference methods balance the accuracy of the results against the capability to deal with complex models, where exact inference is intractable. Like exact inference, approximate inference is NP-hard in general BNs (Dagum and Luby, 1993). The most successful idea is to use stochastic simulation techniques based on Monte Carlo methods. The network is used to generate a large number of cases (full instantiations) from the JPD. The probability is then estimated by counting observed frequencies in the samples. As more cases are generated, the approximation to the exact probability improves (by the law of large numbers).
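A minimal sketch of this idea, using forward (logic) sampling on a small, assumed example network (the classic cloudy/sprinkler/rain/wet-grass structure, which is not taken from the text): each variable is sampled given its parents, so full cases are drawn from the JPD, and a conditional probability is then estimated by counting frequencies.

```python
# Illustrative sketch of forward (logic) sampling in a toy Bayesian network
# (Cloudy -> Sprinkler, Cloudy -> Rain, {Sprinkler, Rain} -> WetGrass).
# The structure and probabilities are assumptions for this example.
import random

def sample_case():
    """Generate one full instantiation from the JPD, parents before children."""
    cloudy = random.random() < 0.5
    sprinkler = random.random() < (0.1 if cloudy else 0.5)
    rain = random.random() < (0.8 if cloudy else 0.2)
    p_wet = 0.99 if (sprinkler and rain) else 0.9 if (sprinkler or rain) else 0.0
    wet = random.random() < p_wet
    return cloudy, sprinkler, rain, wet

# Estimate P(Rain | WetGrass) by counting observed frequencies; by the law
# of large numbers the estimate converges to the exact probability.
n_samples, n_wet, n_rain_and_wet = 100_000, 0, 0
for _ in range(n_samples):
    _, _, rain, wet = sample_case()
    if wet:
        n_wet += 1
        n_rain_and_wet += rain
print("P(Rain | WetGrass) ~", n_rain_and_wet / n_wet)
```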
Advanced Methods for RAM Analysis and Decision Making
Published in Qamar Mahboob, Enrico Zio, Handbook of RAMS in Railway Systems, 2018
Andreas Joanni, Qamar Mahboob, Enrico Zio
Monte Carlo simulation is a stochastic simulation technique in which multiple independent outcomes of a model are obtained by repeatedly solving the model for randomly sampled values of the input variables and events. The sample of outcomes thereby obtained is statistically treated to compute the system quantities of interest. The method inherently takes into account the effects of uncertainty and is free from the difficulty of solving the underlying model analytically, which is why it is well suited for complex systems that are difficult to model using analytical techniques; see, e.g., the book by Zio (2013).
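As a hedged illustration of this procedure, the sketch below estimates the mission reliability of a small, assumed system (two redundant components in series with a third, with illustrative failure rates): each trial samples the random input lifetimes, solves the simple system model, and the resulting sample of outcomes is summarized statistically.

```python
# Hedged sketch of Monte Carlo simulation for a RAM quantity of interest:
# mission reliability of an assumed system, (A parallel B) in series with C.
# Failure rates and mission time are illustrative, not from the text.
import random

LAMBDA_A, LAMBDA_B, LAMBDA_C = 1e-4, 1e-4, 5e-5   # failure rates [1/h]
MISSION_TIME = 1_000.0                            # mission duration [h]

def system_survives():
    """One independent outcome: sample component lifetimes, solve the model."""
    t_a = random.expovariate(LAMBDA_A)
    t_b = random.expovariate(LAMBDA_B)
    t_c = random.expovariate(LAMBDA_C)
    t_system = min(max(t_a, t_b), t_c)   # redundant pair in series with C
    return t_system > MISSION_TIME

# Statistically treat the sample of outcomes: the success fraction estimates
# the mission reliability.
n = 200_000
successes = sum(system_survives() for _ in range(n))
print("Estimated mission reliability:", successes / n)
```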
Evaluating shift patterns considering heterogeneous skills and uncertain workforce availability
Published in Journal of Decision Systems, 2021
Pia Mareike Steenweg, Matthias Schacht, Brigitte Werners
The literature reviews by De Bruecker et al. (2015b), Ernst et al. (2004), and Van den Bergh et al. (2013) identify different methodologies that are applied in shift scheduling. Few studies have evaluated a combination of optimisation and simulation methods, since uncertainty is mostly neglected and, where considered at all, enters optimisation models only implicitly through the use of expected values for the staffing level. Stochastic simulation may yield a better understanding of the impact of decision policies on the ability to deal with unexpected changes on the operational level. Recent studies by De Bruecker et al. (2015a), Easton (2011), and Ingels and Maenhout (2015) combine both simulation and optimisation. De Bruecker et al. (2015a) develop an iterative process to enhance a deterministic optimisation model (MILP) for aircraft maintenance personnel rosters based on the results of a stochastic simulation. Easton (2011) models the operational reallocation of available capacity in three stages: a deterministic scheduling model (MIP) forms the basis; then, stochastic simulation is used to identify the uncertainty to which, finally, the reallocation model optimally responds. This procedure, with an optimisation component that reacts after the simulation, inspired the structure of the framework developed here.
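A minimal sketch of that three-stage structure, with all shifts, demands, and probabilities as illustrative assumptions (this is not any of the cited models): a deterministic baseline schedule is fixed, stochastic simulation realizes uncertain workforce availability, and a simple greedy stand-in for the reallocation optimisation responds.

```python
# Illustrative three-stage sketch: (1) deterministic baseline schedule,
# (2) stochastic simulation of uncertain availability, (3) reallocation.
# All data and the greedy rule are assumptions, not the cited MIP models.
import random

baseline = {"early": 4, "late": 4, "night": 2}   # stage 1: planned staff (assumed)
demand = {"early": 4, "late": 3, "night": 2}     # required staff (assumed)
P_ABSENT = 0.1                                   # assumed absence probability

def simulate_absences(schedule):
    """Stage 2: each scheduled employee is independently absent with P_ABSENT."""
    return {s: sum(random.random() >= P_ABSENT for _ in range(n))
            for s, n in schedule.items()}

def reallocate(available, demand):
    """Stage 3: greedy stand-in for the reallocation optimisation --
    repeatedly move one surplus employee to the most understaffed shift."""
    available = dict(available)
    while True:
        short = {s: demand[s] - available[s] for s in demand}
        worst = max(short, key=short.get)
        donors = [s for s in available if short[s] < 0]
        if short[worst] <= 0 or not donors:
            return available
        donor = min(donors, key=lambda s: short[s])  # largest surplus
        available[donor] -= 1
        available[worst] += 1

for _ in range(3):   # a few simulated operational days
    realized = simulate_absences(baseline)
    print(realized, "->", reallocate(realized, demand))
```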
A multi-criteria decision-making analysis for the selection of fibres aimed at reinforcing asphalt concrete mixtures
Published in International Journal of Pavement Engineering, 2021
Carlos J. Slebi-Acevedo, Pablo Pascual-Muñoz, Pedro Lastra-González, Daniel Castro-Fresno
The inclusion of stochastic simulations enabled the uncertainty of the different alternatives, and of the criteria associated with each of them, to be taken into consideration. From the results obtained, a decrease in the performance score of each alternative was observed. Introducing stochastic simulations allows risk in the input parameters to be taken into account, thereby providing more precision in the decision-making process.
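To make the mechanism concrete, here is a small sketch of how stochastic simulation can propagate input uncertainty through a multi-criteria score; the alternatives, criteria, weights, and assumed normal perturbations are hypothetical and do not reproduce the paper's data or its particular MCDM method.

```python
# Hedged sketch: Monte Carlo propagation of input uncertainty through a
# weighted-sum multi-criteria score. All values below are assumptions.
import random

weights = {"stability": 0.4, "stiffness": 0.35, "cost": 0.25}
alternatives = {   # mean normalized criterion values per fibre (assumed)
    "fibre A": {"stability": 0.8, "stiffness": 0.7, "cost": 0.6},
    "fibre B": {"stability": 0.6, "stiffness": 0.9, "cost": 0.7},
}
CV = 0.10   # assumed 10% coefficient of variation on each criterion

def sampled_score(values):
    """One simulated score with normally perturbed criterion values."""
    return sum(w * random.gauss(values[c], CV * values[c])
               for c, w in weights.items())

for name, values in alternatives.items():
    scores = sorted(sampled_score(values) for _ in range(20_000))
    mean = sum(scores) / len(scores)
    p05 = scores[int(0.05 * len(scores))]
    p95 = scores[int(0.95 * len(scores))]
    print(f"{name}: mean={mean:.3f}, 90% interval=({p05:.3f}, {p95:.3f})")
```

Ranking on the whole score distribution, rather than on a single deterministic score, is what lets risk enter the comparison of alternatives.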
Metamodel-based optimization of stochastic computer models for engineering design under uncertain objective function
Published in IISE Transactions, 2019
Guilin Li, Matthias Hwai-yong Tan, Szu Hui Ng
Our proposed methodology is general. It can be applied to any time-consuming stochastic simulation model with multiple outputs to be optimized via the desirability function approach or other approaches to multiobjective optimization. In particular, it can be applied to simulation models based on PDEs with randomly drawn inputs, the drug delivery system design case study given in this article being one example. In problems of this type, a PDE model for a physical system is solved numerically with the finite element or other methods implemented with a computer code. Because some inputs to the model are uncertain, they are modeled as random variables and their values are randomly drawn in the simulations. Thus, the outputs of the computer code are random, which necessitates the use of replicated simulations to quantify the distribution of the outputs. Models of this type are widely used in mechanical and civil engineering design to account for input uncertainty. They have been implemented in the popular ANSYS software for finite element simulation, and engineers are often interested in using such software to optimize multiple quantifiable characteristics of an engineering design under input uncertainty (Reh et al., 2006). According to Reh et al. (2006), ANSYS has a built-in capability to construct polynomial emulators (called the response surface method) to help reduce simulation time. Recently, other researchers such as Su et al. (2017) have considered the use of GP emulators. However, it appears that objective function uncertainty has not been systematically considered by these engineering design researchers. Our proposed method can be a useful tool for these researchers to optimize multiple characteristics of an engineering design under both input and objective function uncertainty. This includes the design of manufacturing processes using finite element simulation models (Liu and Hu, 1997). In addition, our proposed approach can be applied to optimize discrete-event stochastic simulation models in operations research (Alberto et al., 2002; Kleijnen, 2014) and the response variance in robust production design (Tan, 2015b).
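A hedged sketch of the basic workflow described above (not the authors' metamodel-based method, which additionally fits emulators and treats objective function uncertainty): a stochastic simulator with two outputs is replicated at each design point, and the replicate means are combined through a desirability function; the simulator, desirability bounds, and design grid are assumptions for illustration.

```python
# Illustrative sketch: replicated runs of a stochastic simulator, outputs
# combined via a desirability function, optimized over a design grid.
# The simulator and all bounds are stand-ins, not the paper's case study.
import random

def simulator(x):
    """Stand-in for a PDE/finite-element code with random inputs."""
    y1 = (x - 2.0) ** 2 + random.gauss(0, 0.2)   # e.g., release-profile error
    y2 = 0.5 * x + random.gauss(0, 0.1)          # e.g., initial burst release
    return y1, y2

def desirability(y, low, high):
    """Smaller-is-better linear desirability, clipped to [0, 1]."""
    return min(1.0, max(0.0, (high - y) / (high - low)))

def overall_desirability(x, reps=50):
    """Replicate the simulation to average out output randomness, then
    combine the two mean outputs with a geometric-mean desirability."""
    runs = [simulator(x) for _ in range(reps)]
    m1 = sum(r[0] for r in runs) / reps
    m2 = sum(r[1] for r in runs) / reps
    d1 = desirability(m1, 0.0, 4.0)
    d2 = desirability(m2, 0.0, 2.0)
    return (d1 * d2) ** 0.5

# Brute-force search over a coarse design grid (the paper instead builds a
# metamodel to avoid this many expensive simulator calls).
grid = [i / 10 for i in range(0, 41)]
best = max(grid, key=overall_desirability)
print("Best design point on the grid:", best)
```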