Review of Probability Theory
Published in Harry G. Perros, An Introduction to IoT Analytics, 2021
The expected value of a random variable is the average of all the values it takes. The expected value of a discrete random variable X with a probability distribution P(X = i), −∞ < i < +∞, is

E[X] = \sum_{i=-\infty}^{+\infty} i \, P(X = i),

and the expected value of a continuous random variable X with a pdf f_X(t) is

E[X] = \int_{-\infty}^{+\infty} t \, f_X(t) \, dt.

Below, we describe some of the properties of the expectation.
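The two definitions above can be sketched numerically. This is an illustrative example only: the fair die and the exponential pdf are assumptions, not taken from the chapter.

```python
import math

# Discrete case: E[X] = sum_i i * P(X = i).
# Example: a fair six-sided die, P(X = i) = 1/6 for i = 1..6.
p = {i: 1 / 6 for i in range(1, 7)}
ev_discrete = sum(i * pi for i, pi in p.items())
print(ev_discrete)  # 3.5

# Continuous case: E[X] = integral of t * f_X(t) dt.
# Example: exponential pdf f_X(t) = lam * exp(-lam * t), whose mean is 1/lam.
lam = 2.0
dt = 1e-4
# Simple Riemann-sum approximation of the integral over [0, 50].
ev_continuous = sum(t * lam * math.exp(-lam * t) * dt
                    for t in (k * dt for k in range(1, 500_000)))
print(round(ev_continuous, 3))  # ≈ 0.5, i.e. 1/lam
```

A numerical library's quadrature routine would replace the Riemann sum in practice; the sum is used here only to keep the example dependency-free.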
Expected Value
Published in Carl L. Pritchard, The Project Management Drill Book, 2018
Expected value (or expected monetary value, as it’s often called) is rooted in the notion that if you know both the probability and impact of a risk event, you can establish how much it would cost if you were to perform the project a hundred or a thousand times again. Although projects are unique, one-time events, expected value is frequently used as a tool to establish contingency or relative levels of risk on projects. The basic formula for expected value on a risk event is:

expected monetary value (EMV) = probability × impact
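The EMV formula can be applied per risk and summed to size a contingency reserve. A minimal sketch, with a purely illustrative risk register (the events, probabilities, and impacts below are assumptions):

```python
# EMV = probability * impact for each risk event; summing EMVs across the
# register gives a rough contingency estimate for the project.
risks = [
    {"name": "vendor delay",  "probability": 0.30, "impact": 50_000},
    {"name": "rework",        "probability": 0.10, "impact": 120_000},
    {"name": "scope dispute", "probability": 0.05, "impact": 200_000},
]

for r in risks:
    r["emv"] = r["probability"] * r["impact"]

contingency = sum(r["emv"] for r in risks)
print(contingency)  # 37000.0
```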
Advanced economic analysis of alternatives
Published in John Vail Farr, Isaac Faber, Engineering Economics of Life Cycle Cost Analysis, 2018
The expected value of the NPW is often used as a summary measure of its probability distribution. Each possible value is multiplied by its associated probability, and the products are summed to determine the expected value. Since the probabilities are weights, they must sum to 1. Typically, these probabilities can be classified as objective or subjective, where objective probabilities are based on objective (historical) data and assume the same trends and characteristics of the past will prevail in the future, and subjective probabilities are usually assigned based on subject-matter expert opinion.
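The weighted-sum computation described above can be sketched as follows. The NPW outcomes and probabilities are hypothetical (e.g., subjective values elicited from an expert), not from the source:

```python
# Expected NPW: multiply each possible NPW outcome by its probability and sum.
outcomes = [(-10_000, 0.2), (25_000, 0.5), (60_000, 0.3)]  # (NPW, probability)

# Since the probabilities are weights, they must sum to 1.
assert abs(sum(p for _, p in outcomes) - 1.0) < 1e-9

expected_npw = sum(npw * p for npw, p in outcomes)
print(expected_npw)  # 28500.0
```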
The modified life cycle cost method for the risk-based design of excavation projects
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2023
The St. Petersburg Paradox, which arises from a simple coin-flip game known as the St. Petersburg lottery, is the main basis of utility theory, as stated by Bernoulli (1954). In this game, the player flips the coin until it lands tails. The prize starts from a specific amount (say $2) and is doubled with each flip, while the probability that the game ends on a given flip starts at 1/2 and is halved with each flip. So the expected value of each trial is equal to 1, as mentioned in Table 4, which leads to an infinite total expected value. However, one may instead use the equation to calculate the utility of each flip. The expected utility of each trial may then be calculated according to Table 4 by multiplying the probability and the utility of that trial, which leads to a total expected utility of about 0.602. The total expected utility of 0.602 corresponds to an amount of $4 according to the second row of Table 4, and hence an individual is willing to pay about $4 to participate in the game. The main factor that affects the choice of a decision-maker is therefore expected utility rather than expected value.
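The 0.602 figure is consistent with a base-10 logarithmic utility, u(x) = log10(x), since 10^0.602 ≈ 4; that utility function is an assumption here, as the article's own equation and Table 4 are not reproduced. A sketch of the calculation:

```python
import math

# St. Petersburg lottery: flip n ends the game with probability (1/2)**n and
# pays 2**n dollars, so each term contributes (1/2**n) * 2**n = 1 to the
# expected value -> the total expected value diverges.
# With the assumed log10 utility, the total expected utility converges:
expected_utility = sum((0.5 ** n) * math.log10(2 ** n) for n in range(1, 200))
print(round(expected_utility, 3))  # 0.602  (exactly 2 * log10(2))

# Certainty equivalent: the sure amount with the same utility.
certainty_equivalent = 10 ** expected_utility
print(round(certainty_equivalent, 1))  # 4.0 -> willing to pay about $4
```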
Flexibility of infrastructure management decisions: the case of a project expansion
Published in Structure and Infrastructure Engineering, 2019
To compute the expected value of losses due to failure, it is necessary to evaluate, in addition to costs, the failure probability. The failure probability can be computed in different ways depending upon the modeling assumptions. For example, it might be computed based on the distribution of the time to failure, F(t), or on the stochastic nature of both demand and resistance (e.g., ). The discussion on the selection of the strategy to compute failure probability is beyond the scope of this paper, but more details can be found, for example, in Melchers (1999) and Sanchez-Silva and Klutke (2016).
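Under the first modeling option named above, the expected loss at a horizon t is the failure probability F(t) times the failure cost. A minimal sketch, assuming (for illustration only) an exponential time-to-failure and a hypothetical failure cost and rate:

```python
import math

# Expected value of losses: expected_loss(t) = F(t) * C_f, where F(t) is the
# probability of failure by time t. The exponential form of F(t), the rate,
# and the failure cost below are illustrative assumptions.
def expected_loss(t_years, rate=0.02, failure_cost=1_000_000.0):
    p_fail = 1.0 - math.exp(-rate * t_years)  # F(t) for exponential lifetimes
    return p_fail * failure_cost

print(round(expected_loss(10), 1))  # 181269.2, expected loss over 10 years
```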
Route planning decisions: evaluating reliance on spatial heuristics under risk
Published in Spatial Cognition & Computation, 2023
Mary E. Frame, Michaela Schwing, Samuel Johnston, Erica Curtis
In operational environments there are consequences to decisions, although decisions are made with variable information on all potential outcomes and their respective probabilities. As such, most of the discourse on decision-making focuses on decisions under risk and uncertainty (Johnson & Busemeyer, 2010). When one knows the probabilities for various outcomes, this is known as a decision under risk. By contrast, uncertainty refers to decisions where some information about outcomes and their probabilities is missing or difficult to determine, either in full or in part. Decisions made under uncertainty rely on presumptions of probability made by the individual decision-maker (Hertwig, Pleskac & Pachur, 2019). When decisions under risk are made, probabilities and outcomes can be evaluated, and the expected value of each option can be computed mathematically by multiplying each outcome by its probability and summing. Expected value provides an economic benchmark by which value preference can be determined. For example, a simple gamble of a 50% chance to win $50 would have an expected value of $25, since ($50 * .5) + ($0 * .5) = $25. Economic models of decision-making asserted that preferences would be formed by computing the Expected Value (EV) and selecting the highest-valued option (Morgenstern & Von Neumann, 1953). However, most individuals do not consistently rely on EV calculations to make preferential choices involving monetary risk. Psychological studies of decision-making have demonstrated robust preferences for safer options over risky options, even when the potential gained outcome is higher under risk, a phenomenon known as loss aversion (Kahneman & Tversky, 1979). That is to say, a person may choose an option with a lower expected value if it also has a lower risk. One simple mathematical way to calculate the risk of a gamble, for example with two possible outcomes, is by using variance (see Equation 1).
Higher-variance gambles are understood to be of higher risk due to the larger range of potential outcomes, including a larger potential loss or a higher probability of not achieving any gain. The equation to calculate discrete-outcome variance (Peck, Olsen & Devore, 2015) is as follows,

\sigma^2 = \sum_{i} p_i (x_i - \mu)^2,

where x_i are the outcomes, p_i their probabilities, and \mu the expected value.
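The discrete-outcome variance can be computed directly for the two-outcome gamble used in the excerpt ($50 at 50%, otherwise $0); this sketch only restates that arithmetic:

```python
# EV and discrete-outcome variance for a two-outcome gamble:
# variance = sum_i p_i * (x_i - EV)**2.
outcomes = [(50.0, 0.5), (0.0, 0.5)]  # (payout, probability)

ev = sum(x * p for x, p in outcomes)
variance = sum(p * (x - ev) ** 2 for x, p in outcomes)

print(ev)        # 25.0
print(variance)  # 625.0 -> a sure $25 has variance 0, so it is "safer"
```

A sure payment of $25 has the same expected value but zero variance, which is exactly the comparison a risk-averse decision-maker is described as making.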