Basic Computations
Published in Jhareswar Maiti, Multivariate Statistical Modeling in Engineering and Management, 2023
In practice, the difference between the probability function and the likelihood function is this: given a particular set of parameter values, the probability function computes the pdf or pmf of different known observations; it is a function of the data, defined on the data scale, as shown in Figures 3.14 and 3.15. The likelihood function, in contrast, is a function of the parameters, defined on the parameter scale, and computes the likelihood of a particular parameter value for a given data set (Myung, 2003; Lindsey, 2006). The likelihood functions for y = 1 and y = 5 shown in Table 3.7 are plotted in Figures 3.16(a) and (b).
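The two viewpoints can be sketched in a few lines. The Poisson model below is an assumption chosen for illustration; the actual distribution behind Table 3.7 is not reproduced in this excerpt.

```python
import math

def poisson_pmf(y, lam):
    # Probability of observing count y under rate parameter lam
    return math.exp(-lam) * lam**y / math.factorial(y)

# Probability view: fix the parameter, vary the data
lam_fixed = 2.0
probs = [poisson_pmf(y, lam_fixed) for y in range(6)]

# Likelihood view: fix the data, vary the parameter
grid = [0.5 * k for k in range(1, 21)]          # candidate lambda values
lik_y1 = [poisson_pmf(1, lam) for lam in grid]  # likelihood for y = 1
lik_y5 = [poisson_pmf(5, lam) for lam in grid]  # likelihood for y = 5

# Each single-observation likelihood peaks at lambda = y,
# the MLE of a Poisson rate from one observation
print(grid[lik_y1.index(max(lik_y1))])  # 1.0
print(grid[lik_y5.index(max(lik_y5))])  # 5.0
```

The same function `poisson_pmf` serves both roles; only which argument is held fixed changes.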
Handling missing data in large databases
Published in Uwe Engel, Anabel Quan-Haase, Sunny Xun Liu, Lars Lyberg, Handbook of Computational Social Science, Volume 2, 2021
Martin Spiess, Thomas Augustin
The likelihood function is also at the heart of parametric Bayesian inference, where prior knowledge (or ignorance) about model parameters, in the form of a prior distribution, is combined with observed data information via the likelihood function to form the so-called posterior distribution of the parameter. This posterior distribution reflects the knowledge about the parameters of scientific interest in the light of new data and is used to draw inferences. Bayesian inferences, like direct-likelihood inferences, are generally not evaluated from a frequentist perspective but are based on their plausibility or their support from the observed data (Rubin, 1976). To evaluate models, the posterior distribution of the parameter may be inspected and Bayes factors comparing different models can be calculated. There is, however, also a demand to evaluate Bayesian inferences from a frequentist point of view (e.g. Rubin, 1996). In the case of direct-likelihood inferences, models are compared via likelihood ratios, that is, ratios of likelihood functions based on different models, each evaluated at its respective maximum.
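The prior-times-likelihood construction can be sketched on a grid of parameter values; the binomial data below (7 successes in 10 trials) are hypothetical illustrative values.

```python
import math

# Hypothetical data: 7 successes in 10 Bernoulli trials
n, k = 10, 7

def likelihood(theta):
    # Binomial likelihood of the observed data as a function of theta
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# Flat (non-informative) prior on a grid of candidate parameter values
grid = [i / 100 for i in range(1, 100)]
prior = [1.0 for _ in grid]

# Posterior is proportional to prior x likelihood; normalise over the grid
unnorm = [p * likelihood(t) for p, t in zip(prior, grid)]
posterior = [u / sum(unnorm) for u in unnorm]

# Under a flat prior, the posterior mode coincides with the MLE k/n
mode = grid[posterior.index(max(posterior))]
print(mode)  # 0.7
```

Replacing the flat `prior` with an informative one shifts the posterior toward the prior knowledge, which is exactly the combination the passage describes.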
Modelling demand-caused failures. Estimation procedure
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
R. Mullor, A.I. Sánchez, P. Martorell, S. Martorell
From these expressions, the maximum likelihood estimation (MLE) method provides estimates of the parameters of reliability and maintenance models. The maximum likelihood estimates of these parameters are the values that maximize the likelihood function, that is, that maximize the probability that the observed events occur. Although direct methods can sometimes be applied to obtain the solution, this is not usually the case for reliability models. In our applications, to maximize the likelihood function of each model, we use the Nelder-Mead simplex method.
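A minimal sketch of this approach, assuming an exponential failure model and invented failure times (the paper's actual reliability models are not reproduced here): the likelihood is maximised with SciPy's Nelder-Mead implementation by minimising the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical times-to-failure (illustrative values, in hours)
times = np.array([120.0, 340.0, 95.0, 410.0, 230.0])

def neg_log_likelihood(params):
    # Exponential failure model with rate lam; minimising -log L maximises L
    lam = params[0]
    if lam <= 0:
        return np.inf
    return -(len(times) * np.log(lam) - lam * times.sum())

# Nelder-Mead needs no derivatives, which suits likelihoods whose
# gradients are awkward to write down
result = minimize(neg_log_likelihood, x0=[0.01], method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-12})
lam_hat = result.x[0]

# For the exponential model the MLE has a closed form, 1 / mean(times),
# which lets us check the numerical optimiser
print(lam_hat, 1 / times.mean())
```

For real reliability models with several parameters, `x0` becomes a vector and the same call applies unchanged, which is the appeal of the simplex method here.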
Learning attack-defense response in continuous-time discrete-states Stackelberg Security Markov games
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
The likelihood of the observations is conceptualised as the probability of jumping from one state to the next at the first jump time, followed by a jump to the subsequent state at the next jump time, and so on. Maximum-likelihood estimation is a method for estimating the parameters of a probability distribution by maximising a likelihood function. The objective is to obtain the values of the model parameters that maximise the likelihood function over the parameter space, i.e. θ̂ = argmax over θ ∈ Θ of L(θ).
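A minimal sketch of this idea for a continuous-time chain (the two-state trajectory and holding times below are invented for illustration): with exponentially distributed holding times, the rate of leaving a state is estimated by maximising the log-likelihood over the parameter space, here a simple grid.

```python
import math

# Hypothetical observed trajectory: (state, holding_time) pairs before each jump
trajectory = [("s0", 1.2), ("s1", 0.7), ("s0", 2.1), ("s1", 0.4), ("s0", 1.5)]

def log_likelihood(rate, holds):
    # Exponential holding times: L(rate) = prod over t of rate * exp(-rate * t)
    return sum(math.log(rate) - rate * t for t in holds)

holds_s0 = [t for s, t in trajectory if s == "s0"]

# Search the parameter space (a grid of candidate rates) for the maximiser
grid = [0.01 * k for k in range(1, 500)]
rate_hat = max(grid, key=lambda r: log_likelihood(r, holds_s0))

# Closed form for the exponential MLE: number of jumps / total time in state,
# useful as a sanity check on the grid search
print(rate_hat, len(holds_s0) / sum(holds_s0))
```

The grid search stands in for whatever optimiser the paper uses; the point is only that the estimate is the argmax of the likelihood over the parameter space.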
Error by omitted variables in regret-based choice models: formal and empirical comparison with utility-based models using orthogonal design data
Published in Transportmetrica A: Transport Science, 2020
Sunghoon Jang, Soora Rasouli, Harry Timmermans
As shown in Equation (5), the error generated by omitted variable(s) is alternative-specific. Since the utility of an alternative depends only on its own attribute values, the effect of omitted variables is also alternative-specific. Therefore, alternative-specific constants (ASCs) have been proposed to represent the average effect of the omitted variable(s) (Tardiff 1978; Ben-Akiva and Lerman 1985; Train 2009; Ortuzar and Willumsen 2011). Under the assumption that the error terms are identically, independently Gumbel distributed (IID-Gumbel), the probability of choosing an alternative takes the multinomial logit form, and the parameters are estimated by maximizing L = ∏_n ∏_i P_n(i)^(y_in), where L is the likelihood function and LL = ln L is the log-likelihood function; y_in is 1 if individual n chooses alternative i and 0 otherwise.
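A minimal sketch of the multinomial logit probabilities and the resulting log-likelihood; the utilities and choices below are hypothetical illustrative values, not the paper's data.

```python
import math

# Hypothetical systematic utilities for 3 alternatives and 2 individuals
V = [[1.0, 0.5, -0.2],   # individual 1
     [0.3, 1.1, 0.4]]    # individual 2
choices = [0, 1]         # chosen alternative index per individual

def logit_probs(v):
    # IID-Gumbel errors yield the multinomial logit choice probability:
    # P(i) = exp(V_i) / sum_j exp(V_j)
    denom = sum(math.exp(x) for x in v)
    return [math.exp(x) / denom for x in v]

# Log-likelihood: LL = sum_n sum_i y_in * log P_n(i), which here reduces to
# summing log P of the chosen alternative for each individual
LL = sum(math.log(logit_probs(V[n])[choices[n]]) for n in range(len(V)))

probs = logit_probs(V[0])
print(round(sum(probs), 6))  # 1.0, probabilities sum to one
```

In estimation, the utilities `V` would be functions of attributes and unknown coefficients, and an optimiser would adjust the coefficients to maximise `LL`.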
Probabilistic estimation of variogram parameters of geotechnical properties with a trend based on Bayesian inference using Markov chain Monte Carlo simulation
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2021
Jiabao Xu, Lulu Zhang, Jinhui Li, Zijun Cao, Haoqing Yang, Xiangyu Chen
A set of prior distributions for the parameters and a likelihood function must be defined before carrying out Bayesian inference. In this study, uniform prior distributions are adopted because of the relatively limited information on the model parameters. A uniform distribution reflects a state of non-informative prior knowledge (Xiong et al. 2015; Cao, Wang, and Li 2016) and thus carries considerable uncertainty. Other types of distribution, such as the normal distribution, can also be used as priors if prior information on the parameters is available in the literature or from engineering experience.
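A minimal sketch of this setup, with a uniform prior combined with a Gaussian likelihood via a simple Metropolis MCMC sampler; the measurements, prior bounds, and proposal scale below are illustrative assumptions, not the study's data or model.

```python
import math
import random

random.seed(0)

# Hypothetical measurements of a geotechnical property (illustrative values)
data = [21.3, 19.8, 22.5, 20.1, 21.0]
sigma = 1.0                      # assumed known measurement std. dev.

def log_likelihood(mu):
    # Gaussian likelihood of the data given mean mu (constants dropped)
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

def log_prior(mu):
    # Uniform (non-informative) prior on [0, 50]; -inf outside the support
    return 0.0 if 0.0 <= mu <= 50.0 else -math.inf

# Metropolis sampler: accept a proposal with probability min(1, posterior ratio)
mu, samples = 10.0, []
for _ in range(5000):
    prop = mu + random.gauss(0.0, 0.5)
    log_ratio = (log_likelihood(prop) + log_prior(prop)
                 - log_likelihood(mu) - log_prior(mu))
    if math.log(random.random()) < log_ratio:
        mu = prop
    samples.append(mu)

# Discard burn-in; with a flat prior the posterior mean tracks the sample mean
post_mean = sum(samples[1000:]) / len(samples[1000:])
print(post_mean)
```

Swapping `log_prior` for, say, a normal log-density is all that is needed to encode informative prior knowledge, which mirrors the choice discussed in the passage.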