Definition of Exploratory Trajectories in Robotics Using Genetic Algorithms
Published in Don Potter, Manton Matthews, Moonis Ali, Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, 2020
Maria João Rendas, Wilfrid Têtenoire
The last criterion evaluates the ability of the platform to acquire its position and orientation at the end of each linear segment, and its explicit consideration is new. We define it using a performance analysis tool based on notions of Statistical Estimation Theory, which describes the asymptotic properties of Maximum Likelihood parameter estimates: the Cramér-Rao lower bound. Let z be a vector denoting the available observations, which are probabilistically related to a parameter θ that we wish to determine (estimate) through a known family of conditional density functions: {p(z|θ) | θ ∈ Θ}.
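For reference, the Cramér-Rao bound can be stated compactly; the notation below (Fisher information matrix J(θ)) is generic and not taken from the excerpt:
$$ J(\theta) = \mathbb{E}\!\left[ \nabla_\theta \ln p(z\mid\theta)\, \nabla_\theta \ln p(z\mid\theta)^{\mathsf T} \right], \qquad \operatorname{Cov}(\hat\theta) \succeq J(\theta)^{-1}, $$
so any unbiased estimator of θ has a covariance matrix no smaller (in the matrix sense) than the inverse Fisher information.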
Sustaining Product Reliability
Published in Ali J Jamnia, Khaled Atua, Executing Design for Reliability within the Product Life Cycle, 2019
There are other rules of thumb to keep in mind when selecting a distribution model to fit failure data and calculating reliability metrics. When the number of failures is 20 or less, the two-parameter Weibull distribution is to be chosen for the analysis (Wenham 1997). When fitting large-size reliability data (greater than 500 failures), use the maximum likelihood estimation (MLE) method. MLE is a method that computes the maximum likelihood estimates for the parameters of a statistical distribution. The mathematics involved in the MLE method is beyond the scope of this book, but is discussed in more detail in other works such as Aldrich (1997). In general, when fitting fewer than 500 data points (failures), rank regression is more accurate (Liu 1997).
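As a minimal sketch of the MLE route described above (not from the book; synthetic data, SciPy's built-in fitter, and the two-parameter Weibull obtained by fixing the location at zero):
```python
import numpy as np
from scipy import stats

# Synthetic failure times standing in for a large (>500) reliability data set
rng = np.random.default_rng(0)
failure_times = stats.weibull_min.rvs(c=1.8, scale=1000.0, size=600, random_state=rng)

# MLE fit of a two-parameter Weibull (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0)

# Reliability (survival probability) at t = 500 hours under the fitted model
reliability_500 = stats.weibull_min.sf(500.0, shape, loc=loc, scale=scale)
print(f"shape={shape:.3f}, scale={scale:.1f}, R(500)={reliability_500:.3f}")
```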
Spherical metrics in general relativity
Published in Maricel Agop, Ioan Merches, Operational Procedures Describing Physical Systems, 2018
As a rule, the likelihood function used in estimations is simply the product of the values of the probability density for the different measured values of X. When maximizing the likelihood with respect to the parameters, it is therefore appropriate to work with the logarithm of the likelihood, and this is what happens in practice. For instance, if one measures two values of X having the probability density (2.193), say x1 and x2, the likelihood function constructed from this information is simply
$$ L(\theta \mid x_{1}, x_{2}) = \frac{\theta_{2}^{2}}{\pi^{2}\,|x_{1}-\theta|^{2}\,|x_{2}-\theta|^{2}}. $$
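Taking the logarithm, as the passage recommends, gives (a sketch consistent with the formula above, assuming θ₂ denotes the scale parameter appearing in (2.193)):
$$ \ln L(\theta \mid x_{1}, x_{2}) = 2\ln\theta_{2} - 2\ln\pi - 2\ln|x_{1}-\theta| - 2\ln|x_{2}-\theta|, $$
and the maximum likelihood estimate follows by setting the derivatives of this expression with respect to the parameters to zero.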
Efficient Uncertainty Quantification of Wharf Structures under Seismic Scenarios Using Gaussian Process Surrogate Model
Published in Journal of Earthquake Engineering, 2021
Lei Su, Hua-Ping Wan, You Dong, Dan M. Frangopol, Xian-Zhang Ling
Since the zero-mean function is used, the covariance function parameters uniquely determine the GP surrogate model. The governing parameters are usually called hyperparameters in machine learning. As mentioned above, the GP surrogate model is a Bayesian emulator with a GP prior specified by the mean and covariance functions. In the Bayesian context, the maximum likelihood estimation (MLE) estimator is a widely used inference method for determining model parameters given observations. Specifically, MLE seeks the model parameters that maximize the likelihood function, which quantifies how likely the observations are under the model for a given set of parameters. Therefore, the MLE estimator is employed to infer the hyperparameters of the GP surrogate model given training data. In doing so, estimation of the hyperparameters is converted into an optimization problem of minimizing the negative logarithmic marginal likelihood (NLML). With a Gaussian likelihood, the NLML and its partial derivatives are given in [Rasmussen and Williams, 2006].
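For reference, the standard expressions from Rasmussen and Williams [2006] are, in notation chosen here (training targets y, covariance matrix K(θ) over the training inputs, α = K⁻¹y, and n training points):
$$ \mathrm{NLML}(\theta) = \tfrac{1}{2}\,\mathbf{y}^{\mathsf T} K^{-1} \mathbf{y} + \tfrac{1}{2}\,\ln|K| + \tfrac{n}{2}\,\ln 2\pi, \qquad \frac{\partial\, \mathrm{NLML}}{\partial \theta_{j}} = -\tfrac{1}{2}\,\operatorname{tr}\!\Big[\big(\boldsymbol{\alpha}\boldsymbol{\alpha}^{\mathsf T} - K^{-1}\big)\,\frac{\partial K}{\partial \theta_{j}}\Big], $$
which is the objective (and gradient) typically passed to a gradient-based optimizer to tune the hyperparameters.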
A statistical study on lognormal central tendency estimation in probabilistic seismic assessments
Published in Structure and Infrastructure Engineering, 2020
Mohamad Zarrin, Mohsen Abyani, Behrouz Asgarian
Maximum Likelihood Estimation (MLE) (Walpole et al., 2002) is a specific method for determining the estimators for which the likelihood function (L) is maximized. Solving the corresponding equations for a lognormal random variable, the optimized estimators of the lognormal parameters are determined.
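The excerpt's own equations are not reproduced here; for a lognormal sample x₁, …, xₙ, the standard maximum likelihood estimators of the logarithmic mean (whose exponential is the median, i.e. the central tendency) and the logarithmic standard deviation are, assuming the paper uses this common parameterization:
$$ \hat\mu = \frac{1}{n}\sum_{i=1}^{n} \ln x_{i}, \qquad \hat\beta = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(\ln x_{i} - \hat\mu\big)^{2}}, \qquad \hat\theta = e^{\hat\mu}\ \text{(median)}. $$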
Batch sequential designs for accelerated life tests and an application to polymer composite fatigue test
Published in Quality Engineering, 2021
Hung-Ping Tung, I-Chen Lee, Yili Hong, Sheng-Tsaing Tseng, Bing Xing Wang
By maximizing (2), we obtain the maximum likelihood (ML) estimator of the model parameters, together with its corresponding Fisher information matrix.
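The paper's specific expressions are not shown in this excerpt; generically, writing ℓ(θ) for the log-likelihood in (2), the ML estimator and Fisher information matrix take the form (notation mine):
$$ \hat\theta = \arg\max_{\theta}\, \ell(\theta), \qquad \mathcal{I}(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^{2} \ell(\theta)}{\partial \theta\, \partial \theta^{\mathsf T}}\right], $$
with the asymptotic covariance of the ML estimator approximated by the inverse Fisher information evaluated at θ̂.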