Sustaining Product Reliability
Published in Ali J Jamnia, Khaled Atua, Executing Design for Reliability within the Product Life Cycle, 2019
There are other rules of thumb to keep in mind when selecting a distribution model to fit failure data and calculating reliability metrics. When the number of failures is 20 or fewer, the two-parameter Weibull distribution should be chosen for the analysis (Wenham 1997). When fitting large reliability data sets (more than 500 failures), use the maximum likelihood estimation (MLE) method. MLE is a method that computes the maximum likelihood estimates for the parameters of a statistical distribution. The mathematics involved in the MLE method is beyond the scope of this book, but is discussed in more detail in other works such as Aldrich (1997). In general, when fitting fewer than 500 data points (failures), rank regression is more accurate (Liu 1997).
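As a concrete illustration of the MLE route, here is a minimal sketch (not from the book) that fits a two-parameter Weibull distribution to hypothetical failure times with SciPy and reads off a reliability value. Note that SciPy's `fit` is MLE-based, so the rank-regression approach recommended above for small samples would need a separate implementation:

```python
# A minimal sketch: two-parameter Weibull fit to hypothetical failure data.
import numpy as np
from scipy import stats

# Hypothetical failure times in hours; in practice these come from field data.
failures = np.array([120.0, 340.0, 410.0, 560.0, 720.0, 980.0, 1100.0, 1450.0])

# floc=0 pins the location parameter at zero, giving the two-parameter form.
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)

# Reliability (survival) at t = 500 hours: R(t) = exp(-(t/scale)**shape).
t = 500.0
reliability = stats.weibull_min.sf(t, shape, loc=loc, scale=scale)
print(f"shape (beta) = {shape:.3f}, scale (eta) = {scale:.1f}")
print(f"R({t:.0f} h) = {reliability:.3f}")
```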
Image Recovery Using the EM Algorithm
Published in Vijay K. Madisetti, The Digital Signal Processing Handbook, 2017
Jun Zhang, Aggelos K. Katsaggelos
In the MLE, the likelihood function is the pdf evaluated at an observed data sample conditioned on the parameters of interest, e.g., blur filter coefficients and noise level, and the MLE seeks the parameters that maximize the likelihood function, i.e., best explain the observed data. Besides being intuitively appealing, the MLE also has several good asymptotic (large-sample) properties [10] such as consistency (the estimate converges to the true parameters as the sample size increases). However, for many nontrivial image recovery problems, the direct evaluation of the MLE can be difficult, if not impossible. This difficulty arises because likelihood functions are usually highly nonlinear and often cannot be written in closed form (e.g., they are often integrals of other pdfs). The former prevents analytic solutions, while the latter can make any numerical procedure impractical.
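To make the mechanics concrete, here is a minimal sketch (not from the chapter) of numerical MLE in a toy version of this setting: the unknown noise level of a Gaussian observation model is estimated by minimizing the negative log-likelihood. This trivial case actually has a closed-form answer; the numerical route merely stands in for the nonlinear, non-closed-form likelihoods described above:

```python
# A minimal sketch: numerical MLE of a noise level sigma, assuming
# observed = blurred + Gaussian noise with the blurred signal known.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
blurred = np.zeros(256)  # hypothetical noiseless blurred image (flattened); zeros for simplicity
observed = blurred + rng.normal(scale=2.0, size=blurred.size)  # true sigma = 2

def neg_log_likelihood(log_sigma):
    """Negative Gaussian log-likelihood of the residuals, parameterized by log(sigma)."""
    sigma = np.exp(log_sigma)
    resid = observed - blurred
    n = resid.size
    return n * np.log(sigma) + 0.5 * np.sum(resid**2) / sigma**2

# The MLE maximizes the likelihood, i.e., minimizes its negative logarithm.
result = minimize_scalar(neg_log_likelihood, bounds=(-5, 5), method="bounded")
print(f"MLE of sigma: {np.exp(result.x):.3f}")
```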
Wiener Filtering
Published in Philipos C. Loizou, Speech Enhancement, 2013
In the previous section, we discussed the ML approach for parameter estimation, in which we assumed that the parameter of interest, θ, was deterministic but unknown. Now, we assume that θ is a random variable, and we therefore need to estimate the realization of that random variable. This approach is called the Bayesian approach because its implementation is based on Bayes’ theorem. The main motivation behind the Bayesian approach is that if we have a priori knowledge about θ available, that is, if we know p(θ), we should incorporate that knowledge into the estimator to improve estimation accuracy. Bayesian estimators typically perform better than ML estimators because they make use of prior knowledge. Next, we describe methods that minimize the mean-square error (in the Bayesian sense) between the true and estimated magnitude spectra.
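A minimal sketch of the contrast, using the textbook conjugate-Gaussian case rather than an example from this book: the Bayesian minimum mean-square error (MMSE) estimate of a Gaussian mean is the posterior mean, which shrinks the MLE toward the prior mean:

```python
# A minimal sketch: MLE vs. Bayesian MMSE estimate of a Gaussian mean.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                        # known noise standard deviation (assumption)
prior_mean, prior_var = 0.0, 0.5   # assumed prior p(theta) = N(prior_mean, prior_var)
theta_true = rng.normal(prior_mean, np.sqrt(prior_var))
x = rng.normal(theta_true, sigma, size=5)   # small hypothetical sample

# MLE: treats theta as deterministic; here it is just the sample mean.
theta_mle = x.mean()

# Bayesian MMSE estimate: the posterior mean, which weights the sample mean
# and the prior mean according to the noise and prior variances.
n = x.size
post_var = 1.0 / (n / sigma**2 + 1.0 / prior_var)
theta_mmse = post_var * (n * x.mean() / sigma**2 + prior_mean / prior_var)

print(f"true {theta_true:.3f}  MLE {theta_mle:.3f}  MMSE {theta_mmse:.3f}")
```

With only a few samples, the MMSE estimate is typically closer to the true value than the MLE, which is exactly the benefit of incorporating p(θ).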
Penalized Estimation of Sparse Markov Regime-Switching Vector Auto-Regressive Models
Published in Technometrics, 2023
Gilberto Chavez-Martinez, Ankush Agarwal, Abbas Khalili, Syed Ejaz Ahmed
Maximum Likelihood Estimation (MLE) is the most common frequentist method of inference in MSVAR models. However, a limitation often encountered with MLE is the potentially large number of parameters to be estimated. In an MSVAR model of dimension d with M regimes and autoregressive order p, the total number of parameters is of order M(pd² + d + d(d+1)/2) + M(M − 1), counting the autoregressive coefficients, intercepts, and covariance entries of each regime plus the transition probabilities, which can be large even for moderate values of (d, p, M) compared to a typical sample size. For instance, in our case study each observation is 10-dimensional, and for an MSVAR model with AR-order p = 1 and the number of regimes M ranging from 1 to 5, there are several hundred parameters to estimate based on a sample of size 481. Therefore, besides obscuring model interpretation, it can also be difficult to perform stable MLE in large-dimensional parameter spaces. It thus becomes essential to consider strategies that enable more stable and interpretable parameter estimation. With this motivation, we perform parameter estimation using regularization techniques that have been successful in both high-dimensional VAR and covariance estimation problems (Lam 2020; Basu and Matteson 2021). These techniques arise from the assumption that many of the model parameters are null. In the context of MSVAR models, we assume that both the VAR coefficient matrices and the noise covariance (or precision) matrices are sparse, that is, many of their entries are zero. This also yields more meaningful model interpretations.
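The sketch below illustrates the basic sparsity idea on a single-regime VAR(1) with lasso-penalized least squares, one equation per coordinate. It is a simplified stand-in for the authors' penalized MSVAR procedure (which also handles regimes and penalized covariance estimation), and all data and penalty values are hypothetical:

```python
# A minimal sketch: lasso-penalized estimation of a sparse VAR(1) matrix.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
T, d = 481, 10                      # sample size and dimension as in the case study
A_true = np.zeros((d, d))
A_true[np.arange(d), np.arange(d)] = 0.5   # sparse truth: diagonal only

X = np.zeros((T, d))
for t in range(1, T):               # simulate X_t = A X_{t-1} + noise
    X[t] = X[t - 1] @ A_true.T + rng.normal(scale=1.0, size=d)

Y, Z = X[1:], X[:-1]                # responses and lagged predictors
A_hat = np.zeros((d, d))
for j in range(d):                  # fit each VAR equation with an l1 penalty
    model = Lasso(alpha=0.01, fit_intercept=False).fit(Z, Y[:, j])
    A_hat[j] = model.coef_

print("nonzero entries in estimate:", int(np.sum(np.abs(A_hat) > 1e-6)))
```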
Design of nodule-lifting apparatus of seabed mining electric vehicle considering physical properties of polymetallic nodules
Published in Marine Georesources & Geotechnology, 2023
Saekyeol Kim, Su-gil Cho, Jae Wan Park, Tae Hee Lee, Jong-Su Choi, Sanghyun Park, Sup Hong, Hyung-Woo Kim, Cheon-Hong Min, Young-Tak Ko, Sang-Bum Chi
The log-likelihood function is maximized through a numerical optimization technique called maximum likelihood estimation (MLE). The maximum function value in MLE is denoted by L̂, and k is the length of the parameter vector. As shown in Eq. (4), the AIC = 2k − 2 ln L̂ balances goodness-of-fit against model complexity. Twelve candidate distributions were selected based on our prior work (Kim et al. 2021). After the best-fit probability distribution was identified and its optimal parameters were determined according to the AIC and MLE, a probabilistic analysis was performed using the cumulative distribution function (CDF) and the target response value. Figure 12 shows the AIC results for an example response. The given data, which were randomly generated from a normal distribution, are presented in a histogram, and the PDFs of the candidate probability distributions with their optimized statistical parameters are illustrated. The legend is ordered from the probability distribution with the lowest AIC value to that with the highest. The Rician and normal distributions were identified as the best-fit and second-best-fit distributions, respectively. The Rician distribution is approximately equivalent to the normal distribution for a certain range of parameters, which can also be observed in Figure 12, where these two probability distributions are nearly identical. After determining the best-fit distribution and its parameters, the probability analysis is performed using the CDF.
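A minimal sketch of this AIC-based selection loop, using SciPy's MLE-based `fit` over a small hypothetical candidate set (not the authors' code or their twelve candidates):

```python
# A minimal sketch: fit candidate distributions by MLE, rank them by AIC,
# then evaluate the best-fit CDF at a target response value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=1.0, size=200)   # data from a normal, as in Figure 12

candidates = {"norm": stats.norm, "lognorm": stats.lognorm,
              "gamma": stats.gamma, "weibull_min": stats.weibull_min,
              "rice": stats.rice}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                        # MLE of the parameters
    log_l = np.sum(dist.logpdf(data, *params))     # maximized log-likelihood ln(L)
    k = len(params)                                # length of the parameter vector
    results.append((2 * k - 2 * log_l, name, params))  # AIC = 2k - 2 ln(L)

for aic, name, _ in sorted(results):               # lowest AIC = best fit
    print(f"{name:12s} AIC = {aic:8.1f}")

# Probabilistic analysis with the best-fit CDF at a hypothetical target of 7.0.
best_aic, best_name, best_params = min(results)
print(f"P(response <= 7.0) = {candidates[best_name].cdf(7.0, *best_params):.3f}")
```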
Optimum synthesis of mechanisms with uncertainties quantification throughout the maximum likelihood estimators and bootstrap confidence intervals
Published in Mechanics Based Design of Structures and Machines, 2022
José A. Montoya, R. Peón-Escalante, O. Carvente, C. Cab, M. A. Zambrano-Arjona, F. Peñuñuri
The likelihood function L(θ; x) compares the plausibilities of different values of θ for the given fixed value of x. If L(θ₁; x) < L(θ₂; x), then the desired value x makes θ₂ more plausible than θ₁. Thus, L(θ₂; x) = p L(θ₁; x) means that the desired value x will occur p times more frequently in repeated samples from the population defined by the value θ₂ than from the population defined by θ₁. The value θ̂ that maximizes (3) is called the maximum likelihood estimate (MLE) of θ. Thus, the MLE corresponding to the likelihood function is θ̂ = arg max_θ L(θ; x).
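A small numerical illustration of both statements, using a hypothetical exponential sample rather than the paper's mechanism data:

```python
# A minimal sketch: likelihood ratio between two parameter values, and the
# MLE as the maximizer of the likelihood, for an exponential model.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.8, 1.3, 0.4, 2.1, 0.9])   # hypothetical fixed sample

def likelihood(theta):
    """Exponential likelihood L(theta; x) with rate parameter theta."""
    return np.prod(theta * np.exp(-theta * x))

# L(theta2; x) = p * L(theta1; x): x occurs p times more frequently under theta2.
p = likelihood(1.0) / likelihood(0.5)
print(f"likelihood ratio L(1.0; x) / L(0.5; x) = {p:.2f}")

# The MLE maximizes L; for the exponential it equals 1 / mean(x) in closed form.
res = minimize_scalar(lambda th: -likelihood(th), bounds=(1e-6, 10), method="bounded")
print(f"numerical MLE {res.x:.4f} vs closed form {1 / x.mean():.4f}")
```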