Super-Resolution of 3D Magnetic Resonance Images of the Brain
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Enrique Domínguez, Domingo López-Rodríguez, Ezequiel López-Rubio, Rosa Maza-Quiroga, Miguel A. Molina-Cabello, Karl Thurnhofer-Hemsi
Two approaches can be distinguished for this shifting and consensus strategy. The first one, presented in Thurnhofer-Hemsi et al. (2018), is called random shifting. Here the displacement vector is modeled as a random variable that is uniformly distributed over a cube of possible integer displacements. The mathematical expectation of the unshifted output image, taken over the distribution of the displacement vector, is then the consensus final image. This expectation is estimated by averaging the unshifted images for a sample of randomly drawn displacements. By the law of large numbers, the estimate converges to the true value of the mathematical expectation as the number of samples tends to infinity; therefore, the more samples that are considered, the more accurate the consensus image is expected to be. This is because the mathematical expectation of the unshifted output image does not depend on any specific displacement vector, whereas each unshifted image depends on the displacement that produced it. Under this approach, two parameters must be tuned: the size of the cube of possible integer displacements and the number of samples.
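The random shifting scheme can be sketched as a short Monte Carlo loop. This is only an illustrative sketch, not the authors' implementation: `super_resolve` stands in for an arbitrary model that preserves the voxel grid (with an upscaling model, the unshift displacement would have to be scaled by the upscaling factor).

```python
import numpy as np

def random_shifting_consensus(volume, super_resolve, n_samples=16,
                              cube_half_width=2, seed=0):
    """Monte Carlo estimate of the consensus image by random shifting.

    `super_resolve` is a placeholder (hypothetical) for any 3D model that
    keeps the grid size. Each sample draws an integer displacement
    uniformly from a cube, shifts the input, applies the model, undoes
    the displacement on the output, and averages the unshifted results.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(super_resolve(volume), dtype=np.float64)
    for _ in range(n_samples):
        # Displacement uniformly distributed over the cube of integers
        d = rng.integers(-cube_half_width, cube_half_width + 1, size=3)
        shifted = np.roll(volume, shift=d, axis=(0, 1, 2))
        out = super_resolve(shifted)
        # Undo the displacement on the output before averaging
        acc += np.roll(out, shift=-d, axis=(0, 1, 2))
    return acc / n_samples
```

Increasing `n_samples` reduces the Monte Carlo error of the consensus estimate, at a proportional computational cost; `cube_half_width` sets the size of the cube of possible displacements.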
Kinetics of Blood to Cell Uptake of Radiotracers*
Published in Lelio G. Colombetti, Biological Transport of Radiotracers, 2020
James B. Bassingthwaighte, Bernd Winkle
When both PSC and V′c/Vc are free to be varied in the fitting procedure, the fit is improved: the coefficient of variation between the model curve ĥ(t) and the data hD(t) is reduced. This is the mathematical expectation, namely that increasing the degrees of freedom in a model allows a better fit. Note that the values of PSC are only very slightly improved with V′c/Vc free to vary, indicating that the dominant sensitivity is to PSC.
Complex assessment of the flight crew’s psychophysiological state
Published in Waldemar Wójcik, Sergii Pavlov, Maksat Kalimoldayev, Information Technology in Medical Diagnostics II, 2019
V.D. Kuzovyk, O. Bulyhina, O. Ivanets, Y. Onykiienko, P.F. Kolesnic, W. Wójcik, D. Nuradilova
During application of the developed techniques and as a result of the experiments, the normalised values of the informative indicators of the psychophysiological state, together with the confidence intervals within which these parameters may change under the influence of extreme environmental factors, were established. The normalised values and the corresponding confidence intervals (CI) are given in Table 1. If, according to the results of the previous grouping, the crew member under test belongs to the first or second group (super strong and strong types), then according to the indicators of Table 1, the coefficient characterising the change in the sample mathematical expectation, for example, should lie in the range from 0.48 to 0.5 for group I and from 0.2 to 0.27 for group II. The experimental data are compared with the normalised values of Table 1 in the same way for the other coefficients.
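The comparison step reduces to checking whether each measured coefficient falls inside the normalised interval for the candidate group. A minimal sketch, using only the two ranges quoted from Table 1 above (the group labels and function names here are illustrative, not from the paper):

```python
# Normalised ranges for the coefficient characterising the change in the
# sample mathematical expectation, as quoted in the excerpt from Table 1.
# Other coefficients from Table 1 would be checked the same way.
RANGES = {
    "I":  (0.48, 0.50),   # super strong type
    "II": (0.20, 0.27),   # strong type
}

def matches_group(coefficient: float, group: str) -> bool:
    """True if the measured coefficient lies in the group's CI."""
    low, high = RANGES[group]
    return low <= coefficient <= high
```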
Short-term effects of ambient (outdoor) air pollution on cardiovascular death in Tehran, Iran – a time series study
Published in Toxin Reviews, 2020
Azizallah Dehghan, Narges Khanjani, Abbas Bahrampour, Gholamreza Goudarzi, Masoud Yunesian
Missing air pollution data were estimated using the Expectation-Maximization (EM) algorithm in SPSS 20. This approach was proposed by Dempster et al. (1977). It is an algorithm for nonlinear optimization and is appropriate for time series applications involving unknown components (Anava et al. 2015). EM is an iterative and effective procedure that uses maximum likelihood estimation to estimate missing data. Each iteration of the algorithm consists of two steps: the mathematical expectation step (E-step) and the maximization step (M-step). In the E-step, the missing data are estimated based on the observed data and the current estimate of the model parameters. In the M-step, the likelihood function is maximized under the assumption that the missing data are known; here, the estimates of the missing data from the E-step are substituted for the missing values. By iterating the algorithm, the estimates of the missing values are refined at each step, and convergence can be assured (Afshari Safavi et al. 2015).
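The E-step/M-step alternation described above can be made concrete with a small bivariate Gaussian example, where one variable is fully observed and the other has missing entries. This is a minimal sketch of the general scheme, not the SPSS implementation: the E-step imputes missing values by their conditional expectation given the observed variable and the current parameters, and the M-step re-estimates the mean and covariance from the completed sufficient statistics.

```python
import numpy as np

def em_bivariate_missing(x, y, n_iter=50):
    """EM for a bivariate Gaussian where some y values are np.nan.

    x is fully observed. Returns the estimated mean vector and
    covariance matrix at convergence.
    """
    miss = np.isnan(y)
    # Initialize from the complete cases
    mu = np.array([x.mean(), np.nanmean(y)])
    cov = np.cov(x[~miss], y[~miss])
    n = len(x)
    for _ in range(n_iter):
        # E-step: conditional expectation of missing y given x
        beta = cov[0, 1] / cov[0, 0]
        y_hat = y.copy()
        y_hat[miss] = mu[1] + beta * (x[miss] - mu[0])
        # Conditional variance of y | x, needed for the second moments
        resid_var = cov[1, 1] - beta * cov[0, 1]
        # M-step: maximize likelihood using the expected statistics
        mu = np.array([x.mean(), y_hat.mean()])
        dx, dy = x - mu[0], y_hat - mu[1]
        cov = np.array([
            [dx @ dx, dx @ dy],
            [dx @ dy, dy @ dy + miss.sum() * resid_var],
        ]) / n
    return mu, cov
```

Note the `miss.sum() * resid_var` term: simply plugging in the imputed values would understate the variance of `y`, so the E-step's conditional variance is added back, which is exactly what distinguishes EM from naive single imputation.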
The influence of uncertainties of radon exposure on the results of case-control epidemiological study
Published in International Journal of Radiation Biology, 2019
Aleksandra Onishchenko, Michael Zhukovsky
The correction for the influence of the uncertainty in determining the average value of radon concentration was performed by the regression calibration (RC) method, in accordance with the procedure described in the work of Fearn et al. (2008). The RC method consists in replacing the unobserved true value of the variable in the regression by the expected value of x assessed from the observed value z. To estimate the true value of x for the observed value of z, Equations (5) and (6) were used, where σ2A = σ2obs − σ2err is the adjusted value of the observed variance, taking into account the known uncertainty; σ2err is the variance of the uncertainty; σ2obs is the observed variance in the control sample; and μA is the mathematical expectation of radon levels in the control sample. Since the radon distribution is lognormal and the uncertainty is multiplicative, the values of μA and σ2obs corresponding to the normal distribution of the logarithms of radon concentration should be used instead of the usual values. The mathematical expectation of the real personal value of radon concentration, given the observed value z, was calculated as: E(x|z) = μA + (σ2A/(σ2A + σ2err))(z − μA).
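The shrinkage step of regression calibration can be sketched as follows. This is an illustrative implementation under the stated assumptions (lognormal exposure, multiplicative error, known σ2err), working on logarithms as the text prescribes; the full procedure of Fearn et al. (2008) contains further details not reproduced here.

```python
import numpy as np

def regression_calibration(z, sigma2_err):
    """Regression-calibration estimate of true radon exposure.

    z          : observed radon concentrations (lognormal)
    sigma2_err : known variance of the multiplicative uncertainty,
                 on the log scale
    """
    log_z = np.log(z)
    mu_a = log_z.mean()                 # mathematical expectation muA
    sigma2_obs = log_z.var(ddof=1)      # observed variance sigma2obs
    sigma2_a = sigma2_obs - sigma2_err  # adjusted variance sigma2A
    # Conditional expectation of the true log value given the observation:
    # E(x|z) = muA + (sigma2A / (sigma2A + sigma2err)) * (log z - muA)
    shrink = sigma2_a / (sigma2_a + sigma2_err)
    return np.exp(mu_a + shrink * (log_z - mu_a))
```

Each observed value is pulled toward the geometric mean of the control sample, with the amount of shrinkage governed by the ratio of the true-exposure variance to the total observed variance.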
Streaming constrained binary logistic regression with online standardized data
Published in Journal of Applied Statistics, 2022
Benoît Lalloué, Jean-Marie Monnez, Eliane Albuisson
When using a stochastic gradient process, a numerical explosion can be encountered [19]. To avoid such a phenomenon, methods of gradient variance reduction [3,13], such as gradient clipping [19], can be used. The idea underpinning gradient clipping is to limit the norm of the gradient to a maximum number called the threshold. This number must be chosen, and a poor choice of threshold can affect computing speed.

In our approach, the limitation of the gradient is implicitly obtained by online standardization of the data: each continuous variable is standardized with respect to the estimates, at the current step, of its expectation and its standard deviation computed online. Indeed, in the case of a data stream, the mathematical expectation and the variance of each variable are a priori unknown and the usual offline standardization cannot be performed.

This may also be an issue when using a shrinkage method such as LASSO or ridge, which first necessitates standardizing the explanatory variables. Again, it is not possible to perform the offline standardization in the case of a data stream, and an online process can be used, with a projection at each step onto the convex set defined by the constraint on the parameters of the regression function. More generally, this type of process can be used for any convex set, for example if it is imposed that the parameters associated with the explanatory variables are positive.

Finally, we can consider a case where the expectations and the variances of the explanatory variables depend on the step n or on the values of controlled variables, and a regression model with standardized explanatory variables is defined. Assuming that we can estimate online the expectation and the variance of these variables, we can also use the same type of process to estimate the parameters of the regression function.