Monte Carlo Methods
Published in Dirk P. Kroese, Zdravko I. Botev, Thomas Taimre, Radislav Vaisman, Data Science and Machine Learning, 2019
Dirk P. Kroese, Zdravko I. Botev, Thomas Taimre, Radislav Vaisman
A third method for stochastic optimization is the cross-entropy method. In particular, Algorithm 3.4.3 can easily be modified to minimize noisy functions S(x) = E S̃(x, ξ), as defined in (3.31). The only change required in the algorithm is that every function value S(x) be replaced by its estimate Ŝ(x). Depending on the level of noise in the function, the sample size N may have to be increased considerably.
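The modification described above can be sketched as follows. This is a minimal illustration, not the book's Algorithm 3.4.3 itself: the sphere objective, the additive Gaussian noise, and all parameter values are assumptions, and Ŝ(x) is a plain Monte Carlo average of repeated noisy evaluations.

```python
import numpy as np

# Hypothetical noisy objective: the true S(x) = sum(x**2) is observed only
# through S~(x, xi) = S(x) + xi, with xi additive Gaussian noise (assumed).
def s_noisy(x, rng):
    return np.sum(x**2) + rng.normal(scale=0.5)

def s_hat(x, rng, n_rep=20):
    # Replace each function value S(x) by its estimate S^(x):
    # an average over repeated noisy evaluations.
    return np.mean([s_noisy(x, rng) for _ in range(n_rep)])

def ce_minimize(dim=2, n_samples=100, n_elite=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.full(dim, 5.0), np.full(dim, 5.0)
    for _ in range(iters):
        # Sample candidates from the current Gaussian sampling distribution.
        xs = rng.normal(mu, sigma, size=(n_samples, dim))
        scores = np.array([s_hat(x, rng) for x in xs])
        # Keep the elite (lowest estimated scores) and refit the distribution.
        elite = xs[np.argsort(scores)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

mu_opt = ce_minimize()
print(mu_opt)  # close to the true minimizer of the sphere function, [0, 0]
```

With a noisier objective, increasing `n_rep` (a larger sample size per evaluation) is exactly the kind of adjustment the excerpt anticipates.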
Monitoring and model updating of an FRP pedestrian truss bridge
Published in Jaap Bakker, Dan M. Frangopol, Klaas van Breugel, Life-Cycle of Engineering Systems, 2017
G. Hayashi, C.W. Kim, Y. Suzuki, K. Sugiura, P.J. McGetrick, H. Hibi
The cross-entropy method is a parameter learning method similar to the estimation of distribution algorithm. First, a sample distribution U_γ is defined by N samples of the design variables, such as the mean value and standard deviation of the elastic modulus in this study. Next, a distribution with the minimum cross-entropy distance D_CE is selected from a distribution set G (e.g. Gaussian or Bernoulli distributions), where the cross-entropy distance D_CE is defined by Equation 10:

D_CE(g‖U_γ) = ∫ g(x) log( g(x) / U_γ(x) ) dx
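For the Gaussian case mentioned in the excerpt, the cross-entropy (Kullback–Leibler) distance of Equation 10 has a closed form, which the following sketch evaluates. The function name and parameters are illustrative, not from the source.

```python
import numpy as np

# D_CE(g || U_gamma) = integral of g(x) * log(g(x) / U_gamma(x)) dx,
# evaluated in closed form when both g and U_gamma are univariate Gaussians.
def kl_gaussian(mu_g, sig_g, mu_u, sig_u):
    return (np.log(sig_u / sig_g)
            + (sig_g**2 + (mu_g - mu_u)**2) / (2.0 * sig_u**2)
            - 0.5)

# Identical distributions have zero cross-entropy distance;
# any mismatch in mean or standard deviation makes it positive.
d_same = kl_gaussian(0.0, 1.0, 0.0, 1.0)
d_diff = kl_gaussian(0.0, 1.0, 1.0, 2.0)
```

Selecting the member of G with the minimum D_CE then reduces to minimizing this expression over the candidate distribution's parameters.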
A user equilibrium-based fast-charging location model considering heterogeneous vehicles in urban networks
Published in Transportmetrica A: Transport Science, 2021
Cong Quoc Tran, Dong Ngoduy, Mehdi Keyvan-Ekbatani, David Watling
The cross-entropy method (CEM) originated from an adaptive variance minimization algorithm for estimating the probabilities of rare events on stochastic networks and can be adapted to solve static and noisy combinatorial optimization problems. For further details about CEM and its specific applications in a range of transportation problems, we refer to Ngoduy and Maher (2011), Ngoduy and Maher (2012), Maher, Liu, and Ngoduy (2013), Abudayyeh, Ngoduy, and Nicholson (2018), Zhong et al. (2016).
S-SNHF: sentiment based social neural hybrid filtering
Published in International Journal of General Systems, 2023
Lamia Berkani, Nassim Boudjenah
Traditional neural networks, such as artificial neural networks and MLP architectures, are among the earliest invented neural networks. Kim, Kim, and Ryu (2005) exploited MLP to learn the correlation between users and items. They included a variety of information to solve the sparsity problem and selected users or similar items as references to further improve the recommendation performance. Later, He et al. (2017) used MLP and proposed the Neural CF algorithm (NCF) to model the non-linear relationship between users and items, in conjunction with generalized matrix factorization (GMF) to model the linear relationship. This model is widely used in recommender systems and is considered a general model for user-item interactions. To overcome the cold-start problem, Berkani, Kerboua, and Zeghoud (2020) and Berkani, Zeghoud, and Kerboua (2022) proposed an extension of the NCF model, considering a hybridization of CF and CBF based on GMF and a hybrid MLP architecture. Nahta et al. (2021) proposed a meta-embedding deep CF model (MEDCF) that takes user-item interactions and metadata features as input and outputs the predicted rating. The GMF model identifies the linear interaction between user and item latent features, while the MLP is applied to the metadata information to identify the non-linear interaction between the user and the item. Rama, Kumar, and Bhasker (2021) proposed a novel discriminative model that incorporates features from auto-encoders, in combination with embeddings, into a deep neural network to predict ratings in recommender systems. Recently, a deep neural network-based CF approach using matrix factorization with a twofold regularization was proposed by Ngaffo and Choukair (2022). In this recommendation approach, the user latent factors and item latent factors are extracted from a doubly-regularized matrix factorization process to feed a deep learning structure in a forward-propagation process.
A normalized cross-entropy method is used to enhance the precision of the deep neural network through a back-propagation process. Li et al. (2022) proposed an enhanced matrix completion method based on non-negative latent factors for recommender systems. The authors adopted a momentum additive gradient descent method to accelerate learning, where a truncating strategy is used to guarantee the non-negativity of the latent factors.
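The two-branch NCF structure described above (a linear GMF branch fused with a non-linear MLP branch) can be sketched as a forward pass. This is a minimal illustration, not He et al.'s implementation: the embedding sizes, random initialization, and single hidden layer are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 100, 50, 8

# Hypothetical embedding tables, one pair per branch (randomly initialized).
P_gmf, Q_gmf = rng.normal(size=(n_users, d)), rng.normal(size=(n_items, d))
P_mlp, Q_mlp = rng.normal(size=(n_users, d)), rng.normal(size=(n_items, d))
W1 = rng.normal(size=(2 * d, d))      # MLP hidden layer
W_out = rng.normal(size=(2 * d,))     # fusion layer over both branches

def ncf_forward(u, i):
    # GMF branch: element-wise product models the linear interaction.
    gmf = P_gmf[u] * Q_gmf[i]
    # MLP branch: concatenated embeddings through a ReLU hidden layer
    # model the non-linear interaction.
    h = np.maximum(0.0, np.concatenate([P_mlp[u], Q_mlp[i]]) @ W1)
    # Fuse both branches and squash to a predicted interaction score.
    z = np.concatenate([gmf, h]) @ W_out
    return 1.0 / (1.0 + np.exp(-z))

score = ncf_forward(3, 7)
```

In a trained model the embeddings and weights would be learned from interaction data; here the random parameters only demonstrate the data flow.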
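The momentum update with a truncating strategy mentioned for Li et al. (2022) can be sketched as follows. The exact update rule is not given in the excerpt, so this is only one plausible form: a standard momentum step followed by clipping at zero to keep the latent factors non-negative.

```python
import numpy as np

# Assumed form: classical momentum gradient step, then truncation at zero
# so the latent-factor matrix X stays non-negative.
def momentum_step(X, grad, velocity, lr=0.1, beta=0.9):
    velocity = beta * velocity + lr * grad
    X = X - velocity
    return np.maximum(X, 0.0), velocity  # truncation enforces X >= 0

rng = np.random.default_rng(1)
X = rng.random((4, 3))
v = np.zeros_like(X)
for _ in range(5):
    # A constant positive gradient drives entries toward (and past) zero,
    # so the truncating strategy visibly takes effect.
    X, v = momentum_step(X, grad=np.ones_like(X), velocity=v)
```

The momentum term accumulates past gradients to accelerate learning, while the final `np.maximum` is what guarantees non-negativity after every step.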