Recommendation Systems
Published in Yulei Wu, Fei Hu, Geyong Min, Albert Y. Zomaya, Big Data and Computational Intelligence in Networking, 2017
Nonnegative matrix factorization (NMF) is a matrix factorization technique that approximates M ≈ UV^T under the constraint that all entries in U and V are nonnegative. This constraint matters in applications where the data are inherently nonnegative, or where the low-rank factors themselves are required to contain only nonnegative values. For example, a text document is represented as a vector of nonnegative numbers under the term-frequency encoding: each element is the number of appearances of a term in the document, so it is nonnegative. Another example is image processing, where digital images are represented by a matrix of pixel intensities, which are inherently nonnegative. In natural sciences such as chemistry or biology, chemical concentrations and gene expressions are also nonnegative [6]. In recommendation systems, a rating matrix is likewise usually nonnegative. Although other matrix factorization methods may allow negative entries in the factorized low-rank matrices, it still makes sense to enforce nonnegativity when the original data are nonnegative.
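As a concrete illustration of the nonnegativity constraint, here is a minimal sketch using scikit-learn's NMF on a toy term-frequency matrix; the matrix values and the chosen rank are made up for the example and are not from the chapter.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy nonnegative "document x term" matrix M (term-frequency counts).
M = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 4, 2],
    [0, 1, 3, 1],
], dtype=float)

# Factorize M ~= U @ V.T with rank-2 factors; both U and V stay nonnegative.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
U = model.fit_transform(M)   # shape (4, 2), nonnegative document factors
V = model.components_.T      # shape (4, 2), nonnegative term factors

print("Reconstruction error:", np.linalg.norm(M - U @ V.T))
```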
The evolution of recommender systems: From the beginning to the Big Data era
Published in Matthias Dehmer, Frank Emmert-Streib, Frontiers in Data Science, 2017
Beatrice Paoli, Monika Laner, Beat Tödtli, Jouri Semenov
In recent recommender system competitions, alternating least squares (ALS) and stochastic gradient descent (SGD) appear to be the two most widely used methods for matrix factorization. ALS alternates between updating the latent factors of users and those of items, fixing one set while solving for the other. As mentioned in Reference 16, SGD has become one of the most popular methods for matrix factorization in recommender systems due to its efficiency and simple implementation. The time complexity per iteration of SGD is lower than that of ALS. However, compared with ALS, SGD usually needs more iterations to reach a good enough model, and its performance is sensitive to the choice of the learning rate.
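To make the contrast concrete, here is a minimal sketch of the SGD update for matrix factorization on a handful of observed ratings; the learning rate, regularization weight, rank, and ratings are illustrative choices rather than values from the chapter.

```python
import numpy as np

# Observed ratings as (user, item, rating) triples; values are made up.
ratings = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
n_users, n_items, k = 3, 3, 2
lr, reg = 0.01, 0.05                # learning rate and L2 regularization

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors

for epoch in range(100):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                   # prediction error on one rating
        P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step on user factors
        Q[i] += lr * (err * P[u] - reg * Q[i])  # gradient step on item factors

print(np.round(P @ Q.T, 2))                     # reconstructed rating matrix
```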
Image and Video Copy Detection Using Content-Based Fingerprinting
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Mehrdad Fatourechi, Xudong Lv, Mani Malek Esmaeili, Z. Jane Wang, Rabab K. Ward
Matrix decomposition: In this approach, a matrix of features is decomposed and invariant features are extracted from the resulting factors. Popular matrix decomposition algorithms include singular value decomposition (SVD) [8,51] and nonnegative matrix factorization (NMF) [6,24].
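As a sketch of the SVD route, the snippet below extracts the leading singular values of an image block as a compact feature vector; the block size and feature length are illustrative, not the settings used in the cited fingerprinting methods.

```python
import numpy as np

def svd_fingerprint(block: np.ndarray, n_features: int = 8) -> np.ndarray:
    """Return the leading singular values of an image block as a feature vector."""
    s = np.linalg.svd(block.astype(float), compute_uv=False)  # sorted descending
    return s[:n_features]

# Toy 32x32 grayscale block with random pixel intensities.
block = np.random.default_rng(0).integers(0, 256, size=(32, 32))
print(svd_fingerprint(block))
```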
Supervised subgraph augmented non-negative matrix factorization for interpretable manufacturing time series data analytics
Published in IISE Transactions, 2020
Hongyue Sun, Ran Jin, Yuan Luo
As raw time series are usually high dimensional and hard to interpret, various time series representation methods and variable and feature selection methods have been proposed for dimensionality reduction. One of the simplest ways to represent a time series is to use summary statistics such as the mean, standard deviation, skewness, and kurtosis (Sun et al., 2015). These summary statistics are often insufficient to retain the useful information. Mathematical transformations are the most widely used techniques for time series representation. These transformations include: (i) basis expansion approaches, such as spline expansion, Fourier expansion, and wavelet expansion; and (ii) matrix factorization approaches, such as principal component analysis and its variants, and independent component analysis. We refer interested readers to the extensive reviews in Ramsay and Silverman (2005) and Morris (2015) for details.
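As a sketch of the matrix factorization route, the snippet below uses PCA to compress a set of synthetic time series into a few scores per series; the signal model and number of components are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Each row is one time series sampled at 200 points; the signals are synthetic.
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(0)
X = np.vstack([
    np.sin(2 * np.pi * (2 + i % 3) * t) + 0.1 * rng.standard_normal(200)
    for i in range(30)
])

# Project each 200-dimensional series onto 3 principal components.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)            # (30, 3) low-dimensional representation
print(pca.explained_variance_ratio_)     # variance captured by each component
```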
Dual Stage Normalization Approach Towards Classification of Breast Cancer
Published in IETE Journal of Research, 2022
BenTaieb and Ghassan [35] introduced an image analysis model that transfers stains across different input datasets; the network is trained on the staining properties of a previous dataset. Matrix factorization methods, such as component analyses and non-negative matrix factorization (NMF), are among the important unsupervised mechanisms. These approaches have the benefit of working from manually selected pixels rather than requiring model training on data [14]. The method suggested by Li et al. [36] employed NMF for H&E-stained images with an initialization function to avoid local minima. In contrast, Xu et al. [31] later reported that sparse non-negative matrix factorization (SNMF) is superior to PCA, ICA, and NMF for H&E- and IHC-stained images.
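The following is a minimal sketch of the NMF idea for stain separation: the optical density of RGB pixels is factorized into two nonnegative stain vectors and per-pixel concentrations. The image here is synthetic and the setup is deliberately simplified; it does not reproduce the initialization scheme of Li et al. [36] or the SNMF of Xu et al. [31].

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic RGB image flattened to (n_pixels, 3); intensities kept below 255.
rng = np.random.default_rng(0)
rgb = rng.integers(30, 255, size=(64 * 64, 3)).astype(float)

# Beer-Lambert optical density; this matrix is nonnegative, so NMF applies.
od = -np.log(rgb / 255.0)

# Factorize OD ~= concentrations @ stain_matrix with two stains (e.g. H and E).
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
concentrations = model.fit_transform(od)   # (n_pixels, 2) per-pixel stain densities
stain_matrix = model.components_           # (2, 3) stain color vectors
print(stain_matrix)
```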
CF-AMVRGO: Collaborative Filtering based Adaptive Moment Variance Reduction Gradient Optimizer for Movie Recommendations
Published in International Journal of Computers and Applications, 2022
V. Lakshmi Chetana, Hari Seetha
To address the aforementioned problems, the matrix factorization technique is applied in the recommender system: it decomposes the user-item interaction matrix into low-dimensional rectangular matrices, and a simple dot product is applied to predict the unknown ratings. The linear dot product of these rectangular matrices cannot fully capture the user-item interaction. To address the data sparsity problem, a new deep learning-based matrix factorization technique called Neural Collaborative Filtering (NCF) is combined with the Adaptive Moment Variance Reduction Gradient Optimizer (AMVRGO) algorithm, termed CF-AMVRGO, to update the parameters of the network. In this manuscript, the NCF technique is used as a data handler, and AMVRGO is proposed as the movie recommendation model. The main idea behind matrix factorization in CF is to represent the users and items in a lower-dimensional latent space. In the CF-based matrix factorization technique, the AMVRGO algorithm initializes the matrices with random values and then minimizes the prediction error iteratively. In the results section, the CF-AMVRGO algorithm's performance is validated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) on the MovieLens 100K, 1M, and 10M datasets. In addition, the CF-AMVRGO algorithm's performance is compared with an existing deep learning-based CF algorithm [9] on the MovieLens 1M and 10M datasets. The proposed CF-AMVRGO algorithm showed a 0.08 RMSE improvement on the MovieLens 1M dataset and a 0.056 RMSE improvement on the MovieLens 10M dataset compared to the existing deep learning-based CF algorithm [9].
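For orientation, here is a minimal PyTorch sketch of the NCF idea referenced above: user and item embeddings are concatenated and passed through an MLP instead of a plain dot product. Adam is used only as a stand-in optimizer, since the AMVRGO update rule is not reproduced from the paper, and the network sizes are illustrative (943 users and 1682 items match MovieLens 100K).

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """Neural collaborative filtering: embeddings followed by an MLP scorer."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, users, items):
        # Concatenate latent vectors and score with a nonlinear MLP.
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return self.mlp(x).squeeze(-1)

model = NCF(n_users=943, n_items=1682)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # stand-in, not AMVRGO

# Toy mini-batch of (user, item, rating) training triples.
users = torch.tensor([0, 1, 2])
items = torch.tensor([10, 20, 30])
ratings = torch.tensor([4.0, 3.0, 5.0])

loss = nn.functional.mse_loss(model(users, items), ratings)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```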