Recommendation Systems
Published in Yulei Wu, Fei Hu, Geyong Min, Albert Y. Zomaya, Big Data and Computational Intelligence in Networking, 2017
Normalized discounted cumulative gain (NDCG) is a popular metric for measuring the effectiveness of a web search engine or other information retrieval applications. Discounted cumulative gain (DCG) measures the usefulness of an item based on its rank in a recommended list. NDCG normalizes the DCG to compensate for the varying length of the recommendation list: the DCG score is divided by the maximum possible gain for the given list. This is conceptually similar to HLU but gives slightly different results. For a list of p recommendations, DCG and NDCG are formally defined as
$$\mathrm{DCG}_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i+1)}, \qquad \mathrm{NDCG}_p = \frac{\mathrm{DCG}_p}{\mathrm{IDCG}_p},$$
where $rel_i$ is the graded relevance of the item at rank $i$ and $\mathrm{IDCG}_p$ is the DCG of the ideal ordering of the same items.
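A minimal sketch of these definitions in Python (the function names are illustrative; the input is a list of graded relevance scores in ranked order, matching the $rel_i$ in the formula above):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevances."""
    # Rank i (1-based) receives the discount 1 / log2(i + 1).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG divided by the ideal DCG (relevances sorted in descending order)."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: graded relevances of a recommended list, in ranked order.
print(ndcg([3, 2, 3, 0, 1, 2]))  # ~0.96: close to the ideal ordering
```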
iTourSPOT: a context-aware framework for next POI recommendation in location-based social networks
Published in International Journal of Digital Earth, 2022
Lin Wan, Han Wang, Yuming Hong, Ran Li, Wei Chen, Zhou Huang
We adopt two top-N metrics, F1@N and Normalized Discounted Cumulative Gain (NDCG@N), to evaluate recommendation performance. F1@N measures overall recommendation accuracy as the harmonic mean of classification precision and recall; NDCG@N emphasizes the ranking of the ground truth, assigning higher weights to higher positions:
$$\mathrm{F1@N} = \frac{2\,\mathrm{Precision@N}\cdot\mathrm{Recall@N}}{\mathrm{Precision@N}+\mathrm{Recall@N}}, \qquad \mathrm{Precision@N} = \frac{|R_{u,t}\cap G_{u,t}|}{N}, \qquad \mathrm{Recall@N} = \frac{|R_{u,t}\cap G_{u,t}|}{|G_{u,t}|},$$
where $R_{u,t}$ denotes the top-N POIs predicted by the model for user $u$ at time $t$, $G_{u,t}$ denotes the ground truth of user $u$ at time $t$, and the metrics are averaged over $U$, the set of all users. NDCG@N is DCG@N normalized to [0, 1], where one signifies a perfect ranking.
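A minimal per-user sketch of F1@N under these definitions (the function name and POI identifiers are assumptions; averaging over all users in U and time steps t is then just a mean over such calls):

```python
def f1_at_n(predicted_top_n, ground_truth):
    """F1@N for one (user, time) pair: harmonic mean of precision and recall.

    predicted_top_n: the N POIs the model ranks highest.
    ground_truth: the POIs the user actually visited at that time.
    """
    hits = len(set(predicted_top_n) & set(ground_truth))
    if hits == 0:
        return 0.0
    precision = hits / len(predicted_top_n)
    recall = hits / len(ground_truth)
    return 2 * precision * recall / (precision + recall)

# Example: 2 of the top-5 predicted POIs are in the ground truth.
print(f1_at_n(["p1", "p2", "p3", "p4", "p5"], ["p2", "p5", "p9"]))  # 0.5
```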
SAMA: a real-time Web search architecture
Published in International Journal of Computers and Applications, 2022
The information retrieval evaluation campaign provides training and testing sets of queries, while the relevance judgment team provides the proposed solution for each query. Experimentally, web information retrieval approaches are evaluated on two tasks: diversity and ad hoc. The diversity task is similar to the ad hoc task but differs in its evaluation metrics and judging process; the final goal is to provide a complete, ranked list of pages for a query that together achieve broad coverage while avoiding excessive redundancy. The primary effectiveness measure for both tasks is the graded precision (GP) of the top ten results, i.e. graded precision at k, with documents judged as Nav, Key, Hrel, Rel, or Non-relevant. The relevance of a selected resource is determined by computing GP over its subset of results [30,31], whereas Normalized Discounted Cumulative Gain (nDCG) measures ranking quality as a whole, taking the graded relevance levels of the documents within the top ten into account. Similarly, the vertical selected for a given query is determined by the best-performing run of the search query across all resources in that vertical; that is, the relevance of a vertical is the maximum GP of its resources. Finally, the GP of a vertical is subject to a threshold, since some queries have only a small set of relevant verticals (we assume 0.5 is a sensible relevance cutoff). If no vertical passes the threshold for a given query, the vertical with the maximum relevance is selected as relevant.
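The vertical-selection rule described here can be sketched as follows (the data layout and all names are assumptions; only the max-GP rule, the 0.5 threshold, and the fallback to the best vertical come from the text):

```python
GP_THRESHOLD = 0.5  # assumed relevance cutoff, per the text

def select_verticals(vertical_resources):
    """Pick relevant verticals for a query.

    vertical_resources: dict mapping each vertical name to the GP scores
    of its resources for the query. A vertical's relevance is the maximum
    GP among its resources; verticals at or above the threshold are kept.
    If none pass, the single best vertical is returned as a fallback.
    """
    vertical_gp = {v: max(scores) for v, scores in vertical_resources.items()}
    selected = [v for v, gp in vertical_gp.items() if gp >= GP_THRESHOLD]
    if not selected:  # no vertical passed the threshold
        selected = [max(vertical_gp, key=vertical_gp.get)]
    return selected

# Example with hypothetical GP scores per resource:
print(select_verticals({"news": [0.7, 0.2], "images": [0.4], "video": [0.3]}))
# -> ['news']
```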
A novel graph neural networks approach for 3D product model retrieval
Published in International Journal of Computer Integrated Manufacturing, 2023
Chengfeng Jian, Yiqiang Lu, Mingliang Lin, Meiyu Zhang
To comprehensively assess the advantages of the proposed method, we adopted several common metrics: Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), E-Measure (E), Discounted Cumulative Gain (DCG), and Precision-Recall (PR) curves. NN evaluates the retrieval accuracy of the first returned result. FT is defined as the recall within the top C results, where C is the number of relevant objects for the query; ST is the recall within the top 2C results. E represents the proportion of correctly correlated models in the retrieval results. DCG is a statistic that assigns higher weights to relevant results at top ranking positions, under the assumption that a user is less likely to examine lower-ranked results. We conducted a series of experiments using models from ten different categories as the query model. Each experiment was repeated 50 times per model and the values averaged; these averages were then aggregated across models. The performance of the four retrieval methods is shown in Table 3 and Figure 5. The scores demonstrate that the AAG method outperforms the ant colony algorithm, the GNN outperforms the AAG method, and the MAAGNN outperforms the GNN. The main reason for the low performance of the ant colony algorithm and the AAG method is their lack of non-geometric information; the GNN underperforms because it lacks an attention mechanism. Conversely, the MAAGNN performs well because it fully extracts both the geometric and non-geometric information of a model and recognizes more of the valuable information.
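A minimal sketch of the tier-based metrics for a single query, following the definitions above (the function name and the 0/1 relevance encoding are assumptions; NN is taken as the relevance of the first result, FT and ST as the recalls within the top C and top 2C results):

```python
def tier_metrics(ranked_labels, num_relevant):
    """Nearest Neighbor, First Tier, and Second Tier for one query.

    ranked_labels: 1/0 relevance flags of the retrieved models, in rank
    order (query itself excluded).
    num_relevant: C, the number of relevant objects for the query.
    """
    nn = float(ranked_labels[0])                               # top result relevant?
    ft = sum(ranked_labels[:num_relevant]) / num_relevant      # recall in top C
    st = sum(ranked_labels[:2 * num_relevant]) / num_relevant  # recall in top 2C
    return nn, ft, st

# Example: C = 3 relevant models; relevance flags for the top 6 results.
print(tier_metrics([1, 0, 1, 1, 0, 0], num_relevant=3))
# -> (1.0, 0.666..., 1.0)
```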