A Semi-Supervised and Active Learning Method for Alternatives Ranking Functions
Published in Distributed Networks (Qurban A. Memon, ed.), 2017
Faïza Dammak, Hager Kammoun, Abdelmajid Ben Hamadou
In learning to rank, a number of queries are provided, and each query is associated with a perfect ranking list of documents; a ranking function assigns a score to each document and ranks the documents in descending order of the scores [7]. The ranking order represents the relative relevance of the documents with respect to the query. In a learning-to-rank problem, an instance is a set of objects and a label is an ordering defined over that instance. Learning to rank aims to construct a ranking model from training data.
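To make the "score, then sort in descending order" step concrete, here is a minimal illustrative sketch (not taken from the chapter) using a hypothetical linear scoring function over query-document feature vectors; the feature values and weights are made up for the example.

```python
# Minimal sketch: a linear scoring function f(d) = w . x, where x is a
# hypothetical query-document feature vector; documents are ranked in
# descending order of their scores, as described above.
import numpy as np

def rank_documents(w, doc_features):
    """Score each document with a linear model and return the indices
    sorted by descending score (first index = most relevant)."""
    scores = doc_features @ w          # one score per document
    order = np.argsort(-scores)        # descending order of scores
    return order, scores

# Toy example: 4 documents described by 3 features each (illustrative values).
w = np.array([0.5, 1.0, -0.2])
X = np.array([[0.1, 0.9, 0.0],
              [0.8, 0.2, 0.5],
              [0.4, 0.4, 0.1],
              [0.0, 1.0, 0.3]])
order, scores = rank_documents(w, X)
print(order, scores[order])            # documents from highest to lowest score
```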
Preference-Based Multi-Robot Planning for Nuclear Power Plant Online Monitoring and Diagnostics
Published in Nuclear Science and Engineering, 2023
Alan Hesu, Sungmin Kim, Fan Zhang
Instead, a more context-aware feature selection process may find that the importance of certain features varies with the specific data instance. Here, ranking techniques, which score a collection of documents given a query, may be applicable, with the set of candidate features playing the role of the documents. Learning to rank is often studied in the context of problems such as web search result retrieval or product ranking, and it therefore has a large body of work applying machine learning methods to recover document scores.[32–34] While ranking metrics such as normalized discounted cumulative gain (NDCG)[35] and mean average precision (MAP)[36] are nondifferentiable, there are a number of differentiable loss functions, such as LambdaLoss[37] or pointwise root-mean-squared error (RMSE),[33] that allow machine learning approaches to optimize for these metrics.
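As an illustration of why such metrics are nondifferentiable, the sketch below computes NDCG@k for a single query: the gain of each document is discounted by its position in the ranking induced by sorting the scores, and that sort is not differentiable in the scores. The function and variable names here are assumptions for the example, not the notation of the cited papers.

```python
# Illustrative NDCG@k for one query: DCG discounts each document's gain
# (2^rel - 1) by log2(position + 1); NDCG divides by the DCG of the ideal
# (relevance-sorted) ordering.
import numpy as np

def dcg_at_k(relevances, k):
    rel = np.asarray(relevances, dtype=float)[:k]
    positions = np.arange(1, rel.size + 1)
    return np.sum((2.0 ** rel - 1.0) / np.log2(positions + 1))

def ndcg_at_k(scores, relevances, k=10):
    """NDCG of the ranking induced by `scores` against graded `relevances`."""
    order = np.argsort(-np.asarray(scores))           # predicted ranking (a sort)
    dcg = dcg_at_k(np.asarray(relevances)[order], k)
    ideal = dcg_at_k(np.sort(relevances)[::-1], k)    # best achievable DCG
    return dcg / ideal if ideal > 0 else 0.0

# Example with graded relevance labels 0-3 (illustrative values).
print(ndcg_at_k(scores=[0.9, 0.2, 0.5], relevances=[3, 0, 1], k=3))
```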
A Ranking-based Weakly Supervised Learning model for telemonitoring of Parkinson’s disease
Published in IISE Transactions on Healthcare Systems Engineering, 2022
Dhari F. Alenezi, Hang Shi, Jing Li
Although our method can incorporate ranked samples in training, its objective differs from that of existing learning-to-rank algorithms in IR. First, learning-to-rank algorithms mainly aim to rank a set of documents related to a search query. This means that the objective of the problem is to find the set of coefficients that best matches an unknown latent ranking variable. Hence, a solution to the problem may satisfy the ordinal requirement but not necessarily produce good prediction performance. In contrast, our method aims to integrate ranked samples with labeled samples to build an accurate predictive model. Second, learning-to-rank algorithms start from a set of documents whose relevance to a search query is judged using a feature extraction or retrieval function. For example, a decision about how to rank a certain pair of documents relative to a query may be based on how frequently a matching term appears within each document. This is not how features are intended to be used in our problem, where the objective is to find how the features relate to the response variable of interest.
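The contrast drawn above can be pictured with a simple combined objective: a prediction loss on labeled samples plus a pairwise penalty on ranked samples. The sketch below is a hedged illustration of that general idea, not the authors' actual model; the linear predictor, hinge margin of 1.0, and weight `alpha` are all assumptions made for the example.

```python
# Hedged sketch of combining labeled and ranked samples in one objective:
# squared error on labeled samples plus a pairwise hinge penalty on ranked
# pairs (x_hi should be predicted higher than x_lo). Not the authors' model.
import numpy as np

def combined_loss(w, X_lab, y_lab, rank_pairs, alpha=1.0):
    """rank_pairs is a list of (x_hi, x_lo) feature-vector pairs,
    where x_hi is ranked above x_lo."""
    # Prediction term on labeled samples (standard regression loss).
    pred_loss = np.mean((X_lab @ w - y_lab) ** 2)
    # Ranking term: penalize pairs whose predicted order violates the given rank.
    rank_loss = 0.0
    for x_hi, x_lo in rank_pairs:
        rank_loss += max(0.0, 1.0 - (x_hi @ w - x_lo @ w))   # hinge on the margin
    return pred_loss + alpha * rank_loss / max(len(rank_pairs), 1)

# Toy data: 2 labeled samples and 1 ranked pair, 2 features each.
X_lab = np.array([[1.0, 0.0], [0.0, 1.0]])
y_lab = np.array([2.0, 1.0])
pairs = [(np.array([1.0, 1.0]), np.array([0.5, 0.0]))]
print(combined_loss(np.array([1.5, 0.8]), X_lab, y_lab, pairs))
```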