Contextual Bandit Approach-based Recommendation System for Personalized Web-based Services
Published in Applied Artificial Intelligence, 2021
Akshay Pilani, Kritagya Mathur, Himanshu Agrawald, Deeksha Chandola, Vinay Anand Tikkiwal, Arun Kumar
The multi-armed bandit problem is a classical problem in computer science, and multi-armed bandit algorithms provide a solution to the exploration-exploitation dilemma. Contextual bandits are a class of bandit algorithms that use the expected payoff to recommend news articles. The expected payoff for a user is computed from a context and unknown bandit parameters, where the context is a feature vector built from information about both the user and the news articles. Some contextual bandit algorithms treat users as independent of one another, i.e. the unknown bandit parameters are estimated for each user separately.
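To make this concrete, the mechanism described above can be sketched with a minimal linear contextual bandit in the style of LinUCB. This is an illustrative assumption, not the authors' exact method: it posits a linear payoff model in which each arm (article) keeps its own ridge-regression estimate of the unknown bandit parameter, and the expected payoff for a context vector is that estimate's dot product with the context plus an exploration bonus. All class and variable names here are hypothetical.

```python
import numpy as np

class LinearContextualBandit:
    """Sketch of a LinUCB-style contextual bandit (hypothetical example).

    Each arm a keeps A_a = I + sum(x x^T) and b_a = sum(r x); the unknown
    bandit parameter is estimated as theta_a = A_a^{-1} b_a, and the
    expected payoff for context x is x @ theta_a plus an exploration bonus.
    """

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                        # estimated bandit parameter
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(x @ theta + bonus)         # expected payoff + exploration
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Incorporate the observed payoff for the chosen arm."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Modeling users as independent, as the abstract describes, would correspond to instantiating one such bandit per user, so each user's parameters are learned from that user's feedback alone.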