Human-Computer Interaction and the Web
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Helen Ashman, Declan Dagger, Tim Brailsford, James Goulding, Declan O’Sullivan, Jan-Felix Schmakeit, Vincent Wade
Personalization began with simple forms of web-page adaptation: for example, displaying the user’s name at the top of the page and welcoming him or her back, or providing a panel of suggested content on the home page, chosen to be of specific interest to that user based on his or her previous history with the site. One of the longest-established examples is http://www.amazon.com, which stores information about the customer’s interests, gleaned from various sources, to generate a personalized home page and suggest items that are likely to be of interest. Adaptive hypermedia, now typically referred to as the adaptive web, is an academic discipline dedicated to bringing personalization to the web (Brusilovsky 2007). The principal application areas of adaptive web systems have traditionally been information kiosk-style systems, educational systems, and tourism. However, personalization is now emerging in areas as diverse as news access and publishing (Billsus and Pazzani 2007), healthcare (Cawsey, Grasso, and Paris 2007), and even museum information systems (Brusilovsky and Maybury 2002).
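The history-based suggestion mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not Amazon’s actual system: the catalogue, item names, and the strategy of recommending items from the user’s most-viewed category are all hypothetical.

```python
from collections import Counter

# Hypothetical catalogue mapping items to categories (illustrative data only)
CATALOGUE = {
    "Dune": "science-fiction",
    "Neuromancer": "science-fiction",
    "Clean Code": "programming",
    "The Pragmatic Programmer": "programming",
    "Dracula": "horror",
}

class UserProfile:
    """Minimal user model built from browsing history."""

    def __init__(self, name):
        self.name = name
        self.category_views = Counter()

    def record_view(self, item):
        # Each viewed item strengthens the inferred interest in its category.
        self.category_views[CATALOGUE[item]] += 1

def personalized_home_page(profile, n=2):
    """Greet the returning user and suggest items from their top category."""
    greeting = f"Welcome back, {profile.name}!"
    if not profile.category_views:
        return greeting, []
    top_category, _ = profile.category_views.most_common(1)[0]
    suggestions = [item for item, cat in CATALOGUE.items() if cat == top_category]
    return greeting, suggestions[:n]

profile = UserProfile("Alice")
profile.record_view("Dune")
profile.record_view("Neuromancer")
print(personalized_home_page(profile))
```

Real systems replace the view counter with far richer user models, but the shape is the same: accumulate evidence of interest, then adapt the page to the inferred profile.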
How do conversational case-based reasoning systems interact with their users: a literature review
Published in Behaviour & Information Technology, 2021
When humans work with automated systems, the allocation of functions to either the human or the automation can vary between situations. Such flexibility comes in two types: adaptive or adaptable automation (Parasuraman and Wickens 2008). Adaptive automation means that a system autonomously decides which tasks are to be carried out by the human, while with adaptable automation humans control the sharing of tasks. In CCBR systems, both types have been implemented. Examples of adaptive automation are the selection of questions based on a classification of users (Jalali and Leake 2012a, 2012b) or the adaptation of system outputs based on automatically generated user profiles (Gómez-Gauchía et al. 2005; Gómez-Gauchía, Díaz-Agudo, and González-Calero 2006a, 2006b, 2006c). Conversely, adaptable automation is reflected in several subprinciples of mixed initiative. For instance, users can transfer aspects of dialogue regulation to the system and reclaim it at any time (Branting, Lester, and Mott 2004), or they can switch between different modes (Göker 2003; Göker et al. 1998; McSherry 2001a). Nowadays, fully automated adaptation of system behaviour to user characteristics and states is highly popular among human-computer interaction designers. For instance, adaptive hypermedia applications collect user data to build user models, which can then be used to adapt contents to the goals, preferences, and knowledge of individual users (Brusilovsky 1998, 2001; Kobsa, Koenemann, and Pohl 2001). However, it should be considered that adaptive automation has its costs: while it aims to reduce workload because users do not have to make decisions about function allocation, it can also reduce users’ ability to predict the behaviour of the system (Miller and Parasuraman 2007).
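The distinction between the two types of flexibility can be made concrete with a small sketch. This is not taken from any of the cited CCBR systems; the class, the workload threshold, and the mode names are hypothetical, chosen only to show who holds the allocation decision in each case.

```python
class TaskAllocator:
    """Toy contrast between adaptive and adaptable function allocation."""

    def __init__(self, mode="adaptable"):
        self.mode = mode            # "adaptive" or "adaptable"
        self.user_choice = "human"  # consulted only in adaptable mode

    def set_allocation(self, choice):
        """Adaptable automation: the user controls task sharing directly."""
        if self.mode != "adaptable":
            raise RuntimeError("the user cannot override an adaptive system")
        self.user_choice = choice

    def allocate(self, user_workload):
        if self.mode == "adaptive":
            # Adaptive automation: the system decides autonomously, here via
            # a simple (hypothetical) workload threshold.
            return "automation" if user_workload > 0.7 else "human"
        # Adaptable automation: the system defers to the user's choice.
        return self.user_choice

adaptive = TaskAllocator(mode="adaptive")
print(adaptive.allocate(user_workload=0.9))   # system takes over under high load

adaptable = TaskAllocator(mode="adaptable")
adaptable.set_allocation("automation")        # user delegates, and may reclaim
print(adaptable.allocate(user_workload=0.9))
```

The predictability cost noted above is visible even here: in adaptive mode the human must know the system’s internal threshold to anticipate who will act next, whereas in adaptable mode the allocation is whatever the user last chose.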
This concern has been supported empirically (Sauer, Kao, and Wastell 2012): in a process control task, no performance benefits for adaptive automation were found, but participants reported higher workload, fatigue, and anxiety, while with adaptable automation they were more active and felt more confident. Adaptivity and adaptability can also co-exist within the same application, and the choice of a particular combination requires careful consideration of several factors such as demands, convenience, irritation, or the consequences of false adaptations (Kobsa, Koenemann, and Pohl 2001). In CCBR systems, a specific risk of adaptive automation is that it is far from clear what constitutes an appropriate consequence of user classifications. For instance, should novices receive less information to prevent information overload, or should they receive more information to foster learning? Attempts to make systems easier for inexperienced users can backfire when information is withheld that would help them learn about the system or domain. Therefore, such design decisions require a thorough analysis of the tasks and learning goals, which is not common in CCBR research.
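The design question raised above can be stated as code to make the trade-off explicit. In this hypothetical sketch, the mapping from a user classification to a level of detail is not hard-wired but passed in as a named policy, so the decision between preventing overload and fostering learning is a deliberate, inspectable choice rather than an implicit one.

```python
def answer_for(user_level, short_answer, explanation, policy):
    """Return system output according to an explicit adaptation policy.

    Hypothetical policies:
      "reduce": novices get only the short answer (prevent overload).
      "teach":  novices also get the explanation (foster learning).
    Experts receive the short answer under either policy.
    """
    if user_level == "expert":
        return short_answer
    if policy == "reduce":
        return short_answer
    if policy == "teach":
        return f"{short_answer}\n{explanation}"
    raise ValueError(f"unknown policy: {policy}")

print(answer_for("novice", "Restart the router.",
                 "Restarting clears the DHCP lease table.", policy="teach"))
```

Whichever policy is chosen, making it explicit forces exactly the task and learning-goal analysis that the passage argues is missing from much CCBR research.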