Characterizing Glanceable Visualizations: From Perception to Behavior Change
Published in Mobile Data Visualization (Bongshin Lee, Raimund Dachselt, Petra Isenberg, Eun Kyoung Choe, Eds.), 2021
Tanja Blascheck, Frank Bentley, Eun Kyoung Choe, Tom Horak, Petra Isenberg
Since its inception, the field of Ubiquitous Computing has been interested in systems that present information as part of a person's environment. Such visualizations are often meant to be sensed in the periphery. In 1996, Mark Weiser and John Seely Brown at Xerox PARC [57] described the periphery as “what we are attuned to without attending to explicitly.” The goal of the peripheral displays they created was to give people a subconscious awareness of a variety of information, letting them move particular pieces of information from the periphery to the center of their attention, and back again, when that information was particularly salient and relevant. They described this calm technology as allowing people to be aware of many things without burdening them with explicitly attending to differences in graphs or data tables to gain information.
Eliciting Public Display Awareness and Engagement: An Experimental Study and Semantic Strategy
Published in International Journal of Human–Computer Interaction, 2019
Jason O. Germany, Philip Speranza, Dan Anthony
Implicit interactions are not a new approach within the interaction design field but are often positioned as ‘calm technology’ (Weiser & Brown, 1997), in which direct or instrumental interaction between the machine and the user is removed. Systems following this approach are contextually aware, responsive, and predictive (Vasilakos & Pedrycz, 2006). Although this strategy parallels some of the use cases around public displays, it does not move people from the periphery to the foreground of direct engagement with the screen; in fact, much of calm technology is designed to operate beneath the perception and action of the end user. Our strategic implementation of implicit signals was therefore coupled with explicit signals, moving beyond the system's awareness of the individual pedestrian to the pedestrian's awareness of the interactivity of the system.

Utilizing both implicit and explicit signals is a fundamental element of face-to-face human communication. These signals are typically referred to as non-verbal (implicit) and verbal (explicit). The term “signal,” later referred to as a stimulus in the experimental section of this article, denotes the goal-directed “code” that is meant to communicate from one person to another, and differs from “signs” (Argyle, 1975). For humans, this encoding and decoding (Knapp, 1978) of signals is a continuous process between sender and receiver. If one considers the greeting protocol that humans engage in before a face-to-face conversation, this general linguistic framework could be imposed on a potential greeting between a public display and a street-level pedestrian. This positions the display as an actor in the dialog process, one with a level of agency within the environment. As many researchers within the artificial intelligence community have used human communication models as a basis for creating meaningful interfaces (Jokinen, 2009), our research sought to leverage this approach as well.

If creating meaning is about conveying signs (Boradkar, 2010), then in linguistic dialog meaning is related to message clarity. When the two signal types, non-verbal (implicit) and verbal (explicit), interplay or overlap, they help to synchronize meaning through “complementation” and reinforcement of code types (Ekman & Friesen, 1969), which leads to improved message clarity. Although this concept originates in human-to-human communication, explicit and implicit signals, or stimuli, are readily present in the artificial world as well. Everyday objects and computing interfaces share a mixture of text-based (explicit) and behavioral or graphics-based (implicit) signals, although these signals are not always intentionally coded and are not discretely coordinated. Our screen-based approach to promoting awareness of, and attention to, the interactivity of street-level displays utilizes this fundamental linguistic concept of complementation as a means of coordinating these coupled signals. To test the implications of this approach, we developed a series of screen-based treatments that isolated each of these signal types in an effort to determine which ones produced the best response from pedestrians.