Psychological Principles Underlying Consumer Perceptions of Imitation Brands
Published in J. L. Zaichkowsky, The Psychology Behind Trademark Infringement and Counterfeiting, 2020
Feature integration theory proposes that features are perceived before objects, and in parallel across the field of vision. Objects are differentiated afterward and require focused attention to be identified correctly. For example, the consumer may be faced with a number of dimensions, such as color and shape, which may or may not help distinguish among brands. Think about Coca-Cola and the various store brands with red cola labels in big 2-liter bottles. If a feature, “a particular value on a dimension” (Treisman & Gelade, 1980), is the same across objects, that feature cannot be used to distinguish between them. Only features that differ across objects will help to distinguish them from each other. Differentiation, however, requires focused attention and serial processing of the objects. Describing what is similar between objects is more difficult than describing what is different, because similar features are not serially processed; only different features are noticed.
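To make the point concrete, here is a minimal sketch in Python, with entirely hypothetical brand feature names and values: a feature shared across two products contributes nothing to telling them apart, so only the differing features are diagnostic.

```python
# A minimal sketch with hypothetical brands and feature values: a feature
# shared across two products cannot discriminate between them, so only the
# differing features are diagnostic.

coke = {"label_color": "red", "bottle": "2-liter", "logo_script": "Spencerian"}
store_brand = {"label_color": "red", "bottle": "2-liter", "logo_script": "plain"}

shared = {k for k in coke if coke[k] == store_brand[k]}
diagnostic = {k for k in coke if coke[k] != store_brand[k]}

print("Shared features (useless for discrimination):", shared)
print("Diagnostic features (require focused attention):", diagnostic)
```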
Visual Search
Published in Christopher D. Wickens, Jason S. McCarley, Applied Attention Theory, 2019
Christopher D. Wickens, Jason S. McCarley
Work by Anne Treisman and colleagues extended Neisser’s (1963, 1967) findings and couched them within a comprehensive theory of visual perception and attention. Treisman’s feature integration theory (FIT; Treisman and Gelade 1980) proposed that complex objects are represented in the visual system as conjunctions of rudimentary properties. Simple attributes (color, motion, orientation, coarse aspects of shape) are encoded preattentively in mental representations known as feature maps, where a single map is selective for a particular value (e.g., red) along a given stimulus dimension (e.g., color). A multidimensional object like a large red square is represented by the pattern of activity it evokes across the array of feature maps. Representation is constrained, however, by the need to coordinate this activity. In the initial stages of processing, multiple feature maps are not spatially registered with one another. Focal attention is therefore necessary to appropriately bind together the various features of an object. Only by focusing attention on a single location can the observer link activity at that position across multiple maps, producing an accurate representation of a multifeature object. Outside of focused attention, features from multiple objects can be jumbled or combined inappropriately. When quickly viewing a display that contains a red square and a blue cross, for example, a subject may perceive a red cross and a blue square. Such inappropriate combinations of features are termed illusory conjunctions (Treisman and Schmidt 1982).
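As an illustration only (a toy model, not Treisman's), the following Python sketch treats the preattentive stage as feature maps that record which values are present without recording where they came from: rebinding the features at random in the absence of focal attention produces illusory conjunctions such as a "red cross" on a fraction of simulated trials.

```python
import random

# A toy illustration (not Treisman's actual model): the preattentive stage
# records which feature values are present but not where, so rebinding the
# features at random occasionally produces illusory conjunctions.

display = [("red", "square"), ("blue", "cross")]

# Feature maps register the values present, stripped of their locations.
color_map = [color for color, _ in display]
shape_map = [shape for _, shape in display]

def report_without_attention():
    # Without focal attention the maps are not spatially registered,
    # so colors and shapes are paired arbitrarily.
    return list(zip(random.sample(color_map, len(color_map)),
                    random.sample(shape_map, len(shape_map))))

random.seed(0)
trials = [report_without_attention() for _ in range(1000)]
illusory = sum(("red", "cross") in trial for trial in trials)
print(f"Illusory 'red cross' reported on {illusory} of 1000 brief exposures")
```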
Human Information Processing
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Robert W. Proctor, Kim-Phuong L. Vu
In a visual search task, subjects must detect whether a target is present among distractors. Treisman and Gelade (1980) developed feature integration theory to explain the results from visual search studies. When the target is distinguished from the distractors by a basic feature such as color (feature search), response time (RT) and error rate often show little increase as the number of distractors increases. However, when two or more features must be combined to distinguish the target from distractors (conjunctive search), RT and error rate typically increase sharply as the number of distractors increases. To account for these results, feature integration theory assumes that basic features of stimuli are encoded into feature maps in parallel across the visual field at a preattentive stage. Feature search can be based on this preattentive stage because a “target-present” response requires only detection of the feature. The second stage involves focusing attention on a specific location and combining the features that occupy that location into objects. Attention is required for conjunctive search because responses cannot be based on detection of a single feature. According to feature integration theory, performance in conjunctive search tasks decreases as the number of distractors increases because attention must be moved sequentially across the search field until a target is detected or all items present have been searched. Feature integration theory generated a large amount of research on visual search, which showed, as is typically the case, that the situation is not as simple as the theory depicted. This has resulted in modifications of the theory, as well as alternative theories. For example, Wolfe’s (2007) Guided Search Theory maintains the distinction between an initial stage of feature maps and a second stage of attentional binding, but assumes that the second stage is guided by the initial feature analysis.
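The search-slope prediction can be sketched with a toy simulation (the timing parameters below are invented for illustration, not fitted to data): feature search completes in a single preattentive detection step regardless of set size, whereas a serial self-terminating conjunction search inspects, on average, half the items before finding the target, so its simulated RT grows linearly with set size.

```python
import random

BASE_RT_MS = 400   # assumed non-search component (encoding + response)
PER_ITEM_MS = 50   # assumed cost of attending to a single item

def feature_search_rt(set_size):
    # The target "pops out" of the preattentive feature maps:
    # one detection step regardless of set size.
    return BASE_RT_MS + PER_ITEM_MS

def conjunction_search_rt(set_size):
    # Serial self-terminating search on a target-present trial:
    # on average, half the items are inspected before the target is found.
    items_inspected = random.randint(1, set_size)
    return BASE_RT_MS + PER_ITEM_MS * items_inspected

random.seed(1)
for n in (4, 8, 16, 32):
    mean_conj = sum(conjunction_search_rt(n) for _ in range(2000)) / 2000
    print(f"set size {n:2d}: feature ~ {feature_search_rt(n)} ms, "
          f"conjunction ~ {mean_conj:.0f} ms")
```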
Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation
Published in International Journal of Occupational Safety and Ergonomics, 2019
Pin-Chao Liao, Xinlu Sun, Mei Liu, Yu-Nien Shih
Bottom-up guidance is thought to be automatic and parallel, while top-down guidance is controlled and serial. The relationship between bottom-up and top-down guidance of selective attention is disputed. Some researchers insist that bottom-up guidance is never affected by top-down guidance, while others hold that top-down guidance can override bottom-up guidance. For example, Dowd and Mitroff [47] tested the interaction between working memory and visual salience and found that working memory could override salience cues during a visual search: an item matching the contents of working memory is located faster even when it is less salient than the distractors in the scene. Other theories suggest that the two guidance cues operate simultaneously. The most widely accepted is Treisman and Gelade's [48] feature integration theory, which holds that complex objects are detected in two stages: features are recognized early, and objects are recognized later. When searching for a target, attention is first guided automatically by the features occurring in the visual field, and serial processing then follows. Simple visual search tasks, in which the target differs from the distractors by only a single feature, are believed to be handled by parallel processing alone. In a typical real-world visual search, however, the target differs from the distractors by a combination of several features, and serial processing is necessary.
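A toy sketch of the combined-guidance idea (all values invented for illustration) assigns each item a priority equal to its bottom-up salience plus a top-down boost for matching working memory; the less salient target can then win attention first, in line with the override result described above.

```python
# A toy sketch (all values invented) of combined bottom-up and top-down
# guidance: attention goes first to the item with the highest priority,
# computed as bottom-up salience plus a boost for matching working memory.

items = {
    "bright distractor": {"salience": 0.9, "matches_wm": False},
    "dim target":        {"salience": 0.4, "matches_wm": True},
}

WM_BOOST = 0.6  # assumed strength of top-down guidance from working memory

def priority(attrs):
    return attrs["salience"] + (WM_BOOST if attrs["matches_wm"] else 0.0)

first_attended = max(items, key=lambda name: priority(items[name]))
print("Attention is deployed first to:", first_attended)
# -> "dim target", despite its lower bottom-up salience
```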