Retention and Comprehension of Information
Published in Robert W. Proctor, Trisha Van Zandt, Human Factors in Simple and Complex Systems, 2018
Robert W. Proctor, Trisha Van Zandt
Strings of digits are often used for telephone numbers, bank account numbers, customer identification codes, and the like. From a customer’s perspective, these numbers are essentially random, and random strings of digits are very difficult to remember; chunking is therefore an important strategy for remembering them. Two factors that shape chunking are the size of the chunks and the modality in which the information is presented. Wickelgren (1964) showed that lists of digits are easiest to remember when they are organized into groups of at most four. Grouping provides a larger benefit when digits are presented auditorily rather than visually, because people tend to chunk visually presented digits into pairs even when they are not grouped (Nordby, Raanaas, & Magnussen, 2002).
Memory and Training
Published in Christopher D. Wickens, Justin G. Hollands, Simon Banbury, Raja Parasuraman, Engineering Psychology and Human Performance, 2015
Christopher D. Wickens, Justin G. Hollands, Simon Banbury, Raja Parasuraman
Chunking may also be facilitated by parsing; that is, by physically separating likely chunks. The sequence 4149283141865 is probably less easily encoded than 4 1492 8 314 1865, which is parsed to emphasize five chunks (“for Columbus ate pie at Appomattox”). For an imaginative reader, these five chunks may in turn be chunked into a single visual image. Loftus, Dark, and Williams (1979) investigated pilots’ memory for air traffic control information and observed that four-digit codes were better retained when parsed into two chunks (27 84) than when presented as four single digits (2 7 8 4). Bower and Springston (1970) presented sequences of letters containing familiar acronyms and found that memory was better when pauses separated the acronyms (FBI JFK TV) than when they did not (FB IJF KTV). Finally, Wickelgren (1964) found that recall of telephone numbers is optimal when the numbers are grouped into chunks of three digits. Results such as these have led to the general recommendation that the optimal group size for arbitrary alphanumeric code strings is three to four characters (Bailey, 1989).
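As a concrete illustration of this display guideline, the following minimal Python sketch groups an arbitrary code string into chunks of three to four characters before showing it to a user. The group_code helper and its defaults are illustrative assumptions, not an implementation from the cited sources.

    def group_code(code: str, size: int = 4, sep: str = " ") -> str:
        """Format an alphanumeric code into groups of `size` characters
        (three to four recommended) separated by `sep`."""
        code = code.replace(sep, "")  # drop any existing separators
        return sep.join(code[i:i + size] for i in range(0, len(code), size))

    print(group_code("4149283141865"))     # -> '4149 2831 4186 5'
    print(group_code("FBIJFKTV", size=3))  # -> 'FBI JFK TV'

Grouping at display time rather than in the stored code keeps the underlying identifier unchanged while still giving users the parsing benefit described above.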
Social Dynamics
Published in John Flach, Fred Voorhorst, A Meaning Processing Approach to Cognition, 2019
One way to think about coordinative structures is that they are the complement of chunks. Earlier we suggested that a ‘chunk’ is an organization of information that typically is designed to take advantage of natural constraints of a situation. In essence, the process of chunking uses constraints on situations to reduce the possibilities that need to be considered (i.e., to reduce the information demands). In a similar way, a coordinative structure sets up constraints on the action side that align with the constraints of the situation, reducing the demands of the observation and control problems.
From Design Requirements to Effective Privacy Notifications: Empowering Users of Online Services to Make Informed Decisions
Published in International Journal of Human–Computer Interaction, 2021
Patrick Murmann, Farzaneh Karegar
Tarrell et al. (2014) call the process of grouping information for the purpose of facilitating cognition “chunking”. They maintain that grouping information with strong cohesive properties can help minimize the cognitive load on a user’s memory while processing information. Extending their line of reasoning to informational structures based on dedicated layers, this principle motivates the notion of “divide and conquer” that underlies multilayering. Similarly, Dix (2012) discusses various methods of visualizing large data structures with hierarchical or multi-faceted topologies through the lens of perceptual and cognitive factors. He highlights the necessity for, and particularities of, means of interaction that enable users to navigate through and between the multiple views associated with such data.
Software Innovations to Support the Use of Social Media by Emergency Managers
Published in International Journal of Human–Computer Interaction, 2018
Linda Plotnick, Starr Roxanne Hiltz
During the last decade, research on software solutions that enable the identification, classification, organization, and assessment of SM posts that could aid EMs has burgeoned. There are scores of prototype systems, often developed as academic research with little or no consultation with, or participation by, EMs to determine whether the innovation would be attractive to them. Few of these prototypes have been developed to an operational level where they are actually deployed and used in real time during major crises, and none has become “standard” for U.S. government organizations. These tools differ in their purposes and approaches, but many of them are useful for creating categories of related data, a process Miller (1956) called “chunking”, as a way of decreasing information overload and thus improving SM effectiveness.
File Semantic Aware Primary Storage Deduplication System
Published in IETE Journal of Research, 2022
Amdewar Godavari, Chapram Sudhakar, T. Ramesh
A file-semantic-aware primary storage deduplication system needs to consider file size and file type as its main attributes. Several researchers, including Shemi et al. [2], Meyer et al. [4], and Jin et al. [13], have assessed the effect of file size and type on the performance of primary storage deduplication systems. In these systems, accesses to small files dominate accesses to large files. Because applying deduplication to small files is resource-intensive and yields little space saving, most existing work [8,9] applies deduplication only to large files. However, some researchers [7] have found that although small-file deduplication saves less space, it can improve system performance.

For large files, file type also plays an important role in deduplication. From the file type, the level of duplicate content in a file and the likelihood of its content changing over time can be predicted. Files such as text, documents, backups, and virtual machine images undergo frequent changes and have high data redundancy, whereas video, audio, image, and compressed files have low data redundancy and rarely change. For a few file types, content redundancy cannot be determined. Based on content redundancy, files can therefore be partitioned into highly duplicate, low duplicate, and unpredictably duplicate classes. It has been observed that data redundancy across different file types is negligible [2,14], so deduplicating files of mismatched types increases deduplication overhead while saving little storage capacity. Maintaining deduplication metadata separately by file size and file type therefore reduces deduplication overhead.

Apart from file semantics, chunking, which determines the granularity at which duplicates are identified, affects the duplicate elimination ratio. Chunking can be applied at the fixed-size block level, the variable-size block level, or the whole-file level. Although variable-size chunking identifies the most data redundancy, its overhead makes it infeasible in a primary storage system. Between fixed-size and file-level chunking, the former identifies more data redundancy than the latter. Applying file-level chunking to high-redundancy files reduces resource usage but lowers the deduplication ratio; conversely, applying block-level deduplication to low-redundancy files yields a low deduplication ratio at the cost of high resource usage. A file-type-specific deduplication strategy therefore reduces deduplication overhead while still achieving storage space saving.
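To make the type-specific strategy concrete, here is a minimal Python sketch of the dispatch it implies: low-redundancy file types receive a single whole-file fingerprint, while other files are chunked into fixed-size blocks. The extension set, the SHA-1 fingerprint, and the 4 KB block size are illustrative assumptions, not details of the system described in the article.

    import hashlib
    from pathlib import Path

    # Assumed classification of file types by expected content redundancy,
    # following the highly/low/unpredictably duplicate partition above.
    LOW_REDUNDANCY = {".mp4", ".mp3", ".jpg", ".zip"}  # rarely change, few duplicates

    BLOCK_SIZE = 4096  # assumed fixed chunk size in bytes

    def chunk_fingerprints(path: Path) -> list[str]:
        """Return the fingerprints used for duplicate detection.
        File-level chunking for low-redundancy types keeps metadata small;
        fixed-size block chunking elsewhere finds more duplicates."""
        data = path.read_bytes()
        if path.suffix.lower() in LOW_REDUNDANCY:
            return [hashlib.sha1(data).hexdigest()]        # whole-file chunk
        return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(data), BLOCK_SIZE)]  # fixed-size chunks

Maintaining a separate fingerprint index per file type, as the article suggests, would then confine duplicate lookups to files of the same type.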