Intelligent Systems
Published in Shuzhi Sam Ge, Frank L. Lewis, Autonomous Mobile Robots, 2018
Sesh Commuri, James S. Albus, Anthony Barbera
Echelons of control are defined by decomposition of tasks into subtasks and the assignment of task skills and responsibilities to organizational units in a chain of command. Range and resolution of signals, images, and maps are defined by sampling interval and field of regard over space and time (e.g., pixel size and field of view in images, scale and size of maps, and sampling frequency of signals). Levels of abstraction are defined by grouping and segmentation algorithms that operate on the geometry of entities (e.g., points, lines, vertices, surfaces, objects, groups) and the duration of events (e.g., milliseconds, seconds, minutes, hours, days). These three hierarchies are related, but not congruent. For example, the range and resolution of maps are related to echelons of control by speed and size of the system being controlled. Resolution of images is related to spatial dimension by magnification. Resolution of maps is related to spatial dimension by scale. Pixels in images are related to pixels on maps by transformation of coordinates.
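The excerpt's last point, that pixels in images are related to pixels on maps by a transformation of coordinates tied to scale, can be sketched in a few lines. This is an illustrative example only; the function name, the origin convention, and all numbers are assumptions, not from the text.

```python
# Illustrative sketch: relating an image pixel to map coordinates via a
# scale factor (ground sample distance) and a simple affine transform.
# Names and values are hypothetical, chosen only to show the idea.

def pixel_to_map(px, py, origin_x, origin_y, ground_sample_distance):
    """Map an image pixel (px, py) to map coordinates.

    ground_sample_distance: real-world size of one pixel (e.g., metres
    per pixel), which is what ties image resolution to map scale.
    """
    map_x = origin_x + px * ground_sample_distance
    map_y = origin_y - py * ground_sample_distance  # image rows grow downward
    return map_x, map_y

# A 0.5 m/pixel image whose top-left corner sits at map point (1000, 2000):
print(pixel_to_map(100, 40, 1000.0, 2000.0, 0.5))  # -> (1050.0, 1980.0)
```

Changing the ground sample distance in this sketch is the "scale" relation the excerpt describes: halving it doubles map resolution for the same pixel grid.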
Design and Mounting of Windows, Domes, and Filters
Published in Paul Yoder, Daniel Vukobratovich, Opto-Mechanical Systems Design, 2017
The multisegmented configurations blend poorly with their surroundings. The dual-glazing window represented in Figures 6.6 and 6.7 is of this general type. It was designed to fit into the bottom of the fuselage of a military reconnaissance aircraft. View (a) of Figure 6.18 depicts the instantaneous field of view of the sensor as well as the field of regard swept out by the optical axis as a scan prism rotates about a transverse axis or as the sensor itself scans about orthogonal gimbal axes. In view (b), the eight triangular flat-plate segments form a conical surface that presents a more aerodynamically favorable shape than that of the conventional spherical dome. The apex of this window is capped with a tungsten/zirconium/molybdenum (TZM) tip that is resistant to the extreme temperature caused by friction at high velocity. The interface with the missile is by way of a titanium ring as indicated. The windows typically are attached to this ring with elastomer seals if the missile velocity is not too high.
Cognitive Engineering: Designing for Situation Awareness
Published in Eduardo Salas, Aaron S Dietz, Situational Awareness, 2017
Rasmussen’s physical form level represents the hardware implementation and physical details of the work domain. The relative locations and appearances of the hardware are considered at this level. Properties such as physical ergonomics (e.g., the reach envelope, the field of regard, and field of view), the arrangement of displays and controls, the format of individual displays (e.g., circular or tape display of airspeed, head-up or head-down orientation) and controls (e.g., spring-centered or force stick; shape coding, etc.), and the spatial distribution of components within the aircraft (i.e., what is connected to what and what is near to what) are all considered at this level. In addition, the specific weather conditions, the geography of the flight area, and the topography of the airport are some of the important physical details to be considered. Again, it is important to note that properties of physical form take meaning in the context of the higher levels of abstraction. Hutchins (1995b) provided a nice example of how a simple physical attribute, the size of a marker on a display, can impact the computational demands within a cockpit.
Examining the representativeness of a virtual reality environment for simulation of tennis performance
Published in Journal of Sports Sciences, 2021
Peter Le Noury, Tim Buszard, Machar Reid, Damian Farrow
With regard to the action responses, the type of stance used to perform groundstrokes in VR tennis paralleled the stance used in real-world tennis (85% match between VR and real-world for forehands, 70% for backhands). This is indicative of action fidelity – a core tenet of representative learning design. We acknowledge, however, that a significant difference was observed between VR and real-world tennis in the number of steps taken when performing groundstrokes. This difference equated to 0.6 fewer steps (mean difference) in VR tennis. We suspect this might have been caused by players’ movements being constrained by the VR headset (e.g., the wire attached to the headset, and the headset’s mass). Additionally, although field of regard (the total area that can be captured by a person) was not restricted in the VR environment, field of view (the extent of the environment that can be seen at a given moment) was limited, which may have influenced perception and action when playing VR tennis. It is also possible that players’ actions (such as number of steps) were influenced by the presence of the crowd in the VR environment, thereby causing differences from the real-world condition, where no crowd was present. Certainly, research has shown that surroundings within VR environments can induce a sense of anxiety (Stinson & Bowman, 2014). Further research is warranted to understand the factors that potentially influence movement behaviour.
The Effect of Head Tracking on the Degree of Presence in Virtual Reality
Published in International Journal of Human–Computer Interaction, 2019
Tina L.Y. Wu, Adam Gomes, Keegan Fernandes, David Wang
Human-VR interaction consists of a computer system generating an immersive environment that is perceived by the user. The user can then interact with the system through input devices. Two key components that may help evaluate the performance of the VR system are the degree of immersion and presence (Bowman & McMahan, 2007). Immersion is defined by the objective metrics of sensory inputs provided by a VR system, while presence is related to the user’s subjective experience of the VE (Bowman & McMahan, 2007). Factors that contribute to the level of immersion include the display field of view (the size of the visual field that is viewed instantaneously by the user), stereoscopy, field of regard (the size of the visual field surrounding the user), display size and resolution, frame rate, refresh rate, and head-based rendering (the product of head tracking which allows the VE to rotate and translate relative to the user’s head) (Bowman & McMahan, 2007). The purpose of the current study is to examine how changes in immersion, by altering head tracking, can affect the objective measure of postural stability and how presence, as an output of the VR experience, is affected by this change in immersion.
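The display field of view named among the immersion factors above can be estimated with basic trigonometry from screen width and viewing distance, while field of regard is the total angular extent surrounding the user. The following is a hedged sketch of that estimate; the function name and all numbers are illustrative assumptions, not values from the cited study.

```python
import math

# Illustrative sketch: estimating the horizontal field of view (FOV) of a
# flat display from its width and the viewing distance. A head-mounted
# display that continues to fill the view as the head turns has a
# 360-degree field of regard regardless of its instantaneous FOV.

def horizontal_fov_deg(screen_width, viewing_distance):
    """Angular width (degrees) of a flat screen seen from a given distance."""
    return math.degrees(2 * math.atan(screen_width / (2 * viewing_distance)))

# A hypothetical 0.6 m wide monitor viewed from 0.5 m:
print(round(horizontal_fov_deg(0.6, 0.5), 1))  # -> 61.9
```

The sketch makes the distinction concrete: two systems can share the same instantaneous field of view yet differ greatly in field of regard, which is one reason the two factors are listed separately.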