Human Factors Testing and Evaluation: An Historical Perspective
Published in Samuel G. Charlton, Thomas G. O’Brien, Handbook of Human Factors Testing and Evaluation, 2019
Thomas G. O’Brien, David Meister
Initial attempts to represent the human in computer models, based on variations of task analysis, and to generalize the results across a heterogeneous population yielded limited success, especially when representing large-scale operations. It quickly became apparent that validating the performance outcomes of computer-based simulations against model data was difficult and cost-ineffective. Indeed, some would argue that if actual operational testing was needed to validate the model, there was little reason to build the model at all. One answer came in the form of distributed interactive simulation (DIS). With DIS, it is now possible to conduct cost-effective, large-scale simulations with participants from geographically distributed areas. Work continues, however, on models focused on specific system functionality.
An Introduction to DEVS Standardization
Published in Gabriel A. Wainer, Pieter J. Mosterman, Discrete-Event Modeling and Simulation, 2018
Gabriel A. Wainer, Khaldoon Al-Zoubi, David R.C. Hill, Saurabh Mittal, José L. Risco Martín, Hessam Sarjoughian, Luc Touraille, Mamadou K. Traoré, Bernard P. Zeigler
Parallel simulation middleware (e.g., GATech Time Warp [1], Warped [2], SPEEDES and WarpIV [3]) has usually focused on tightly coupled systems. In contrast, distributed simulation middleware must allow partitioning simulations and running them remotely. For instance, DIS (Distributed Interactive Simulation [4]) allowed the development of distributed simulation-based training solutions (by sharing data and computing power remotely). Other solutions include the HLA (High Level Architecture [5]), which was designed for interoperability of distributed simulation assets, and TENA (Test and Training Enabling Architecture [6]), which was built on top of CORBA (and is subject to its strengths and limitations; see below) to enable real-time interoperation of assets in geographically distributed test ranges. This distributed simulation middleware focuses on data sharing, distributed processes, communication, and (in HLA) time management, and it has facilitated the development of large-scale distributed simulations. Nevertheless, model reuse with this kind of middleware is still difficult, ad hoc, and costly. The motivation for the discussion in this chapter stems from this need for model interoperability between disparate simulator implementations; we intend to discuss means of providing simulators that are transparent to model execution. We claim that a DEVS-based standard would improve sharing and interoperability of M&S assets, both local and distributed, including specifications to facilitate a System of Systems whose constituents interact over a net-centric platform. At this level, model interoperability is one of the major concerns.
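To make concrete what a DEVS-based standard would fix, the sketch below shows the canonical interface of a DEVS atomic model: an internal transition, an external transition, an output function, and a time-advance function. It is a minimal illustration in plain Python rather than code from the chapter; the Processor model and its processing_time parameter are hypothetical.

# Minimal sketch of the DEVS atomic-model interface (hypothetical; not from the chapter).
# A conforming simulator needs only these four functions, which is what
# makes a model portable across simulator implementations.

INFINITY = float("inf")

class Processor:
    """Atomic model: accept a job, stay busy for a fixed time, then emit it."""

    def __init__(self, processing_time=2.0):  # assumed illustrative parameter
        self.processing_time = processing_time
        self.job = None  # state: the job currently being processed, if any

    def ta(self):
        # Time advance: how long the model remains in its current state.
        return self.processing_time if self.job is not None else INFINITY

    def delta_ext(self, elapsed, inputs):
        # External transition: a new job arrives; ignore arrivals while busy.
        if self.job is None:
            self.job = inputs[0]

    def output(self):
        # Output function: called immediately before the internal transition.
        return self.job

    def delta_int(self):
        # Internal transition: the processing interval elapses; become idle.
        self.job = None

Because a conforming simulator drives a model only through ta, delta_ext, output, and delta_int, the same model could run unchanged under different middleware, which is the interoperability argument made above.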
Arrangement and Accomplishment of Interconnected Networks with Virtual Reality
Published in IETE Journal of Research, 2022
This is a project underway to build a transmission standard for simulated realism, particularly for networked VR. The definition of connectivity requirements is one of the key difficulties for networked audiovisual and VR applications and affects most of the aforementioned concerns [6]. A systematic method of carrying consumer requirements through to the telecommunication layer is still a work in progress, and the mapping between the multiple layers at which QoS is defined is only beginning to be recognized. In this study, we offer an interconnectivity paradigm, which reflects a network perspective of a distributed virtual environment (VE) [7], as a complement to these continuing studies. The concept elaborates the capacity requirements of shared virtual entities, which vary as a consequence of user activity. As a test case, an experimental multiuser networked VE for remote monitoring of a robot manipulator was employed. This VE uses a mixture of common technologies, including VRML, Distributed Interactive Simulation (DIS), and Java [8–10], operating over the User Datagram Protocol (UDP) with IP multicast.
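As a rough illustration of the transport just described, the sketch below multicasts a minimal DIS-style protocol data unit (PDU) header over UDP. It is not code from the study: the multicast group, port, and header field values are assumptions chosen for illustration, and a real implementation would follow the full IEEE 1278.1 PDU formats.

# Sketch: sending a minimal DIS-style PDU header over UDP with IP multicast.
# The group address, port, and field values are illustrative assumptions.
import socket
import struct

MCAST_GROUP = "239.1.2.3"  # hypothetical multicast group
MCAST_PORT = 3000          # illustrative port, not mandated by the study

# Receiver: bind to the port and join the multicast group first,
# so the datagram sent below is not lost.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# A 12-byte DIS-style PDU header packed in network byte order:
# protocol version, exercise ID, PDU type, protocol family,
# timestamp, PDU length in bytes, padding.
header = struct.pack("!BBBBIHH", 7, 1, 1, 1, 0, 12, 0)

# Sender: plain UDP socket with a small multicast TTL.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
send_sock.sendto(header, (MCAST_GROUP, MCAST_PORT))

# Receive and decode the header we just multicast.
data, addr = recv_sock.recvfrom(1024)
version, exercise, pdu_type, family, ts, length, _ = struct.unpack("!BBBBIHH", data[:12])
print(f"PDU type {pdu_type}, version {version}, {length} bytes from {addr}")

Multicast over UDP is what lets every participant in a shared VE observe state-update PDUs without per-receiver connections, which is why the study's VE pairs DIS with this transport.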