Acquiring and Processing Turbulent Flow Data
Published in Richard J. Goldstein, Fluid Mechanics Measurements, 2017
The central processing unit is the heart of the machine, where computation takes place. The data are organized as 8-, 16-, or 32-bit words. All computers have some fast random-access memory (RAM); in small laboratory computers its size typically ranges from 250 kbytes to 100 Mbytes. Computers also have read-only memory (ROM) holding special-purpose code, typically the "bootstrap" program that gets the system started, along with operating-system or library programs. Computer systems usually have some mass storage, such as disks or tapes. A typical laboratory computer would have mass storage ranging from a floppy disk at 250 kbytes to hard-disk storage of 40-80 Mbytes (or larger). Mass storage is generally slow relative to computer cycle times: tapes are the slowest and least expensive, with access times of tens to hundreds of milliseconds, while hard disks are the fastest, with access times of fractions of a millisecond. Once data have been located, transfer is rapid, typically 10,000-100,000 bytes/s or faster. Tape storage is rapidly being displaced by disk storage, whose price continues to fall precipitously.

Peripherals such as an alphanumeric keyboard, line printer, and plotter allow interaction with the experimenter. Access to the experiment is via digital or analog (through a D/A converter) signals sent to switches or actuators, and via digital or digitized analog signals received from the sensors. The data bus provides the interface between the CPU and the peripherals and between the CPU and memory. Since the components on the data bus may come from different manufacturers, it is important that the bus (and interfacing) be standardized. Standard buses include IEEE-488, RS-232, S100, Q-bus, and Multibus; most scientific equipment is designed to communicate over the IEEE-488 or RS-232 interface. The data bus contains lines for transferring bits of data and the bits of logic information needed to coordinate the transfer.
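To make the bus-standardization point concrete: because instruments share these standard interfaces, a present-day laboratory script can address IEEE-488 (GPIB) and RS-232 devices through one common library. The sketch below uses Python with PyVISA; the instrument address "GPIB0::12::INSTR" and the timeout value are illustrative assumptions, not details from the original text.

```python
import pyvisa

# A minimal sketch of talking to a bench instrument over IEEE-488 (GPIB)
# via the PyVISA library. The resource address below is hypothetical; the
# real one depends on the controller and the instrument's bus address.
rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::12::INSTR")
inst.timeout = 2000  # milliseconds to wait for a reply

# "*IDN?" is the standard IEEE-488.2 identification query.
print(inst.query("*IDN?"))
```

The same API covers RS-232 instruments through serial resource names, e.g. `rm.open_resource("ASRL1::INSTR")`, which is exactly the benefit of standardizing the bus and the interfacing.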
Advanced Architecture Computers
Published in Hojjat Adeli, Supercomputing in Engineering Analysis, 2020
Peripherals:
- Disk subsystems (1-2 controllers, 1-8 drives): P64/40 (850 Mbytes to 6.8 Gbytes); P64/20 (255 Mbytes to 2.0 Gbytes, removable).
- Mass storage subsystem: P64/110 (128 Mbytes to 15.7 Gbytes, at up to 22 Mbytes/s).
- I/O subsystem: P64/210 (high-speed interface to disks, tapes, and graphics terminals, allowing files to be shared with a VAX front-end).
Performance analysis for Bernoulli feedback queues subject to disasters: a system with batch Poisson arrivals under a multiple vacation policy
Published in Quality Technology & Quantitative Management, 2023
George C. Mytalas, Michael A. Zazanis
Disasters may, for instance, represent Distributed Denial of Service (DDoS) attacks on the servers of Storage Area Networks (SANs) used to provide highly reliable mass storage of data. When these DDoS attacks occur, they render network resources and data unavailable to their intended clients. The client requests affected by the DDoS correspond to the customers removed by the disaster. This application is discussed in Kim & Lee (2014), who analyze a queueing system with disasters and server breakdowns using the supplementary variable technique. Communication systems using intrinsically unreliable channels subject to clearing events may also be modelled as queues with disasters. In manufacturing systems, disasters may represent catastrophic failures that cause machine breakdowns and damage work-in-process inventory.
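To illustrate the disaster mechanism itself (every customer in the system is removed at a disaster epoch), the sketch below simulates a plain Markovian single-server queue with exponentially distributed disaster times. It deliberately omits the batch Poisson arrivals, Bernoulli feedback, and multiple vacations analyzed in the paper, and all rates are hypothetical values chosen only for the example.

```python
import random

def simulate_queue_with_disasters(lam=1.0, mu=1.5, delta=0.05,
                                  horizon=10_000.0, seed=42):
    """Single-server Markovian queue in which 'disasters' (rate delta)
    instantaneously remove every customer present. Returns the fraction
    of arrivals lost to disasters and the time-average number in system.
    All parameters are hypothetical illustration values."""
    rng = random.Random(seed)
    t, n = 0.0, 0                       # current time, customers in system
    arrived = removed = 0
    area = 0.0                          # integral of n(t) dt
    while t < horizon:
        service_rate = mu if n > 0 else 0.0
        total = lam + service_rate + delta
        dt = rng.expovariate(total)     # time to the next event
        area += n * dt
        t += dt
        u = rng.random() * total        # pick the competing event
        if u < lam:                     # arrival
            n += 1
            arrived += 1
        elif u < lam + service_rate:    # service completion
            n -= 1
        else:                           # disaster: clear the whole system
            removed += n
            n = 0
    return removed / max(arrived, 1), area / t

loss_frac, mean_n = simulate_queue_with_disasters()
print(f"fraction removed by disasters ~ {loss_frac:.3f}, "
      f"mean number in system ~ {mean_n:.3f}")
```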
A virtual geographic environment for dynamic simulation and analysis of tailings dam failure
Published in International Journal of Digital Earth, 2021
Dayu Yu, Liyu Tang, Fan Ye, Chongcheng Chen
For the same purpose of visualizing geographic space, the concepts of digital earth and virtual geographic environments (VGEs) were put forward by Gore (1998) and by Lin and Gong (2001), respectively. Both emphasize the use of information technology to digitally reproduce the real geographic environment. Of the two, digital earth is dedicated to digitizing the entire space–time evolution of the Earth's environment to form a virtual globe (Guo, Liu, and Zhu 2010). Realizing a complete digital earth is a long-term goal; given the constraints of current technology, most digital earth applications developed so far are built on mass storage technology and use earth observation (EO) data to describe the Earth in 2-D or 2.5-D at multiple resolutions, scales, and times. The term VGE arose because visual environments can likewise be used to represent geographic environments and to simulate the dynamic processes that take place within them.
Airport pavement responses obtained from wireless sensing network upon digital signal processing
Published in International Journal of Pavement Engineering, 2018
Zejiao Dong, Xianyong Ma, Xianzhi Shao
Therefore, data acquisition and utilisation of pavement responses from in situ sensors is an effective tool for understanding pavement mechanical behaviour and evaluating pavement performance. However, fast processing, compression, and further mining of continuous monitoring data are of significant importance given the sheer volume of data obtained, especially for strain/stress information collected at high frequency (generally more than 500 Hz for real traffic). These data demand massive storage space, and their volume increases sharply with measurement time. It is urgent to explore an effective way to process these data and extract critical information faster. Digital signal processing (DSP) methodologies are reliable means that are widely applied to monitoring-data processing in civil engineering (Dai and Van Deusen 1996, Burnham et al. 2007).
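As a hedged illustration of DSP-based compression of such records, the sketch below low-pass filters and decimates a synthetic 500 Hz strain-like signal with SciPy. The cutoff frequency, decimation factor, and synthetic axle-passage pulse are assumptions chosen only to show the pattern, not values or methods from the study.

```python
import numpy as np
from scipy import signal

FS = 500.0  # assumed sampling rate, Hz (matching the >500 Hz figure above)

# Synthetic stand-in for a strain record: one slow axle-passage pulse
# buried in sensor noise.
t = np.arange(0.0, 10.0, 1.0 / FS)
strain = (np.exp(-((t - 5.0) / 0.2) ** 2)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

# Zero-phase low-pass filter (4th-order Butterworth, 20 Hz cutoff),
# then decimate by 10x: a common first step to compress continuous
# records while preserving the load-response pulse shape.
b, a = signal.butter(4, 20.0, btype="low", fs=FS)
smoothed = signal.filtfilt(b, a, strain)
compressed = smoothed[::10]  # 50 Hz record, 10x fewer samples to store

print(f"retained {compressed.size} of {strain.size} samples; "
      f"peak response = {compressed.max():.3f}")
```

The tenfold reduction here is only a baseline; in practice the cutoff and decimation factor would be tuned to the spectral content of the measured responses.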