Basic IT for Radiographers
Published in Alexander Peck, Clark’s Essential PACS, RIS and Imaging Informatics, 2017
Processor: the heart of every workstation is the processor. In its simplest form, this device takes collections of digital inputs (0s and 1s) and processes them into outputs according to the instructions of the programs currently running. The speed of the processor directly influences the speed the user perceives when using the workstation.

Memory: in this context, random access memory (RAM) is a high-speed store for items currently in use. The more RAM available, the more items (programs, instructions, user data, etc.) can be used and manipulated at the same time. Applications such as three-dimensional (3D) reconstructions require larger amounts of RAM than viewing a single plain radiographic image. Equated to a human task, RAM is equivalent to short-term memory – some people can remember longer sequences of numbers than others. RAM is measured in gigabytes (GB).

Graphics card: a second processor with extra memory, dedicated to displaying images. It is needed because modern applications require different mathematical operations than the standard processor is designed for.

Storage: either a hard drive (a spinning magnetic disk) or a solid state drive (a miniaturised internal device, similar in principle to a large, fast memory card) where data are stored even when powered off.
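As a small illustration of the storage figure above, Python's standard library can report a drive's capacity in gigabytes; the path `/` is an assumption for a Unix-like system, not something from the text:

```python
import shutil

def capacity_gb(path="/"):
    """Return total, used and free space of the drive holding `path`, in GB (10^9 bytes)."""
    usage = shutil.disk_usage(path)
    return (usage.total / 1e9, usage.used / 1e9, usage.free / 1e9)

total, used, free = capacity_gb("/")
print(f"total={total:.1f} GB, used={used:.1f} GB, free={free:.1f} GB")
```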
Introduction to Video Compression
Published in Cliff Wootton, A Practical Guide to Video and Audio Compression, 2005
In the abbreviations we use, note that uppercase B refers to bytes and lowercase b to bits, so GB is gigabytes (not gazillions of bytes). When memory sizes step up through each increment (kilo, mega, giga), the multiplier 1000 is replaced by the nearest equivalent base-2 number: we multiply by 1024 rather than 1000 to get kilobytes. As you learn the numbers represented by powers of 2, you will start to see patterns appearing in computer science, and they will help you guess the correct value to choose when setting parameters.
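The base-2 multipliers described above can be sketched in a few lines of Python (the function names here are illustrative, not from the book):

```python
# Binary (base-2) memory units step in powers of 1024, not 1000.
UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def to_bytes(value, unit):
    """Convert a size in binary units to bytes, e.g. 8 GB -> 8589934592."""
    return value * UNITS[unit]

def to_bits(value, unit):
    """Uppercase B is bytes; multiply by 8 to get bits (lowercase b)."""
    return to_bytes(value, unit) * 8

print(to_bytes(1, "KB"))  # 1024 bytes in a (binary) kilobyte
print(to_bytes(8, "GB"))  # 8589934592
```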
Satellites
Published in Mohammad Razani, Commercial Space Technologies and Applications, 2018
Ikonos can acquire data over almost any area of the Earth’s surface because it is equipped with an onboard recorder. This recorder can hold 64 GB of data, which is approximately 26 full images of both Pan and MS data. A network of ground receiving stations owned by Space Imaging’s affiliates is being constructed to enable direct downlinking of data in many areas.
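As a rough check on the figures above, 64 GB across roughly 26 combined Pan and MS images implies about 2.5 GB per image pair; this is a back-of-envelope estimate, not a figure stated in the text:

```python
recorder_gb = 64   # onboard recorder capacity
full_images = 26   # approximate number of full Pan + MS image pairs it holds

per_image_gb = recorder_gb / full_images
print(f"~{per_image_gb:.2f} GB per Pan+MS image pair")  # ~2.46 GB
```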
An Integrated Approach Improved Fast S-transform and SVD Noise Reduction for Classification of Power Quality Disruptions in Noisy Environments
Published in Electric Power Components and Systems, 2022
Hui Hwang Goh, Ling Liao, Dongdong Zhang, Wei Dai, Chee Shen Lim, Tonni Agustiono Kurniawan, Kai Chen Goh
To demonstrate the proposed technique’s efficiency in signal detection, the proposed approach, the classic ST method, and the DRST method are used to detect the same disturbance at the same sampling frequency but with different total sampling points N (i.e. different sampling times). The suggested method’s detection time is the total time for signal denoising, disturbance location detection, and IFST-based detection. All experiments were conducted on the same apparatus: a computer equipped with an Intel Core i5-6200U processor and 8.0 GB of random access memory (RAM). Table 4 compares the detection times of the various strategies as N varies. As demonstrated in Table 4, the detection time of the approach suggested in this study is significantly less than that of the ST and DRST methods, and the advantage of the proposed detection method becomes more apparent as N increases. The overall time necessary to classify disturbances using this method is 0.1860 sec, which includes the time required to detect the disturbance and the time required to assess the type of disturbance using the ruled decision tree.
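The stage-wise timing described above can be sketched as follows; the three stage functions are placeholders standing in for the paper's actual denoising, location-detection and IFST routines:

```python
import time

def denoise(signal):             # placeholder for SVD-based noise reduction
    return signal

def locate_disturbance(signal):  # placeholder for disturbance location detection
    return 0, len(signal)

def ifst_detect(signal, span):   # placeholder for IFST-based detection
    return "sag"

def timed_detection(signal):
    """Detection time = denoising + location detection + IFST-based detection."""
    t0 = time.perf_counter()
    clean = denoise(signal)
    span = locate_disturbance(clean)
    label = ifst_detect(clean, span)
    elapsed = time.perf_counter() - t0
    return label, elapsed

label, elapsed = timed_detection([0.0] * 1024)
print(label, f"{elapsed:.6f} s")
```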
Macro-scale complex form generation through a swarm intelligence-based model with urban morphology constants
Published in Architectural Science Review, 2021
In the simulation process, point geometries (the Locust output) were connected to the Trail component from the Kangaroo plug-in. This allowed the trajectories the agents followed during their motions to be represented as curves (output). A geometry component, which can create complex geometry, was connected to these curves. This geometry was based on NURBS surfaces. During the simulation, the NURBS surfaces for the building blocks were formed based on the midpoint of each curve. The simulation could be stopped at the desired time (Figure 4). Simulation results were registered in minutes and differed depending on the processing hardware. For this study, a portable workstation was used, equipped with an Intel Core i7-3820QM processor (8 MB cache, up to 3.70 GHz), 8 GB RAM, and a 500 GB, 7200 rpm hard disk drive.
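The midpoint step described above can be sketched outside Grasshopper; this is a minimal illustration treating each agent trajectory as a list of points, not the Kangaroo/Locust components or the NURBS surface construction themselves:

```python
def midpoint(trajectory):
    """Approximate the midpoint of a polyline trajectory (a list of (x, y, z)
    points) as the vertex halfway along its vertex list."""
    return trajectory[len(trajectory) // 2]

# Two toy agent trajectories; each midpoint would anchor a building-block surface.
trajectories = [
    [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0), (4, 0, 0)],
    [(0, 1, 0), (0, 2, 0), (0, 3, 0)],
]
anchors = [midpoint(t) for t in trajectories]
print(anchors)  # [(2, 0, 0), (0, 2, 0)]
```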
Evolutionary Multi-Objective Optimization Algorithm for Resource Allocation Using Deep Neural Network in 5G Multi-User Massive MIMO
Published in International Journal of Electronics, 2021
K. E. Purushothaman, V. Nagarajan
The simulation outcomes are then included to attain the effectiveness of the proposed resource allocation process. The base stations’ maximum transmit power on each subcarrier is, and the number of subcarriers is 600. The bandwidth of each subcarrier is 15 kHz (Li et al., 2018). The computation server performs the training procedure with four Intel Core i9 CPUs, four Intel Xeon E7-4800 processors, and 128 GB of random access memory. The testing outcomes are then obtained on a computer with an Intel Core i7-6500U processor and 8 GB of random access memory. The training and testing outputs are then produced by applying and channel realisations. The following section describes the comparative performance of the proposed system against some existing algorithms in terms of network parameters such as average throughput, fairness index and energy efficiency. The auto-encoder’s training parameters are a sigmoid transfer function, a maximum of 1000 epochs, and a learning rate of 0.001. Here, an adaptive auto-encoder is applied to develop this DNN model. The DNN has five hidden layers with 200 neurons each (Ahmed et al., 2019).
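A network of the stated shape (five hidden layers of 200 sigmoid neurons) can be sketched with NumPy; the input/output widths, random weights and forward pass below are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hidden, n_layers, n_out = 600, 200, 5, 600  # widths are assumptions

# Random weights for a forward-pass sketch; actual training would use
# backpropagation with learning rate 0.001 for up to 1000 epochs.
sizes = [n_in] + [n_hidden] * n_layers + [n_out]
weights = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(sizes, sizes[1:])]

def forward(x):
    """Pass an input vector through the sigmoid layers."""
    for w in weights:
        x = sigmoid(x @ w)
    return x

out = forward(rng.standard_normal(n_in))
print(out.shape)  # (600,)
```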