Innovations in Computer-Based Ability Testing
Published in Milton D. Hakel, Beyond Multiple Choice, 2013
The human factors review led to the opinion that, because of these differences, “human processing time” was one of the most important variables likely to influence test scores when the battery was moved to another computer. Simply put, if some characteristics of the new computer had the effect of increasing the processing time available to examinees within the elapsed time limits for each test, scores on that test were likely to increase. Both human factors characteristics and computer performance characteristics might have this effect. Examples of salient human factors characteristics include display size, luminance contrast, response key size and spacing, keyboard position, and distance from the display. Examples of salient computer performance characteristics include processor speed, video display speed, disk access speed, and data transfer rate.
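As a rough numeric illustration of this point (the time limit, per-item machine overhead, and item count below are assumptions, not values from the study), the examinee's effective processing time is simply the elapsed limit minus the machine overhead accumulated over the items:

```python
# Hypothetical illustration: within a fixed elapsed time limit, any reduction in
# machine overhead (display refresh, key polling, disk access) leaves more
# "human processing time" for the examinee.

def human_processing_time(elapsed_limit_s, machine_overhead_s_per_item, n_items):
    """Time actually available to the examinee within the elapsed limit."""
    return elapsed_limit_s - machine_overhead_s_per_item * n_items

# Assumed figures: a 5-minute test of 60 items, on a slower vs. a faster computer.
old_pc = human_processing_time(elapsed_limit_s=300, machine_overhead_s_per_item=0.8, n_items=60)
new_pc = human_processing_time(elapsed_limit_s=300, machine_overhead_s_per_item=0.2, n_items=60)
print(f"old: {old_pc:.0f} s, new: {new_pc:.0f} s, gain: {new_pc - old_pc:.0f} s")
```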
Design of Molecular Integrated Circuits
Published in Sergey Edward Lyshevski, Molecular Electronics, Circuits, and Processing Platforms, 2018
Advanced computer architectures (beyond the von Neumann architecture) can be devised and implemented to guarantee superior processing, reconfigurability, robustness, networking, and so forth. In the von Neumann computer architecture, the CPU executes sequences of instructions and operands, which are fetched by the program control unit (PCU), executed by the data processing unit (DPU), and then placed in memory. Caches (high-speed memory into which data is copied when it is retrieved from the RAM, improving overall performance by reducing the average memory access time) are used. The CPU may have more than one processor and coprocessor with various execution units and multilevel instruction and data caches. These processors can share caches or have their own. The datapath contains ICs that perform arithmetic and logical operations on words such as fixed- or floating-point numbers. CPU design involves trade-offs among hardware/software requirements, performance, and affordability. The CPU is usually partitioned into control and datapath units. The control unit selects and sequences the data processing operations. The core interface unit is a switch that can be implemented as autonomous cache controllers operating concurrently and feeding a specified number (64 or 128) of bytes of data per cycle. This core interface unit connects all controllers to the data or instruction caches of the processors. Additionally, the core interface unit accepts and sequences information from the processors. A control unit is responsible for controlling data flow between the controllers that regulate incoming and outgoing information flows. The interface is accomplished by means of input/output devices and units. On-chip debugging, error detection, sequencing logic, self-test, monitoring, and other units must be integrated to control a pipelined computer. Computer performance depends on the architecture, organization, and hardware components. Figure 4.11 illustrates the conventional computer architecture.
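A minimal sketch of the fetch-decode-execute cycle described above; the three-instruction ISA and the memory layout are illustrative assumptions rather than anything specified in the chapter:

```python
# Minimal von Neumann machine: program and data share one memory, instructions
# are fetched (program control unit), executed (data processing unit), and
# results are placed back in memory.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3   # made-up opcodes for illustration

def run(memory):
    pc, acc = 0, 0                      # program counter and accumulator
    while True:
        opcode, operand = memory[pc]    # fetch
        pc += 1
        if opcode == LOAD:              # execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc       # result written back to memory
        elif opcode == HALT:
            return memory

# Addresses 0-3 hold the program, addresses 10-12 hold the data.
memory = {0: (LOAD, 10), 1: (ADD, 11), 2: (STORE, 12), 3: (HALT, 0),
          10: 2, 11: 3, 12: 0}
print(run(memory)[12])   # prints 5
```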
A Review of the EMI Effect on Natural Convection Heatsinks
Published in IETE Journal of Research, 2023
Abdullah Genc, Habib Dogan, Ibrahim Bahadır Basyigit, Selcuk Helhel
The parallel-plate fin heatsinks shown in Figure 3, which are easy to manufacture, are commonly utilized in circuit design applications. The design stage is performed for a typical PCB heatsink with a CPU processor. The simulation set-up consists of the heatsink body (base and fins), a coaxial waveguide, and a ground plane that represents the PCB ground, as in Figure 3(a). The boundary condition is set as open space, as in Figure 3(b). The geometric design parameters of the heatsink in Figure 3(c) are the number of fins (n), length (l), fin height (h), width (w), base height (b), and fin thickness (t). The heatsink is fed by a coaxial waveguide excited with the TM01 mode, as in Figure 3(d). CST Microwave Studio, based on the Finite Integration Technique (FIT), is used in the simulation step with a time-domain solver. The mesh shape and maximum mesh length are set to hexahedral and λmin/20, respectively. The heatsink geometry and size change not only the number of mesh cells but also the computing time, which depends on computer performance. In the simulation design, it should be taken into account that the solver accuracy is at least 10⁻⁵ [42–45].
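A back-of-the-envelope sketch of the λmin/20 meshing rule mentioned above; the 10 GHz upper frequency and the heatsink dimensions are assumptions chosen only to show how the maximum mesh edge and the cell count scale:

```python
# Rough estimate of the hexahedral mesh implied by a lambda_min/20 rule.
c0 = 299_792_458.0            # speed of light in vacuum, m/s

f_max_hz = 10e9               # assumed highest simulated frequency
lambda_min_m = c0 / f_max_hz  # shortest free-space wavelength
max_mesh_m = lambda_min_m / 20

# Assumed bounding box of the heatsink model: w x l x (b + h), in metres.
w, l, b, h = 0.05, 0.05, 0.005, 0.02
approx_cells = (w / max_mesh_m) * (l / max_mesh_m) * ((b + h) / max_mesh_m)

print(f"max hexahedral edge ~ {max_mesh_m * 1e3:.2f} mm")
print(f"~ {approx_cells:,.0f} cells before local refinement")
```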
A novel geostatistical index of uncertainty for short-term mining plan
Published in CIM Journal, 2023
G. M. C. Dias, M. M. Rocha, V. M. Silva
In the twenty-first century, computational limitations have been overcome, and computer performance has improved by a factor of 1.7–76 trillion compared to manual computing (Nordhaus, 2007). The ability to use multiple cores and graphics processing units means that it is not necessary to compromise on complexity to consider all realizations in downstream calculations, that is, pass all realizations through a transfer function to construct a distribution of responses for resource estimates (Deutsch, 2018). However, working with multiple scenarios remains a shortcoming in the industry. Simple summary models could be useful, such as modeling the probability of meeting an economic threshold or modeling the local variance. For example, the realizations could be passed through a decision tree structure to help support a decision. Another approach is to collapse uncertainty into a few summary measures and base the plan on them. These approaches will never be as good as using all the realizations simultaneously, but they provide a practical solution using available software (Deutsch, 2018). Owing to these difficulties, geostatistical simulations are commonly overlooked in real mining industry applications (Dominy, Noppé, & Annels, 2002; Ortiz, Magri, & Libano, 2012; Verly, 2005; Yamamoto, 2001).
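A minimal sketch of the two summary models mentioned above (the probability of meeting an economic threshold and the local variance), computed block by block across simulated realizations; the synthetic grades and the 1.2 g/t cutoff are assumptions used only for illustration:

```python
import numpy as np

# Synthetic stand-in for a set of geostatistical realizations:
# one row per realization, one column per block (grades in g/t).
rng = np.random.default_rng(0)
n_real, n_blocks = 100, 500
grades = rng.lognormal(mean=0.0, sigma=0.5, size=(n_real, n_blocks))

cutoff = 1.2                                        # assumed economic threshold, g/t
p_above_cutoff = (grades >= cutoff).mean(axis=0)    # probability of meeting the threshold
local_variance = grades.var(axis=0)                 # local uncertainty summary per block

# A short-term plan could then rank blocks by expected grade and screen them by risk.
expected_grade = grades.mean(axis=0)
plan = np.argsort(-expected_grade)[:50]
print(p_above_cutoff[plan].round(2))
```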
Modulus backcalculation methodology based on full-scale testing road and its rationality and feasibility analysis
Published in International Journal of Pavement Engineering, 2022
Chunlong Xiong, Jiangmiao Yu, Xiaoning Zhang, Evgeniy Korolev, Shekhovtsova Svetlana, Bo Chen, Fuda Chen, E. Yang
The use of ill-suited mathematical algorithms is the most discussed and researched problem in modulus backcalculation, and it has been proved to result in low efficiency, multiple or no solutions, and poor stability of the backcalculation results (Plati et al. 2020). Low efficiency is related to the amount of data the algorithm handles, the difficulty of algorithm convergence, and computer performance. As for the number of solutions, it is not always possible to obtain a corresponding and unique set of backcalculated moduli for the pavement structural layers from a given measured deflection basin, because most backcalculation algorithms are only locally convergent. When the seed modulus is far from the optimal solution, no solution may be found; and when the algorithm has more than one locally convergent solution domain, non-unique solutions will be obtained from different seed values (Ghanizadeh et al. 2020). Owing to these multiple solutions, the backcalculated moduli inevitably have stability problems, and a backcalculated modulus may be optimal only in the mathematical sense without conforming to engineering reality (Zaabar et al. 2014).
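A minimal sketch of this seed-dependence under a deliberately simplified assumption: the "forward model" below is a toy placeholder, not a layered-elastic solver, and it is constructed so that many modulus pairs reproduce the same deflection basin, which is exactly the non-uniqueness the passage describes:

```python
import numpy as np
from scipy.optimize import least_squares

offsets_m = np.array([0.0, 0.3, 0.6, 0.9, 1.2])        # assumed sensor offsets, m

def toy_deflections(E):                                  # E = [E1, E2] in MPa
    # Placeholder response: deflections depend only on a combined stiffness,
    # so the inverse problem is ill-posed (many E pairs fit equally well).
    k = 0.7 * E[0] + 0.3 * E[1]
    return 500.0 / (k * (1.0 + offsets_m))

# Synthetic "measured" basin generated from one arbitrary modulus pair.
measured = toy_deflections(np.array([3000.0, 200.0]))

# Two different seed moduli: a local least-squares fit converges to two
# different modulus sets, both matching the basin almost perfectly.
for seed in ([1000.0, 1000.0], [5000.0, 50.0]):
    fit = least_squares(lambda E: toy_deflections(E) - measured, x0=seed,
                        bounds=([10.0, 10.0], [20000.0, 20000.0]))
    print(f"seed {seed} -> E = {fit.x.round(0)}, residual = {fit.cost:.2e}")
```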