A Review on Probabilistic CMOS (PCMOS) Technology: From Device Characteristics to Ultra-Low-Energy SOC Architectures
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
Krishna V. Palem, Lakshmi N. Chakrapani, Bilge E. S. Akgul, Pinar Korkmaz
This chapter reviews a novel technique, referred to as probabilistic CMOS (PCMOS) technology, as a promising approach to addressing the aforementioned challenges and as a dramatic shift from previous work. In devices based on this technology, noise is harnessed as a resource to implement CMOS devices that exhibit probabilistic behavior: each device computes correctly with a probability p, where p is a design parameter, and, by design, computes incorrectly with probability (1 − p). The foundations of PCMOS technology are rooted in the physics of computation, algorithms, and information theory. Using techniques derived from the physics of computation and information theory, Palem (2003) showed that the thermodynamic cost of computing a bit of information is directly related to its probability p of being correct; the proof uses purely entropic arguments derived from the second law of thermodynamics (Palem 2005). Further, using an abstract model of computation, the randomized bit-level random access machine (RaBRAM) (Palem 2003), and the example of the distinct vector problem (Palem 2003), it has been demonstrated that such energy savings at the switching level can be harnessed at the application level to construct a probabilistic algorithm [the value amplification algorithm (Palem 2003)] that is more energy-efficient than the best possible deterministic algorithm.
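To make the switching-level trade-off concrete, here is a minimal Python sketch, not taken from the chapter, of a probabilistic switch that yields the intended bit with probability p, together with an energy lower bound of the form kT ln(2p) for p > 1/2, in the spirit of Palem's entropic argument. The exact form of the bound and the 300 K operating temperature are assumptions made for illustration.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def probabilistic_bit(correct_value: int, p: float) -> int:
    """Model a PCMOS switch: return the intended bit with probability p,
    its complement with probability 1 - p (a noise-induced error)."""
    return correct_value if random.random() < p else correct_value ^ 1

def switching_energy_bound(p: float, temperature: float = 300.0) -> float:
    """Assumed lower bound on energy per probabilistic switching step,
    k*T*ln(2p), valid for p in (0.5, 1]."""
    return K_B * temperature * math.log(2.0 * p)

# Trading correctness for energy: lowering p shrinks the bound toward zero.
for p in (1.0, 0.99, 0.9, 0.75):
    print(f"p = {p:.2f}: E >= {switching_energy_bound(p):.3e} J")
```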
BiCMOS Process Simulations
Published in Chinmay K. Maiti, Introducing Technology Computer-Aided Design (TCAD), 2017
Process simulation is especially helpful in the initial phase of technology development, and as device lots become more and more expensive, process modeling grows correspondingly important. Process simulation and modeling are increasingly sophisticated, but accuracy remains a problem, and there is generally a time lag between the introduction of a particular process and its accurate modeling. Modeling of front-end processes faces challenges caused by the continuous reduction of implant energies and of the thermal budgets of annealing schemes, which have moved from soak anneals to the spike anneals now used in production. The complexity of physical models is a major factor in process simulation: simplified physics minimizes computation time, but with technology scaling the need for ever more accurate doping and stress profiles has grown, and more complex physical models are added at each new generation. Moreover, the best available models are usually too complicated and slow to be used in multidimensional process simulation, so compromises often have to be made. Process simulation has to provide insight during design, optimization guidelines during transfer to manufacturing, and debugging support during large-scale manufacturing.
Gas-Turbine Exhaust Diffusers
Published in Bijay K. Sultanian, Logan's Turbomachinery, 2019
For a physics-based computation of Cp for a diffuser with nonuniform flow properties at its inlet and outlet, we propose the following methods for incompressible and compressible flows. These methods are ideally suited to post-processing CFD results to compute Cp.
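The proposed methods themselves are not reproduced in this excerpt. As an illustration only, the sketch below post-processes discrete CFD profile data using the conventional incompressible pressure-recovery definition Cp = (p̄2 − p̄1)/(p̄01 − p̄1); the choice of mass-flux-weighted averaging, and all function names, are assumptions and not necessarily the averaging scheme the authors propose.

```python
import numpy as np

def mass_weighted_average(q, rho, v, area):
    """Mass-flux-weighted average of quantity q over discrete CFD cells:
    sum(q * rho * v * A) / sum(rho * v * A)."""
    mdot = rho * v * area
    return np.sum(q * mdot) / np.sum(mdot)

def cp_incompressible(p1, p2, rho1, v1, a1, rho2, v2, a2):
    """Cp = (p2_bar - p1_bar) / (p01_bar - p1_bar) for incompressible
    flow with nonuniform inlet (1) and outlet (2) profiles."""
    p1_bar = mass_weighted_average(p1, rho1, v1, a1)
    p2_bar = mass_weighted_average(p2, rho2, v2, a2)
    p01 = p1 + 0.5 * rho1 * v1**2            # local inlet total pressure
    p01_bar = mass_weighted_average(p01, rho1, v1, a1)
    return (p2_bar - p1_bar) / (p01_bar - p1_bar)

# Synthetic two-cell inlet/outlet profiles (values are illustrative only).
p1 = np.array([101000.0, 100800.0]); rho1 = np.array([1.2, 1.2])
v1 = np.array([60.0, 50.0]);         a1 = np.array([0.01, 0.01])
p2 = np.array([102000.0, 101900.0]); rho2 = np.array([1.2, 1.2])
v2 = np.array([19.0, 17.7]);         a2 = np.array([0.03, 0.03])
print(f"Cp = {cp_incompressible(p1, p2, rho1, v1, a1, rho2, v2, a2):.3f}")
```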
Thermodynamic cost of edge detection in artificial neural network (ANN)-based processors
Published in International Journal of Parallel, Emergent and Distributed Systems, 2021
In this work, we study the thermodynamic cost of edge detection in ANN-based processors using a multilayer perceptron (MLP). We provide a comparative analysis of the fundamental energy bounds of the edge-detection task, as well as the cost associated with training the network, and simulate the results using Python and MATLAB. The comparison across three processors based on different architectures shows that a trained ANN requires orders of magnitude less energy to perform the edge-detection task. Our results support the trade-off between energy efficiency and ‘general-purposeness’ in processors, which is deeply rooted in the physics of computation: even if processors were operated under ideal conditions, there would still be an underlying inefficiency set by the dynamics of information processing in the computing approach. Here, we show that special-purposeness in ANN-based structures can help minimise this cost. In addition to the structure of the network, we also study the effect of data structure on the amount of energy dissipated to detect the edges, showing that the randomness of the data set as well as the size of the image play a role in the amount of energy dissipated to train the ANNs.
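As a rough illustration of what a fundamental energy bound for such a task can look like (not the paper's actual calculation), the sketch below applies the Landauer limit of k_B T ln 2 per irreversibly erased bit to a binary edge map, treating each output pixel as one erased bit; that mapping and the 300 K temperature are modelling assumptions.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(num_bits: int, temperature: float = 300.0) -> float:
    """Landauer limit: minimum dissipation of k*T*ln(2) joules per
    irreversibly erased bit at the given temperature."""
    return num_bits * K_B * temperature * math.log(2.0)

# Illustrative only: one logically erased bit per pixel of a binary
# edge map (a modelling assumption, not the paper's method).
for side in (64, 256, 1024):
    bits = side * side
    print(f"{side}x{side} edge map: E >= {landauer_bound(bits):.3e} J")
```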
Quantifying quantumness of correlations using Gaussian Rényi-2 entropy in optomechanical interfaces
Published in Journal of Modern Optics, 2018
J. El Qars, M. Daoud, R. Ahl Laamara
where $S_{\alpha}(\rho) = \frac{1}{1-\alpha}\ln \mathrm{Tr}(\rho^{\alpha})$ with $\alpha \geq 0$ and $\alpha \neq 1$, which is the form in which these entropies are usually written [45]. These entropies are continuous, non-negative, invariant under the action of unitary operations, additive on tensor-product states, and reduce in the limit $\alpha \to 1$ to the conventional von Neumann entropy, defined by $S(\rho) = -\mathrm{Tr}(\rho \ln \rho)$ [46]. In quantum information theory, this famous entropy captures the amount of quantum information contained in an ensemble of a large number of independent and identically distributed (i.i.d.) copies of the state [47], and it has been used intensively in fields such as communication and coding theory [48], statistical physics [49], quantum computation [50], and statistics and related fields [51]. In addition, the von Neumann entropy has been widely used, under the well-known name entropy of entanglement, to quantify entanglement in miscellaneous areas of physics such as the theory of black holes [52], relativistic quantum field theory [53], spins of noninteracting electron gases [54], a three-level atom under the influence of a Kerr-like medium [55], harmonic lattice systems [56], the Lipkin-Meshkov-Glick model [57], and spin-orbital models [58]. We note that in the case of pure states, many entanglement measures (including the entanglement cost [59,60], the entanglement of formation (EoF) [59,61], the squashed entanglement [62], and the quantum discord [7,11]) coincide with the entropy of entanglement, the unique measure of entanglement in this case [3]. In contrast, some other quantifiers of entanglement, such as the negativity [63] and the logarithmic negativity [64], do not coincide with the entropy of entanglement even on pure states.
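For concreteness, the limiting statement above follows from a standard L'Hôpital argument, since both $\ln \mathrm{Tr}(\rho^{\alpha})$ and $1-\alpha$ vanish as $\alpha \to 1$:

```latex
\lim_{\alpha \to 1} S_{\alpha}(\rho)
  = \lim_{\alpha \to 1}
    \frac{\ln \operatorname{Tr}(\rho^{\alpha})}{1-\alpha}
  = \lim_{\alpha \to 1}
    \frac{\operatorname{Tr}(\rho^{\alpha}\ln\rho)
          / \operatorname{Tr}(\rho^{\alpha})}{-1}
  = -\operatorname{Tr}(\rho\ln\rho)
  = S(\rho),
```

using $\mathrm{Tr}(\rho) = 1$ in the last step.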