The composite professional
Published in Riadh Habash, Professional Practice in Engineering and Computing, 2019
The electromechanical age signaled the beginnings of telecommunications; it spans roughly 1840 to 1940. Charles Babbage (1791–1871) worked on his “difference engine”, a mechanical computer that could perform mathematical calculations. The world’s first computer algorithm was written by Ada Lovelace (1815–1852) in the UK. In 1890, Herman Hollerith used punch cards to help classify information for the US Census Bureau. John von Neumann (1903–1957) developed many concepts, including the “von Neumann architecture”. In the US, Alonzo Church (1903–1995) developed key concepts of computability, such as the lambda calculus. In the UK, Alan Turing (1912–1954) introduced many core concepts of computer science, including the “Turing machine” and the “Turing test”. Grace Brewster Murray Hopper (1906–1992), an American computer scientist and US Navy rear admiral, was a pioneer of computer programming who invented one of the first compilers. One of the first and most famous electronic computers, the Electronic Numerical Integrator and Computer (ENIAC), was built in the 1940s, and the first hard-disk drive, weighing a ton and storing five megabytes, was built in 1956.
Introduction to computer architecture
Published in Joseph D. Dumas, Computer Architecture, 2016
The goal of the original von Neumann machine was to numerically solve scientific and engineering problems involving differential equations, but it has proven remarkably adaptable to many other classes of problems, from weather prediction to word processing. It is so versatile that the vast majority of computer systems today are quite similar, although much faster. The main factor distinguishing the von Neumann architecture from previous machines, and the primary reason for its success and adoption as the dominant computing paradigm, was the stored program concept. Because the machine receives its instructions from an (easily modified) program in memory rather than from hard wiring, the same hardware can easily perform a wide variety of tasks. The next four chapters take a much more detailed look at each of the major subsystems used in von Neumann–type machines. Memory is discussed in Chapter 2, the CPU in Chapters 3 and 4, and I/O in Chapter 5.
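The stored program concept can be sketched as a toy interpreter in which instructions and data share one memory, so changing the program means nothing more than writing different values into that memory. The instruction set (LOAD/ADD/STORE/HALT) and its encoding below are illustrative inventions for this sketch, not any real machine's ISA:

```python
# Minimal sketch of a stored-program (von Neumann) machine.
# The instruction names and encoding are illustrative, not a real ISA:
# program and data live side by side in the single `memory` list.

def run(memory):
    acc = 0          # accumulator register
    pc = 0           # program counter
    while True:
        op, arg = memory[pc]      # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":          # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":         # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":       # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program occupies cells 0-3; its data lives in cells 4-6 of the same memory.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
run(mem)
print(mem[6])   # -> 5: the same hardware computes whatever the program says
```

Swapping in a different instruction sequence retargets the identical "hardware" (the interpreter loop) to a new task, which is exactly the flexibility the stored program concept buys over hard wiring.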
Real-Time Expert Systems
Published in Robert F. Hodson, Abraham Kandel, Real-Time Expert Systems Computer Architecture, 1991
Robert F. Hodson, Abraham Kandel
The von Neumann computer architecture has performed admirably for a large number of varied applications. Many real-time applications currently use von Neumann–style microprocessor control systems. Unfortunately, the von Neumann architecture has not adapted as readily to expert system applications. Processing in expert systems is primarily non-numeric and does not run efficiently on the conventional von Neumann machine. Symbolic knowledge representations used in expert systems are fundamentally different from those of numeric processing. Symbolic operations are memory intensive. A von Neumann architecture presents a processor-to-memory bottleneck when intensive irregular memory accesses are made [Hwang87]. Additionally, the von Neumann architecture is fundamentally sequential in nature and does not exploit concurrency to increase performance.
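The contrast between regular numeric access and irregular symbolic access can be sketched as follows; the workloads here are hypothetical stand-ins, not from the chapter. A numeric task streams through a contiguous array, while a symbolic structure is a web of references, so traversing it is pointer chasing: each fetch depends on the result of the previous one, and the single processor-memory channel must serialize them.

```python
# Hypothetical illustration: regular numeric access vs. irregular
# symbolic access (pointer chasing), the pattern behind the
# processor-to-memory bottleneck described in the text.

import random

N = 10_000

# Numeric workload: a contiguous array, visited in order.
numbers = list(range(N))
total = sum(numbers)            # regular, predictable memory accesses

# Symbolic workload: a linked chain of cells laid out in shuffled
# order, so each lookup depends on the value fetched just before it.
order = list(range(N))
random.shuffle(order)
next_cell = {}
for a, b in zip(order, order[1:]):
    next_cell[a] = b            # cell a points to cell b

count, cell = 0, order[0]
while cell in next_cell:        # every step waits on the prior fetch
    cell = next_cell[cell]
    count += 1

print(total, count)             # both touch N cells; the access patterns differ
```

Both loops touch the same number of cells, but the chained lookups cannot be streamed or prefetched the way the array scan can, which is why memory-intensive symbolic processing fares poorly on a conventional von Neumann machine.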
Emerging memristive neurons for neuromorphic computing and sensing
Published in Science and Technology of Advanced Materials, 2023
Zhiyuan Li, Wei Tang, Beining Zhang, Rui Yang, Xiangshui Miao
Over the past few decades, significant progress has been achieved in artificial intelligence (AI) as a result of the availability of big data, increased computational power, and developments in machine learning, with even more dramatic changes envisioned in the future [1–4]. However, the rapid development of AI technology poses considerable challenges for the underlying electronic hardware and systems, particularly their energy consumption [5]. Traditional mainstream hardware platforms are based on the von Neumann architecture, in which computing and storage units are physically separated, so computing requires continuous swapping of data between the units. This architecture is efficient for precision computing tasks, but becomes inefficient when handling the unstructured, data-intensive applications required for AI. Thus, it is urgent to develop new computing architectures and devices.
Biological function simulation in neuromorphic devices: from synapse and neuron to behavior
Published in Science and Technology of Advanced Materials, 2023
Hui Chen, Huilin Li, Ting Ma, Shuangshuang Han, Qiuping Zhao
With the continuous development of the Internet, the online world demands computers with more storage and faster processing for big data. However, these abilities are quickly approaching their theoretical limits. Because modern computing systems are built on the von Neumann architecture, a memory wall (a physical separation) forms between memory and processor, which hinders the speed of information processing. Moore’s law predicts that electronic devices should keep getting smaller in order to increase their storage density, but physical size limits and high energy consumption now prevent further scaling. Therefore, it is an urgent requirement to develop new devices beyond Moore’s law and the von Neumann bottleneck [1,2].
Resistive Random Access Memory: A Review of Device Challenges
Published in IETE Technical Review, 2020
Varshita Gupta, Shagun Kapur, Sneh Saurabh, Anuj Grover
It is well known that von Neumann architecture–based systems suffer from bandwidth constraints, and the resulting speed penalty, due to their separate processing and memory units. In-memory computing alleviates these limitations by integrating the processing unit and the memory [87,88]. This makes neuromorphic computing using RRAMs promising, owing to the possibility of in-memory computation [89,90]. Some synaptic devices based on oxides such as PrCaMnO (PCMO), HfOx, TaOx, TiOx, NiO, AlOx, WOx, etc. have been explored in the literature [91]. Additionally, the need to realize high-density neural networks has made RRAMs a potential candidate for these applications [92]. In recent times, hybrid combinations of transistors and RRAMs, such as 1T1R, 2T1R, and 4T1R structures, have been gaining significant interest in these applications [89,93].
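The in-memory computation that makes RRAM crossbars attractive can be sketched as follows, assuming the standard analog scheme (not a detail given in this review): stored cell conductances G form the weight matrix, voltages V applied to the rows form the input vector, and by Ohm's and Kirchhoff's laws each column current is the dot product I_j = Σ_i V_i · G_ij, so a vector-matrix multiply happens where the data is stored. The values and array size below are arbitrary illustrations.

```python
# Hedged sketch of in-memory computing on a resistive crossbar:
# each column current is the analog dot product of the row voltages
# with that column's stored conductances (Ohm + Kirchhoff).
# Conductance values and dimensions are illustrative only.

def crossbar_mac(voltages, conductances):
    """Column currents of a crossbar: one vector-matrix multiply in memory."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# 3 input rows x 2 output columns of cell conductances (arbitrary units).
G = [[0.25, 0.5],
     [0.5,  0.25],
     [0.25, 0.25]]
V = [1.0, 2.0, 4.0]             # applied row voltages (the input vector)

print(crossbar_mac(V, G))       # -> [2.25, 2.0], computed where the data is stored
```

Because every multiply-accumulate is performed inside the memory array in a single step, no weight ever crosses a processor-memory bus, which is precisely the bandwidth bottleneck that in-memory computing removes.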