Reconfigurable Binary Neural Networks Hardware Accelerator for Accurate Data Analysis in Intelligent Systems
Published in Kavita Taneja, Harmunish Taneja, Kuldeep Kumar, Arvind Selwal, Eng Lieh Ouh, Data Science and Innovations for Intelligent Systems, 2021
Acceleration is a term used to describe tasks being offloaded to devices and hardware that are specialized to handle them efficiently. Accelerators suit any repetitive, compute-intensive key algorithm and can range from a small functional unit to a larger functional block such as motion estimation or video processing. Hardware acceleration performs crucial functions more efficiently than is possible on a general-purpose computer: it offloads certain processes onto hardware best equipped for them, boosting the performance of the system. It exploits the graphics or sound processing units of a computing system to increase performance for certain applications. In most computers, the CPU alone may not be powerful enough to handle such complex tasks, and this is where hardware acceleration comes into play. Sound cards are designed for processing and recording high-quality audio; graphics cards can similarly be used by hardware acceleration for faster, higher-quality video playback and are widely used in gaming applications. GPUs can also carry out complex mathematical computations faster than CPUs. In a general-purpose processor, instructions are executed sequentially, driven by fetch-and-decode control mechanisms designed for general-purpose algorithms. Deploying hardware acceleration for these kinds of tasks improves the execution of a specific algorithm by allowing greater concurrency, providing dedicated data paths for its data, and potentially reducing the overhead of instruction control. Modern processors are multi-core and often feature parallel computation units.
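The offloading pattern described above can be sketched in software terms as a dispatcher that routes a hot kernel to a specialized implementation when one is attached and falls back to the general-purpose path otherwise. This is an illustrative sketch, not code from the chapter; the function names are hypothetical, and the "accelerator" is stood in by a plain Python function where a real system would call into a driver or firmware.

```python
def dot_general(a, b):
    """General-purpose path: sequential multiply-accumulate,
    one instruction at a time, as on a plain CPU."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def dot_accelerated(a, b):
    """Stand-in for a specialized unit (e.g. a SIMD/GPU kernel).
    In a real system this would invoke dedicated hardware; here we
    model only the offloading structure, not the speedup."""
    return sum(x * y for x, y in zip(a, b))

def dot(a, b, accelerator=None):
    """Offload to the accelerator if one is attached, else fall back."""
    return (accelerator or dot_general)(a, b)

print(dot([1, 2, 3], [4, 5, 6]))                   # general path -> 32
print(dot([1, 2, 3], [4, 5, 6], dot_accelerated))  # offloaded path -> 32
```

Both paths compute the same result; the point of real acceleration is that the specialized path does so with greater concurrency and dedicated data paths.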
A novel and low-power wireless SOC design for pervasive bio-sensor
Published in Amir Hussain, Mirjana Ivanovic, Electronics, Communications and Networks IV, 2015
Jianhui Sun, Juntao Liu, Xinyang Liu, Yan Fan, Xinxia Cai, Tao Yin, Haigang Yang
The whole SoC is fabricated in an SMIC 180 nm low-power process, with a die area of about 5 mm × 5 mm. The power consumption of the whole SoC is controlled at a low level (see Figure 7). The mini-RISC CPU achieves its design goal of delivering the performance required for the biological sensing application at about 20 pJ per instruction. The energy consumption is reduced not only by the low supply voltage (logic at 1.0 V) but also by architectural optimization and power-gating/clock-gating (PG/CG) techniques. The whole digital part is partitioned into power islands that are switched on or off according to the working mode (see Figure 8). The hardware accelerator is both faster than the software implementation and more energy-efficient. The mini-RISC CPU runs at 5 MHz, and load/store instructions cost more than twice the energy of other instructions. The radio circuit, however, consumes the most energy, especially the RF transmitter (70 nJ/bit at 20 kS/s). With the CS method (compression ratio 20), the recovered signal remains adequate for perceptual identification (see Figure 2), and the RF transmitter's power consumption is reduced effectively (about 15-20× less energy). The SCC hardware accelerator satisfies the current application scenario, with both energy consumption and BER kept at an appropriate level.
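The radio-energy saving quoted above can be checked with back-of-the-envelope arithmetic using the figures in the text: 70 nJ/bit for the RF transmitter at 20 kS/s, and a CS compression ratio of 20. The ADC resolution (10 bits/sample) is an assumption, not stated in the excerpt; the saving factor is independent of it.

```python
# Figures from the text
TX_ENERGY_PER_BIT_NJ = 70     # RF transmitter cost, nJ per bit
SAMPLE_RATE_SPS = 20_000      # 20 kS/s
COMPRESSION_RATIO = 20        # CS compression ratio
# Assumption (not in the text): resolution of each sample
BITS_PER_SAMPLE = 10

# Uncompressed transmission: every sample's bits go over the radio
raw_bits_per_s = SAMPLE_RATE_SPS * BITS_PER_SAMPLE
raw_energy_uj_per_s = raw_bits_per_s * TX_ENERGY_PER_BIT_NJ / 1e3

# With CS, only 1/COMPRESSION_RATIO of the data is transmitted
cs_bits_per_s = raw_bits_per_s / COMPRESSION_RATIO
cs_energy_uj_per_s = cs_bits_per_s * TX_ENERGY_PER_BIT_NJ / 1e3

print(f"raw: {raw_energy_uj_per_s:.0f} uJ/s, CS: {cs_energy_uj_per_s:.0f} uJ/s")
print(f"saving factor: {raw_energy_uj_per_s / cs_energy_uj_per_s:.0f}x")
# -> raw: 14000 uJ/s, CS: 700 uJ/s; saving factor: 20x
```

A factor of 20 in transmitted bits translates directly into a ~20× cut in transmitter energy, consistent with the ~15-20× reduction the text reports once reconstruction-quality trade-offs are accounted for.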
A novel variable neighborhood search for the offloading and resource allocation in Mobile-Edge Computing
Published in International Journal of Computers and Applications, 2022
Mohamed Younes Kaci, Malika Bessedik, Amina Lammari
In recent years, MEC has attracted the interest of many researchers and research centers; several architectures have been proposed, along with different algorithms for resource or computation offloading. Offloading involves transferring resource-intensive computing tasks to a separate processor, such as a hardware accelerator, or to an external platform. An offloading algorithm is then designed to determine the optimal offloading decision for all mobile users in the MEC system [3]. Many studies have applied computation offloading in the mobile-edge computing paradigm to minimize energy consumption, satisfy delay requirements, allocate radio resources efficiently, maximize total revenue, maximize system utility, and/or reduce the total cost to mobile users (or devices, or equipment).
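The offloading decision described above can be illustrated with a minimal sketch. This is not the paper's variable-neighborhood-search algorithm; it only shows the underlying binary choice each user faces: compare a weighted delay-plus-energy cost of executing a task locally against offloading it to the edge server. All parameter values and names below are illustrative assumptions.

```python
def local_cost(cycles, cpu_hz, energy_per_cycle, w_t=0.5, w_e=0.5):
    """Cost of running the task on the device's own CPU."""
    delay = cycles / cpu_hz
    energy = cycles * energy_per_cycle
    return w_t * delay + w_e * energy

def offload_cost(data_bits, rate_bps, tx_power_w, cycles, edge_hz,
                 w_t=0.5, w_e=0.5):
    """Cost of uploading the task and running it on the edge server.
    The device pays energy only for transmission in this simple model."""
    tx_delay = data_bits / rate_bps
    delay = tx_delay + cycles / edge_hz   # upload + edge execution
    energy = tx_power_w * tx_delay
    return w_t * delay + w_e * energy

def decide(task):
    """Binary offloading decision: pick whichever side costs less."""
    lc = local_cost(task["cycles"], task["cpu_hz"], task["e_per_cycle"])
    oc = offload_cost(task["bits"], task["rate"], task["tx_power"],
                      task["cycles"], task["edge_hz"])
    return "offload" if oc < lc else "local"

# Illustrative task: 1 Gcycle job, 1 GHz device CPU, 10 GHz edge server,
# 1 Mbit of input data over a 10 Mbit/s link at 0.5 W transmit power.
task = {"cycles": 1e9, "cpu_hz": 1e9, "e_per_cycle": 1e-9,
        "bits": 1e6, "rate": 1e7, "tx_power": 0.5, "edge_hz": 1e10}
print(decide(task))  # -> offload
```

Real MEC formulations couple these per-user decisions through shared radio and server resources, which is what makes the joint offloading and resource-allocation problem hard and motivates metaheuristics such as variable neighborhood search.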