Introduction
Published in Santanu Kundu, Santanu Chattopadhyay, Network-on-Chip, 2018
To mitigate the ever-increasing design productivity gap and to meet time-to-market requirements, reuse of IP cores is widespread in SoC development. Besides IP cores, the bus interface protocol can also be reused to integrate the IPs. While reuse is one of the key challenges that IC design houses try to address, the NoC paradigm allows reuse of IPs, NIs, and communication infrastructure such as routers, the underlying network, and flow-control protocols. Since the selection of network topology and router architecture is purely application specific, reusing them across different applications will not give an optimal solution; reusability is therefore limited to a particular class of applications. For example, the network topology and router architecture used for a mobile application cannot be the same as those of a video processing application. For similar applications, however, reuse drastically reduces the design and verification effort.
FPGAs for Rapid Prototyping
Published in Vojin G. Oklobdzija, Digital Design and Fabrication, 2017
Intellectual property (IP) cores are widely used in large designs. IP cores are commercial hardware designs that provide frequently used operations. These previously developed designs are available as commercially licensed products from both FPGA manufacturers and third-party IP vendors. FPGA manufacturers typically provide several basic hardware functions bundled with their devices and CAD tools; these functions work only on their devices. They include RAM, ROM, CAM, FIFO buffers, shift registers, and addition, multiplication, and division hardware. A few of these device-specific functions may be inferred automatically by an HDL synthesis tool, while others must be called as library functions from an HDL or entered using special symbols in a schematic. Explicitly invoking these FPGA vendor-specific functions in HDL function calls, or using the special symbols in a schematic, may improve performance, but it also makes it more difficult to retarget a design to a different FPGA manufacturer.
Introduction
Published in M. Michael Vai, Vlsi Design, 2017
Taking the macro-cell concept one step further, more complicated building blocks such as microprocessors, digital signal processors, and memory modules can be integrated on the same chip. These building blocks are called intellectual property cores (IP cores); they can be optimized, verified, and documented to allow efficient reuse. IP cores exist in different formats. A hard IP core is the mask information of a circuit, custom built, optimized, and verified for a specific application. A soft IP core is the behavioral description (e.g., in VHDL) of a circuit, which can be parameterized and synthesized for different technologies. A hard IP core usually provides better performance than a soft IP core, but the latter is more flexible. A middle ground between the two is the firm IP core, which provides a register transfer level (RTL) description of a circuit, specifying the operations to be performed on operands stored in registers. A design created using IP cores is called a system-on-a-chip (SOC) design, which has the advantage of a shorter design time because previous designs are reused.
Unsupervised image thresholding: hardware architecture and its usage for FPGA-SoC platform
Published in International Journal of Electronics, 2019
Jai Gopal Pandey, Abhijit Karmakar
In order to compare the implementation results, the architectures of Asari et al. (1999), Tian et al. (2003b), and Jianlai et al. (2009) have been selected. The architectures proposed by Asari et al. (1999) and Tian et al. (2003b) were implemented by Tian et al. (2003b) on a Xilinx Virtex xcv800 FPGA. Synthesis results of the above architectures, along with architectural comparisons, are shown in Table 5. Here, the results available in the literature are provided, and unavailable data are marked as not available (NA). As shown in Table 5, the implementations of Asari et al. (1999) and Tian et al. (2003b) keep the image in the FPGA RAM resources. Similarly, the architecture of Jianlai et al. (2009) uses the Altera Cyclone II 2C35-based DE2 board; its image acquisition process runs at a frame rate of 45 frames/second. That architecture uses dual-port RAM, multiplier, and divider IP cores. In comparison to the above three architectures, the proposed architecture acquires standard VGA-resolution images in real time. The image acquisition process uses a custom-interfaced PTZ camera and a DDR2 SDRAM. The presented architecture requires fewer computing resources than the other architectures described above, because of the logarithmic approximation of the BCV and the incorporation of self-normalized operations based on simple fixed-point arithmetic components.
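The resource saving hinges on maximizing the between-class variance (BCV) in the log domain, where the multiplications of the classical criterion become additions. The following floating-point sketch illustrates that idea under our assumed reading of the paper's logarithmic approximation; it is not the fixed-point hardware datapath itself, and the function name is illustrative:

```python
import numpy as np

def otsu_threshold(hist):
    """Return the threshold t maximizing the between-class variance (BCV)
    of a grayscale histogram (Otsu-style unsupervised thresholding)."""
    p = hist.astype(float) / hist.sum()
    bins = np.arange(len(hist))
    w0 = np.cumsum(p)                 # probability of class 0 for threshold t
    w1 = 1.0 - w0                     # probability of class 1
    m = np.cumsum(p * bins)           # cumulative first moment
    mu_t = m[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m / w0                  # class-0 mean
        mu1 = (mu_t - m) / w1         # class-1 mean
        # Classical BCV: w0 * w1 * (mu0 - mu1)**2.
        # Log-domain form (assumed illustration of the "logarithmic
        # approximation"): log is monotonic, so maximizing it selects the
        # same threshold while the products become sums, which is cheap
        # to realize with fixed-point adders in hardware.
        log_bcv = np.log(w0) + np.log(w1) + 2.0 * np.log(np.abs(mu0 - mu1))
    # Empty classes produce nan/-inf; exclude them from the maximization.
    log_bcv = np.nan_to_num(log_bcv, nan=-np.inf, neginf=-np.inf)
    return int(np.argmax(log_bcv))
```

For a clearly bimodal histogram the maximizer lies between the two modes, which is the behavior a hardware implementation would be validated against.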
Butterfly-Fat-Tree topology based fault-tolerant Network-on-Chip design using particle swarm optimisation
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2019
P. Veda Bhanu, Pranav Venkatesh Kulkarni, Soumya J
The advancements in silicon technology have increased the number of transistors packed on integrated chips (ICs). With the scaling of ICs in a system-on-chip (SoC), communication complexity has increased (Lee, Chang, Ogras, & Marculescu, 2008). Hence, there is a need to develop an efficient and reliable communication architecture. The traditional bus-based architectures in SoCs have limited bandwidth capabilities and cannot handle the increased bandwidth requirements (Benini & Micheli, 2002). Due to these limitations, the network-on-chip (NoC) has been proposed as a viable solution to address current application requirements (Dally & Towles, 2001). The major components of an NoC are cores, routers, and links. Communication among different cores in an NoC is achieved by packet-switching techniques using routers via links (Soumya & Chattopadhyay, 2013). In the nano-scale era, the major challenge is to design an NoC that satisfies current application requirements; hence, there is a need to build efficient, reliable, and accurate NoC designs that improve system performance. As application requirements grow, more IP cores must be integrated, which in turn increases the probability of failure. Therefore, it is necessary to build a reliable system from unreliable components without introducing excessive overhead. These highly scaled NoCs are prone to transient, intermittent, and permanent faults (Radetzki, Feng, Zhao, & Jantsch, 2013). Transient faults are caused by temporary interference such as cross-talk, voltage noise, or radiation. Intermittent faults are caused by marginal or unstable hardware and recur at the same location, often in bursts. Permanent faults are caused by broken transistors or wires, resulting in logic and delay faults, respectively. These faults can occur in any component of an NoC.
Construction of intelligent multi-construction management platform for bridges based on BIM technology
Published in Intelligent Buildings International, 2023
When implementing the FFT on an FPGA, it can be realized by programming according to the principle of the FFT butterfly operation. The FFT (Fast Fourier Transform) butterfly operation is the fundamental step in computing the FFT of a sequence of complex numbers; it is called a butterfly because its data-flow diagram resembles the wings of a butterfly. The butterfly operation combines two complex numbers separated by a fixed distance in its input sequence. To reduce the delay caused by multiplication on the FPGA, the look-up table method can be used: the twiddle factors are stored in the FPGA's internal ROM to speed up the FFT computation. Storing the twiddle factors involves generating a look-up table (LUT) whose size is determined by the FFT size, programming the FPGA to access the LUT during the FFT calculation, and optimizing the LUT for efficient access. Alternatively, the FFT can be implemented using the IP core that comes with the FPGA. The FPGA chip used in this sensor belongs to the MAX10 series, which provides its own FFT IP core. Developing with the IP core speeds up development, reduces potential design risks, and improves development efficiency. At the same time, FFTs with different numbers of points differ in computational load, in the delay they introduce, and in the frequency resolution they achieve. It is important to note that the usable frequency range of an FFT is limited by the Nyquist-Shannon sampling theorem, which states that the sampling frequency must be at least twice the maximum frequency component of the signal; if the sampling frequency is not high enough, the FFT will suffer from aliasing and may not provide accurate results.
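The butterfly operation and the twiddle-factor LUT described above can be sketched in software; the FPGA's internal ROM is modeled here by a precomputed table. This is a hedged illustration with illustrative function names, not the implementation inside the vendor's FFT IP core:

```python
import cmath

def make_twiddle_lut(n):
    """Precompute the twiddle factors W_n^k = exp(-2*pi*i*k/n) for
    k = 0 .. n/2 - 1. On an FPGA these constants would be stored in
    internal ROM, so no sin/cos is evaluated at run time."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

def fft(x, lut=None):
    """Iterative radix-2 decimation-in-time FFT (len(x) must be a power of two)."""
    n = len(x)
    if lut is None:
        lut = make_twiddle_lut(n)
    bits = n.bit_length() - 1
    # Bit-reversal permutation of the input ordering.
    a = [x[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:
        half, step = size // 2, n // size
        for start in range(0, n, size):
            for k in range(half):
                w = lut[k * step]            # twiddle factor read from the LUT
                u = a[start + k]
                v = a[start + k + half] * w
                a[start + k] = u + v         # the two butterfly outputs:
                a[start + k + half] = u - v  # the "wings" of the operation
        size *= 2
    return a
```

Each pass pairs elements a fixed distance `half` apart, which is the "fixed distance in the input sequence" mentioned above; the LUT lookup replaces a run-time complex exponential, mirroring the ROM-based speedup on the FPGA.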
Generally speaking, the more calculation points the FFT uses, the higher the delay caused by computing it, but also the finer the frequency resolution of the result. Therefore, an appropriate number of calculation points must be selected by weighing factors such as computation delay against frequency resolution.
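This trade-off follows from the standard relation delta_f = fs / N: doubling the point count halves the bin spacing but increases the work per transform and lengthens the acquisition window. A minimal sketch, where the 48 kHz sample rate is an assumed example figure rather than one taken from the sensor described:

```python
def fft_resolution_hz(sample_rate_hz, n_points):
    """Frequency-bin spacing of an n-point FFT: delta_f = fs / N."""
    return sample_rate_hz / n_points

# Larger N -> finer resolution, but ~ (N/2) * log2(N) butterflies and an
# acquisition window of N / fs seconds, hence more delay.
for n in (256, 1024, 4096):
    print(n, fft_resolution_hz(48_000, n))
```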