Basic Concepts
Published in Michael Pecht, Placement and Routing of Electronic modules, 2020
Guoqing Li, Yeun Tsun Wong, Michael Pecht
Graph theory describes the relations between vertices and their interconnections [ALA88]. A printed wire board can be treated as a graph system in which the terminals (pins) of modules are considered to be vertices and wires are the interconnections between terminals of modules. A circuit on the board can be defined by a set of modules M, a set of terminals T, and a set of nets N. A terminal is either an input or an output pin on the boundary of a chip or module. A net (signal net) is a set of terminals to be interconnected by conductive paths (wires). A wire segment is a line segment on a specified layer that implements all or part of a net. A point, other than a terminal, at which two or more wire segments meet and are electrically connected is called a junction. The junction degree is the number of wire segments joined at a particular junction. This section presents the basic definitions and concepts of graph theory that will be employed in the discussions of trees, placement, and routing methods. (For elementary graph theory, consult [BOR85, HAR73, HAR69].)
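The definitions above can be sketched as a small data structure. This is an illustrative sketch only, not code from the book; the class and names (`Circuit`, `junction_degree`, the pin labels) are hypothetical, chosen to mirror the section's terms: terminals as vertices, nets as sets of terminals, wire segments on layers, and junction degree as the number of segments meeting at a non-terminal point.

```python
# Hypothetical sketch of a circuit as a graph, following the section's
# definitions: terminals (module pins) are vertices, a net is a set of
# terminals to be interconnected, and a junction is a non-terminal point
# where two or more wire segments meet.
class Circuit:
    def __init__(self):
        self.terminals = set()   # T: module pins (vertices)
        self.nets = {}           # N: net name -> set of terminals
        self.segments = []       # wire segments as (point_a, point_b, layer)

    def add_net(self, name, terminals):
        self.terminals.update(terminals)
        self.nets[name] = set(terminals)

    def add_segment(self, a, b, layer=1):
        self.segments.append((a, b, layer))

    def junction_degree(self, point):
        # number of wire segments joined at a particular point
        return sum(point in (a, b) for a, b, _ in self.segments)

c = Circuit()
c.add_net("CLK", {"U1.3", "U2.7", "U3.1"})   # one net, three terminals
c.add_segment("U1.3", "J1")                   # route via junction J1
c.add_segment("J1", "U2.7")
c.add_segment("J1", "U3.1")
print(c.junction_degree("J1"))   # three segments join at J1
```

Here the net CLK is implemented by three wire segments meeting at the junction J1, which therefore has junction degree 3.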
Semi-custom devices, programmable logic and device technology
Published in D.A. Bradley, N.C. Burd, D. Dawson, A.J. Loader, Mechatronics, 2018
D.A. Bradley, N.C. Burd, D. Dawson, A.J. Loader
The next step is to choose a gate array with the required number of gates. The gates must then be allocated to specific circuit functions and interconnected according to the circuit design. This is usually done automatically by a place and route software package. A net list is produced by the computer from the circuit diagram and contains a list of all the components in the circuit together with their connections. The place and route software takes this information and maps it onto the gate array, allocating gates to specific functions of the circuit and interconnecting them. The space for wiring between the gates is fixed, so it may not always be possible to utilize all the gates on the array for a particular application; 70 to 95% utilization is typical with automated place and route, depending on the circuit complexity. Manual intervention in the place and route procedure can be used to achieve higher utilization, or to optimize the layout in areas of the circuit where timing is critical.
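A minimal sketch of the net list the place-and-route flow consumes, and of the utilization figure quoted above, might look as follows. The component and net names are hypothetical, and the structure is only one plausible encoding of "components together with their connections".

```python
# Hypothetical net list: components with their types, and nets listing
# the (component, pin) connections each net ties together.
netlist = {
    "components": {"G1": "NAND2", "G2": "NAND2", "G3": "OR2"},
    "nets": {
        "n1": [("G1", "out"), ("G3", "in1")],
        "n2": [("G2", "out"), ("G3", "in2")],
    },
}

def utilization(gates_used, gates_on_array):
    """Percentage of the array's gates allocated by place and route."""
    return 100.0 * gates_used / gates_on_array

# e.g. 850 of 1000 gates successfully placed -> 85% utilization,
# within the 70-95% range the text gives for automated place and route
print(utilization(850, 1000))
```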
Combinational Logic Design Using Verilog HDL
Published in Joseph Cavanagh, Digital Design and Verilog HDL Fundamentals, 2017
Example 4.1 The logic diagram of Figure 4.1 will be designed using built-in primitives for the logic gates, which consist of NAND gates and one OR gate to generate the two outputs z1 and z2. The output of the gate labeled inst2 (instantiation 2) will be at a high voltage level if either x1, x2, or x3 is deasserted. Therefore, by DeMorgan's theorem, the output will be at a low voltage level if x1, x2, and x3 are all asserted. Note that the gate labeled inst3 is an OR gate that is drawn as an AND gate with active-low inputs and an active-low output. The output of each gate is assigned a net name, where a net is one or more interconnecting wires that connect the output of one logic element to the input of one or more logic elements. The remaining gates in Figure 4.1 are drawn in the standard manner as NAND gates.
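The DeMorgan step in the example can be checked exhaustively over the eight input combinations. This is a Python stand-in for the Verilog primitive, used only to verify the identity the text invokes: a three-input NAND is low exactly when x1, x2, and x3 are all asserted, i.e. NAND(x1, x2, x3) = NOT x1 OR NOT x2 OR NOT x3.

```python
from itertools import product

# Truth-table check of the DeMorgan identity used for gate inst2:
# the NAND output is high if any input is deasserted, and low only
# when x1, x2, and x3 are all asserted.
for x1, x2, x3 in product([0, 1], repeat=3):
    nand = 1 - (x1 & x2 & x3)                       # NAND(x1, x2, x3)
    demorgan = (1 - x1) | (1 - x2) | (1 - x3)       # ~x1 | ~x2 | ~x3
    assert nand == demorgan

print("identity holds for all 8 input combinations")
```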
Deep Learning Frameworks on Apache Spark: A Review
Published in IETE Technical Review, 2018
Nikitha Johnsirani Venkatesan, ChoonSung Nam, Dong Ryeol Shin
DeepLearning4j, also referred to as DL4j, was developed by Skymind's Adam Gibson. Deeplearning4j is well suited to building commercial-grade deep net applications [50]. In addition to a wide selection of deep nets, this Java library provides tools such as a distributed multi-node map-reduce procedure and a vectorization package. DL4j can run on a distributed, multi-node setup. DL4j is built primarily in Java, but the library can also be used from Scala and Clojure. The library configures a deep net by selecting values for its hyper-parameters. It comes with built-in GPU support for a distributed framework, which is an important feature for the training process. TensorFlow also supports parallel training on multiple nodes, and DL4j supports topologies such as multi-model hyper-parameter tuning and parallelized GPUs, as distributed TensorFlow does. However, TensorFlow distributes work in its own proprietary way, so TensorFlow must be installed on each node, whereas DL4j uses the existing Spark cluster and ships its libraries through the Spark context. To instantiate the deep model in a Spark cluster, three parameters are passed: the configuration, the Spark context, and the training master object, which specifies how the parameter evaluation is performed. The multiple nodes each deal with a subset of the data, and the weights and biases are updated in each cluster after each iteration. DL4j has built-in DL models such as RBM [58], DBN [59], CNN [60], recurrent nets, RNTN [61], autoencoders, and MLP [62]. It includes a vectorization library called Canova, also built by Adam Gibson's team. DL4j trains the deep net model on a distributed platform by iterative map-reduce.
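The iterative map-reduce scheme described above can be sketched without Spark or DL4j: each "worker" takes a gradient step on its data shard (the map), and the parameters are averaged across workers after each iteration (the reduce). Plain Python stands in for the cluster here; the 1-D least-squares problem, the function names, and the learning rate are all illustrative assumptions, not DL4j's API.

```python
import random

def local_step(w, shard, lr=0.1):
    # "map": one gradient step of mean((w*x - y)^2) on this worker's shard
    grad = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return w - lr * grad

def train(shards, iterations=50):
    w = 0.0
    for _ in range(iterations):
        local = [local_step(w, shard) for shard in shards]  # map on each node
        w = sum(local) / len(local)                         # reduce: average
    return w

random.seed(0)
data = [(x, 3.0 * x) for x in [random.uniform(0.5, 2) for _ in range(40)]]
shards = [data[i::4] for i in range(4)]  # subsets of data for 4 "workers"
print(round(train(shards), 2))  # averaged parameters approach the true slope 3.0
```

The same pattern underlies DL4j's Spark training: every node updates its copy of the weights and biases on its subset of the data, and the training master governs how those copies are combined each iteration.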