Parallel Computing Models
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
In dataflow computers, the operation to execute is governed not by the current instruction of a program but by the availability of its operands. A dataflow computer simultaneously performs all operations that can currently be done, that is, those whose operands are known. Data therefore plays an active role in the computation: its availability determines which operation is executed and when. In a multiprocessor computer, parallel computation is specified by the programmer, who indicates which sequences of operations can be executed concurrently. In a dataflow computer, the possibilities of parallel execution are not specified explicitly; they follow from the dependencies between data, since the input data of an operation are the results (output data) of operations carried out previously.
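As an informal illustration of operand-driven execution, the following Python sketch (hypothetical, not from the source) fires each node of a small expression graph as soon as all of its operands are available, regardless of the textual order in which the nodes are listed.

```python
import operator

# Hypothetical expression graph computing d = (a + b) * (a - b).
# Each node is (operation, input names, output name).
graph = [
    (operator.mul, ("s", "t"), "d"),   # listed first, yet fires last
    (operator.add, ("a", "b"), "s"),
    (operator.sub, ("a", "b"), "t"),
]

def run_dataflow(graph, inputs):
    available = dict(inputs)            # operand name -> known value
    pending = list(graph)
    while pending:
        # Every node whose operands are all known could fire, in parallel.
        ready = [node for node in pending
                 if all(name in available for name in node[1])]
        if not ready:
            raise RuntimeError("deadlock: some operands never become available")
        for op, in_names, out_name in ready:
            available[out_name] = op(*(available[n] for n in in_names))
        pending = [node for node in pending if node not in ready]
    return available

print(run_dataflow(graph, {"a": 5.0, "b": 3.0})["d"])   # 16.0
```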
Tools and Methodologies for System-Level Design
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC System Design, Verification, and Testing, 2018
Shuvra Bhattacharyya, Wayne Wolf
For most DSP applications, a significant part of the computational structure is well suited to modeling in a dataflow model of computation. In the context of programming models, dataflow refers to a modeling methodology where computations are represented as directed graphs in which vertices (actors) represent functional components and edges between actors represent first-in-first-out (FIFO) channels that buffer data values (tokens) as they pass from an output of one actor to an input of another. Dataflow actors can represent computations of arbitrary complexity; typically in DSP design environments, they are specified using conventional languages such as C or assembly language, and their associated tasks range from simple, “fine-grained” functions such as addition and multiplication to “coarse-grain” DSP kernels or subsystems such as FFT units and adaptive filters. The development of application modeling and analysis techniques based on dataflow graphs was inspired significantly by the computation graphs of Karp and Miller [14], and the process networks of Kahn [15]. A unified formulation of dataflow modeling principles, as they apply to DSP design environments, is provided by the dataflow process networks model of computation of Lee and Parks [16].
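To make this concrete, here is a minimal, hypothetical Python sketch of such a graph: two fine-grained actors (a gain and an adder, both invented for illustration) connected by FIFO channels that buffer tokens, driven by a naive scheduler that repeatedly fires any actor with enough input tokens.

```python
from collections import deque

class Fifo:
    """FIFO channel that buffers tokens between actors."""
    def __init__(self):
        self.tokens = deque()
    def put(self, tok): self.tokens.append(tok)
    def get(self): return self.tokens.popleft()
    def count(self): return len(self.tokens)

class Gain:
    """Fine-grained actor: multiplies each input token by a constant."""
    def __init__(self, inp, out, k):
        self.inp, self.out, self.k = inp, out, k
    def can_fire(self): return self.inp.count() >= 1
    def fire(self): self.out.put(self.k * self.inp.get())

class Add:
    """Fine-grained actor: adds one token from each of two input channels."""
    def __init__(self, a, b, out):
        self.a, self.b, self.out = a, b, out
    def can_fire(self): return self.a.count() >= 1 and self.b.count() >= 1
    def fire(self): self.out.put(self.a.get() + self.b.get())

# Build the graph y[n] = 2*x[n] + x[n] and run a simple repeated-firing schedule.
x1, x2, scaled, y = Fifo(), Fifo(), Fifo(), Fifo()
actors = [Gain(x1, scaled, 2.0), Add(scaled, x2, y)]
for sample in [1.0, 2.0, 3.0]:
    x1.put(sample)
    x2.put(sample)
fired = True
while fired:
    fired = False
    for a in actors:
        if a.can_fire():
            a.fire()
            fired = True
print(list(y.tokens))   # [3.0, 6.0, 9.0]
```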
Neural decoding on imbalanced calcium imaging data with a network of support vector machines
Published in Advanced Robotics, 2021
Kyunghun Lee, Xiaomin Wu, Yaesop Lee, Da-Ting Lin, Shuvra S. Bhattacharyya, Rong Chen
For an efficient, reliable, and modular implementation of real-time neural decoding functionality, we employ a form of model-based design referred to as dataflow. In the context of signal and information processing systems, dataflow-based design involves representing application functionality as a directed graph in which vertices, called actors, correspond to signal processing modules, and edges represent communication of data between actors. Dataflow actors execute in terms of discrete units of execution, called firings (not to be confused with neuron firings). Actors in signal processing dataflow graphs can be of arbitrary complexity. Typical examples of actors include digital filters, classifier components (e.g. individual layers of an ANN), or entire classifier subsystems (e.g. an SVM classifier or an entire ANN).
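As a rough illustration of firings as discrete units of execution, the sketch below (hypothetical, not the paper's implementation) shows a coarse-grain actor whose single firing consumes a fixed-size window of input tokens and produces one label token; a simple threshold rule stands in for an SVM or ANN classifier component.

```python
from collections import deque

class WindowedClassifierActor:
    """One firing consumes WINDOW input tokens and produces one label token."""
    WINDOW = 4
    def __init__(self, inp: deque, out: deque):
        self.inp, self.out = inp, out
    def can_fire(self):
        return len(self.inp) >= self.WINDOW
    def fire(self):
        window = [self.inp.popleft() for _ in range(self.WINDOW)]
        # Placeholder decision rule standing in for an SVM/ANN component.
        self.out.append(1 if sum(window) / self.WINDOW > 0.5 else 0)

samples = deque([0.2, 0.9, 0.8, 0.7, 0.1, 0.0, 0.3, 0.2])
labels = deque()
actor = WindowedClassifierActor(samples, labels)
while actor.can_fire():
    actor.fire()
print(list(labels))   # [1, 0]: one label token per firing
```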
Data change analysis based on function call path
Published in International Journal of Computers and Applications, 2018
Cao Yong, Mu Yongmin, Shen Meie
In 1977, L.J. Osterweil and L.D. Fosdick [2] put forward the concept of data flow analysis. Data flow analysis is a technique [3] for obtaining information about how related data flow along the executed paths of a program. In 2003, Bogdan Korel [4] proposed a slicing technique based on state-based models; his research focuses on system testing, where regression test cases were reduced using an extended finite state machine (EFSM). In 2011, JinHui [5] introduced a data flow analysis that draws lessons from hardware fault diagnosis systems.
Mixed-language automatic differentiation
Published in Optimization Methods and Software, 2018
Valérie Pascual, Laurent Hascoët
Static data-flow analysis [10] is an essential step to achieve efficient differentiation with a source-to-source AD tool. The goal of static data-flow analysis is to provide information on the data computed and returned by a program without knowing the values of the program's run-time inputs. In other words, static data-flow analysis extracts useful information about the program at compile time, this information being thus valid for any run-time execution on any inputs. Obviously, such information can only be partial and must often resort to the undecidable ‘I don't know’ reply in addition to ‘yes’ or ‘no’. Abstract interpretation [4] is a framework for static data-flow analysis in which the values computed in the original code are replaced with abstract values containing the propagated abstract information. Classical examples of the abstract information one may want to propagate are the interval in which a run-time value will range, or the set of possible destinations of each pointer variable. Starting from some abstract information on the inputs or outputs (which may be empty), abstract interpretation propagates it through the program, possibly guided by its control-flow structure. Instead of a true execution of the program, possible only at run time, this propagation must account for every possible execution path. Some data-flow analyses follow these paths forward; others need to follow them backward. As call graphs may be cyclic in general (recursion), and flow graphs may be cyclic (loops), completion of the analysis requires reaching a fixed point both on the call graph and on each flow graph. The abstract domain in which the propagated information ranges is designed in such a way that this fixed point is reached in a finite number of iterations. Most of the classical data-flow analyses prove useful for AD, as do AD-specific analyses such as activity and TBR analyses [8,14]. In most AD-specific data-flow analyses, the abstract information is, for each variable v, a boolean value (e.g. does v influence the output in a differentiable way?) or a set of other variables (e.g. which input variables have a differentiable influence on v?).
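As a rough illustration of this kind of analysis, the following Python sketch (invented for illustration, not the analysis of any particular AD tool) propagates, for each variable, the set of input variables that may influence it through a small loop body, iterating until a fixed point is reached.

```python
# Hypothetical straight-line loop body, written as (target, operands) assignments:
#   while ...: { t = x + y ; z = t * z ; y = z }
loop_body = [("t", ["x", "y"]),
             ("z", ["t", "z"]),
             ("y", ["z"])]

# Abstract value of each variable: the set of inputs that may influence it.
# Initially, each program input influences only itself.
influence = {"x": {"x"}, "y": {"y"}, "z": {"z"}, "t": set()}

changed = True
while changed:                        # fixed-point iteration over the loop
    changed = False
    for target, operands in loop_body:
        new = set().union(*(influence.get(v, set()) for v in operands))
        if not new <= influence[target]:
            influence[target] |= new
            changed = True

print(influence)
# t, y, and z all end up influenced by {'x', 'y', 'z'}; x only by itself.
```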