Introduction to LabVIEW
Published in LabVIEW™ Advanced Programming Techniques, 2017
Rick Bitter, Taqi Mohiuddin, Matt Nawrocki
A node can execute only when all of its required inputs have been supplied. For example, an addition operation cannot occur until both numbers to be added are available. One of these numbers may come from a control and be available immediately, while the second may be the output of a VI; in that case, the addition is suspended until the second number arrives. It is entirely possible for multiple nodes to receive all of their inputs at approximately the same time. Data flow programming allows such tasks to be processed more or less concurrently, which makes multitasking code diagrams extremely easy to design. Parallel loops that do not require inputs from each other will execute in parallel as each node becomes ready. Multitasking has been available in LabVIEW since Version 1.0, and it is a fundamental capability of LabVIEW that is not directly available in languages such as C, Visual Basic, and C++. When multiple nodes are ready to execute, LabVIEW uses a process called arbitrary interleaving to decide which node runs first. If you watch a VI in execution highlighting mode and see nodes execute in the desired order, you may still be in for a rude shock, because the order of execution is not guaranteed to be the same every time. For example, if three addition operations are set up in parallel using inputs from user controls, there are six possible orders of execution. As with many operating systems' multithreading models, LabVIEW makes no guarantees about the order in which parallel operations occur.
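The same firing rule and the nondeterministic ordering of parallel nodes can be imitated in a conventional language. The following Python sketch is an illustration only, not LabVIEW code; the node names and the use of a thread pool are assumptions made for the example. It submits three independent addition "nodes" to a thread pool and reports them in whatever order they happen to finish, much as parallel nodes on a diagram are arbitrarily interleaved.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def add_node(name, a, b):
    # One addition "node": it can only fire once both of its inputs exist.
    return name, a + b

# Three independent additions, each with both inputs already available.
inputs = {"add1": (1, 2), "add2": (3, 4), "add3": (5, 6)}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(add_node, name, a, b) for name, (a, b) in inputs.items()]
    # Results arrive in whatever order the nodes happen to finish, so the
    # printed order may change from run to run, as with arbitrary interleaving.
    for future in as_completed(futures):
        print(future.result())
```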
Software for Electric Power Instrumentation
Published in Electronic Instrumentation for Distributed Generation and Power Processes, 2017
Felix Alberto Farret, Marcelo Godoy Simões, Danilo Iglesias Brandão
The programs created in LabVIEW are called virtual instruments (VIs), since they endow the workstation with flexible, modular software that behaves like an instrument. Such graphical programs are based on the concept of data flow programming, which means that a block in the diagram executes only when all of its inputs are available. The block's output data are then sent to all other connected blocks. The movement of data through the blocks determines the execution order of the VIs and functions. Data flow programming allows multiple operations to be performed in parallel, since execution is determined by the flow of data and not by sequential lines of code [1].
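This execution-ordering rule can be sketched in a few lines of Python. The block names, wiring, and scheduler below are invented purely for illustration and are not LabVIEW's actual implementation; the point is only that a block fires when its inputs are present and that its output is then forwarded to the connected blocks.

```python
# Each "block" is (function, input names, output name); the wiring is hypothetical.
graph = {
    "scale":  (lambda x: x * 2.0,   ["raw"],               "scaled"),
    "offset": (lambda x: x + 1.0,   ["scaled"],            "shifted"),
    "sum":    (lambda a, b: a + b,  ["scaled", "shifted"], "result"),
}

values = {"raw": 5.0}     # datum supplied by a front-panel "control"
pending = dict(graph)

# Fire any block whose inputs are all available; the flow of data, not the
# textual order of the blocks, determines the execution order.
while pending:
    ready = [name for name, (_, ins, _) in pending.items()
             if all(i in values for i in ins)]
    if not ready:
        break             # nothing can fire: the diagram has an unwired input
    for name in ready:
        fn, ins, out = pending.pop(name)
        values[out] = fn(*(values[i] for i in ins))

print(values["result"])   # 21.0: (5*2) + (5*2 + 1)
```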
A review on big data real-time stream processing and its scheduling techniques
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
Nicoleta Tantalaki, Stavros Souravlas, Manos Roumeliotis
The basic data abstraction for stream processing is called DataStream. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner, which results in low latency. Apache Flink's dataflow programming model provides event-at-a-time processing [61]. Tuples can be collected in buffers with an adjustable timeout before they are sent to the next operator, which turns the knob between throughput and latency. Flink performs at large scale, running on thousands of nodes with very good throughput and latency characteristics according to existing benchmarks. For stateful computations, it ensures exactly-once semantics. Apache Flink includes a lightweight fault-tolerance mechanism based on distributed checkpoints. Its algorithm is based on a technique introduced by Chandy and Lamport [62]: it periodically draws consistent snapshots of the current state of the distributed system without missing information and without recording duplicates, and these snapshots are stored in durable storage. In case of failure, the latest snapshot is restored, the stream source is rewound to the point at which the snapshot was taken, and the stream is replayed [23]. Flink currently occupies a unique position among processing frameworks, but it is a young project and there has not been much research into its scaling limitations. Like Spark, it is a declarative system that provides higher-level abstractions to users. The DAG is implied by the ordering of the transformations, and the engine can reorder the transformations if needed.
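As a concrete illustration, both the buffer timeout and the periodic checkpoints are configured on the execution environment. The sketch below uses the PyFlink DataStream API; the values chosen (a 5 ms buffer timeout, checkpoints every 10 s) are arbitrary examples, and method names may differ slightly between Flink versions.

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Buffer timeout: tuples wait at most this long (ms) in an output buffer
# before being shipped to the next operator (the throughput/latency knob).
env.set_buffer_timeout(5)

# Periodic distributed snapshots every 10 s, the basis of Flink's
# exactly-once fault tolerance for stateful operators.
env.enable_checkpointing(10_000)

# A trivial pipeline: the chain of transformations implies the dataflow graph.
env.from_collection([1, 2, 3, 4]) \
   .map(lambda x: x * 2) \
   .print()

env.execute("dataflow-sketch")
```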
JOIN: an integrated platform for joint simulation of occupant-building interactions
Published in Architectural Science Review, 2020
Davide Schaumann, Seonghyeon Moon, Muhammad Usman, Rhys Goldstein, Simon Breslav, Azam Khan, Petros Faloutsos, Mubbasir Kapadia
To address this issue, Goldstein, Breslav, and Khan (2018) introduced Symmetric DEVS, a set of conventions that can be used as a basis for a model-independent simulator. Symmetric DEVS builds upon building performance simulation (BPS) modellers' familiarity with certain programming techniques, namely conventional procedural programming, which uses familiar 'if-then-else' statements, and dataflow visual programming, a popular technique supporting parametric design (Woodbury 2010). Symmetric DEVS incorporates procedural and dataflow programming into the Discrete Event System Specification (DEVS) formalism, known for its generality, relatively minimalistic set of conventions, and ability to represent any time-varying system (Zeigler, Kim, and Praehofer 2000).
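To give a flavour of this combination, the sketch below shows a generic DEVS-style atomic model written in Python. It is a hypothetical skeleton, not the Symmetric DEVS API: the port names, the lamp/occupancy scenario, and the method names are invented for illustration, but they show procedural if-then-else logic inside the transition functions, with input and output ports playing the dataflow role.

```python
class LampModel:
    """Hypothetical atomic model: a lamp that turns on when occupancy is detected."""

    def __init__(self):
        self.state = "off"
        self.time_advance = float("inf")   # stay passive until an input arrives

    def external_transition(self, elapsed, inputs):
        # Procedural logic reacting to messages on the invented 'occupancy' port.
        if inputs.get("occupancy") == "present":
            self.state = "on"
            self.time_advance = 0.0         # schedule an immediate output
        else:
            self.state = "off"
            self.time_advance = float("inf")

    def output(self):
        # Value emitted on the 'lamp' port, to be routed to connected models.
        return {"lamp": self.state}

    def internal_transition(self):
        self.time_advance = float("inf")    # go passive again after the output
```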
Semantic segmentation of high-resolution remote sensing images using fully convolutional network with adaptive threshold
Published in Connection Science, 2019
Zhihuan Wu, Yongming Gao, Lei Li, Junshi Xue, Yuntao Li
The model was implemented in Keras with a TensorFlow backend. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano (Parvat et al., 2017). TensorFlow, developed by the Google Brain team, is an open-source software library for dataflow programming across a range of tasks. Several third-party libraries are also required, such as tifffile for reading remote sensing imagery, OpenCV for basic image processing, Shapely for handling polygon data, Matplotlib for data visualisation, and scikit-learn for basic machine learning functions. The experiments were conducted on a Sugon W560-G20 server with an E5-2650 v3 CPU, 32 GB of memory, and an Nvidia GTX 1080 Ti GPU.
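For orientation, a minimal Keras definition running on the TensorFlow backend is sketched below. It is a generic toy network, not the adaptive-threshold FCN from the paper: the input size, channel counts, and loss are placeholders chosen for the example.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A toy fully convolutional network; TensorFlow executes it as a dataflow graph.
inputs = keras.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)   # per-pixel mask

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```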