Digraphs
Published in Kenneth H. Rosen, Graphs, Algorithms, and Optimization, 2005
In this section, we present an algorithm to find the strong components of a digraph G. It is based on a depth-first search, and is very similar to the DFS used to find the blocks of a graph in Chapter 6 and to the DFS used above to find a topological ordering in an acyclic digraph. When finding a topological ordering, we saw that in a digraph G, Algorithm 10.3.2 constructs a spanning forest of out-directed, rooted trees. Each time DFS(u) is called, a DF-tree rooted at u is built. The edges of G can be classified as either tree-edges or fronds. For example, a spanning forest for the graph of Figure 10.3 is shown in Figure 10.4 below, with the fronds drawn as dashed edges. Not all the fronds are shown, as can be seen by comparing Figures 10.3 and 10.4. The numbering of the nodes is the DF-numbering.
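The classification described above can be sketched as follows. This is an illustrative depth-first search, not Algorithm 10.3.2 itself: it assigns DF-numbers in order of first visit and labels each edge as a tree-edge (leading to an unvisited node) or a frond (leading to an already-visited node). The example digraph is made up for illustration, not taken from Figure 10.3.

```python
def dfs_forest(graph):
    """Return DF-numbers, tree-edges, and fronds for a digraph.

    graph: dict mapping each node to a list of out-neighbours.
    """
    df_num = {}                # node -> DF-number, in order of first visit
    tree_edges, fronds = [], []
    counter = 0

    def dfs(u):
        nonlocal counter
        counter += 1
        df_num[u] = counter
        for v in graph[u]:
            if v not in df_num:           # first visit: (u, v) is a tree-edge
                tree_edges.append((u, v))
                dfs(v)
            else:                         # already visited: (u, v) is a frond
                fronds.append((u, v))

    for u in graph:                       # one DF-tree per unreached root
        if u not in df_num:
            dfs(u)
    return df_num, tree_edges, fronds

# Illustrative digraph (hypothetical, not Figure 10.3):
g = {1: [2, 3], 2: [4], 3: [4], 4: [1]}
numbers, tree, frond = dfs_forest(g)
# tree-edges (1,2), (2,4), (1,3) form one DF-tree rooted at 1;
# (4,1) and (3,4) are fronds.
```

As in the text, the tree-edges of each call form a DF-tree rooted at the node where the search started, and the remaining edges are the fronds.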
Cyber Terrorism Launched against Smart Cities
Published in Dr. Rocky Termanini, The Nano Age of Digital Immunity Infrastructure Fundamentals and Applications, 2018
To plan an attack on a Smart City, attackers use a well-known graphical method called the directed acyclic graph (DAG, a graph without a cycle). It is made of nodes, usually drawn as circles or ovals, and edges, the connections between pairs of nodes, usually drawn as lines or as lines with arrows. A directed acyclic graph has a topological ordering, which means the nodes can be arranged in a sequence so that every edge runs from an earlier node to a later one. The DAG is a probabilistic causality graph that shows how events happen according to their respective probabilities of occurrence. Figures 13.1a,b show how cyber terrorists have used causality methods to acquire higher visibility and confidence on the incoming attack.
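A topological ordering of a DAG can be computed with Kahn's algorithm, sketched below; the attack-step names are hypothetical labels chosen only to echo the attack-planning setting, not taken from Figure 13.1.

```python
from collections import deque

def topological_order(nodes, edges):
    """Return a topological ordering of a DAG, or raise if a cycle exists."""
    indegree = {n: 0 for n in nodes}
    for _, v in edges:
        indegree[v] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        u = queue.popleft()               # u has no remaining predecessors
        order.append(u)
        for x, v in edges:
            if x == u:
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle, so it is not a DAG")
    return order

# Hypothetical attack steps forming a DAG:
nodes = ["recon", "exploit", "escalate", "exfiltrate"]
edges = [("recon", "exploit"), ("exploit", "escalate"),
         ("exploit", "exfiltrate"), ("escalate", "exfiltrate")]
order = topological_order(nodes, edges)

# In the resulting sequence, every edge goes from earlier to later:
pos = {n: i for i, n in enumerate(order)}
assert all(pos[u] < pos[v] for u, v in edges)
```

The final assertion checks exactly the defining property of a topological ordering: each edge points from a node earlier in the sequence to a node later in it.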
Efficient distributed algorithms for distance join queries in spark-based spatial analytics systems
Published in International Journal of General Systems, 2023
Francisco García-García, Antonio Corral, Luis Iribarne, Michael Vassilakopoulos
In the era of Big Spatial Data (Eldawy and Mokbel 2017), the research community is focusing on developing new distributed data management systems that can efficiently analyze and process large-scale spatial data. Nowadays, with the development of modern mobile applications, the volume of available spatial data from smartphones, sensors, cars, satellites, etc. is growing enormously worldwide. Recent developments of big spatial data systems have motivated the emergence of novel technologies for processing large-scale spatial data on clusters of computers in a distributed environment. Parallel and distributed computing using shared-nothing clusters on large-scale datasets is becoming a dominant trend in the context of data processing and analysis. Apache Spark is a fast, reliable, and distributed in-memory large-scale data processing framework. It takes advantage of Resilient Distributed Datasets (RDDs), which allow data to be stored transparently in memory and persisted to disk only when needed. Hence, Spark can avoid a huge number of disk writes and reads, and it outperforms the Hadoop-MapReduce platform (Shi et al. 2015). Since Spark maintains the status of assigned resources until a job is completed, it reduces time consumption in resource preparation and collection. The Spark programming model consists of an advanced Directed Acyclic Graph (DAG) engine. Spark builds its query computations as a DAG; its DAG scheduler and query optimizer construct an efficient computational graph that can usually be decomposed into tasks that are executed in parallel across workers on the cluster (Damji et al. 2020). Conceptually, a DAG is a directed graph that has a topological ordering, a sequence of the vertices (RDDs) such that every edge (data dependency) is directed from earlier to later in the sequence, i.e. a DAG is a graph that never goes back.
Spark DAGs allow simple jobs to complete after just one stage, and more complex jobs to complete in a single run of many stages. Spark DAGs also help achieve fault tolerance, since lost data can be recomputed from the dependency graph.
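The fault-tolerance idea can be illustrated with a toy lineage sketch. This is not Spark's actual API: each dataset merely records its parent in the dependency DAG and the transformation applied, so a lost result can be rebuilt by replaying the lineage, which is essentially how RDD recovery works.

```python
class Dataset:
    """Toy stand-in for an RDD: records lineage instead of eagerly computing."""

    def __init__(self, data=None, parent=None, fn=None):
        self.data = data      # cached result; None simulates a lost partition
        self.parent = parent  # upstream dataset in the lineage DAG
        self.fn = fn          # transformation applied to the parent's data

    def map(self, fn):
        # Lazy transformation: only the dependency edge is recorded.
        return Dataset(parent=self, fn=fn)

    def collect(self):
        # If the data is missing (lost or never computed), replay the lineage.
        if self.data is None:
            self.data = [self.fn(x) for x in self.parent.collect()]
        return self.data

source = Dataset(data=[1, 2, 3])
doubled = source.map(lambda x: 2 * x)

result = doubled.collect()     # computed by walking the lineage DAG
doubled.data = None            # simulate losing the computed data
recovered = doubled.collect()  # recomputed from lineage, same result
```

Because every edge in the lineage DAG points from an earlier dataset to a later one, recomputation always terminates at a source that still holds its data.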