Test and Verification
Published in Miroslav Popovic, Communication Protocol Engineering, 2018
PAT is an extensible, modularized framework for automatic system analysis based on CSP, freely available for noncommercial research at http://sav.sutd.edu.sg/PAT/. This self-contained framework supports the modeling, simulation, and verification of concurrent real-time systems, including communication protocols. PAT supports various model checking techniques targeting different properties, such as deadlock-freeness, divergence-freeness, reachability, LTL properties with fairness assumptions, refinement checking, and probabilistic model checking. Moreover, the PAT development team implemented advanced optimization techniques, including partial order reduction, symmetry reduction, process counter abstraction, and parallel model checking, in order to achieve good performance from the user's point of view.
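The deadlock-freeness check mentioned above can be illustrated, in highly simplified form, as an explicit-state search over a transition system: a reachable state with no outgoing transitions is a deadlock. This sketch is not PAT's actual implementation; the state names and dictionary encoding are purely illustrative.

```python
from collections import deque

def check_deadlock_free(init, transitions):
    """Breadth-first explicit-state search. Reports the first reachable
    state that has no outgoing transitions (a deadlock), if any."""
    seen = {init}
    frontier = deque([init])
    while frontier:
        state = frontier.popleft()
        successors = transitions.get(state, [])
        if not successors:
            return False, state  # deadlock witness
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# Toy protocol: each state maps to its successor states.
trans = {
    "idle":    ["sending"],
    "sending": ["waiting"],
    "waiting": ["idle"],          # closed cycle: deadlock-free
}
print(check_deadlock_free("idle", trans))   # (True, None)

trans["waiting"] = ["stuck"]                # "stuck" has no successors
print(check_deadlock_free("idle", trans))   # (False, 'stuck')
```

Real model checkers additionally apply the reductions named above (partial order reduction, symmetry reduction) so that far fewer states need to be enumerated.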
Reasoning about Sensor Networks
Published in S. Sitharama Iyengar, Richard R. Brooks, Distributed Sensor Networks, 2016
Manuel Peralta, Supratik Mukhopadhyay, Ramesh Bharadwaj
The approach given by Guo and Subramaniam [7] presents a method by which change impact analysis is modeled and verified in a distributed setting. This approach is loosely based on model checking. Their model is, in essence, a network of state machines that communicate either via shared variables or via queues. Changes are modeled as adding and/or deleting transitions of the composite state machine that represents the distributed system. The work in Ref. [7] is similar to ours in two respects: (1) the authors formally represent change in a system, although their approach targets distributed computations and is based on model checking, whereas ours targets sequential computation and is based on theorem proving; and (2) their approach prunes the global state space using partial order reduction to infer the valid transitions when a change occurs, whereas ours deals with change at the source-code level, with the validity of a change inferred by our logical calculus. In Ref. [19], Subramaniam et al. enhance the approach of Ref. [7]. Changes are still represented by adding and/or deleting transitions of a composite state machine, but this work addresses the issue of test suite coverage when changes occur. It detects the affected tests based on whether they include the affected transitions. Using formal verification techniques similar to those presented in Ref. [7], the authors are able to reduce the total regression test suite to the tests that remain relevant after a given change. Our approach goes in a different direction by formally characterizing the source-code change and determining whether the changes to the current source code version are logically consistent with its properties and the future desired properties.
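The test-selection idea of Ref. [19] can be sketched as follows: if each test is recorded as the sequence of transitions it exercises, then a test is affected by a change exactly when its sequence includes an added or deleted transition. This is only an illustrative reconstruction of the criterion stated above, not the authors' implementation; the state and test names are made up.

```python
def select_regression_tests(tests, changed_transitions):
    """Keep only tests whose transition sequence touches a changed
    (added or deleted) transition; the rest need not be rerun."""
    changed = set(changed_transitions)
    return [name for name, path in tests.items()
            if any(t in changed for t in path)]

# Each test is the list of (source, target) transitions it covers.
tests = {
    "t1": [("s0", "s1"), ("s1", "s2")],
    "t2": [("s0", "s3")],
}
# Suppose a change deletes the transition s1 -> s2.
print(select_regression_tests(tests, [("s1", "s2")]))  # ['t1']
```

Only `t1` exercises the deleted transition, so the regression suite shrinks from two tests to one.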
Exploration strategies for balancing efficiency and comprehensibility in model checking with ant colony optimization
Published in Journal of Information and Telecommunication, 2022
Tsutomu Kumazawa, Munehiro Takimoto, Yasushi Kambayashi
We next discuss the internal issues of our experiments. First, our prototype implementation has much room for improvement, especially in terms of performance. We are planning to incorporate various optimization techniques of model checking, e.g. partial order reduction (Chicano & Alba, 2008a). Since our implementation runs sequentially, we can also improve the performance of the proposed strategies and our search procedure by parallelizing them. Second, our parameter settings for ACOhg can affect its convergence efficiency. Although our settings are based on those established in a previous study (Chicano & Alba, 2008b), more efficient settings may exist for each benchmark. However, such settings can be too specific to the problem domain and can hamper generalization. Third, because ACO is a stochastic search algorithm, we have to draw many samples and report the averaged performance. Unfortunately, it is very difficult to determine the appropriate sample size in general. In our experiments, we sampled thirty runs independently for each condition. We believe that our sample size is reasonable.
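The sampling protocol described above (many independent runs of a stochastic search, reported as an average) can be sketched generically. The search function below is a toy stand-in, not ACOhg; the 10% per-iteration success probability and the use of seeds for independent runs are assumptions made for illustration.

```python
import random
import statistics

def stochastic_search(seed):
    """Toy stand-in for a stochastic search such as ACO: counts the
    iterations needed until a (simulated) success event occurs."""
    rng = random.Random(seed)
    iterations = 0
    while True:
        iterations += 1
        if rng.random() < 0.1:   # assumed 10% success chance per iteration
            return iterations

# Thirty independent runs, as in the experiments described above.
runs = [stochastic_search(seed) for seed in range(30)]
print(len(runs), round(statistics.mean(runs), 1), round(statistics.stdev(runs), 1))
```

Reporting the mean together with a dispersion measure (here the standard deviation) is what makes comparisons between stochastic strategies meaningful.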