Generalized Buffer Insertion
Published in Charles J. Alpert, Dinesh P. Mehta, Sachin S. Sapatnekar, Handbook of Algorithms for Physical Design Automation, 2008
Once the permutation is obtained, thus defining all topologies to be explored, the algorithm proceeds to topology embedding. The routing target is specified by a Hanan grid (note that in Ref. [10] this step of the flow is redesigned to handle a general graph model, giving the capability to account for blockages and congestion). Instead of embedding topologies one at a time, the algorithm exploits the structure of permutation-constrained topologies and achieves polynomial computational complexity while exploring an exponentially large search space. For example, the topologies in Figure 27.7a and b contain an identical subtree over sinks B and D, so the solution for this subtree need only be computed once.
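The sharing of subtree solutions can be illustrated with a small dynamic program. The sketch below is not the handbook's algorithm; it assumes (as in permutation-constrained topologies) that each subtree covers a contiguous run of sinks in the permutation, and uses a hypothetical cost model (`sink_costs`, `merge_cost`) purely to show how memoizing each interval avoids recomputing shared subtrees such as the one containing sinks B and D.

```python
# Minimal sketch: memoized dynamic programming over contiguous intervals of a
# fixed sink permutation. Each interval's best cost is computed only once,
# even though the number of distinct binary topologies grows exponentially.
from functools import lru_cache

def best_topology_cost(sink_costs, merge_cost):
    """sink_costs: hypothetical per-sink costs, in permutation order.
    merge_cost(left, right): hypothetical cost of joining two subtree solutions."""
    n = len(sink_costs)

    @lru_cache(maxsize=None)
    def solve(i, j):
        # Best cost of a subtree covering sinks i..j of the permutation.
        if i == j:
            return sink_costs[i]
        best = float("inf")
        for k in range(i, j):  # try every split into a left and right subtree
            best = min(best, merge_cost(solve(i, k), solve(k + 1, j)))
        return best

    return solve(0, n - 1)

# The number of binary topologies over n sinks is Catalan(n - 1), but the memo
# table has only O(n^2) entries and the recursion runs in O(n^3) time.
print(best_topology_cost([1, 2, 3, 4], lambda a, b: a + b + 1))
```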
Big data for cyber physical systems in industry 4.0: a survey
Published in Enterprise Information Systems, 2019
In the efficiency direction, the first well-known algorithm is Apriori (Agrawal and Srikant 1994), which searches for sets of items that co-occur frequently by exploiting the downward-closure property. Because event occurrence probabilities often follow a power-law distribution, this property lets Apriori prune the exponential search space. Other classical methods of this type include FP-Tree (Han, Pei, and Yin 2000), which uses an extended prefix-tree structure to store compressed itemset information, and ECLAT (Zaki 2000), which stores transaction information in a vertical data layout for fast support counting. Beyond the classical methods, more recent work includes Niche-Aided Gene Expression Programming, a specialized algorithm for gene data (Chen, Li, and Fan 2015), a multi-objective particle swarm optimization for numerical attributes that avoids a priori discretization (Beiranvand, Mobasher-Kashani, and Bakar 2014), and a more efficient candidate-generation algorithm for Apriori (Jiao 2013).

The main problem with this type of algorithm is that it uses co-occurrence as a sub-optimal proxy for correlation. Take supermarket purchases as an example. Bananas and milk are both purchased frequently, while spaghetti and tomato sauce are purchased less often. Ranking by co-occurrence alone suggests that the association between bananas and milk is stronger than that between spaghetti and tomato sauce, which contradicts our intuition. It is therefore more appropriate to use a correlation function that compares the actual co-occurrence against the co-occurrence expected under independence. However, no existing correlation function has the downward-closure property needed to prune the search space. To address this, Duan and Street (2009) proposed a fully-correlated itemset framework that frees correlation functions from having to satisfy the downward-closure property. Although the fully-correlated itemset framework helps to prune itemsets of size 3 and above, it cannot speed up the search for pairs. To speed up the search for pairs, Xiong et al. (2006) exploited a monotone property of the upper bound of the Pearson correlation, and Duan, Street, and Liu (2013) extended this work to correlation functions that satisfy three correlation properties. Another extension in the efficiency direction concerns the search for confounding factors. Because a confounding factor involves three items, the search space is much larger than that for pairs. Zhou and Xiong (2009) proposed a method to speed up this search, but it applies only to a special type of confounding factor, called reversing confounding factors.
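As a small illustration of the point above, the sketch below uses hypothetical transaction data and lift, one of many correlation measures that compare actual against expected co-occurrence under independence (not necessarily the measure used in the cited works): bananas and milk co-occur more often in absolute terms, yet spaghetti and tomato sauce come out as the more strongly correlated pair.

```python
# Hypothetical mini market-basket data: bananas and milk are individually very
# popular, so they often land in the same basket by chance; spaghetti and
# tomato sauce are rarer but tightly linked.
transactions = [
    {"banana", "milk"}, {"banana", "milk"}, {"banana"}, {"milk"},
    {"banana", "milk", "spaghetti", "tomato sauce"},
    {"spaghetti", "tomato sauce"},
    {"banana"}, {"milk"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def lift(a, b):
    # Actual co-occurrence divided by co-occurrence expected under independence.
    return support({a, b}) / (support({a}) * support({b}))

for pair in [("banana", "milk"), ("spaghetti", "tomato sauce")]:
    print(pair, "co-occurrence:", support(set(pair)), "lift:", round(lift(*pair), 2))
# banana/milk: co-occurrence 0.375, lift ~0.96 (no real association)
# spaghetti/tomato sauce: co-occurrence 0.25, lift 4.0 (strong association)
```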