Findset, Find Min, and Find Word
Published in Advanced Data Structures, 2019
Suman Saha, Shailendra Shukla
The disjoint-set data structure is characterized by its operations and provides a way to reason about data that is arranged as a disjoint collection of sets. The operations are described below.

makeset(x): The makeset(x) operation takes an element x as input and returns a new singleton set represented by x itself.

union(x, y): The union(x, y) operation takes two sets, represented by the elements x and y, as inputs and unites them, returning a new set consisting of the elements of both sets.

findset(x): The findset(x) operation takes an element as input and returns the representative of the set to which it belongs.

Find connected components of a graph G(V, E)

In this example, we present an algorithm to compute the connected components of a given graph. The algorithm is very straightforward and is given below; a runnable sketch follows it.

Algorithm to determine connected components of a graph G(V, E)
1: procedure CONNECTED-COMPONENTS(G(V, E)) ⊳ computes connected components of a graph
2: for ∀ v ∈ V do
3:   makeset(v) ⊳ creates |V| singleton sets
4:   ⊳ remember the representatives
5: for ∀ (u, v) ∈ E do
6:   if findset(u) ≠ findset(v) then
7:     union(u, v) ⊳ unite if in different sets
8:     ⊳ each union reduces the number of representatives
9: return representatives ⊳ one representative for each component
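As a concrete illustration, here is a minimal Python sketch of the three operations, with path compression and union by rank for efficiency, applied to the connected-components algorithm above. The class name DisjointSet and the helper connected_components are ours, not the chapter's.

class DisjointSet:
    """Disjoint-set (union-find) with path compression and union by rank."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def makeset(self, x):
        # New singleton set, represented by x itself.
        self.parent[x] = x
        self.rank[x] = 0

    def findset(self, x):
        # Follow parent pointers to the representative, compressing the path.
        if self.parent[x] != x:
            self.parent[x] = self.findset(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Attach the shallower tree under the deeper tree's representative.
        rx, ry = self.findset(x), self.findset(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

def connected_components(vertices, edges):
    ds = DisjointSet()
    for v in vertices:
        ds.makeset(v)                       # |V| singleton sets
    for u, v in edges:
        if ds.findset(u) != ds.findset(v):  # unite if in different sets
            ds.union(u, v)
    # One representative per connected component.
    return {ds.findset(v) for v in vertices}

# Example: two components, {1, 2, 3} and {4, 5}.
print(connected_components([1, 2, 3, 4, 5], [(1, 2), (2, 3), (4, 5)]))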
Estimation of raw silk quality using rough set theory
Published in The Journal of The Textile Institute, 2022
Niharendu Bikash Kar, Anindya Ghosh, Subhasis Das, Debamalya Banerjee
Suppose a data set is defined as an information system S, where S = (U, A), X ⊆ U and B ⊆ A, and where U and A are finite, nonempty sets called the universe and the set of attributes, respectively. Set A contains two disjoint sets of attributes called condition and decision attributes; the system is then denoted by S = (U, C, D), where C is the set of condition attributes and D is the set of decision attributes. With every attribute a ∈ A, a set Va of its values is associated, which is called the domain of a. Any subset B of A determines a binary relation I(B) on U, which is called an indiscernibility relation and defined as (x, y) ∈ I(B) if and only if a(x) = a(y) for every a ∈ B, where a(x) denotes the value of attribute a for element x. Obviously, I(B) is an equivalence relation. When (x, y) belongs to I(B), x and y are called B-indiscernible (indiscernible with respect to B). Equivalence classes of the relation I(B) are referred to as B-elementary sets or B-granules. The lower approximation B̲(X) and the upper approximation B̄(X) of X are expressed as follows:

B̲(X) = {x ∈ U : B(x) ⊆ X},
B̄(X) = {x ∈ U : B(x) ∩ X ≠ ∅},

where B(x) denotes the B-elementary set containing x.
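A minimal Python sketch of these definitions, computing the B-granules and the two approximations for a toy information system (the names elementary_sets and approximations, and the sample table, are illustrative, not from the paper):

from collections import defaultdict

def elementary_sets(U, B, value):
    """Partition the universe U into B-elementary sets (B-granules).
    value(x, a) returns the value of attribute a for object x."""
    granules = defaultdict(set)
    for x in U:
        # Objects with identical attribute tuples are B-indiscernible.
        key = tuple(value(x, a) for a in B)
        granules[key].add(x)
    return list(granules.values())

def approximations(U, B, value, X):
    """Return the B-lower and B-upper approximations of X ⊆ U."""
    lower, upper = set(), set()
    for g in elementary_sets(U, B, value):
        if g <= X:   # granule lies entirely inside X
            lower |= g
        if g & X:    # granule meets X
            upper |= g
    return lower, upper

# Toy information system: two condition attributes over five objects.
table = {1: (0, 'a'), 2: (0, 'a'), 3: (1, 'b'), 4: (1, 'b'), 5: (1, 'c')}
value = lambda x, a: table[x][a]
low, up = approximations(set(table), [0, 1], value, {1, 3})
print(low, up)

Running this on X = {1, 3} gives an empty lower approximation and upper approximation {1, 2, 3, 4}: no granule lies entirely inside X, while two granules intersect it.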
On the optimal separating hyperplane for arbitrary sets: a generalization of the SVM formulation and a convex hull approach
Published in Optimization, 2022
Ademir A. Ribeiro, Mael Sachine
In this paper we generalize the concepts of linear separability of sets by means of their convex hulls. In the context of the Support Vector Machine (SVM), separability is established only for finite (and consequently, bounded and closed) sets. In this work we deal with arbitrary sets, allowing infinite, unbounded and not necessarily closed sets. In particular, we extend the SVM formulation of the problem of finding the optimal separating hyperplane to arbitrary linearly separable sets. To be precise, consider two disjoint sets A, B ⊂ ℝⁿ and define the function f(w) = ½‖w‖². The problem we consider is then formulated as

minimize f(w)
subject to ⟨w, x⟩ + b ≥ 1 for all x ∈ A,
           ⟨w, y⟩ + b ≤ −1 for all y ∈ B,

where w ∈ ℝⁿ and b ∈ ℝ. The solution of this problem provides the parameters (w, b) that define the largest slab that separates the sets A and B. Note that this problem may have infinitely many constraints. Even so, we shall prove existence and uniqueness of the solution. Moreover, we establish that the solution does not change if we drop all the inactive constraints. Still on the subject of separability together with convex hulls, we provide some examples and counterexamples to many properties discussed in the text and statements in the literature.
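For finite sets the problem above is an ordinary quadratic program with one constraint per point. A minimal sketch of that finite case in Python with cvxpy (an assumed dependency; the two point clouds below are invented stand-ins for A and B):

import cvxpy as cp
import numpy as np

# Two linearly separable point clouds standing in for the sets A and B.
A = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]])
B = np.array([[-1.0, -1.0], [-2.0, 0.0], [-1.5, -2.0]])

n = A.shape[1]
w = cp.Variable(n)
b = cp.Variable()

# Hard-margin SVM: minimize ½‖w‖² with one constraint per point.
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [A @ w + b >= 1, B @ w + b <= -1]
cp.Problem(objective, constraints).solve()

# The slab {x : -1 ≤ ⟨w, x⟩ + b ≤ 1} has width 2/‖w‖.
print("w =", w.value, "b =", b.value,
      "slab width =", 2 / np.linalg.norm(w.value))

For arbitrary (possibly infinite) sets, as the excerpt notes, the constraint family is infinite; the finite program here only illustrates the shape of the formulation.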
The value of options for time charterparty extension: an artificial neural networks (ANN) approach
Published in Maritime Policy & Management, 2018
Heesung Yun, Sangseop Lim, Kihwan Lee
After randomization, the data was divided into two disjoint sets to form a training set and a test set. In most cases, the sample data is split into two data sets, training and test, to assess the validity of a model (Paliwal and Kumar 2009). The training set was used to find the parameters of the networks, and the test set, which was not used in training, to validate the model. The split ratio between the training set and the test set was 70:30. The most important point in machine learning is to avoid over- or under-fitting the model. According to Dreyfus (2005), there are two strategies to prevent the overfitting of a neural network model. The passive approach is to discard models that reveal overfitting through cross-validation, whereas the active approach is to limit the magnitude of the parameters through regularization. Early stopping and penalty methods such as weight decay are the most frequently used regularization techniques. Regularization is used when it is not possible to prevent overfitting by controlling the number of parameters alone; hence, it is mostly used in classification or visual pattern recognition problems.
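A minimal sketch of the split-and-regularize procedure described here, using scikit-learn with synthetic data in place of the charter-rate series (every name and value below is illustrative, not from the paper):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the freight-market features and option values.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([0.5, -1.0, 2.0, 0.3]) + rng.normal(scale=0.1, size=500)

# 70:30 split into disjoint training and test sets, after shuffling.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Weight decay (the L2 penalty `alpha`) plus early stopping as regularizers.
model = MLPRegressor(hidden_layer_sizes=(16,), alpha=1e-3,
                     early_stopping=True, validation_fraction=0.15,
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The held-out test set validates the fitted model.
print("test R^2:", model.score(X_test, y_test))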