In-Process Measurement in Manufacturing Processes
Published in Wasim Ahmed Khan, Ghulam Abbas, Khalid Rahman, Ghulam Hussain, Cedric Aimal Edwin, Functional Reverse Engineering of Machine Tools, 2019
Ahmad Junaid, Muftooh Ur Rehman Siddiqi, Riaz Mohammad, Muhammad Usman Abbasi
After converting the grayscale image into a binary image using Otsu's segmentation technique, the next step is to find the target object in the image. For this, the objects were labeled using the connected component tool. Objects can be connected under different connectivity schemes. A set of pixels that forms a connected group in a binary image constitutes a connected component. Connected component labeling identifies these components and assigns each object a unique label.
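The pipeline described above (Otsu binarization followed by connected component labeling) can be sketched as follows. This is an illustrative implementation, not the chapter's own code; the function name and the use of `scipy.ndimage.label` are assumptions.

```python
import numpy as np
from scipy import ndimage

def label_objects(gray, connectivity=8):
    """Binarize an 8-bit grayscale image with Otsu's threshold, then assign
    each connected group of foreground pixels a unique integer label.
    Illustrative sketch of the chapter's pipeline."""
    # Otsu's method: pick the threshold maximizing between-class variance.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                      # weight of the background class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0                    # weight of the foreground class
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    binary = gray > best_t
    # 8-connectivity: diagonal neighbours join the same component.
    structure = np.ones((3, 3)) if connectivity == 8 else None
    labels, n = ndimage.label(binary, structure=structure)
    return labels, n
```

For example, an image containing two bright, non-adjacent regions on a dark background yields two labeled components.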
Verification experiment of stocking and disposal tasks by automatic shelf and mobile single-arm manipulation robot
Published in Advanced Robotics, 2022
Junya Tanaka, Daisuke Yamamoto, Ryo Nakashima, Yuto Yamaji, Hiroshi Ohtsu, Keisuke Kamata, Kohei Nara, Tetsuya Asayama
The details of the pose detection algorithm are described below. We first estimate blobs belonging to each object using depth images. Thanks to the automatic shelf, we can assume that the relative pose between the shelf and the RGB-D camera is constant. Therefore, we can easily extract depth measurements belonging to the objects by picking up the measurements whose depths are smaller than that of the shelf (i.e. higher in the 3D space). We then group them using connected component labeling to obtain a blob representing each object. For each blob, we estimate the maximum height to determine whether the object is lying on the shelf or standing by itself. We use the detection from RGB images (Sec. 3.1.1) in the former case, since we can expect that matchable textures are visible. In the latter case, we use the detection from the depth images, which we describe in Sec. 3.1.2.
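The depth-based blob extraction step above can be sketched as follows. The function name, the `margin` parameter, and the use of `scipy.ndimage.label` are illustrative assumptions; the paper does not give its implementation.

```python
import numpy as np
from scipy import ndimage

def extract_blobs(depth, shelf_depth, margin=0.01):
    """Segment object blobs from a depth image taken by a camera fixed
    relative to the shelf. Pixels closer to the camera than the shelf
    surface (minus a small margin) are treated as object measurements;
    connected component labeling groups them into per-object blobs.
    Illustrative sketch, not the authors' code."""
    # Closer than the shelf surface means higher in 3D space.
    mask = depth < (shelf_depth - margin)
    labels, n = ndimage.label(mask)
    blobs = []
    for i in range(1, n + 1):
        blob = labels == i
        # Maximum height above the shelf: how much closer the nearest
        # blob pixel is than the shelf surface.
        max_height = shelf_depth - depth[blob].min()
        blobs.append({"mask": blob, "max_height": max_height})
    return blobs
```

The `max_height` of each blob could then be compared against a per-object threshold to decide whether the object is lying down or standing.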
Pulmonary Nodule Detection in CT Images Using Optimal Multilevel Thresholds and Rule-based Filtering
Published in IETE Journal of Research, 2022
Satya Prakash Sahu, Narendra D Londhe, Shrish Verma
The ROIs obtained after application of multilevel thresholds are the potential candidates of different types of nodules, vessels, or noise. For removing the non-nodule objects (vessels and noise), a rule-based nodule detection algorithm is proposed. Significant 2D and 3D features of nodules are selected to formulate nine if-then-else rules, defined such that if a given ROI satisfies the conditions it is accepted as a nodule candidate; otherwise it is rejected as a non-nodule candidate. The 2D-connectivity connected component labeling of the object follows the 8-connectedness scheme shown in Figure 4. The 2D features used to frame the rules are Area (A), Major-Axis length (MJA), Minor-Axis length (MNA), Roundness or Circularity (CR), and Longness (LN), obtained by taking the ratio of MNA to MJA. To classify nodule and non-nodule candidates, each feature is compared with its respective maximum and minimum cut-off values. Here [Amin, Amax], [MJAmin, MJAmax], and [MNAmin, MNAmax] represent the range values of Area, Major-Axis length, and Minor-Axis length, respectively, of a nodule candidate for the evaluation of nodule size. If the object's area, major-axis length, and minor-axis length are smaller than the minimum cut-off values, it may not be a nodule candidate and is rejected; similarly, if they are larger than the maximum cut-off values, it is classified as a non-nodule candidate. CRc and LNc are the cut-off or threshold values of Circularity and Longness.
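The 2D portion of this rule scheme can be sketched as below. The cut-off values here are placeholders purely for illustration; the paper derives its own cut-offs, and the circularity formula (4πA/P²) is a common definition assumed here, not quoted from the paper.

```python
import math

# Placeholder cut-off values (assumptions for illustration only).
A_MIN, A_MAX = 9.0, 500.0       # area range [Amin, Amax] in pixels
MJA_MIN, MJA_MAX = 3.0, 30.0    # major-axis length range
MNA_MIN, MNA_MAX = 3.0, 30.0    # minor-axis length range
CR_C = 0.6                      # circularity cut-off CRc
LN_C = 0.4                      # longness cut-off LNc

def is_nodule_candidate(area, mja, mna, perimeter):
    """Apply if-then-else rules on 2D features to keep nodule candidates
    and reject vessels/noise (a sketch of the paper's rule-based scheme)."""
    cr = 4.0 * math.pi * area / perimeter ** 2   # circularity (assumed formula)
    ln = mna / mja                               # longness = MNA / MJA
    if area < A_MIN or area > A_MAX:
        return False    # outside the plausible nodule size range
    if mja < MJA_MIN or mja > MJA_MAX:
        return False
    if mna < MNA_MIN or mna > MNA_MAX:
        return False
    if cr < CR_C:
        return False    # not round enough, likely a vessel segment
    if ln < LN_C:
        return False    # too elongated to be a nodule
    return True
```

A roughly circular object (e.g. a disc of radius 5: area ≈ 78.5, perimeter ≈ 31.4, equal axis lengths) passes all rules, while a thin elongated object fails the axis-length and longness rules.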
A raster-based typification method for multiscale visualization of building features considering distribution patterns
Published in International Journal of Digital Earth, 2022
Yilang Shen, Jingzhong Li, Ziqi Wang, Rong Zhao, Lu Wang
In the field of map generalization, superpixel technology was first applied to simplify polygonal and linear features (Shen et al. 2018b), and the authors analysed the similarities and differences between superpixel segmentation and map generalization. Subsequently, superpixel technology has been applied to building aggregation (Shen et al. 2019a), building simplification (Shen, Ai, and Li 2019b) and collapse (Shen, Ai, and Yang 2019c). Following this direction, in this paper, superpixel technology was applied to another important generalization operator: typification. After performing connected component labeling, superpixel segmentation algorithms are used for grouping. To generate relatively regularly shaped superpixels without manually setting the compactness parameter, the zero version of the simple linear iterative clustering (SLICO) method (Achanta et al. 2012) is applied. Compactness, which is the ratio of the perimeter to the area of the object, is used to measure the shape of an object. As shown in Figure 3(e, b and h), the superpixel sizes used for the first segmentation are 20, 25 and 30 pixels, respectively. The generated superpixel tends to have a hexagonal shape in blank regions, and it presents slight shape changes in building regions. When the boundary of a superpixel and the boundary of a building connected component overlap, this superpixel and building connected component are connected. When a superpixel (marked in yellow) generated by the first segmentation connects two or more connected components of buildings, these connected components will be grouped. In Figure 3, after grouping via superpixel analysis, the linear buildings are divided into one group, the grid buildings are divided into six groups, and the irregular buildings are divided into four groups.
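The grouping step described above (merging building connected components that share a bridging superpixel) can be sketched with a union-find structure. This assumes the SLICO segmentation and the building connected component labeling have already been computed upstream; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def group_components(components, superpixels):
    """Group building connected components bridged by a common superpixel:
    if one superpixel overlaps two or more components, those components are
    merged into the same group. Union-find sketch of the grouping step."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Any superpixel overlapping several components connects them.
    for sp in np.unique(superpixels):
        comps = np.unique(components[superpixels == sp])
        comps = comps[comps > 0]            # label 0 = background
        for c in comps[1:]:
            union(comps[0], c)

    # Collect component ids by their union-find root.
    groups = {}
    for c in np.unique(components):
        if c > 0:
            groups.setdefault(find(c), []).append(int(c))
    return list(groups.values())
```

For instance, if one superpixel overlaps components 1 and 2 while component 3 touches only its own superpixels, the result is the groups {1, 2} and {3}.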