Image Descriptors and Features
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
Corner detection is an approach used in computer vision to extract certain kinds of features and infer the contents of an image. Many computer vision applications require relating two or more images to extract information from them; for example, two successive video frames can be matched to acquire information. Rather than matching every point of two images, the images can be related by matching only those specific locations that are in some way interesting. Such points are termed interest points, and they can be located using an interest point detector. Corner points, formed where two or more edges meet, are interesting points in a scene. In many computer vision applications, one very important step is to detect interest points with corner detectors in order to find corresponding points across multiple images. This chapter discusses some popular corner point and interest point detection algorithms and introduces the concept of saliency.
Defining quality metrics for photogrammetric construction site monitoring
Published in Jan Karlshøj, Raimar Scherer, eWork and eBusiness in Architecture, Engineering and Construction, 2018
Felix Eickeler, A. Borrmann, Arnaud Mistre
Warping of the 3D reconstruction, a geometrical distortion caused by the rotation of single elements, is important when analyzing deviation in construction: the error of the reconstruction adds to the building errors of the construction site. To identify warping relative to the ground truth, we can use a projective transformation matrix; the needed point correspondences can be found by applying corner detection to both datasets. While there are different algorithms for corner detection, we achieved good results with the Harris corner detector (Harris and Stephens, 1988). We consider the non-Euclidean factors as warping:

$$H_w = \begin{bmatrix} I & \mathbf{0} \\ \mathbf{v}^{\top} & \vartheta \end{bmatrix}$$
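As an illustrative sketch of this step (not the authors' implementation), the projective transformation relating matched corner points in the two datasets can be estimated with the direct linear transform (DLT). The function name and interface below are assumptions; at least four non-collinear correspondences are required.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 projective transformation (homography) mapping
    src points to dst points via the direct linear transform (DLT).
    src, dst: (N, 2) arrays of matched corner coordinates, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The homography is the right singular vector of A with the
    # smallest singular value (the null space of A).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

For example, corners related by a pure translation of (1, 2) yield a homography whose last column carries that translation.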
Segmentation and Edge/Line Detection
Published in Scott E. Umbaugh, Digital Image Processing and Analysis, 2017
A more robust corner detector is the Harris corner detection algorithm, developed by Harris and Stephens in 1988. The Harris method consists of five steps: (1) blur the image with a 2D Gaussian convolution mask; (2) find the approximate brightness gradients in two perpendicular directions, for example, with the two Prewitt or Sobel masks; (3) blur the two gradient results with a 2D Gaussian; (4) find the corner response function, CRF(r,c); and (5) threshold the CRF and apply nonmaxima suppression, as was done with the Canny algorithm.
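The five steps above can be sketched as follows. This is an illustrative NumPy/SciPy implementation, not the book's code; it interprets step 3 as smoothing the gradient products (as in the standard Harris formulation) and uses the common response function R = det(M) − k·trace(M)², where k ≈ 0.04 and the threshold fraction are assumed parameter choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, maximum_filter

def harris_corners(image, sigma=1.0, k=0.04, thresh=0.01):
    """Sketch of the five-step Harris detector described above.
    image: 2D array. Returns an (N, 2) array of (row, col) corners."""
    img = gaussian_filter(image.astype(float), sigma)   # (1) blur the image
    ix = sobel(img, axis=1)                             # (2) gradients via
    iy = sobel(img, axis=0)                             #     Sobel masks
    ixx = gaussian_filter(ix * ix, sigma)               # (3) blur the
    iyy = gaussian_filter(iy * iy, sigma)               #     gradient
    ixy = gaussian_filter(ix * iy, sigma)               #     products
    # (4) corner response function: R = det(M) - k * trace(M)^2
    crf = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
    # (5) threshold the CRF and keep only 3x3-neighbourhood maxima
    mask = (crf > thresh * crf.max()) & (crf == maximum_filter(crf, size=3))
    return np.argwhere(mask)
```

On a synthetic white square against a black background, the detections cluster near the square's four corners, while responses along the straight edges fall below the threshold.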
A progressive method for the collapse of river representation considering geographical characteristics
Published in International Journal of Digital Earth, 2020
Yilang Shen, Tinghua Ai, Jingzhong Li, Lina Huang, Wende Li
Then, the corner of each superpixel is detected to determine which pixels should be further segmented. In general, a corner is a local characteristic point with two distinct extending directions that represents the critical local characteristics of an image; for example, a point of locally maximal curvature on an edge, the endpoint of a line, or an isolated point with a local intensity maximum can all be considered corners. Over the past few decades, many algorithms for corner detection have been proposed (Kitchen and Rosenfeld 1982; Smith and Brady 1997; Rosenfeld and Johnston 1973; Liu and Srinath 1990; Chabat, Yang, and Hansell 1999). However, the Harris corner detection method (Harris and Stephens 1988) is applied here to detect local characteristics in image data for two reasons: (1) it is easy to implement and efficient, and (2) it has scale, rotation and brightness invariance characteristics. The fundamental principle of the Harris corner detection method is to detect the displacement differences in different directions in a mathematical form (Harris and Stephens 1988). Since the method was first developed by Harris and Stephens in 1988, it has been widely improved upon and applied in many computer vision applications, such as image registration (Kang et al. 2011), object recognition (Lowe 1999), and object tracking (Anitha and Deepa 2014).
Vision-based portable yarn density measure method and system for basic single color woven fabrics
Published in The Journal of The Textile Institute, 2018
Zhong Xiang, Jianfeng Zhang, Xudong Hu
The corners of the calibration square should be detected for both the preliminary and accurate correction processes. In our research, the Shi–Tomasi corner detector is adopted (Shi & Tomasi, 1994). From the Shi–Tomasi point of view, corners can be detected by computing the saliency of each pixel from the local texture surrounding it. In particular, the corner detection criterion is based on a confidence value calculated for each pixel. Considering the image gray-value array I(x, y), where x and y are the horizontal and vertical pixel indexes, respectively, the sum of squared differences between a window in I(x, y) and its shifted counterpart can be expressed as (Shi & Tomasi, 1994)
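The referenced expression is not reproduced in this excerpt; in the standard Shi–Tomasi/Harris formulation it takes the following form, where W is the local window, w(x, y) a window weighting function, and (Δx, Δy) the window shift:

```latex
E(\Delta x, \Delta y) = \sum_{(x, y) \in W} w(x, y)\,
\bigl[\, I(x + \Delta x,\; y + \Delta y) - I(x, y) \,\bigr]^{2}
```

A pixel is accepted as a corner when this error grows rapidly for shifts in every direction, which Shi and Tomasi test through the eigenvalues of the associated second-moment matrix.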
Keypoint-based passive method for image manipulation detection
Published in Cogent Engineering, 2018
Choudhary Shyam Prakash, Hari Om, Sushila Maheshkar, Vikas Maheshkar
The Harris corner detection algorithm (Harris & Stephens, 1988) exploits the fact that image intensity changes strongly in multiple directions at a corner, but only in one particular direction at an edge. To formalize this behavior, a local window is shifted and the resulting change in intensity is measured. When the window is shifted in an arbitrary direction around a corner point, the image intensity changes greatly; around an edge point, it changes greatly only when the window is shifted perpendicular to the edge. Using this observation, the Harris corner detection algorithm bases its corner decisions on a second-order moment matrix.
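In the standard formulation that this excerpt summarizes, the second-order moment matrix M and the Harris corner response R are, for image gradients I_x, I_y and a window weighting w(x, y):

```latex
M = \sum_{(x, y) \in W} w(x, y)
\begin{bmatrix}
I_x^2 & I_x I_y \\
I_x I_y & I_y^2
\end{bmatrix},
\qquad
R = \det(M) - k \, \operatorname{trace}(M)^2
```

When both eigenvalues of M are large, intensity changes in every shift direction and the pixel is a corner (R large and positive); when only one eigenvalue is large, the pixel lies on an edge (R negative). The constant k is empirical, typically chosen in the range 0.04–0.06.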