Visual Search for Objects in a Complex Visual Context: What We Wish to See
Published in Evaggelos Spyrou, Dimitris Iakovidis, Phivos Mylonas, Semantic Multimedia Analysis and Processing, 2017
Hugo Boujut, Aurélie Bugeau, Jenny Benois-Pineau
Now that a good representation of each image or video has been extracted, the problem of classification or retrieval can be addressed. In the case of image retrieval or indexing, the goal is to find, within a database, the image(s) that best match a query image given by the user. In the context of classification, the purpose is to assign the image to the category to which it corresponds. The categories are defined beforehand by the user, and a learning phase is necessary to learn the most important properties of each category. When relying on bag-of-visual-words (BoVW) approaches, at the end of the pooling step every object or image is represented by one histogram over the visual dictionary. In this section, we will see how these histograms can be used for image or object retrieval on the one hand and for classification on the other.
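To make this concrete, the sketch below illustrates how such BoVW histograms might drive both retrieval and classification. It is a minimal illustration, not the chapter's implementation: the function names, the choice of histogram intersection as the similarity measure, and the k-nearest-neighbour classifier are assumptions made here for the example.

```python
import numpy as np

def l1_normalize(hist):
    """Normalize a BoVW histogram so images with different numbers of features are comparable."""
    total = hist.sum()
    return hist / total if total > 0 else hist

def histogram_intersection(h1, h2):
    """Similarity between two L1-normalized histograms (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def retrieve(query_hist, database_hists, top_k=5):
    """Rank database images by similarity of their BoVW histograms to the query."""
    q = l1_normalize(query_hist)
    scores = np.array([histogram_intersection(q, l1_normalize(h)) for h in database_hists])
    order = np.argsort(scores)[::-1]          # most similar first
    return order[:top_k], scores[order[:top_k]]

def classify_knn(query_hist, training_hists, training_labels, k=3):
    """Assign the query image to the category most frequent among its k nearest neighbours."""
    idx, _ = retrieve(query_hist, training_hists, top_k=k)
    labels = [training_labels[i] for i in idx]
    return max(set(labels), key=labels.count)
```

In this sketch the same histogram comparison serves both tasks: retrieval returns the best-ranked database entries directly, while classification takes a majority vote over the labels of the nearest training histograms.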
Computer Analysis of Mammograms
Published in Paolo Russo, Handbook of X-ray Imaging, 2017
Chisako Muramatsu, Hiroshi Fujita
Content-based image retrieval methods have been actively studied for a couple of decades in the fields of computer vision and medical informatics. For assisting breast lesion classification, Qi and Snyder (1999) investigated image retrieval of masses on mammograms based on the feature vector distance between a query and images in the archive. Sklansky et al. (2000) proposed a mapped-database system, which provides a biopsy recommendation and a relational map of a query and images in the database using a visual neural network for microcalcification clusters, as shown in Figure 60.2. Giger et al. (2002) developed a similar image retrieval system, called an intelligent workstation, as illustrated in Figure 60.3, which provides the likelihood of malignancy of a query mass and selects similar images on the basis of a single feature, multiple features, or the likelihood of malignancy.
Image Retrieval
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Basically, there are two image retrieval frameworks: text-based image retrieval and content-based image retrieval (CBIR). Text-based image retrieval can be traced back to the late 1970s. In traditional text-based image retrieval systems, images were first annotated with text and then searched using a text-based approach from traditional database management systems. Through text descriptions, images can be organized by semantic topics to facilitate easy navigation and browsing. However, since automatically generating descriptive texts for a wide spectrum of images is not feasible, most text-based image retrieval systems require manual annotation of images, which is a tedious and expensive task for large image databases and makes traditional text-based methods unscalable. Moreover, because manual annotations are usually subjective, imprecise, and incomplete, text-based methods inevitably return inaccurate and mismatched results.
New Weighted Mean-Based Patterns for Texture Analysis and Classification
Published in Applied Artificial Intelligence, 2021
Hadis Heidari, Abdolah Chalechale
With the increasing growth of digital images available on the Internet in recent years, methods for retrieving information from massive image datasets have become a subject of great interest. Image retrieval is the task of searching for an image in an image dataset; it has many applications in technology and science, including machine vision, information security, and biometric systems (Gao et al. 2014). In image retrieval systems, the similarity between an input image and the images in the dataset is calculated with a distance criterion based on visual contents such as shape, color, and texture. Texture is a particularly important feature: in content-based image retrieval (CBIR), it is prized for its ability to provide useful features. Despite many advances in this field, there are still many challenges in managing large image datasets for biometric identity recognition purposes. The main challenge in the use of biometric systems is how to achieve a high recognition rate, especially when working with massive datasets. The systems initially developed for this purpose operated on color, shape, and texture. Over the years, however, the need for higher image retrieval performance has led to the development of more advanced methods based on texture information. The image retrieval solution proposed in this work aims to improve image retrieval performance based on texture features.
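As an illustration of such a distance criterion over texture, the sketch below ranks dataset images by a chi-square distance between local binary pattern (LBP) histograms. It is a generic example, not the weighted mean-based patterns proposed in this work; the grayscale input assumption, the LBP parameters, and the use of scikit-image's local_binary_pattern are choices made here for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern  # scikit-image

P, R = 8, 1  # 8 neighbours sampled on a circle of radius 1 (illustrative choice)

def lbp_histogram(gray_image):
    """Describe an image's texture as a normalized histogram of uniform LBP codes."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square_distance(h1, h2, eps=1e-10):
    """Distance criterion between two texture histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def rank_by_texture(query_gray, dataset_grays):
    """Order dataset images by texture similarity to the query image."""
    q = lbp_histogram(query_gray)
    dists = [chi_square_distance(q, lbp_histogram(img)) for img in dataset_grays]
    return np.argsort(dists)  # indices of the most similar images first
```

Any texture descriptor with a comparable histogram form could be substituted for the LBP step without changing the ranking logic.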
Content-based image retrieval: A review of recent trends
Published in Cogent Engineering, 2021
Ibtihaal M. Hameed, Sadiq H. Abdulhussain, Basheera M. Mahmmod
Massive image databases have been generated by educational, industrial, medical, social, and other facilities. All these image repositories require a powerful image search mechanism. There are two common search methods. The first is based on keywords used to annotate images and is known as text-based image retrieval (Y. Liu et al., 2007). This method suffers from several disadvantages: 1) manually annotating large databases is not feasible, 2) the end user must make the annotations, which in turn makes this method subject to human perception, and 3) the annotations are applicable to only one language. The second method is “content-based image retrieval” (CBIR), which is highly recommended for overcoming the disadvantages of text-based image retrieval methods (Raghunathan & Acton, 1999).
Content-based image retrieval using block truncation coding based on edge quantization
Published in Connection Science, 2020
Yan-Hong Chen, Ching-Chun Chang, Cheng-Yi Hsu
In recent years, image retrieval has received a lot of attention. Image retrieval methods are classified into three major categories, i.e. text-based image retrieval (TBIR) (Farruggia et al., 2014; Hong et al., 1998; Moraleda, 2012; Squire et al., 2000), content-based image retrieval (CBIR), and semantic-based image retrieval (SBIR) (Caicedo et al., 2011; Liu et al., 2007; Palandurkar & Karale, 2019; Wu et al., 2018). TBIR transplants traditional text retrieval technology to image retrieval: images are retrieved using keywords annotated on them (Alzubi et al., 2015). However, when the image database is very large, manual annotation becomes very difficult. More importantly, the annotation is subjective and uncertain, so it cannot fully meet users’ requirements. Thus, CBIR was proposed to overcome the limitations of TBIR. CBIR typically retrieves images using low-level features, such as colour, texture, shape, contour, spatial relationships, and other characteristics. Visual features cannot characterise semantic content completely, but they are easier to integrate into mathematical formulations (Squire et al., 2000). CBIR is concerned with the similarity of visual features, whereas SBIR focuses on the associations between high-level concepts and low-level features. In general, it is difficult to extract semantic features.
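As a small illustration of retrieval by low-level features, the sketch below matches a query image against a database purely on colour content. It is a generic colour-histogram example, not the block-truncation-coding method of this article; the bin count, the L1 distance, and the assumption of 8-bit RGB inputs are illustrative choices.

```python
import numpy as np

BINS = 8  # quantisation level per colour channel (assumed for the example)

def colour_histogram(rgb_image):
    """Joint colour histogram over a coarsely quantised RGB cube, L1-normalised."""
    pixels = rgb_image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(BINS, BINS, BINS),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Simple dissimilarity between two colour histograms."""
    return np.abs(h1 - h2).sum()

def cbir_query(query_rgb, database_rgbs, top_k=10):
    """Return indices of the database images whose colour content best matches the query."""
    q = colour_histogram(query_rgb)
    dists = [l1_distance(q, colour_histogram(img)) for img in database_rgbs]
    return np.argsort(dists)[:top_k]
```

In practice such colour features are usually combined with texture and shape descriptors, since a single low-level cue rarely captures the semantic content users have in mind.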