Marine Trash Detection Using Deep Learning Models
Published in B. K. Mishra, Samarjeet Borah, Hemant Kasturiwale (Eds.), Computing and Communications Engineering in Real-Time Application Development, 2023
Kimbrel Dias, Sadaf Ansari, Ameeta Amonkar
The mAP and IoU are defined as follows: mAP (mean average precision) is the mean of the area under the precision-recall curve [16]. Precision is given by Precision = TP / (TP + FP).
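As an illustration of these detection metrics, the following is a minimal sketch of how precision, IoU, and mAP are commonly computed; the function names and the box format are assumptions for this example, not code from the chapter.

```python
import numpy as np

def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def iou(box_a, box_b) -> float:
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(precisions: np.ndarray, recalls: np.ndarray) -> float:
    """Area under the precision-recall curve for one class,
    approximated by summing precision over recall increments."""
    order = np.argsort(recalls)
    r, p = recalls[order], precisions[order]
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

def mean_average_precision(ap_per_class) -> float:
    """mAP is the mean of the per-class average precisions."""
    return float(np.mean(ap_per_class))
```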
A novel image recognition using Fuzzy C-Means and content-based fabric image retrieval
Published in The Imaging Science Journal, 2023
A. Meenakshi, A. P. Janani, S. Devi Mahalakshmi, S. Vanitha Sivagami
Deng et al. [23] designed a focus ranking scheme integrated with a CNN to retrieve fabric images. The method employed a large fabric image retrieval dataset (FIRD), which includes 25,000 images of 4,300 fabrics. Recall, mean average precision (mAP), Normalized Mutual Information (NMI) score, and F1-score were the performance metrics used for evaluation, and comparison with existing embedding methods showed that the approach outperformed all of them. Luo et al. [24] presented a residual-net-enabled approach to recognize cashmere and sheep wool fibres. Microscopic images of six classes were taken as input for classification. The approach had high recognition efficiency, processing 6,000 sample images within 20 s. The evaluation was compared against previous works such as ResNet18 and InceptionV4, and the results demonstrated that the method reduced computational time and achieved an overall mean accuracy of 97.1%. Motivated by these existing works, this paper presents a CBFIR approach based on texture features and a machine-learning algorithm, as explained in the following sections.
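To make the evaluation metrics named above concrete, the following is a minimal sketch using scikit-learn; the label arrays are placeholders and the macro-averaging choice is an assumption, not taken from the cited works.

```python
from sklearn.metrics import f1_score, recall_score, normalized_mutual_info_score

# Hypothetical ground-truth fabric classes and predicted/retrieved classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

print("Recall (macro):", recall_score(y_true, y_pred, average="macro"))
print("F1-score (macro):", f1_score(y_true, y_pred, average="macro"))
# NMI compares the two label assignments as clusterings.
print("NMI:", normalized_mutual_info_score(y_true, y_pred))
```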
On the search behaviour of users in the context of interactive social book search
Published in Behaviour & Information Technology, 2020
Relevance can be categorised as user relevance and system relevance (Saracevic 2016). The former is covered by the IIR evaluation methodology under the Interactive SBS and consists of log analysis together with pre- and post-experiment, pre- and post-task, and UES scale questionnaires (Bellot et al. 2014; Hall et al. 2014; Gaede et al. 2015; Gäde et al. 2016). The latter is reflected in the performance evaluation of the SBS retrieval systems using evaluation metrics including Precision (P), P@1000, Recall (R), R@1000, Mean Average Precision, Normalised Discounted Cumulative Gain (nDCG), and nDCG@10 (Bellot et al. 2013, 2014; Koolen et al. 2016) to evaluate and compare the submitted runs. The two strands proceed in separate directions, and neither currently benefits from the other. Therefore, a robust evaluation framework is required that fuses these measures in a balanced manner (Koolen et al. 2017). To achieve a common understanding of relevance and exploit it better in the Interactive SBS evaluation campaign, researchers must consider both sides of relevance, working towards the common goal of an efficient retrieval system and higher user satisfaction. According to Bogers et al. (2017), such an IIR evaluation framework must possess essential properties including continuity, complexity, flexibility, realism, and measurability.
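Of the system-side metrics listed, nDCG@k is the least self-explanatory; the sketch below shows the standard formulation (DCG with a log2 rank discount, normalised by the ideal ranking). The relevance grades are illustrative and this is not code from the cited campaigns.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevance grades."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG normalised by the ideal (descending-sorted) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: graded relevance of the top results returned by one submitted run.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=10))
```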
Recommending collaborations with newly emerged services for composition creation in cloud manufacturing
Published in International Journal of Computer Integrated Manufacturing, 2021
Two widely accepted metrics, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG), are adopted in the evaluation. MAP for a recommendation list of length L, i.e., MAP@L, is defined as follows:
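The formula itself is truncated in this excerpt. As a reference point, the sketch below implements the standard MAP@L formulation (AP@L per user, averaged over users); the function names and the set-based relevance check are assumptions, not necessarily the paper's exact definition.

```python
import numpy as np

def average_precision_at_L(recommended, relevant, L):
    """AP@L for one user: precision@k summed at each rank k <= L where the
    recommended item is relevant, divided by min(|relevant|, L)."""
    hits, score = 0, 0.0
    for k, item in enumerate(recommended[:L], start=1):
        if item in relevant:
            hits += 1
            score += hits / k
    return score / min(len(relevant), L) if relevant else 0.0

def map_at_L(all_recommended, all_relevant, L):
    """MAP@L: AP@L averaged over all users (or queries)."""
    return float(np.mean([
        average_precision_at_L(rec, rel, L)
        for rec, rel in zip(all_recommended, all_relevant)
    ]))
```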