Adaptive Pillar K Means Algorithm To Detect Colon Cancer From Biopsy Samples
Published in T. Kishore Kumar, Ravi Kumar Jatoth, V. V. Mani, Electronics and Communications Engineering, 2019
B. Saroja, A. Selwin Mich Priyadharson
Entropy-based score computation is used for classification. From the entropy-based score, the cluster c1 can easily be classified as normal or abnormal. Entropy is a commonly used information-theoretic measure that characterizes the purity of an arbitrary collection of samples in c1; equivalently, it measures disorder or impurity. Entropy is the negative sum, over labels, of the probability of each label times the log probability of that same label. In our case, we are interested in the entropy of the output values of a set of training outliers. It is at the foundation of information-gain (IG) attribute-ranking methods, and the entropy measure can be regarded as a measure of a system's unpredictability. The entropy of E is A(E) = −Σ_{e∈Y} m(e) log₂(m(e)).
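The entropy formula above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function name and the example labels are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy: -sum over labels of m(e) * log2(m(e)),
    where m(e) is the proportion of samples with label e."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# A pure collection has entropy 0; an even 50/50 split has entropy 1.
print(entropy(["normal"] * 8))                     # 0.0
print(entropy(["normal"] * 4 + ["abnormal"] * 4))  # 1.0
```

An entropy near 0 thus indicates a pure (e.g., all-normal) cluster, while values near 1 indicate maximal impurity for a two-class problem.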
Statistical perspectives on reliability of artificial intelligence systems
Published in Quality Engineering, 2023
Yili Hong, Jiayi Lian, Li Xu, Jie Min, Yueyao Wang, Laura J. Freeman, Xinwei Deng
There are various existing methods for detecting OOD samples in AI technologies. Lee, Hong, et al. (2018) used the intermediate-layer data of a pre-trained CNN to classify outliers in both training and testing datasets: the intermediate-layer features are assumed to follow a multivariate normal distribution, and outliers are flagged by a large Mahalanobis distance. Liang, Li, and Srikant (2017) used temperature scaling to distill the knowledge from multiple pre-trained DNN models of the same prediction task to classify outlier data. Winkens et al. (2020) used the confusion log-probability to measure sample similarity, with OOD samples having a low confusion log-probability. Other popular methods for detecting OOD samples include representations from classification networks (Sastry and Oore 2020), alternative training strategies (Lee et al. 2017), Bayesian approaches (Blundell et al. 2015), and generative and hybrid models (Choi, Jang, and Alemi 2018).
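The Mahalanobis-distance idea can be sketched as follows; this is a simplified single-Gaussian version using random vectors as stand-ins for intermediate-layer CNN features (the class-conditional details of Lee, Hong, et al. are omitted), and all names are illustrative.

```python
import numpy as np

def mahalanobis_ood_scores(train_feats, test_feats):
    """Score test samples by Mahalanobis distance to the training feature
    distribution (assumed multivariate normal); larger distance -> more
    likely out-of-distribution."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    cov_inv = np.linalg.pinv(cov)            # pseudo-inverse for stability
    diff = test_feats - mu
    # Batched quadratic form: sqrt((x - mu)^T Sigma^{-1} (x - mu)) per row.
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in for CNN features
shifted = rng.normal(6.0, 1.0, size=(5, 8))     # shifted data, should score high
scores_in = mahalanobis_ood_scores(in_dist, in_dist[:5])
scores_out = mahalanobis_ood_scores(in_dist, shifted)
```

In practice one would threshold these scores (e.g., at a high percentile of the training-set distances) to declare a sample OOD.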
Beyond descriptive statistics: using additional analyses to determine the technological feasibility of meeting a new exposure limit
Published in Journal of Occupational and Environmental Hygiene, 2021
Benjamin Roberts, Taylor Tarpey, Nicole Zoghby, Thomas Slavin
The log-probability plot is another useful tool for characterizing occupational exposures. In this case, it shows that approximately 44% of the data OSHA relied upon during its feasibility analysis for foundries are above the current PEL50. While this is slightly less than half of the total measurements presented by OSHA, it still represents a significant number of measurements in the foundry sector that are out of compliance with the revised PEL50 and will require additional control efforts to achieve compliance. The exposure measurements show an exponential increase starting at the 90th percentile, which is already above the PEL50; therefore, the exposures in the 90th to 99th percentile, or at least 10% of the silica measurements among all types of foundries and job categories, would be expected to be much higher than the PEL50. This further supports the conclusion that a significant number of measurements in the foundry sector are well out of the range of compliance, and that feasible controls to achieve compliance will be more difficult to find for them.
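The exceedance fraction behind such a plot can be computed directly if the exposures are treated as lognormal. The sketch below is illustrative only: the geometric mean and geometric standard deviation are hypothetical values chosen so that roughly 44% of the distribution exceeds the limit, not OSHA's actual foundry data.

```python
import math
from statistics import NormalDist

def fraction_above_limit(gm, gsd, limit):
    """Fraction of a lognormal exposure distribution above a limit,
    given its geometric mean (gm) and geometric standard deviation (gsd)."""
    z = (math.log(limit) - math.log(gm)) / math.log(gsd)
    return 1.0 - NormalDist().cdf(z)

pel = 50.0            # stand-in limit (e.g., µg/m^3), hypothetical
gm, gsd = 43.5, 2.5   # hypothetical lognormal parameters
print(round(fraction_above_limit(gm, gsd, pel), 2))  # ≈ 0.44
```

On a log-probability plot, this same distribution appears as a straight line, and the exceedance fraction is read off where the line crosses the limit.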
A new hierarchical temporal memory based on recurrent learning unit
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2023
Dejiao Niu, Le Yang, Tianquan Liu, Tao Cai, Shijie Zhou, Lei Li
Compared to the prediction of scalar values, natural language modelling is more challenging. The performance of HTM and RUHTM on the natural language modelling task is therefore evaluated using the PTB dataset. The learning rate is set to 0.0001, and the number of epochs is 20, 40, 60, 80, and 100, respectively. HTM and RUHTM are run with various numbers of columns and hidden neurons. As the evaluation metric, perplexity (PPL), the exponential of the average negative log probability per token, is reported. The results are shown in Figure 5 and Figure 6.
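Perplexity as used above can be computed from per-token log probabilities as follows; this is a generic sketch of the metric, not the authors' evaluation code, and the example values are hypothetical.

```python
import math

def perplexity(token_log_probs):
    """PPL = exp(-average log-probability per token); lower is better."""
    avg_neg_logp = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_neg_logp)

# A model that assigns probability 0.25 to every token has PPL 4:
# it is, on average, as uncertain as a uniform choice among 4 tokens.
logps = [math.log(0.25)] * 10
print(perplexity(logps))  # 4.0 (up to floating-point error)
```

A uniform model over a vocabulary of size V has PPL exactly V, which is why PPL is often read as an effective branching factor.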