Machine Learning Applications and Challenges to Protect Privacy in the Internet of Things
Published in Anand Sharma, Sunil Kumar Jangir, Manish Kumar, Dilip Kumar Choubey, Tarun Shrivastava, S. Balamurugan, Industrial Internet of Things, 2022
Mahadev Gawas, Aishwarya Parab, Hemprasad Y. Patil
The Naïve Bayes classifier computes the posterior probability, making use of Bayes' theorem to forecast the likelihood of a given dataset. It examines the dataset for unidentified samples and assigns a particular tag to each of these samples. In intrusion detection, it can be used to categorize traffic as regular or malicious. A set of attributes is used by the Naïve Bayes classifier to categorize the network traffic; the attributes used are connection interval, protocol, and status flag. These attributes could depend on each other, but the Naïve Bayes classifier treats each of them separately when predicting whether traffic is malicious or regular. For that reason, it is called "naïve." The Naïve Bayes classifier finds its applications in intrusion detection over the network [53, 54] and anomaly detection [55, 56]. The striking features of the Naïve Bayes classifier are straightforward execution, simplicity, and sturdiness to unsuitable attributes; it requires a smaller training set and can be applied to both single- and multiple-class categorization. Nonetheless, the classifier may miss the signals carried by the connections and interactions among the attributes. For precise categorization, the relations between the attributes are important [57].
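The traffic-classification idea above can be sketched in a few lines of plain Python. This is a minimal, hypothetical example: the attribute values, labels, and counts are invented for illustration, and add-one smoothing is used so unseen attribute values do not zero out the product of likelihoods.

```python
from collections import Counter, defaultdict

# Hypothetical labelled connections: (interval, protocol, status flag) -> class.
# All values and labels here are illustrative, not from a real dataset.
train = [
    (("short", "tcp", "SF"), "regular"),
    (("short", "udp", "SF"), "regular"),
    (("long", "tcp", "S0"), "malicious"),
    (("long", "icmp", "S0"), "malicious"),
    (("short", "tcp", "S0"), "malicious"),
]

# Class priors: counts of each label.
priors = Counter(label for _, label in train)
total = sum(priors.values())

# Per-attribute value counts per class: (attribute index, class) -> Counter.
counts = defaultdict(Counter)
for feats, label in train:
    for i, v in enumerate(feats):
        counts[(i, label)][v] += 1

def likelihood(i, value, label):
    # P(value | class) for attribute i, with add-one (Laplace) smoothing.
    c = counts[(i, label)]
    n_values = len({f[i] for f, _ in train})  # distinct values of attribute i
    return (c[value] + 1) / (sum(c.values()) + n_values)

def classify(feats):
    # "Naïve" step: multiply per-attribute likelihoods as if independent.
    scores = {}
    for label in priors:
        score = priors[label] / total
        for i, v in enumerate(feats):
            score *= likelihood(i, v, label)
        scores[label] = score
    return max(scores, key=scores.get)
```

A connection such as `("long", "tcp", "S0")` is then scored under each class and assigned to the one with the higher posterior, even though interval, protocol, and flag may in reality be correlated.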
Spear Phishing Detection
Published in Debabrata Samanta, SK Hafizul Islam, Naveen Chilamkurti, Mohammad Hammoudeh, Data Analytics, Computational Statistics, and Operations Research for Engineers, 2022
Shibayan Mondal, Samrajnee Ghosh, Achiket Kumar, SK Hafizul Islam, Rajdeep Chatterjee
Advantages: A Naive Bayes classifier outperforms most other models when the assumption of independent predictors holds. Naive Bayes requires only a small amount of training data to estimate the test data; as a result, the training period is shorter. Multinomial Naive Bayes is very easy to implement because only the probabilities have to be calculated. Disadvantages: The assumption of independent predictors is the main flaw of the Naive Bayes classifier. All of the attributes in Naive Bayes are assumed to be mutually independent; in reality, obtaining a set of predictors that are completely independent is nearly impossible. Assume that a categorical variable in the test dataset has a category that was not present in the training dataset. In that case, the model will assign it a probability of 0 (zero) and will be unable to make a prediction. This is also known as the zero-frequency problem.
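The zero-frequency problem can be demonstrated with a small numeric sketch. The word counts below are invented for illustration: an unsmoothed estimate for a word never seen under a class collapses to zero (and would zero out the whole product of likelihoods), while add-one (Laplace) smoothing keeps it small but non-zero.

```python
# Hypothetical per-class word counts (made up for this sketch).
class_counts = {"spam": {"free": 3, "win": 2}, "ham": {"hello": 4, "meeting": 1}}

def p_word_given_class(word, cls, vocab_size, alpha=0.0):
    # Estimate P(word | class); alpha > 0 applies Laplace smoothing.
    c = class_counts[cls]
    return (c.get(word, 0) + alpha) / (sum(c.values()) + alpha * vocab_size)

vocab = 5  # assumed number of distinct words overall

# "prize" never appears under "spam": the unsmoothed estimate is exactly 0 ...
unsmoothed = p_word_given_class("prize", "spam", vocab)
# ... whereas add-one smoothing yields (0 + 1) / (5 + 5) = 0.1.
smoothed = p_word_given_class("prize", "spam", vocab, alpha=1.0)
```

With `alpha=1.0`, every category effectively starts with a count of one, so a test-time category absent from the training data no longer forces the posterior to zero.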
An Integration of Blockchain and Machine Learning into the Health Care System
Published in Om Prakash Jena, Sabyasachi Pramanik, Ahmed A. Elngar, Machine Learning Adoption in Blockchain-Based Intelligent Manufacturing, 2022
Mahita Sri Arza, Sandeep Kumar Panda
Consider the same example as in the logistic regression section, but now the Naïve Bayes classifier is employed to predict whether a person would buy a car based on their age and salary. Figure 3.5 shows the visualization of the training data set, and Figure 3.6 shows the visualization of the test data set of the Naïve Bayes classifier. Figure 3.5 demonstrates that Naïve Bayes has segregated the data points with a fine boundary, resulting in a Gaussian curve. Figure 3.6 shows the final output for the test set data. The classifier uses a Gaussian curve to distinguish between the variables "purchased" and "not purchased," as displayed, along with a few incorrect predictions. Other applications of the Naïve Bayes classifier include text classification, spam filtering, credit scoring, and the like.
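For continuous features such as age and salary, the Gaussian variant models each feature per class with a normal density. The sketch below is a minimal pure-Python Gaussian Naïve Bayes under invented training data (ages and salaries are illustrative, with label 1 meaning "purchased").

```python
import math

# Hypothetical training data: (age, salary in thousands) -> purchased (1) or not (0).
train = [((25, 30), 0), ((30, 40), 0), ((35, 38), 0),
         ((45, 80), 1), ((50, 90), 1), ((42, 75), 1)]

def stats(rows):
    # Per-feature mean and (sample) variance for one class.
    n = len(rows)
    means = [sum(r[i] for r in rows) / n for i in range(2)]
    variances = [sum((r[i] - means[i]) ** 2 for r in rows) / (n - 1)
                 for i in range(2)]
    return means, variances

by_class = {c: [x for x, y in train if y == c] for c in (0, 1)}
params = {c: stats(rows) for c, rows in by_class.items()}
priors = {c: len(rows) / len(train) for c, rows in by_class.items()}

def gaussian(x, mean, var):
    # Normal probability density function.
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predict(x):
    # Posterior score per class: prior times per-feature Gaussian likelihoods.
    scores = {}
    for c, (means, variances) in params.items():
        score = priors[c]
        for i in range(2):
            score *= gaussian(x[i], means[i], variances[i])
        scores[c] = score
    return max(scores, key=scores.get)
```

A point like `(48, 85)` falls near the "purchased" cluster and scores higher under that class's Gaussians, which is the mechanism behind the curved decision boundary in the figures.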
Differentially private model release for healthcare applications
Published in International Journal of Computers and Applications, 2022
S. Sangeetha, G. Sudha Sadasivam, Ayush Srikanth
The Naïve Bayes classifier is based on Bayes' theorem and is a probabilistic classifier. This algorithm has the advantage of producing reasonable results even with a smaller training dataset: it only requires the frequencies of the attributes to be calculated from the training data set. But its major disadvantage is also its strength of being naïve, or simple. While the algorithm is easy to use, it assumes the attributes are independent of each other. Despite its naivety, it is still widely used in medical applications. The formula of Thomas Bayes given in the following equation is used to determine the conditional probability:

P(A|B) = P(B|A) · P(A) / P(B)

where P(A|B) is the probability of the occurrence of event A when event B occurs; P(A) is the probability of the occurrence of A; P(B|A) is the probability of the occurrence of event B when event A occurs; and P(B) is the probability of the occurrence of B.
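A worked numeric instance makes the equation concrete in the healthcare setting. The prevalence and test-accuracy figures below are hypothetical, chosen only to illustrate the arithmetic: let A be "patient has the disease" and B be "test is positive".

```python
# Hypothetical screening numbers (invented for illustration):
p_d = 0.01           # P(A): prevalence of the disease
p_pos_given_d = 0.90 # P(B|A): test sensitivity
p_pos_given_h = 0.05 # P(B | not A): false-positive rate among healthy patients

# Law of total probability gives P(B), the overall rate of positive tests:
p_pos = p_pos_given_d * p_d + p_pos_given_h * (1 - p_d)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_d_given_pos = p_pos_given_d * p_d / p_pos
```

Here the posterior P(A|B) comes out to roughly 0.154: even with a fairly accurate test, a positive result implies only about a 15% chance of disease when the prior is 1%, which is exactly the kind of update the classifier performs per attribute.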
Classification of Customer Reviews Using Machine Learning Algorithms
Published in Applied Artificial Intelligence, 2021
NB Tree is a supervised classifier that combines the Bayesian rule and a decision tree. This algorithm uses the Bayes rule to calculate the likelihood of each class for a given instance, assuming that the attributes are conditionally independent given the class label (Gulsoy and Kulluk 2019). A Naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. In simple terms, a Naive Bayes classifier assumes that the presence (or absence) of a particular feature of a class (i.e. attribute) is unrelated to the presence (or absence) of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 4 inches in diameter. Even if these features depend on each other or upon the existence of the other features, a Naive Bayes classifier considers all of these properties to independently contribute to the probability that this fruit is an apple.
Advanced Classification of Architectural Heritage: A Corpus of Desert on Rout Caravanserais
Published in International Journal of Architectural Heritage, 2020
Elham Andaroodi, Frederic Andres
This research implemented Bayesian networks in a different way, using them to model uncertain descriptive design features. Here, a classification algorithm based on Naïve Bayes inference is used. It worked well with the data set of caravanserais, combining categorical or qualitative textual features. Naïve Bayes is a classification technique based on Bayes' theorem with an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature (Ray 2018). For example, a caravanserai belongs to the selected corpus of this research if it has an open courtyard, its ring of rooms is separated from the stables, and it is located outside historic cities. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this caravanserai belongs to the corpus, and that is why it is known as "naive." Along with simplicity, Naive Bayes is known to outperform even highly sophisticated classification methods. If the data can be converted into a frequency table, then a table of likelihoods can be created, and by using the Naïve Bayesian equation it is possible to calculate the posterior probability for each class (Ray 2018).
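The frequency-table route described above can be sketched directly for a single categorical feature. The counts below are hypothetical (a made-up "courtyard" feature against corpus membership, "in" vs "out"); the likelihood table is read straight off the frequency table, and the posterior follows from the Bayesian equation.

```python
from collections import Counter

# Hypothetical records: (courtyard type, corpus membership). Counts are
# invented for the sketch, not taken from the caravanserai data set.
records = [("open", "in"), ("open", "in"), ("open", "in"), ("closed", "in"),
           ("open", "out"), ("closed", "out"), ("closed", "out"), ("closed", "out")]

# Frequency table: (feature value, class) -> count; plus class totals.
freq = Counter(records)
class_totals = Counter(c for _, c in records)
n = len(records)

def likelihood(value, cls):
    # Likelihood table entry P(value | class), read off the frequency table.
    return freq[(value, cls)] / class_totals[cls]

def posterior(cls, value):
    # Bayes: P(class | value) = P(value | class) * P(class) / P(value).
    evidence = sum(likelihood(value, c) * class_totals[c] / n
                   for c in class_totals)
    return likelihood(value, cls) * (class_totals[cls] / n) / evidence
```

With these counts, observing an open courtyard raises the posterior of corpus membership from the 0.5 prior to 0.75; with several features, the same per-feature likelihoods would simply be multiplied under the naive independence assumption.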