IoT Crypt – An Intelligent System for Securing IoT Devices Using Artificial Intelligence and Machine Learning
Published in Puneet Kumar, Vinod Kumar Jain, Dharminder Kumar, Artificial Intelligence and Global Society, 2021
Deep learning (deep neural learning, or deep neural networks) is an advanced research field of artificial intelligence that trains systems to perform computationally demanding tasks. It builds on the concept of neural networks (analogous to the nervous system in the human body), forming a network architecture of many interconnected nodes [9]. Deep learning can learn in an unsupervised manner from unstructured or unlabeled data, producing a trained model in which each node is connected to other nodes and passes relevant information through the system. The artificial neural networks in a deep learning model are structured like the human brain, with neuron nodes, to exploit high computational power and make better predictions (Figure 15.4).
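A minimal sketch of this idea in Python with Keras, using a tiny autoencoder so that the network of connected layers can learn from unlabeled data; the layer sizes and toy data are illustrative assumptions, not taken from the chapter.

```python
# Sketch of a deep network trained without labels (an autoencoder),
# illustrating nodes organised into connected layers.
# Shapes and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),               # raw, unlabeled feature vector
    tf.keras.layers.Dense(16, activation="relu"),     # encoder: compress the input
    tf.keras.layers.Dense(8, activation="relu"),      # bottleneck representation
    tf.keras.layers.Dense(16, activation="relu"),     # decoder: expand back
    tf.keras.layers.Dense(32, activation="sigmoid"),  # reconstruct the original input
])
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 32).astype("float32")         # toy unlabeled data
autoencoder.fit(x, x, epochs=2, verbose=0)            # target is the input itself: no labels needed
```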
Intelligent Transport Systems and Traffic Management
Published in Rajshree Srivastava, Sandeep Kautish, Rajeev Tiwari, Green Information and Communication Systems for a Sustainable Future, 2020
Pranav Arora, Deepak Kumar Sharma
Neural networks (NNs) are computer systems that resemble, and work like, the biological neural networks that constitute the brains of living individuals. Figure 3.3(a) shows schematically how neurons are arranged in the human brain. Such systems can learn to perform tasks from examples, without being explicitly programmed with task-specific rules. In image recognition, for instance, a system learns to identify images that contain dogs by analyzing example images that have been manually labeled as “dog” or “no-dog” and fed in as input. The resulting model can then be used to detect the presence of dogs in new images. It does this without any prior knowledge of dogs; instead, it automatically derives identifying characteristics, such as eyes and tails, from the examples it processes. A neural network consists of multiple nodes connected to one another; each node processes information independently, taking input from the preceding layer and passing its output to the succeeding layer. Networks built from many such layers are deep neural networks; one widely used class, the convolutional neural network (CNN), applies convolution operations to grid-like data such as images. CNNs have major applications in image classification, image and video recognition, natural language processing, and recommender systems, as well as in various medical image analysis systems.
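As a rough illustration of the dog / no-dog example, the following Keras sketch builds a small CNN; the input shape, layer sizes, and the commented dataset names are assumptions for illustration, not details from the chapter.

```python
# Sketch of a small CNN for binary "dog" / "no-dog" image classification.
# Layer sizes and input shape are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # RGB input images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local features (edges, textures)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # combine into higher-level parts (eyes, tails)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability that the image contains a dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(labelled_images, labels, epochs=10)  # learns from manually labelled examples
```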
Identification of Genuine Images Out of Near Original and Replicas to Enhance the Machine Learning by Convolutional Neural Network
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
Andrews Samraj, D. Sowmiya, B. Mathisalini, J. Vaishnavi, C. Sharmila, K. Deepthisri
In this model, we applied transfer learning to improve the fitness of the network. Transfer learning is a scheme in which a neural network model is first trained to solve one problem and then applied to a different problem under consideration [14]. Layers from the pre-trained model, which are already trained, are reused in a new model trained on the problem of interest. This has the benefit of reducing both the training time of the neural network model and the generalization error. In this work, we used the VGG-16 model for transfer learning to increase the CNN’s performance. The VGG-16 model was developed by the Visual Geometry Group at the University of Oxford in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition” [13]. Keras provides access to a number of such top-performing pre-trained models for use in transfer learning, which in turn improves image identification tasks. Figure 26.1 shows the workflow of the proposed system module by module, and Figure 26.2 further explains the functionality of each module.
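A minimal sketch of VGG-16 transfer learning with Keras along the lines described above; the input shape, the two-class head (genuine vs. replica), and the choice to freeze all pre-trained layers are assumptions for illustration, not details from the chapter.

```python
# Sketch of transfer learning: reuse VGG-16 layers pre-trained on ImageNet,
# train only a new classifier head on the problem of interest.
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet",         # layers already trained on ImageNet
    include_top=False,          # drop the original classifier head
    input_shape=(224, 224, 3),  # assumed input size
)
base.trainable = False          # freeze pre-trained layers to cut training time

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # assumed 2 classes: genuine vs. replica
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # trains only the new head
```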
Creating Historical Building Models by Deep Fusion of Multi-Source Heterogeneous Data Using Residual 3D Convolutional Neural Network
Published in International Journal of Architectural Heritage, 2023
Machine learning, a subfield of artificial intelligence, is self-learning based on data. Machine learning algorithms fall into three categories, supervised, unsupervised, and reinforcement learning, and are widely applied in detection, prediction, and generation. As a subcategory of machine learning, deep learning uses neural networks with several layers of neurons between input and output, providing robust automatic feature learning. Machine learning offers a practical platform for detecting features in images, texts, and languages. After reviewing several examples of photogrammetry and remote sensing carried out at Leibniz University Hannover, Heipke and Rottensteiner (2020) claimed that deep learning, particularly CNNs, significantly improved photogrammetric processing in surface reconstruction, scene classification, object extraction, and object tracking and recognition. Yang, Rottensteiner, and Heipke (2021) proposed a deep learning framework with a two-step strategy to verify land use information: the first step processed the high-resolution aerial images with a CNN, and the second step classified land use with another CNN.
A survey of machine learning in additive manufacturing technologies
Published in International Journal of Computer Integrated Manufacturing, 2023
An artificial neural network (ANN) is constructed to mimic the functionality of the biological neural networks in our brain. A neural network consists of nodes (neurons) arranged in layers and links connecting the neurons to form a network. A typical neuron of a neural network has three main components: a weighted sum of the input values, a bias (b) and an activation function (g). The input values (xi) arriving at each neuron from previously connected neurons are multiplied by weights (wi) and summed. A bias (b) is then added to the weighted sum to control how easily the neuron is activated. This value is passed to an activation function (e.g. ReLU, sigmoid or softmax) to obtain the final output (y) of the neuron. This process can be expressed as Equation 1:
$$y = g\left(\sum_{i} w_i x_i + b\right) \qquad (1)$$
The weights (wi) and biases (b) are neural network parameters that are tuned and learned automatically during training.
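The single-neuron computation of Equation 1 can be sketched in a few lines of NumPy; the example weights, bias, and inputs below are illustrative assumptions, not values from the article.

```python
# Sketch of a single artificial neuron implementing Equation 1.
import numpy as np

def relu(z):
    """ReLU activation function g."""
    return np.maximum(0.0, z)

def neuron(x, w, b, g=relu):
    """Return y = g(sum_i w_i * x_i + b)."""
    return g(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # inputs from previously connected neurons
w = np.array([0.8, 0.1, -0.4])   # learnable weights
b = 0.2                          # learnable bias controlling activation
print(neuron(x, w, b))           # final output y of the neuron
```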
Integrating operational and human factors to predict daily productivity of warehouse employees using extreme gradient boosting
Published in International Journal of Production Research, 2022
Sven F. Falkenberg, Stefan Spinler
Two considerable limitations exist. First, to our knowledge, none of the current research makes productivity predictions at a daily, employee-level granularity, despite its necessity for workforce planning (Othman, Bhuiyan, and Gouw 2012). Second, the use of neural networks to handle multiple variables has significant disadvantages for practitioners. Neural networks require large datasets, which are often not available, to train the model. They are also computationally expensive, which is problematic when calculations must be repeated at regular intervals, as in workforce planning, where new predictions are needed every shift. Finally, neural networks are black-box models that transform the input variables to such an extent that the resulting features become meaningless and no insights can be generated.