Cancelable Biometric Systems from Research to Reality
Published in Gaurav Jaswal, Vivek Kanhangad, Raghavendra Ramachandra, AI and Deep Learning in Biometric Security, 2021
Random Projection-Based DNN Techniques: Several recent approaches in the literature use cancelable transformations such as random projection (RP) to first map the image into a random subspace and then learn features in the transformed domain with a CNN, or vice versa. Liu et al. (2018) proposed a method named FVR-DLRP for the secure authentication of finger-vein templates using deep learning and RP [42]. They first extracted finger-vein features and projected them onto orthonormal random matrices for template transformation and dimensionality reduction. The transformed finger-vein features were then used to train a DBN consisting of three layers of restricted Boltzmann machines (RBMs) for authentication. The DBN is a multilayer network structure that can learn the complex mapping relationship between input and output.
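As a rough illustration of the projection step described above (not the authors' FVR-DLRP implementation), the following Python sketch builds a seed-dependent orthonormal matrix and applies it as a cancelable transform; the function names and the 512-to-128 dimensions are hypothetical:

import numpy as np

def orthonormal_projection(dim_in, dim_out, seed):
    """Build a dim_out x dim_in matrix with orthonormal rows from a user-specific seed."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((dim_in, dim_in))
    Q, _ = np.linalg.qr(A)           # Q has orthonormal columns
    return Q[:, :dim_out].T          # keep dim_out rows -> projection also reduces dimension

def transform_template(features, seed, dim_out=128):
    """Cancelable transform: project a feature vector into a seed-dependent random subspace."""
    P = orthonormal_projection(features.shape[0], dim_out, seed)
    return P @ features

# Example: a 512-D finger-vein feature vector projected to 128-D.
x = np.random.rand(512)
y = transform_template(x, seed=42)   # re-issuing the template = changing the seed

Because the projection depends on the seed, a compromised template can be revoked simply by issuing a new seed, which is the point of a cancelable transform.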
Deep Learning for Next-Generation Healthcare: A Survey of State-of-the-Art and Research Prospects
Published in Adwitiya Sinha, Megha Rathi, Smart Healthcare Systems, 2019
A restricted Boltzmann machine (RBM) is a stochastic neural network consisting of a visible layer and a hidden layer (Ackley et al., 1985). Every unit of the hidden layer is symmetrically connected to every unit of the visible layer, but there are no connections among units within the same layer, as shown in Figure 12.4a. It is a generative model capable of generating the observed input data from its hidden representation. RBMs are stacked together to form a deep belief network (DBN), in which a visible layer is followed by a sequence of hidden layers (Hinton et al., 2006). The lower layers of a DBN are connected via directed (asymmetric) connections, and only the top two layers have symmetric connections, as shown in Figure 12.4b. The DBN is trained by first pretraining the network with an unsupervised greedy layer-wise strategy and then fine-tuning the learned parameters, which is done by applying the wake–sleep algorithm (Hinton et al., 1995) with the target outputs.
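For reference, the standard RBM formulation behind this description assigns each joint configuration of visible units v and hidden units h an energy, with the model distribution given by a Boltzmann form:

\[
E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i w_{ij} h_j,
\qquad
p(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z},
\]

where a and b are the visible and hidden biases, \(w_{ij}\) are the symmetric weights, and Z is the partition function. Because the graph is bipartite, the conditionals factorize: \(p(h_j = 1 \mid \mathbf{v}) = \sigma(b_j + \sum_i v_i w_{ij})\) and \(p(v_i = 1 \mid \mathbf{h}) = \sigma(a_i + \sum_j w_{ij} h_j)\), with \(\sigma\) the logistic function.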
Applications of IoT through Machines' Integration to Provide Smart Environment
Published in Nishu Gupta, Srinivas Kiran Gottapu, Rakesh Nayak, Anil Kumar Gupta, Mohammad Derawi, Jayden Khakurel, Human-Machine Interaction and IoT Applications for a Smarter World, 2023
Manikandan Jagarajan, Ramkumar Jayaraman, Amrita Laskar
The DBN is a generative form of DNN with many hidden layers and a visible layer. DBNs have the ability to extract hierarchical meaning from data. Each DBN layer is an RBM, and the hidden layer of the previous RBM serves as the visible (input) layer of the current one. When trained on a set of examples with no supervision, a DBN learns to probabilistically reconstruct its inputs, and its layers then act as feature detectors. After this learning stage, a DBN can be further trained under supervision to perform classification. DBNs are made up of simple, unsupervised networks such as RBMs or autoencoders, with the hidden layer of each sub-network serving as the visible layer for the next, as in the sketch below.
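A minimal NumPy sketch of that stacking, with hypothetical layer sizes; training of each RBM is omitted here (a contrastive-divergence update is sketched further below):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal RBM holding weights and hidden biases; training is not shown here."""
    def __init__(self, n_visible, n_hidden, rng):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)

    def hidden_probs(self, v):
        # p(h=1 | v): the hidden activations that feed the next RBM in the stack
        return sigmoid(v @ self.W + self.b_h)

def propagate_up(rbms, v):
    """Stack RBMs: each RBM's hidden layer is the visible layer of the next."""
    for rbm in rbms:
        v = rbm.hidden_probs(v)
    return v

rng = np.random.default_rng(0)
stack = [RBM(784, 256, rng), RBM(256, 64, rng)]          # hypothetical layer sizes
features = propagate_up(stack, np.random.rand(10, 784))  # 10 inputs -> 64-D features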
Novel computer aided diagnostic system using hybrid neural network for early detection of pancreatic cancer
Published in Automatika, 2023
A deep belief network (DBN) is a type of deep neural network that forms a generative graphical model, made up of layers of latent variables. The latent variables in adjacent layers are connected to each other, but there are no connections between units within a layer. A DBN can be viewed as a stack of simple learning modules, each a restricted Boltzmann machine (RBM) with a visible layer that represents the data and a hidden layer whose units capture higher-order correlations in the data. There are no links within either layer; the two layers are connected by a matrix of symmetrically weighted links $W$. The RBM-trained weights $W$ of a DBN specify both the conditional distribution $p(\mathbf{v}\mid\mathbf{h},W)$ of the visible vector given the hidden vector and the prior distribution $p(\mathbf{h}\mid W)$ over hidden vectors, so the likelihood of generating a visible vector $\mathbf{v}$ is given by $p(\mathbf{v}) = \sum_{\mathbf{h}} p(\mathbf{h}\mid W)\, p(\mathbf{v}\mid\mathbf{h},W)$. RBMs can be layered and trained using a greedy method to create a DBN; Hinton introduced this fast learning technique for DBNs. The contrastive-divergence weight update between visible unit $v_i$ and hidden unit $h_j$ is $\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_0 - \langle v_i h_j\rangle_1\right)$, where the subscripts 0 and 1 stand, respectively, for the statistics of the network's data state and of its reconstruction state.
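A minimal NumPy sketch of that CD-1 update, under the usual assumptions (binary units, logistic activations); the function name cd1_update is hypothetical:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, lr=0.1, rng=None):
    """One CD-1 step: contrast statistics from the data (state 0) and the reconstruction (state 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h0 = sigmoid(v0 @ W + b)                       # p(h=1 | v0): the "data" state (subscript 0)
    h_sample = (rng.random(h0.shape) < h0) * 1.0   # sample binary hidden units
    v1 = sigmoid(h_sample @ W.T + a)               # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W + b)                       # hidden probabilities for the reconstruction (subscript 1)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)    # Δw_ij ∝ <v_i h_j>_0 - <v_i h_j>_1
    a += lr * (v0 - v1).mean(axis=0)               # visible-bias update
    b += lr * (h0 - h1).mean(axis=0)               # hidden-bias update
    return W, a, b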
An efficient brain tumor classification using MRI images with hybrid deep intelligence model
Published in The Imaging Science Journal, 2023
Annapareddy V. N. Reddy, Pradeep Kumar Mallick, B. Srinivasa Rao, Phaneendra Kanakamedala
A greedy learning method is employed to pre-train the DBN: the strategy learns the generative weights one layer at a time. These generative weights define how strongly the variables in one layer depend on the variables in the layer above. To generate from the model, several alternating Gibbs sampling steps are run, mainly over the top two hidden layers of the DBN; this is equivalent to sampling from the RBM formed by those two layers. During training, the latent values of every layer can be inferred in a single bottom-up pass. Greedy pre-training starts by presenting a data vector to the bottom layer; the generative weights are then refined in the reverse (top-down) direction by fine-tuning, which compares the current and predicted values to correct the bias of the results. The final output of both classifiers is defined as .
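A minimal sketch of those alternating Gibbs steps in the top-level RBM, assuming binary units and NumPy (names and sizes hypothetical):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_steps(W, a, b, h, n_steps, rng):
    """Alternating Gibbs sampling in the RBM formed by the top two layers."""
    for _ in range(n_steps):
        v = (rng.random(a.shape) < sigmoid(h @ W.T + a)) * 1.0  # sample lower layer given upper
        h = (rng.random(b.shape) < sigmoid(v @ W + b)) * 1.0    # sample upper layer given lower
    return v, h

rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((256, 64))   # weights between the top two layers
a, b = np.zeros(256), np.zeros(64)
v, h = gibbs_steps(W, a, b, h=(rng.random(64) < 0.5) * 1.0, n_steps=10, rng=rng)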
Data-driven hierarchical learning and real-time decision-making of equipment scheduling and location assignment in automatic high-density storage systems
Published in International Journal of Production Research, 2022
Zhun Xu, Liyun Xu, Xufeng Ling, Beikun Zhang
DBN is a deep learning model proposed by Hinton et al. for feature learning and classification (Hinton, Osindero, and Teh 2006). The structure of the DBN is illustrated in Figure 6. It is constructed by the sequential stacking of restricted Boltzmann machines (RBMs). The output of the former RBM is the input for the next RBM. The stacked RBMs are topped with a classification layer to form the DBN model. The hidden layer variables of the DBN are usually binary, whereas the visible units can be binary or real (Sutskever and Hinton 2008). The DBN learning process is divided into two stages. First, unsupervised pre-training is performed layer-by-layer on the RBMs, and then supervised fine-tuning is performed on the entire network through the backpropagation (BP) algorithm.
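The two-stage recipe can be approximated with scikit-learn, whose BernoulliRBM implements the unsupervised stage. Note that the pipeline below trains a logistic classifier on fixed RBM features rather than fine-tuning the whole network with BP, so it is only a rough stand-in for the full DBN procedure; all hyperparameter values are illustrative:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1] for the Bernoulli units

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=100, learning_rate=0.06, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)  # stage 1: unsupervised RBM features; stage 2: supervised classifier on top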