Deep Learning Architecture and Framework
Published in Krishna Kant Singh, Vibhav Kumar Sachan, Akansha Singh, Sanjeevikumar Padmanaban, Deep Learning in Visual Computing and Signal Processing, 2023
Ashish Tripathi, Shraddha Upadhaya, Arun Kumar Singh, Krishna Kant Singh, Arush Jain, Pushpa Choudhary, Prem Chand Vashist
TensorFlow is used for image processing, speech recognition, and text analysis, and supports several programming languages, such as Python, Go, C++, C, and Java. The PyTorch framework is used with the Python programming language, for image classification, text generation, and many other purposes. The MXNet framework was developed by Apache and is available in many programming languages, such as Julia, Go, JavaScript, MATLAB, R, Scala, and Perl; it is designed to support a flexible programming model across these languages. Other frameworks, such as Keras, DeepLearning4j, Deeplearn.js, and the Microsoft Cognitive Toolkit, are used to build various networks. All of these networks serve different applications according to their functionality. Each network and framework has its own advantages; by choosing the correct network and framework for the application at hand, an efficient and accurate model can be developed.
Implementation
Published in Seyedeh Leili Mirtaheri, Reza Shahbazian, Machine Learning Theory to Applications, 2022
Seyedeh Leili Mirtaheri, Reza Shahbazian
PyTorch is a Python library for GPU-accelerated deep learning. It provides a Python interface to the same optimized C libraries that Torch uses. It has been developed by Facebook’s AI research group since 2016 and is written in C, CUDA, and Python. The library binds acceleration libraries such as Intel MKL and NVIDIA’s cuDNN and NCCL. At its core, it uses CPU and GPU tensor and neural network backends (TH, THC, THNN, THCUNN), written as independent libraries on a C99 API. PyTorch supports tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. It has become popular by making it easy to build complicated architectures. Generally, when the way a network functions is changed, everything must be rebuilt from scratch; a technique used by PyTorch called reverse-mode auto-differentiation makes it possible to change the way a network functions with little effort (i.e., a dynamic computational graph, or DCG). It is mainly inspired by Chainer and autograd. The library is available for free under a BSD license and is supported by NVIDIA, Twitter, Facebook, and several other organizations.
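As a minimal sketch of the tape-based autograd system described above (illustrative code, not from the cited chapter), the following records operations on tensors during the forward pass and replays the tape in reverse to obtain gradients:

import torch

# Create tensors that are tracked on the autograd tape.
x = torch.tensor([2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0], requires_grad=True)

# Forward pass: each operation is recorded as the graph is built.
y = (w * x).sum() ** 2

# Reverse-mode differentiation: replay the tape backwards.
y.backward()

print(x.grad)  # dy/dx, computed automatically
print(w.grad)  # dy/dw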
More on Machine Learning API
Published in Amartya Mukherjee, Nilanjan Dey, Smart Computing with Open Source Platforms, 2019
Amartya Mukherjee, Nilanjan Dey
The philosophy of the API is that one can run it easily and immediately; we do not have to wait for the whole program to be written before executing it. The fundamental characteristics are as follows: it is easy to implement and use; it integrates smoothly with the Python data science stack; it is closely similar to NumPy; and it supports dynamic computation. The PyTorch framework builds its computational graph as the code runs, and the graph can change dynamically during runtime.
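To illustrate the dynamic computation described above (a generic PyTorch sketch, not code from the cited book), note that the graph below depends on a runtime condition and is rebuilt on every call:

import torch

def forward(x):
    # The graph is constructed as this code runs; ordinary Python
    # control flow changes its structure from call to call.
    if x.sum() > 0:
        return (x ** 2).mean()
    return x.abs().sum()

a = torch.randn(3, requires_grad=True)
loss = forward(a)   # graph built during this call
loss.backward()     # gradients for whichever branch actually ran
print(a.grad)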
Gravel road classification based on loose gravel using transfer learning
Published in International Journal of Pavement Engineering, 2022
Nausheen Saeed, Roger G. Nyberg, Moudud Alam
The optimal learning rate range can be found with the learning rate finder from the fastai library [35]. fastai is a deep learning library built on top of PyTorch, an open-source machine learning framework based on Python (Paszke et al. 2019). Figure 5 shows the output of the learning rate finder for discriminative learning rates: it plots the loss against the learning rate over the iterations. The loss first decreases steadily as the learning rate grows, following a steep descent from 1e−04 to 1e−03; this interval is the optimal learning rate range in this case. The earlier layers are therefore trained at the lower learning rate of 1e−04 and the later layers at 1e−03. Beyond this range, the loss stops decreasing and then rises. The idea is to choose a range over which the loss is still declining.
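A minimal sketch of how this is typically done with fastai (the dataset path, data loading, and ResNet backbone are illustrative assumptions, not details from the paper):

from fastai.vision.all import *

# Assumed setup: an ImageNet-style folder of labelled images.
path = Path("data/images")  # hypothetical dataset location
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=accuracy)

# Plot loss vs. learning rate, as in Figure 5.
learn.lr_find()

# Discriminative learning rates: earlier layers at 1e-4,
# later layers at 1e-3, matching the steep region of the plot.
learn.fit_one_cycle(5, lr_max=slice(1e-4, 1e-3))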
Prediction of the buckling mode of cylindrical composite shells with imperfections using FEM-based deep learning approach
Published in Advanced Composite Materials, 2023
Ruihai Xin, Vinh Tung Le, Nam Seo Goo
PyTorch is a popular open-source deep learning framework based on Torch, a scientific computing framework for Lua [33]. It was developed by Facebook’s AI Research group and has since become one of the most widely used deep learning frameworks. PyTorch is known for its simplicity, flexibility, and ease of use, making it a good choice for researchers and practitioners alike. It provides a dynamic computational graph, which makes it easy to modify models during training, and automatic differentiation, which computes gradients automatically, saving time and effort. In addition, PyTorch has a rich ecosystem, with tools for tasks such as visualization, distributed training, and model deployment.
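As a brief illustration of modifying a model during training, which the dynamic graph makes possible (a generic PyTorch sketch, not code from the paper; the layer sizes are placeholders):

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)

x = torch.randn(16, 4)
target = torch.randn(16, 1)

loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()  # gradients for all parameters, computed automatically

# Edit the architecture on the fly; the next forward pass
# simply records a new graph including the added layer.
model.add_module("out_act", torch.nn.Sigmoid())
loss2 = torch.nn.functional.mse_loss(model(x), target)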
Space-associated domain adaptation for three-dimensional mineral prospectivity modeling
Published in International Journal of Digital Earth, 2023
Yang Zheng, Hao Deng, Jingjie Wu, Ruisheng Wang, Zhankun Liu, Lixin Wu, Xiancheng Mao, Jing Chen
The proposed approach was implemented using PyTorch, an open-source Python machine learning library developed by Facebook AI Research (FAIR) (Paszke et al. 2019). In this framework, the space-associated deep adaptation network was trained in an end-to-end manner. RAdam (Rectified Adam) (Liu et al. 2019) was used to compute gradients of the joint loss function in Equation (10); it rectifies the variance of the adaptive learning rate with a dynamic heuristic, eliminating the manual tuning otherwise required for the warm-up phase of training. The domain adaptation prospectivity network was trained from scratch rather than pre-trained on large-scale datasets. Dropout regularization was applied to both FC6 and FC7 during training.
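A hedged sketch of this training setup in PyTorch (layer sizes, dropout rate, and the loss are placeholders; only the use of RAdam and dropout on the two FC layers mirrors the description above; torch.optim.RAdam is available in recent PyTorch releases):

import torch
import torch.nn as nn

# Illustrative network head: dropout applied to FC6 and FC7,
# as described above (dimensions and dropout rate are assumed).
class Head(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc6 = nn.Linear(256, 128)
        self.fc7 = nn.Linear(128, 64)
        self.out = nn.Linear(64, 2)
        self.dropout = nn.Dropout(p=0.5)  # assumed rate

    def forward(self, x):
        x = self.dropout(torch.relu(self.fc6(x)))
        x = self.dropout(torch.relu(self.fc7(x)))
        return self.out(x)

model = Head()  # trained from scratch, no pre-training
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)

x = torch.randn(32, 256)
y = torch.randint(0, 2, (32,))

optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)  # stand-in for the joint loss of Equation (10)
loss.backward()
optimizer.step()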