TensorFlow and Keras Fundamentals
Published in Mehdi Ghayoumi, Deep Learning in Practice, 2021
Keras is an easy-to-use Python library that runs on top of machine learning platforms such as TensorFlow to create deep learning models. CNN and RNN models are easy to build and quick to implement with it. TensorFlow provides both low- and high-level APIs, and Keras provides the high-level one. It also helps to reduce computation costs. Here, we review some fundamentals and main concepts of Keras and explain them with some examples. You can also use the Keras website (https://keras.io/) for more details and examples. The requirements to start using Keras are as follows: Python 3.5 or higher; SciPy with NumPy; TensorFlow; and any OS (Windows, Linux, or Mac).
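As a quick, hedged check of that environment (assuming Keras is used through the TensorFlow package, as in recent releases), something like the following can confirm the installation:

    # Minimal environment check; assumes Keras is available through TensorFlow (tf.keras).
    import sys
    import numpy as np
    import scipy
    import tensorflow as tf
    from tensorflow import keras

    print("Python:", sys.version.split()[0])   # should report 3.5 or higher
    print("NumPy:", np.__version__)
    print("SciPy:", scipy.__version__)
    print("TensorFlow:", tf.__version__)
    print("Keras:", keras.__version__)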
Machine Learning for Electron Microscopy
Published in Alina Bruma, Scanning Transmission Electron Microscopy, 2020
Once the data are ready, you may want to think about which library to use to create a network. For Python, the typical choices are Keras, TensorFlow, or PyTorch. Each has its own distinct strengths and weaknesses that are beyond the scope of this discussion. To create the network outlined in Figure 2.3, several blocks are required: a convolutional layer, a parametric ReLU activation layer, and a MaxPool layer for the convolution stage; a convolution, ReLU activation, and UpSampling layer for the deconvolution stage; and a pixel classifier. Additional parameters, such as the number of filters, padding, the size of the kernel, the step size, and dilation, must be defined to segment the data at each layer. This is by far the most cumbersome step, as it simultaneously forces the user to encode the structure of the network and to define all the sizes, sampling, and data flow. After this step the model is compiled, and the network is ready to be trained. Remember, you will need a training pair: the training image and the ground truth that you have collected or created. Depending on the size of your network and the amount of data you have, this may take some time. Validating your model with data that your network has not been trained on should be the last step before running the data of interest.
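As a rough illustration only, a minimal Keras sketch of this kind of block structure might look as follows; the input shape, filter counts, kernel sizes, and number of output classes are placeholder assumptions, not values taken from the text.

    # Hedged sketch of the conv / PReLU / MaxPool encoder, conv / ReLU / UpSampling
    # decoder, and per-pixel classifier. All sizes are illustrative placeholders.
    from tensorflow.keras import layers, models

    inputs = layers.Input(shape=(256, 256, 1))            # single-channel image (assumed size)

    # Convolution stage: convolution, parametric ReLU, max pooling
    x = layers.Conv2D(32, kernel_size=3, padding="same")(inputs)
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.MaxPooling2D(pool_size=2)(x)

    # Deconvolution stage: convolution, ReLU, upsampling
    x = layers.Conv2D(32, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(size=2)(x)

    # Pixel classifier: per-pixel class probabilities (2 classes assumed)
    outputs = layers.Conv2D(2, kernel_size=1, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(training_images, ground_truth_masks, epochs=..., validation_data=...)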
Deep Learning Neural Networks
Published in Mark Chang, Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare, 2020
Keras is a high-level neural networks API written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. The R packages keras and kerasR are the two R versions of Keras for the statistical community. How do these R packages compare with the original Python package? The keras package uses the pipe operator (%>%) to connect functions or operations together, but you won't find this in kerasR: to build your model with kerasR, for example, you need to use the $ operator. The use of the pipe operator generally improves the readability of your code. The kerasR package contains functions that are named in a similar, but not identical, way to those in the original Keras package. For example, the original (Python) compile() function is called keras_compile(); the same holds for other functions, such as fit(), which becomes keras_fit(), and predict(), which becomes keras_predict() when you use the kerasR package. These are all custom wrappers.
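For reference, the Python methods that these wrappers correspond to are the standard Keras model calls; a minimal, hedged sketch (the layer sizes and random data below are placeholders):

    # The Python Keras methods that keras_compile(), keras_fit(), and keras_predict()
    # wrap in kerasR. Shapes and data are illustrative placeholders.
    import numpy as np
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(8,)),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy")   # -> keras_compile()
    x = np.random.rand(32, 8)
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(x, y, epochs=2, verbose=0)                          # -> keras_fit()
    preds = model.predict(x)                                      # -> keras_predict()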
Displacement prediction of water-induced landslides using a recurrent deep learning model
Published in European Journal of Environmental and Civil Engineering, 2023
Qingxiang Meng, Huanling Wang, Mingjie He, Jinjian Gu, Jian Qi, Lanlan Yang
Before implementing landslide displacement prediction, we give a brief introduction to the deep-learning code used in the model. Two open-source deep-learning frameworks, namely, TensorFlow and Keras, are applied in this work. TensorFlow was originally developed by Google for the purposes of conducting machine learning and deep neural network research (Abadi et al., 2016). This system is one of the most widely used deep-learning frameworks and is sufficiently general to be applicable in a wide variety of other domains as well. Keras is a high-level neural network API written in Python and is capable of running on top of a main deep-learning framework such as TensorFlow, CNTK, or Theano (François, 2015). In this work, we apply the deep-learning network using Keras running on TensorFlow. In addition to TensorFlow and Keras, other statistical software packages, such as Pandas, Sklearn and Statsmodels, are also used in the data analysis. The general process for a time series can be divided into three components, namely, data preprocessing, training, and testing (Figure 3).
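To illustrate those three components, a minimal Keras-on-TensorFlow sketch of a recurrent displacement model is given below; the LSTM layer, window length, scaling, and file name are assumptions made for the example, not the authors' exact configuration.

    # Hedged sketch of the preprocessing / training / testing pipeline for a time series,
    # using an LSTM as the recurrent layer. Window length, layer sizes, the scaler, and
    # the input file are illustrative assumptions.
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras import layers, models

    def make_windows(series, window=12):
        # Slice a 1-D series into (samples, window, 1) inputs and next-step targets.
        X, y = [], []
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
        return np.array(X)[..., None], np.array(y)

    # 1. Data preprocessing: scale the displacement series and build sliding windows
    displacement = np.loadtxt("displacement.txt")        # placeholder single-column file
    scaled = MinMaxScaler().fit_transform(displacement.reshape(-1, 1)).ravel()
    X, y = make_windows(scaled)
    split = int(0.8 * len(X))

    # 2. Training
    model = models.Sequential([
        layers.Input(shape=(X.shape[1], 1)),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:split], y[:split], epochs=50, verbose=0)

    # 3. Testing on the held-out portion of the series
    test_mse = model.evaluate(X[split:], y[split:], verbose=0)
    print("Test MSE:", test_mse)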
Convolutional Neural Network–Aided Temperature Field Reconstruction: An Innovative Method for Advanced Reactor Monitoring
Published in Nuclear Technology, 2023
Victor C. Leite, Elia Merzari, Roberto Ponciroli, Lander Ibarra
The main NN-based technique used in the present work is a physics-informed CNN for reconstructing the temperature field within a fluid domain from sparse temperature measurements. In addition, an AE is employed to improve the accuracy of CNN predictions. This correction can be used for high-Prandtl-number cases where advective effects may not be properly addressed by the KH integral equation at the core of the CNN algorithm. Both of these methods were developed using the Keras deep-learning framework.23 Keras is an open-source software library featuring a Python interface for artificial neural networks and runs on top of the TensorFlow library.24 The next two sections describe these methods.
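Purely as an illustration, a correction autoencoder of this kind could be sketched in Keras as follows; the layer widths, latent size, and the idea of mapping CNN-predicted temperatures to reference temperatures are assumptions, not details reported in the paper.

    # Hedged sketch: a small autoencoder (AE) used to correct CNN-reconstructed
    # temperature fields. Layer widths and the training setup are illustrative
    # assumptions, not the authors' configuration.
    from tensorflow.keras import layers, models

    n_points = 1024            # points in the flattened temperature field (assumed)

    inputs = layers.Input(shape=(n_points,))
    encoded = layers.Dense(128, activation="relu")(inputs)
    latent = layers.Dense(32, activation="relu")(encoded)
    decoded = layers.Dense(128, activation="relu")(latent)
    outputs = layers.Dense(n_points)(decoded)             # corrected temperature field

    autoencoder = models.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(cnn_predictions, reference_temperatures, epochs=..., batch_size=...)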
Semantic segmentation of high-resolution remote sensing images using fully convolutional network with adaptive threshold
Published in Connection Science, 2019
Zhihuan Wu, Yongming Gao, Lei Li, Junshi Xue, Yuntao Li
The model was implemented in Keras with a TensorFlow backend. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano (Parvat et al., 2017). TensorFlow, developed by the Google Brain team, is an open-source software library for dataflow programming across a range of tasks. Several third-party libraries are also required, such as tifffile for reading remote sensing imagery, OpenCV for basic image processing, Shapely for handling polygon data, Matplotlib for data visualisation, and scikit-learn for basic machine learning functions. The experiments were conducted on a Sugon W560-G20 server with an E5-2650 v3 CPU, 32 GB of memory, and an Nvidia GTX 1080 Ti GPU.
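As a hedged sketch of how such a stack fits together (the file names, preprocessing, and saved model below are placeholders, not the authors' pipeline):

    # Hedged sketch of the supporting library stack: read an image tile with tifffile,
    # apply a basic OpenCV preprocessing step, and run a saved Keras model.
    # File names, normalisation, and the model are illustrative placeholders.
    import cv2
    import numpy as np
    import tifffile
    from tensorflow.keras import models

    image = tifffile.imread("tile.tif")                       # placeholder file name
    image = cv2.resize(image, (256, 256)).astype("float32") / 255.0

    model = models.load_model("segmentation_model.h5")        # placeholder model file
    mask = model.predict(image[None, ...])[0]                 # per-pixel class scores

    np.save("predicted_mask.npy", mask)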