Innovation in Healthcare for Improved Pneumonia Diagnosis with Gradient-Weighted Class Activation Map Visualization
Published in Kavita Taneja, Harmunish Taneja, Kuldeep Kumar, Arvind Selwal, Eng Lieh Ouh, Data Science and Innovations for Intelligent Systems, 2021
Guramritpal Singh Saggu, Keshav Gupta, Palvinder Singh Mann
EfficientNet (Tan & Le, 2019) is an architecture proposed by researchers at Google that focuses on the efficiency of the network while also improving accuracy. CNNs can be scaled along three dimensions: depth, width, and resolution. Depth is the number of layers in the network; width corresponds to the number of channels per layer; resolution is the size of the input image passed to the network. Scaling along any single dimension has several disadvantages, and accuracy gains saturate as the network grows. To scale the width, depth, and resolution of the network in a principled way, Tan and Le (2019) proposed a simple yet effective technique based on compound scaling, in which a single coefficient scales all three dimensions together. A base network was obtained using a neural architecture search that optimizes both accuracy and FLOPS. This architecture uses the MBConv block, an inverted residual block previously used in MobileNetV2, with a squeeze-and-excitation block injected.
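The following is a minimal sketch of how compound scaling combines the three dimensions. The coefficients alpha, beta, and gamma are the values reported by Tan and Le (2019) for the EfficientNet-B0 base network; the base depth, width, and resolution defaults and the helper name `compound_scale` are illustrative assumptions, not part of any library.

```python
# Sketch of EfficientNet-style compound scaling (Tan & Le, 2019).
import math

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution bases from the paper
# The paper constrains alpha * beta^2 * gamma^2 ~= 2, so that total FLOPS
# grow by roughly 2^phi for a compound coefficient phi.
assert abs(ALPHA * BETA**2 * GAMMA**2 - 2.0) < 0.1

def compound_scale(phi: float,
                   base_depth: int = 18,        # layers in the base network (assumed)
                   base_width: int = 32,        # channels in the base network (assumed)
                   base_resolution: int = 224   # input image side length (assumed)
                   ) -> tuple[int, int, int]:
    """Scale depth, width, and resolution together with one coefficient phi."""
    depth = math.ceil(base_depth * ALPHA ** phi)
    width = math.ceil(base_width * BETA ** phi)
    resolution = math.ceil(base_resolution * GAMMA ** phi)
    return depth, width, resolution

# phi = 1 roughly doubles FLOPS relative to the base network.
print(compound_scale(phi=1))
```

The key design choice is that a single coefficient phi controls the model-size budget, rather than tuning depth, width, and resolution independently.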
Adaptive lightweight convolutional neural architecture search for segmentation problem
Published in Engineering Optimization, 2023
Wei Wang, Xianpeng Wang, Xiangman Song
The main idea of neural architecture search (NAS) is to optimize network architectures, which depend strongly on the data distribution (Broni-Bediako et al. 2022). However, NAS can be very time-consuming owing to the iterative evaluation of generated network architectures. In recent years, many algorithms have been proposed to rationalize the search strategy during evolution, such as supernet-based (Zhou et al. 2022) and surrogate-assisted models (Cai, Gao, and Li 2020; Huang et al. 2022). Although these methods are effective at finding good neural architectures, evolutionary search over the whole search space can still be prohibitively expensive, because the computational cost grows exponentially with dimensionality. It therefore remains a challenge in evolutionary NAS to achieve a more efficient search by reasonably decomposing the search space to reduce the computational cost.
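To make the cost structure concrete, here is a toy sketch of a surrogate-assisted evolutionary NAS loop. It is not the authors' algorithm: the encoding, operation set, and the `surrogate_fitness` heuristic are all hypothetical stand-ins, and in real surrogate-assisted NAS the surrogate would be a learned predictor that replaces the expensive train-and-validate step for most candidates.

```python
# Toy surrogate-assisted evolutionary NAS loop (illustrative only).
import random

OPS = ["conv3x3", "conv5x5", "sep_conv", "skip"]  # hypothetical operation set

def random_architecture(n_layers: int = 8) -> list[str]:
    """Encode an architecture as a list of per-layer operation choices."""
    return [random.choice(OPS) for _ in range(n_layers)]

def mutate(arch: list[str]) -> list[str]:
    """Resample one layer's operation to produce a child architecture."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def surrogate_fitness(arch: list[str]) -> int:
    # Toy stand-in for a learned predictor: in practice this model is
    # trained on (architecture, validation accuracy) pairs and is far
    # cheaper to query than training each candidate network.
    score = {"conv3x3": 2, "conv5x5": 3, "sep_conv": 4, "skip": 1}
    return sum(score[op] for op in arch)

def evolve(pop_size: int = 20, generations: int = 10) -> list[str]:
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(p) for p in population]
        # Rank parents + children with the cheap surrogate; only a few
        # survivors would ever be fully trained and validated.
        population = sorted(population + children,
                            key=surrogate_fitness, reverse=True)[:pop_size]
    return population[0]

print(evolve())
```

The loop illustrates why decomposing the search space matters: every extra layer choice multiplies the number of candidate architectures, so even a cheap surrogate cannot fully offset exponential growth in the search space.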