Introduction to Deep Learning
Published in Lia Morra, Silvia Delsanto, Loredana Correale, Artificial Intelligence in Medical Imaging, 2019
Lia Morra, Silvia Delsanto, Loredana Correale
Multi-task learning refers to the joint learning of multiple tasks by the same model. By sharing representations between related tasks, we can enable our model to generalize better on each individual task. This approach is particularly effective in computer vision and medical image analysis, where features are often relevant to multiple tasks, and is usually achieved through parameter sharing [96]. Examples include the detection of different types of lesions in images of the same anatomical district [48, 55]. Another possible strategy is using a single network to model different tasks (detection, segmentation and classification) for a single pathology. In most cases, multi-task learning is implemented by sharing the weights or parameters of the model, but using different outputs depending on the task. There are, however, exceptions to this rule. For instance, Moeskops and colleagues designed a network where multiple tasks (tissue segmentation in MR, pectoral muscle segmentation in MR and coronary artery segmentation in CT) are learned in a joint label space, as if they were a single multi-class problem [118]. Alternatively, tasks can be learnt sequentially, rather than in parallel: in this case, some authors have observed benefits in learning easier tasks first, and then proceeding to more complex tasks, much like radiologists are trained [119].
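To make the weight-sharing pattern concrete, the following is a minimal PyTorch sketch of a shared trunk feeding task-specific output heads; the layer sizes and the particular pair of heads are illustrative assumptions, not taken from the cited works:

```python
# Minimal sketch of multi-task learning via parameter sharing:
# one shared trunk, separate output heads per task (hypothetical sizes).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_features=256, n_lesion_types=3):
        super().__init__()
        # Shared representation: its parameters receive gradients from every task.
        self.trunk = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Task-specific output heads (different outputs depending on the task).
        self.classify = nn.Linear(64, n_lesion_types)  # lesion-type classification
        self.detect = nn.Linear(64, 1)                 # lesion present / absent

    def forward(self, x):
        h = self.trunk(x)
        return self.classify(h), self.detect(h)

model = MultiTaskNet()
x = torch.randn(8, 256)                  # a dummy batch of feature vectors
class_logits, detect_logits = model(x)   # one forward pass serves both tasks
```

Because both heads are driven by the same trunk, a training signal from either task updates the shared representation, which is the mechanism behind the improved generalization described above.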
Deep Convolutional Neural Networks
Published in Mahmoud Hassaballah, Ali Ismail Awad, Deep Learning in Computer Vision, 2020
Mahmoud Khaled Abd-Ellah, Ali Ismail Awad, Ashraf A. M. Khalaf, Hesham F. A. Hamed
Multitask learning is essentially a machine learning paradigm wherein the goal is to train the learning network to perform well on numerous tasks. Multitask learning systems generally exploit shared representations that exist among the tasks to obtain better generalization performance than their counterparts built for a single task. In CNNs, multitask learning is realized through different methodologies. One class of approaches uses a multitask loss function, with hyperparameters typically weighting the individual losses. For instance, Girshick et al. [57] use a multitask loss to train the network jointly for bounding-box regression and classification, thereby improving object detection performance.
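The following is a hedged sketch of such a weighted multitask loss, in the spirit of the joint classification and bounding-box regression objective mentioned above; the weighting hyperparameter `lambda_reg` and all tensor shapes here are illustrative assumptions:

```python
# Sketch of a multitask loss: classification + box regression terms,
# balanced by a hyperparameter (shapes and class count are hypothetical).
import torch
import torch.nn.functional as F

def multitask_loss(class_logits, class_targets, box_preds, box_targets,
                   lambda_reg=1.0):
    # Classification term: cross-entropy over object classes.
    loss_cls = F.cross_entropy(class_logits, class_targets)
    # Localization term: smooth L1 on bounding-box offsets.
    loss_box = F.smooth_l1_loss(box_preds, box_targets)
    # lambda_reg controls the relative weight of the two task losses.
    return loss_cls + lambda_reg * loss_box

class_logits = torch.randn(16, 21, requires_grad=True)   # 16 proposals, 21 classes
class_targets = torch.randint(0, 21, (16,))
box_preds = torch.randn(16, 4, requires_grad=True)       # (x, y, w, h) offsets
box_targets = torch.randn(16, 4)

loss = multitask_loss(class_logits, class_targets, box_preds, box_targets)
loss.backward()   # a single backward pass trains both tasks jointly
```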
On continuous health monitoring of bridges under serious environmental variability by an innovative multi-task unsupervised learning method
Published in Structure and Infrastructure Engineering, 2023
Alireza Entezami, Hassan Sarmadi, Bahareh Behkamal, Carlo De Michele
Multi-task learning is an advanced branch of machine learning that intends to improve the performance of multiple related tasks by leveraging informative data or outputs shared among them (Y. Zhang & Yang, 2018). A multi-task learning model exploits useful information from other, dependent fields in an effort to develop a more accurate model and obtain more reliable outputs. The key assumption in multi-task learning is that all the tasks, or at least a subset of them, are dependent, in which case jointly learning multiple tasks can lead to better performance than learning them independently. Depending upon the labels of the training data, multi-task learning is classified as supervised, semi-supervised, or unsupervised learning. In multi-task supervised learning, the main purpose is to use fully labeled data for classification or regression tasks. Multi-task semi-supervised learning pursues the same purposes as supervised learning but with partially labeled data. When only unlabeled data are present, multi-task unsupervised learning is the best choice, suitable for one-class tasks such as clustering and anomaly detection.
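As a rough illustration of the unsupervised case (a generic sketch, not the method proposed in this paper), several related tasks can share an encoder while each keeps its own decoder, with per-task reconstruction error serving as a one-class anomaly score; all dimensions below are assumptions:

```python
# Sketch of multi-task unsupervised learning for anomaly detection:
# a shared encoder, one decoder per task, reconstruction error as the score.
import torch
import torch.nn as nn

n_tasks, dim = 3, 64
encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())            # shared across tasks
decoders = nn.ModuleList(nn.Linear(16, dim) for _ in range(n_tasks))

def anomaly_scores(x, task):
    # Per-sample reconstruction error doubles as an anomaly score.
    recon = decoders[task](encoder(x))
    return ((x - recon) ** 2).mean(dim=1)

# Training uses only unlabeled data: sum reconstruction losses over all tasks.
batches = [torch.randn(8, dim) for _ in range(n_tasks)]
loss = sum(anomaly_scores(x, t).mean() for t, x in enumerate(batches))
loss.backward()
```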
Digital twins in human understanding: a deep learning-based method to recognize personality traits
Published in International Journal of Computer Integrated Manufacturing, 2021
Jianshan Sun, Zhiqiang Tian, Yelin Fu, Jie Geng, Chunli Liu
In contrast to general single-task learning in machine learning, multitask learning aims to complete different predictive tasks with a single model. Multitask learning can utilize useful information shared among multiple related learning tasks to improve the generalization ability of a model. Alternatively, a complex problem (multiple tasks) can be split into multiple simple, independent subproblems (single tasks) that are solved separately. In practical applications, however, numerous complex problems cannot be split easily, and even when they can, the subproblems remain interrelated. Multitask learning addresses this by sharing data features and model parameters across tasks. In image recognition and natural language processing, multitask learning can improve task performance considerably.
A mask-guided attention deep learning model for COVID-19 diagnosis based on an integrated CT scan images database
Published in IISE Transactions on Healthcare Systems Engineering, 2023
Maede Maftouni, Bo Shen, Andrew Chung Chee Law, Niloofar Ayoobi Yazdi, Fahimeh Hadavand, Fereshte Ghiasvand, Zhenyu (James) Kong
In general, MTL is known as a machine learning approach that assimilates information from correlated tasks to improve the generalization capability of the overall learning model (Zhang & Yang, 2021). There are two approaches in multi-task learning: hard parameter sharing and soft parameter sharing of hidden layers (Ruder, 2017). Hard parameter sharing is the more common in the literature: multiple tasks (networks) share some hidden layers while keeping their separate output layers. Soft parameter sharing, on the other hand, gives each task its own model and parameters, but the parameters of the different tasks are jointly regularized.
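The hard variant corresponds to the shared-trunk pattern sketched earlier; the following is a minimal sketch of soft parameter sharing, assuming (for illustration) that both tasks use identical architectures so that corresponding parameters can be jointly regularized with an L2 penalty:

```python
# Sketch of soft parameter sharing: each task keeps its own model,
# and an L2 penalty pulls corresponding parameters toward each other.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

net_a, net_b = make_net(), make_net()   # separate models, same architecture

def soft_sharing_penalty(model_a, model_b):
    # Sum of squared distances between corresponding parameter tensors.
    return sum(((pa - pb) ** 2).sum()
               for pa, pb in zip(model_a.parameters(), model_b.parameters()))

x_a, y_a = torch.randn(8, 32), torch.randn(8, 1)   # dummy data, task A
x_b, y_b = torch.randn(8, 32), torch.randn(8, 1)   # dummy data, task B
mse = nn.MSELoss()
lam = 0.1   # regularization strength (hyperparameter)

# Joint objective: per-task losses plus the cross-task regularizer.
loss = mse(net_a(x_a), y_a) + mse(net_b(x_b), y_b) \
       + lam * soft_sharing_penalty(net_a, net_b)
loss.backward()
```

The hyperparameter `lam` trades off task fit against parameter similarity: driving it very high pushes the two models toward the hard-sharing regime, while zero recovers two independent single-task models.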