Committee Machines
Published in Yu Hen Hu, Jenq-Neng Hwang, Handbook of Neural Network Signal Processing, 2018
In committee machines, an ensemble of estimators (typically neural networks or decision trees) is generated by a learning process, and the committee's prediction for a new input is formed as a combination of the predictions of the individual committee members. Committee machines can be useful in several ways. First, the committee may achieve a test set performance unobtainable by any individual member on its own, because the errors of the individual members cancel out to some degree when their predictions are combined. The surprising discovery of this line of research is that even if the committee members are trained on data derived from the same data set, their predictions can be sufficiently different that this averaging takes place and is beneficial. This line of research is described in Section 5.2.
A second reason for using committee machines is modularity. It is sometimes beneficial if a mapping from input to target is approximated not by one estimator but by several, each of which can focus on a particular region of input space. The prediction of the committee is then obtained by a locally weighted combination of the members' predictions. In some applications, the individual members self-organize in such a way that the prediction task is modularized in a meaningful way. The most important representatives of this line of research are the mixture of experts approach and its variants, described in Section 5.3.
The third reason for using committee machines is a reduction in computational complexity. Instead of training one estimator on all of the training data, it is computationally more efficient for some types of estimators to partition the data set into several subsets, train a different estimator on each subset, and then combine the predictions of the individual estimators. Typical examples of estimators for which this procedure is beneficial are Gaussian process regression, kriging, regularization neural networks, smoothing splines, and support vector machines, since for these systems training time grows drastically with the size of the training data set. With a committee machine approach, the computational complexity grows only linearly with the size of the training data set. Section 5.4 shows how the estimates of the individual committee members can be combined so that the performance of the committee is not substantially degraded compared to that of a single system trained on all the data.
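The divide-and-combine strategy just described, and the error cancellation it relies on, can be sketched in a few lines. The toy example below (an illustration under assumed synthetic data, not code from the handbook) partitions a regression data set, fits one least-squares line per partition so each member only sees a third of the data, and averages the members' predictions:

```python
import random

random.seed(1)

# Toy regression data: y = 2x + 1 plus noise (a stand-in for a large
# training set that is expensive to fit with a single estimator).
xs = [random.uniform(0.0, 1.0) for _ in range(300)]
data = [(x, 2.0 * x + 1.0 + random.gauss(0.0, 0.1)) for x in xs]

def fit_linear(points):
    """Ordinary least squares for y = a*x + b on one data partition."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Partition the training set and train one committee member per partition;
# each fit touches only a third of the data.
K = 3
members = [fit_linear(data[i::K]) for i in range(K)]

def committee_predict(x):
    # Unweighted average of the members' predictions; Section 5.4 of the
    # chapter discusses more careful combination schemes.
    return sum(a * x + b for a, b in members) / len(members)

print(f"committee prediction at x=0.5: {committee_predict(0.5):.2f}")
```

For estimators whose training cost grows superlinearly in the number of samples, fitting K members on N/K samples each and combining them in this way is cheaper than one fit on all N samples.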
Estimation of shear and Stoneley wave velocities from conventional well data using different intelligent systems and the concept of committee machine: an example from South Pars gas field, Persian Gulf
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2023
Mohammad Mahdi Labani, Mostafa Sabzekar
The concept of a committee machine is used in the literature to combine different approaches and intelligent systems so as to exploit their respective benefits. In this paper, eight predictors are used to generate the outputs, and a linear weighted combination of the superior predictors is then formed. Each predictor's weight represents its contribution to the final output. Since the weights are not predefined, we use a genetic algorithm to obtain them from the training data. A main advantage of this approach is that it not only yields a more accurate output but also identifies the contribution of each predictor to that output.
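The weight-fitting step can be illustrated with a minimal sketch. The synthetic data and bare-bones genetic algorithm below are assumptions standing in for the paper's eight predictors and its actual GA configuration; the GA searches for positive weights, normalized to sum to one, that minimize the training MSE of the combined output:

```python
import random

random.seed(2)

# Hypothetical stand-in for the paper's setup: targets plus the outputs of
# three predictors with different noise levels (less accurate predictors
# should end up with smaller weights).
targets = [random.uniform(0.0, 1.0) for _ in range(100)]
predictor_outputs = [
    [t + random.gauss(0.0, s) for t in targets]
    for s in (0.05, 0.10, 0.30)
]

def fitness(weights):
    """Negative training MSE of the linearly weighted combination."""
    total = 0.0
    for i, t in enumerate(targets):
        combined = sum(w * preds[i]
                       for w, preds in zip(weights, predictor_outputs))
        total += (combined - t) ** 2
    return -total / len(targets)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

# Bare-bones genetic algorithm: keep the fitter half of the population,
# breed children by blend crossover plus Gaussian mutation, and keep the
# weights positive and summing to one.
pop = [normalize([random.random() for _ in predictor_outputs])
       for _ in range(40)]
for _ in range(60):
    elite = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = random.sample(elite, 2)
        child = [0.5 * (a + b) + random.gauss(0.0, 0.02)
                 for a, b in zip(p1, p2)]
        children.append(normalize([max(c, 1e-6) for c in child]))
    pop = elite + children

best = max(pop, key=fitness)
print("combination weights:", [round(w, 2) for w in best])
```

The fitted weights double as an interpretability tool: the most accurate predictor receives the largest weight, which is the "contribution of each predictor" reading described in the abstract.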
Designing a committee of machines for modeling viscosity of water-based nanofluids
Published in Engineering Applications of Computational Fluid Mechanics, 2021
Abdolhossein Hemmati-Sarapardeh, Sobhan Hatami, Hamed Taghvaei, Ali Naseri, Shahab S. Band, Kwok-wing Chau
As mentioned earlier, the three most accurate models (BR-MLP, LM-MLP, and RBF) were selected and combined into a single model by a committee machine intelligent system. Table 4 shows that the CMIS model, with AARE values of 1.263%, 1.117%, and 1.246% for the training set, testing set, and all data, respectively, is the best model for estimating the relative viscosity of water-based nanofluids.
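The AARE figures in Table 4 are average absolute relative errors. The sketch below shows how that metric is computed for a committee, using made-up targets and member predictions and a plain average as the combination rule (the paper's CMIS combination and data are not reproduced here):

```python
# AARE (average absolute relative error), the metric reported in Table 4,
# computed for a simple averaged committee of three hypothetical members.
def aare(predictions, targets):
    """AARE in percent: mean of |predicted - true| / |true|."""
    return 100.0 * sum(abs(p - t) / abs(t)
                       for p, t in zip(predictions, targets)) / len(targets)

# Made-up relative-viscosity targets and member predictions (illustrative only)
targets = [1.10, 1.25, 1.40]
members = [
    [1.12, 1.24, 1.38],
    [1.09, 1.27, 1.41],
    [1.11, 1.23, 1.42],
]
committee = [sum(col) / len(col) for col in zip(*members)]
print(f"committee AARE: {aare(committee, targets):.3f}%")
```

In this toy setting the committee's AARE is lower than any single member's, the same qualitative behavior the paper reports for its CMIS model.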