Published in Introduction to Visual Computing, 2018
Aditi Majumder, M. Gopi
If the only constraint is that the coefficients sum to 1.0, while the coefficients themselves may take any value, the result is called an affine combination. If there are no constraints on the coefficients at all, it is called a linear combination. Note that a linear combination does not always yield a linear interpolation. For example, if Equation 2.1 were C(V) = α²C(V1) + (1 − α²)C(V2), it would not be a linear interpolation in α, but it would still be a linear combination of C(V1) and C(V2), because α² and (1 − α²) are still scalar values. In other words, for linear interpolation, the derivative of the interpolated function with respect to the interpolation parameter must be constant.
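A quick numerical sketch contrasts the two weightings (the values and the `combine` helper below are illustrative choices of my own, not from the chapter). Both expressions are affine combinations, since the weights sum to one, but only the first is linear in α:

```python
# Hypothetical values: c1, c2 stand in for C(V1), C(V2) in Equation 2.1.
def combine(c1, c2, w1, w2):
    # Any weighted sum with scalar weights is a linear combination;
    # it is additionally an affine combination whenever w1 + w2 == 1.
    return w1 * c1 + w2 * c2

c1, c2 = 10.0, 20.0
for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    lin = combine(c1, c2, a, 1.0 - a)        # linear interpolation: linear in a
    aff = combine(c1, c2, a**2, 1.0 - a**2)  # affine combination, quadratic in a
    print(f"a={a:.2f}  linear={lin:.2f}  affine(a^2)={aff:.2f}")
```

The derivative of the first expression with respect to α is the constant C(V1) − C(V2), whereas that of the second is 2α(C(V1) − C(V2)), which varies with α, so it is not a linear interpolation.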
Efficient and interpretable monitoring of high-dimensional categorical processes
Published in IISE Transactions, 2023
Kai Wang, Jian Li, Fugee Tsung
To see this, let e_j denote a vertex vector of the standard simplex, with the jth entry being one and all the others zero. In general all these e_j's are treated equally, i.e., we let f(e_1) = f(e_2) = ⋯ = f(e_p). If f, embedded in the likelihood maximization, promotes polarization, it should assign a strictly larger value to any of the vertices than to any interior point of the simplex that can be represented as an affine combination of these vertices, i.e., f(e_j) > f(∑_j α_j e_j), where ∑_j α_j = 1 and 0 < α_j < 1. If f is concave, by Jensen's inequality we have f(∑_j α_j e_j) ≥ ∑_j α_j f(e_j) = f(e_1), which contradicts (5) and thus finishes the proof.
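The contradiction is easy to check numerically. The sketch below is my own illustration: the excerpt does not specify f, so Shannon entropy is used as a concave stand-in that takes equal values at all vertices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

def f(x):
    # Shannon entropy: a concave function on the simplex, used here only
    # as an illustrative stand-in for the statistic in the excerpt.
    x = np.clip(x, 1e-12, 1.0)
    return -np.sum(x * np.log(x))

vertices = np.eye(p)
f_vertex = f(vertices[0])            # identical at every vertex by symmetry

alpha = rng.dirichlet(np.ones(p))    # random interior point of the simplex
interior = alpha @ vertices          # affine combination of the vertices

print(f_vertex, f(interior))         # f(interior) >= f(vertex), as Jensen predicts
```

Any concave f with equal values at the vertices behaves the same way, which is exactly why concave choices cannot promote polarization.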
Comparison of optimal linear, affine and convex combinations of metamodels
Published in Engineering Optimization, 2021
During recent years, initiated by the work of Viana, Haftka, and Steffen (2009) and Acar and Rais-Rohani (2009), adopting affine combinations of metamodels has emerged as a standard approach for setting up optimal ensembles of metamodels for design optimization. However, the present author's experience with affine combinations of metamodels has often been poor. Overfitting and poor generalization of the optimal affine ensemble are frequently observed: typically, a large positive weight is cancelled out by an almost equally large negative weight. In fact, this issue was already discussed in Viana, Haftka, and Steffen (2009), where it was attributed to the approximation used in the objective function, and that early article also discussed whether a convexity constraint should be introduced to overcome this pitfall.

One might therefore be surprised that so many contributions promote affine combinations of metamodels, especially since their pitfalls are well known in machine learning; see, for example, the textbook by Zhou (2012). Indeed, this was already pointed out by Breiman (1996) when he investigated the ideas of stacked generalization proposed by Wolpert (1992). Breiman clearly states in his article that the proper constraints on the weights w_i of a combination of metamodels are ∑_i w_i = 1 and w_i ≥ 0. In conclusion, the ensemble of metamodels should be a convex combination, not an affine combination. The investigations by Strömberg (2018b, 2019) support this statement. This is also demonstrated in the following article, where optimal linear, affine and convex combinations are established by minimizing the norm of the residual vector of leave-one-out cross-validation errors, and the optimal ensembles of metamodels (OEMs) are validated by calculating the root mean square error (RMSE) for eight benchmark functions.
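As a rough sketch of the weight-fitting step described above (the function names and the use of SciPy are my own choices, not the article's implementation): given a matrix E whose columns hold each metamodel's leave-one-out cross-validation errors, the affine weights minimizing ‖Ew‖ subject to ∑ w_i = 1 have a closed form via a Lagrange multiplier, while the convex variant adds w_i ≥ 0 and becomes a small quadratic program.

```python
import numpy as np
from scipy.optimize import minimize

def affine_weights(E):
    """Minimize ||E @ w||^2 subject to sum(w) = 1 (closed form via Lagrange).
    E: (n_points, n_models) matrix of leave-one-out cross-validation errors."""
    C = E.T @ E                        # Gram matrix of the CV error vectors
    ones = np.ones(C.shape[1])
    x = np.linalg.solve(C, ones)       # C^{-1} 1
    return x / (ones @ x)              # scale so the weights sum to one

def convex_weights(E):
    """Same objective with the extra Breiman-style constraint w_i >= 0."""
    m = E.shape[1]
    res = minimize(
        lambda w: float(np.sum((E @ w) ** 2)),
        x0=np.full(m, 1.0 / m),        # start from the plain average
        method="SLSQP",
        bounds=[(0.0, None)] * m,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return res.x

# Toy demonstration with synthetic, strongly correlated error columns.
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 1))
E = base + 0.05 * rng.normal(size=(50, 4))        # four near-collinear metamodels
print("affine:", np.round(affine_weights(E), 2))  # often large +/- weights
print("convex:", np.round(convex_weights(E), 2))  # weights stay in [0, 1]
```

With near-collinear error columns, the affine solution typically exhibits exactly the cancellation pattern described above, a large positive weight offset by a large negative one, while the convex solution keeps all weights in [0, 1].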