Multi-Objective Evolutionary Optimization
Published in Nirupam Chakraborti, Data-Driven Evolutionary Modeling in Materials Technology, 2023
Here a parameter σ is defined to identify the best guide. It is a vector of dimension $\binom{m}{2}$ for a hyperspace of dimension m. For a three-dimensional case it is defined in terms of the objective functions as
$$\vec{\sigma} = \frac{1}{f_1^2 + f_2^2 + f_3^2}\begin{pmatrix} f_1^2 - f_2^2 \\ f_2^2 - f_3^2 \\ f_3^2 - f_1^2 \end{pmatrix}$$
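A minimal sketch of how this σ vector can drive guide selection, as in σ-based multi-objective particle swarm variants: each particle is assigned the archive member whose σ vector is closest to its own. The archive contents and objective values below are illustrative, not taken from the source.

```python
import numpy as np

def sigma_vector(f):
    """σ vector for a three-objective point f = (f1, f2, f3): pairwise
    differences of squared objectives, normalised by their sum of squares."""
    f1, f2, f3 = np.square(f)
    return np.array([f1 - f2, f2 - f3, f3 - f1]) / (f1 + f2 + f3)

def best_guide(particle_f, archive_f):
    """Pick the archive member whose σ vector is closest (Euclidean
    distance) to the particle's σ vector and use it as the guide."""
    s = sigma_vector(particle_f)
    return int(np.argmin([np.linalg.norm(sigma_vector(a) - s)
                          for a in archive_f]))

# Illustrative archive of non-dominated objective vectors.
archive = [(1.0, 2.0, 3.0), (2.0, 2.0, 2.0), (3.0, 1.0, 2.0)]
print(best_guide((2.5, 1.2, 2.1), archive))
```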
Introduction
Published in A Vasuki, Nature-Inspired Optimization Algorithms, 2020
Each object or pattern has its own unique set of features that enables its identification. When the number of features of an object is large, it becomes impractical to extract all of them, compare them with an existing template, and then identify the object. To circumvent this problem, a technique known as feature reduction is applied: selecting a minimum number of features from the entire set that still classifies objects with a reasonable accuracy. Minimizing the number of features amounts to selecting a subset of features, thus reducing the computational complexity of searching for the optimum solution; it is equivalent to minimizing the dimension d of the hyperspace. Once d is fixed, the next step is to find the feature set X = {x1, x2, x3, …, xd} at which f(X) attains its global maximum or minimum value, as the case may be. If f(X) represents classification accuracy it is to be maximized, whereas if f(X) is a classification error it has to be minimized.
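As a concrete illustration, here is a greedy forward-selection sketch of this subset-selection problem. The scoring function is a made-up stand-in for the classification accuracy f(X); exhaustive search over all C(n, d) subsets would be the exact but impractical alternative the passage alludes to.

```python
def greedy_feature_selection(score, n_features, d):
    """Greedy forward selection: build X one feature at a time, always
    adding the feature that most improves the score f(X).  A cheap
    heuristic, not guaranteed to reach the global optimum."""
    selected = []
    remaining = set(range(n_features))
    while len(selected) < d:
        best = max(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical accuracy proxy that rewards features 0, 3 and 5.
useful = {0: 0.30, 3: 0.25, 5: 0.20}
score = lambda X: sum(useful.get(j, 0.01) for j in X)

print(greedy_feature_selection(score, n_features=10, d=3))  # -> [0, 3, 5]
```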
Applications in Signal Processing
Published in Phil Mars, J.R. Chen, Raghu Nambiar, Learning Algorithms, 1996
In using SLA in the adaptive filtering context, the output set of actions of the automaton is made to correspond to a set of filter coefficients: each output action of the automaton is related to a specific combination of filter coefficients. Since the number of actions of the automaton is finite, this involves the discretisation of the parameter space into a number of hyperspaces. The error surface is thus partitioned into a number of hyperspaces, the total number of hyperspaces being equal to the total number of actions of the automaton, and the dimension of each hyperspace being equal to the number of filter parameters. The task of the automaton is then to asymptotically choose the action corresponding to the set of filter coefficients which results in the minimum error. An example clarifies this: suppose the number of filter parameters is three, i.e. [a, b, c], and the number of actions of the automaton is N; then each action corresponds to one discretised triple of coefficient values.
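A minimal sketch of this scheme, assuming a linear reward-inaction (L_RI) update rule and a placeholder environment: each of the three coefficients [a, b, c] is discretised onto a small grid, every grid triple becomes one automaton action, and rewarded actions have their selection probability reinforced. The grid, step size, and reward model are all illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretise each of the three coefficients [a, b, c] onto a small grid;
# every grid triple is one automaton action (N = 3**3 = 27 here).
grid = np.linspace(-1.0, 1.0, 3)
actions = np.array([[a, b, c] for a in grid for b in grid for c in grid])
N = len(actions)

p = np.full(N, 1.0 / N)   # action probabilities, initially uniform
step = 0.05               # L_RI reward step size (illustrative)

def environment_response(coeffs):
    """Placeholder environment: reward with a probability that grows as
    the candidate filter's error shrinks (optimum assumed at [1, -1, 0])."""
    err = np.mean((coeffs - np.array([1.0, -1.0, 0.0])) ** 2)
    return rng.random() < np.exp(-5.0 * err)

for _ in range(20000):
    i = rng.choice(N, p=p)                  # automaton selects an action
    if environment_response(actions[i]):    # reward -> L_RI reinforcement
        p = (1.0 - step) * p
        p[i] += step
    # on penalty: no update (the "inaction" in reward-inaction)

print("most probable coefficient set:", actions[np.argmax(p)])
```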
1D Temperature Tomography of a Flame, Based on VIS-NIR Spectrometry
Published in Combustion Science and Technology, 2022
Milos Mosic, Ivan Belca, Milos Vicic, Becko Kasalica
Here the dotted line is the initial temperature distribution, the solid line the distribution found by the algorithm, and the dashed line the goal (given) temperature distribution. The system of nonlinear equations forms a surface in a hyperspace; the number of unknowns is the dimension of that space minus one. The objective of the algorithm is to find the global minimum of this hypersurface. Success in finding the global minimum depends largely on the initial solutions, which must be set by the programmer. As can be seen, for all three given temperature distributions and all three initial temperature distributions, the algorithm converges correctly. This shows that, for a fixed initial solution, the algorithm can find the displacement of the temperature profile inside the firebox. In this simulation the temperatures were binned into ten segments and twenty wavelengths between 800 nm and 900 nm were used, so the nonlinear system comprised 20 equations in 11 unknowns.
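The search described here (more equations than unknowns, with strong sensitivity to the starting point) can be sketched with a generic nonlinear least-squares solver. The forward model, sensitivity matrix, and temperature values below are invented to match the 20-equation, 11-unknown shape of the problem; the real system maps binned flame temperatures to VIS-NIR emission at twenty wavelengths.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Toy stand-in: 20 "measurement" equations in 11 unknowns, as in the text,
# but with an invented nonlinear forward model.
A = rng.normal(size=(20, 11))
t_true = np.linspace(1200.0, 1800.0, 11)   # made-up temperature profile (K)

def forward(t):
    return A @ np.log(t)                   # invented nonlinearity

measured = forward(t_true)

def residuals(t):
    return forward(t) - measured

# As the authors stress, convergence to the global minimum hinges on the
# initial solution; here we start near a plausible flame temperature.
t0 = np.full(11, 1500.0)
sol = least_squares(residuals, t0, bounds=(300.0, 3000.0))
print("max |residual|:", np.abs(sol.fun).max())
```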
Empirical validation of different internal superficial heat transfer models on a full-scale passive house
Published in Journal of Building Performance Simulation, 2018
Fabio Munaretto, Thomas Recht, Patrick Schalbart, Bruno Peuportier
This article focuses on empirical validation through a monitored passive house case study located in Chambéry (France). Simulated temperatures must be compared to measured temperatures, and in this context it is essential to consider uncertainty bands of temperatures rather than single profiles, since both simulated and measured temperatures are subject to uncertainty. This requires a proper treatment of the uncertainty associated with the input parameters (the uncertainty analysis), after which the uncertainty has to be propagated through the models.

The main problem with uncertainty propagation and global sensitivity analysis using Monte-Carlo-like techniques is the high dimension of the input-parameter hyperspace. For uncertainty propagation, high dimension does not in itself pose a computational problem, since a few thousand simulations suffice to determine uncertainty bands whatever the number of input parameters studied. In practice, however, it is time-consuming to specify a probability density function (PDF) for several hundred input parameters, part of whose uncertainty could be neglected because some have negligible influence on the output variance. Sensitivity analysis is significantly different: computing global sensitivity indices such as the Sobol (1993) indices requires approximately 1000–10,000 code runs per input parameter (Iooss 2011a), depending on the sampling procedure, which could amount to several tens of thousands of simulation runs. For a complete review of global sensitivity analysis, the reader is referred to Saltelli et al. (2008).

In any case, it is either relevant or necessary to select a limited number of input parameters that have the most influence on the output, in order to avoid unrealistic simulation times. To carry out this selection, we used a screening technique of sensitivity analysis, called the 'elementary effect technique' or 'Morris screening technique' (Morris 1991).
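A simplified, hand-rolled version of the elementary-effect computation is sketched below (continuous random base points rather than Morris's level grid; the toy model, trajectory count, and step size are assumptions). Factors with a large mean absolute elementary effect (μ*) are influential and kept for the subsequent analysis; a large spread (σ) signals nonlinearity or interactions.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Stand-in for the building simulation (assumed): x[0] dominates,
    x[1] matters mildly through a nonlinearity, x[2] is nearly inert."""
    return 5.0 * x[0] + np.sin(3.0 * x[1]) + 0.01 * x[2] ** 2

k = 3        # number of input parameters (hundreds in the article's case)
r = 20       # number of trajectories
delta = 0.2  # elementary-effect step in the unit hypercube

effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)   # random base point
    y = model(x)
    for i in rng.permutation(k):                # move one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)
        x, y = x_new, y_new

for i in range(k):
    ee = np.asarray(effects[i])
    print(f"x{i+1}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std():.2f}")
```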