An Unexpected Renaissance Age
Published in Alessio Plebe, Pietro Perconti, The Future of the Artificial Mind, 2021
Alessio Plebe, Pietro Perconti
On a side very different from mathematical topology, there are intriguing attempts at explaining DL by drawing an analogy with theoretical physics. One technique, known as the renormalization group, has played a fundamental role in contemporary theoretical physics by overcoming the problem of series that diverge to infinity in fundamental equations, such as those of Dirac's quantum electrodynamics (Stueckelberg and Petermann, 1953). The renormalization group allows one to relate the changes of a physical system that appear at different scales when the system exhibits scale-invariance properties. By applying the renormalization group it was possible to resolve the critical divergence towards infinity of the series in the Dirac equation. The renormalization group is also the best tool for the analysis of critical phenomena: phase transitions at the boundary between the ordinary discontinuous behavior between phases and the continuum of phases observed at temperatures above a certain threshold. Physical systems approaching the critical point show a remarkable scale invariance of some of their parameters, making the renormalization group very effective in connecting phenomena that occur at quite different length scales (Wilson and Kogut, 1974). What does the renormalization group have to do with artificial neural networks?
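The core idea of relating descriptions at different scales can be illustrated with a minimal sketch of Kadanoff-style block-spin coarse-graining, the simplest real-space renormalization step: each block of spins is replaced by a single spin via a majority rule, producing a description of the same system at a coarser scale. This is only a toy illustration of the scale-relating step, not the full renormalization-group machinery used in the analogy with deep learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_coarse_grain(spins):
    """One block-spin step: replace each 3x3 block of +/-1 spins
    by the sign of its sum (majority rule, ties broken towards +1)."""
    n = spins.shape[0] // 3
    blocks = spins[:3 * n, :3 * n].reshape(n, 3, n, 3)
    sums = blocks.sum(axis=(1, 3))
    return np.where(sums >= 0, 1, -1)

# Start from a random (high-temperature-like) spin configuration
# and apply the coarse-graining step twice.
spins = rng.choice([-1, 1], size=(81, 81))
coarse = majority_coarse_grain(spins)    # 27 x 27 description
coarser = majority_coarse_grain(coarse)  # 9 x 9 description
```

Iterating the map and watching how effective parameters change from one level to the next is what the renormalization group formalizes; at a critical point the statistics of `spins`, `coarse`, and `coarser` would look the same.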
Fractal Analysis
Published in Shyam S. Sablani, M. Shafiur Rahman, Ashim K. Datta, Arun S. Mujumdar, Handbook of Food and Bioprocess Modeling Techniques, 2006
Fractal analysis is mainly applied when other methods fail or become tedious for complex or chaotic problems. Many natural patterns are either irregular or fragmented to such an extreme degree that Euclidean or classical geometry cannot describe their form (Mandelbrot 1977, 1987). Any shape can be characterized by whether or not it has a characteristic length (Takayasu 1990). For example, a sphere has a characteristic length defined by its diameter. Shapes with characteristic lengths share an important common property: smoothness of surface. A shape having no characteristic length is called self-similar. Self-similarity is also known as scale-invariance, because self-similar shapes do not change their shape under a change of observational scale. This important symmetry gives a clue to understanding complicated shapes that have no characteristic length, such as the Koch curve or clouds (Takayasu 1990). The idea of a fractal is based on this lack of characteristic length, or on self-similarity. The word fractal was introduced by Mandelbrot (1977) to represent shapes or phenomena having no characteristic length. The origin of this word is the Latin adjective fractus, meaning broken; the English words "fractional" and "fracture" derive from the same root, which conveys the state of broken pieces gathered together irregularly.
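For exactly self-similar shapes like the Koch curve, the absence of a characteristic length can be quantified by the similarity dimension D = log N / log(1/r), where the shape is made of N copies of itself, each scaled down by the ratio r. A short sketch of that computation:

```python
import math

def similarity_dimension(n_copies, scale_ratio):
    """Similarity dimension D = log(N) / log(1/r) of a set built
    from N copies of itself, each scaled down by the factor r."""
    return math.log(n_copies) / math.log(1.0 / scale_ratio)

# Koch curve: each segment is replaced by 4 segments at 1/3 the length.
koch = similarity_dimension(4, 1 / 3)    # ~ 1.2619, between line and plane
# Cantor set: 2 copies at 1/3 scale.
cantor = similarity_dimension(2, 1 / 3)  # ~ 0.6309, between point and line
# Sanity check, a plain segment: 3 copies at 1/3 scale.
line = similarity_dimension(3, 1 / 3)    # exactly 1.0
```

The non-integer values for the Koch curve and the Cantor set are the numerical signature of "fractal" in Mandelbrot's sense.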
On the Predictability of Seizures
Published in Andrea Varsavsky, Iven Mareels, Mark Cook, Epileptic Seizures and the EEG, 2016
Andrea Varsavsky, Iven Mareels, Mark Cook
Scale-invariance is an instance of self-similarity. The Mandelbrot set in Figure 7.2 is spatially scale invariant because similar properties are present at all spatial scales of the picture. More relevant to this chapter is temporal scale-invariance, which for a stochastic process implies that the statistical properties at different time scales (e.g., days versus hours versus minutes) effectively remain the same. Scale-invariance has been studied for earthquake frequency, internet traffic and economic data [130], and has also been identified in many biological systems including the timing of ion-channel opening in neurons, auditory nerve fiber action potentials and human heartbeats [7].
A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D Images
Published in Applied Artificial Intelligence, 2022
Scale Invariance is, by definition, the ability of a method to process a given input independent of the relative scale (i.e., the scale of an object relative to its scene) or image resolution. Although it is crucial for certain applications, this ability is usually overlooked or is confused with a method's ability to include multiscale information. A method may use multiscale information to improve its pixel-wise segmentation ability, but can still be dependent on scale or resolution. That is why we find it necessary to discuss this issue under a different title and to provide information on the techniques that provide scale and/or resolution invariance.
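One common way to reduce (not eliminate) scale dependence at inference time is pyramid inference: run the model on rescaled copies of the input and average the per-pixel scores back at the original resolution. The sketch below is a toy illustration of that pattern only; `toy_model` is a hypothetical stand-in for a real segmentation network, and the nearest-neighbour resize stands in for a proper interpolator.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize (stand-in for a proper interpolator)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def toy_model(img):
    """Hypothetical per-pixel scorer standing in for a segmentation net."""
    return img.astype(float)

def multiscale_predict(img, scales=(0.5, 1.0, 2.0)):
    """Run the model at several input scales and average the
    per-pixel scores after resizing them back to the input size."""
    h, w = img.shape
    acc = np.zeros((h, w))
    for s in scales:
        sh, sw = max(1, int(h * s)), max(1, int(w * s))
        scores = toy_model(resize_nearest(img, sh, sw))
        acc += resize_nearest(scores, h, w)
    return acc / len(scales)
```

Note the distinction drawn in the excerpt: this averaging *uses* multiscale information, but the underlying model can still score poorly on object scales outside the pyramid's range, so it is not scale invariance in the strict sense.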
Invariance-based approach: general methods and pavement engineering case study
Published in International Journal of General Systems, 2021
Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva
For example, if we observe a power law – a dependence corresponding to scale invariance – then a natural explanation is that this dependence is scale invariant. Indeed, as we have mentioned, the property of scale invariance leads to a power law, and the only thing that remains to be determined is the exact values of the parameters c and a.
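The connection works in both directions and is easy to check numerically: a power law y = c·x^a becomes a straight line in log-log coordinates, so the two parameters c and a can be recovered by a linear fit, and rescaling x by any factor λ only multiplies y by the constant λ^a. A minimal sketch with made-up parameter values:

```python
import numpy as np

# A power law y = c * x**a with illustrative (made-up) parameters.
c_true, a_true = 2.5, 0.75
x = np.linspace(1.0, 100.0, 200)
y = c_true * x ** a_true

# In log-log coordinates the power law is linear, so a degree-1 fit
# recovers the exponent a (slope) and log of the prefactor c (intercept).
a_fit, log_c_fit = np.polyfit(np.log(x), np.log(y), 1)
c_fit = np.exp(log_c_fit)

# Scale invariance: rescaling x by lam multiplies y by the
# x-independent constant lam**a.
lam = 3.0
ratio = (c_true * (lam * x) ** a_true) / y
```

Here `ratio` is the same constant λ^a at every x, which is precisely the scale-invariance property the excerpt invokes.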
Scale-invariance of ruggedness measures in fractal fitness landscapes
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
We can see that the landscape in Figure 2(a) has a Cantor set-like structure typical of fractal sets. This structure is maintained if the scale is reduced, see Figure 2(b), which is a feature of scale-invariance and self-similarity. The same can be observed for the dynamic landscape (Figure 2(c) and (d)). Here, it can be seen that for small values of , the landscape has no fractal features. This is because for small target times , targeting by small perturbations of the parameters cannot generically be achieved. For larger values of , approximately for , targeting is possible, and the landscape has fractal features. This is confirmed by the box-counting dimension of the landscapes, which is given in Figure 3 for all four example systems. The results show that for below a certain threshold, the dimension is equal to 2, which corresponds to the target time not being large enough to achieve targeting. As the target time gets larger, the dimension also increases and is no longer an integer. The value of converges to a constant for getting large enough (approximately ), which is almost the same for all four tested landscapes (about ). Hence, it can be concluded that there is no direct relation between the fractal dimension of the chaotic attractors of the studied maps (see Table 1) and the fractal dimension of the fitness landscapes that result from employing the maps to solve the targeting problem. Further note that there is a difference in the value of depending on the initial edge length , where smaller values of produce slightly higher values of . However, this effect diminishes and finally vanishes as is decreased further. This is consistent, as the calculation of is done for different values of if different values of are taken; see also Appendix 1.
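The box-counting dimension used above can be sketched on the classical Cantor set, whose structure the landscape in Figure 2(a) resembles: cover the set with boxes of shrinking size ε and fit the slope of log N(ε) against log(1/ε). This is a generic illustration of the estimator, not the landscape computation of the paper; integer arithmetic is used so the box counts are exact.

```python
import numpy as np

def cantor_left_endpoints(level):
    """Left endpoints of the middle-thirds Cantor construction after
    `level` refinement steps, in integer units of 3**-level."""
    pts = [0]
    for k in range(level):
        offset = 2 * 3 ** (level - k - 1)  # jump over the removed middle third
        pts = [p for q in pts for p in (q, q + offset)]
    return pts

def box_counting_dimension(level, max_k):
    """Slope of log N(eps) vs log(1/eps) for box sizes eps = 3**-k,
    k = 1..max_k, where N(eps) counts the occupied boxes."""
    pts = cantor_left_endpoints(level)
    ks = range(1, max_k + 1)
    counts = [len({p // 3 ** (level - k) for p in pts}) for k in ks]
    log_inv_eps = [k * np.log(3.0) for k in ks]
    slope, _ = np.polyfit(log_inv_eps, np.log(counts), 1)
    return slope

dim = box_counting_dimension(level=10, max_k=8)
# Theory for the Cantor set: log 2 / log 3, roughly 0.6309.
```

A non-integer slope like this one signals fractal structure, exactly the criterion the excerpt applies when the landscape dimension departs from 2.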