Higher-Level Programming
Published in Syed R. Rizvi, Microcontroller Programming, 2016
FORTRAN (Formula Translation) was the first practical higher-level programming language, invented by John Backus at IBM in 1954 and released for commercial use in 1957. Fortran is still used today in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics, and computational chemistry. John Backus had a vision of creating a programming language that was closer in appearance to human language, which is the defining trait of a higher-level language. A higher-level programming language hides from view the details of CPU operations such as memory access models and management of scope. Additionally, as mentioned earlier, such languages are mainly considered higher-level because they are closer to human languages and farther from machine languages. Therefore, the main advantage of high-level languages over lower-level languages is that they are easier to read, write, and maintain. Some examples of higher-level programming languages are C, C++, C#, and Java. Rather than dealing with registers, memory addresses, and call stacks, higher-level languages deal with variables, arrays, objects, complex arithmetic, threads, locks, and other abstract computer science concepts, with a focus on usability over optimal program efficiency. Another advantage of higher-level languages over assembly language is portability: the same program can run on a variety of computers.
Common Misconception: The term "higher-level language" does not mean that the language is superior to lower-level programming languages.
Helpful Hint: A higher-level language isolates the execution semantics of the computer architecture from the specification of the program. This simplifies program development when compared to a lower-level language.
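To illustrate the abstraction described above, here is a minimal sketch in Python (chosen purely for illustration; the chapter's own examples are C, C++, C#, and Java) that averages a list of numbers without any mention of registers, memory addresses, or the call stack:

```python
# A few higher-level statements: a variable and a list, no registers or addresses.
grades = [72, 88, 95, 61]
average = sum(grades) / len(grades)   # the language runtime handles memory and looping
print(f"average = {average}")

# An equivalent assembly program would load a base address into a register,
# step an index register through memory, accumulate in another register,
# and divide explicitly -- exactly the details a higher-level language hides.
```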
Two-dimensional coronene fractal structures: topological entropy measures, energetics, NMR and ESR spectroscopic patterns and existence of isentropic structures
Published in Molecular Physics, 2022
Micheal Arockiaraj, Joseph Jency, Jessie Abraham, S. Ruth Julie Kavitha, Krishnan Balasubramanian
The graph entropy is a measure that quantifies the structural information content of a graph-based network topology. As it captures the complexity of the framework through computationally amenable evaluation procedures, entropy-based methods play a vital role in examining problems in various fields including mathematical chemistry, information processing and computational physics [55,56]. Consequently, several approaches have been developed over the decades to define this parameter through local information functionals of the graph [57–61]. Shannon's entropy is one such widely applied graph entropy measure, obtained by deriving a probability distribution from a suitable vertex partition of the graph [62]. Recently, structural index parameters have been employed to generate these entropies, as they serve as efficient functionals of the information regarding graph topology [41,63,64]. In this study, we have developed graph entropies for several tessellations of two-dimensional coronoid fractals using their degree-based structural functionals, compared their structural characterisations across the different-dimensional entropy measures, and identified two types of isentropic structures. Furthermore, we have developed machine learning approaches for the generation of the NMR and ESR spectroscopic signatures of these fractals through combinatorial and graph-theoretical algorithms. We have also developed machine learning methods for the rapid computation of the enthalpies of formation of these fractals. For the isentropic fractals, we have extended our computations to other distance-based topological indices, graph spectra, spectral patterns and quantum spectral-based indices. Such quantum-based measures for the electronic and shape properties of molecules have been the topic of a few studies [65–68].
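As an illustrative sketch of a degree-based Shannon graph entropy of the kind the abstract refers to (a generic formulation, not necessarily the exact functional used in the paper), one can take each vertex's degree as its information functional, normalise the degrees into a probability distribution, and apply Shannon's formula. For a regular graph every vertex contributes equally, so the entropy reduces to log2(n); non-regular graphs on the same number of vertices give smaller values.

```python
import math
from collections import defaultdict

def degree_based_graph_entropy(edges):
    """Shannon-type graph entropy with vertex degree as the information functional.

    p(v) = deg(v) / sum_u deg(u);  H(G) = -sum_v p(v) * log2 p(v)
    Generic degree-based formulation for illustration only.
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    total = sum(degree.values())          # equals 2 * |E|
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

# Example: a single hexagonal ring (cycle C6), a benzene-like unit of a coronoid
ring = [(i, (i + 1) % 6) for i in range(6)]
print(degree_based_graph_entropy(ring))  # log2(6) ≈ 2.585, since C6 is 2-regular
```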
Pattern Detection on Glioblastoma’s Waddington Landscape via Generative Adversarial Networks
Published in Cybernetics and Systems, 2022
Complex systems theory, or simply complexity science, is a recent paradigm in physics devoted to the study of complex systems (Bossomaier and Green 2000). Cancers are such complex systems (Uthamacumaran 2021): their emergent properties, patterns, and signaling networks steering their cell fate dynamics (i.e., cybernetics) comprise an irreducible system. Complex systems are systems composed of many interacting parts, the undivided whole of which gives rise to emergent behaviors (Bossomaier and Green 2000). In biological cybernetics, complex systems are often networks of complex processes such as protein interactions, gene regulatory relationships, cancer-immune dynamics, etc. In Aristotelian terms, complex systems are systems in which the whole is greater than the sum of its parts, indicating the nonlinear interactions between the parts and their environment (i.e., interconnectedness) (Bossomaier and Green 2000; Shalizi 2006). Think of the self-organization of beehives, stigmergy in ant colonies, stock market fluctuations, traffic flow, the cybernetics of social networks, patterns of fluid turbulence, physiological oscillations, and cellular gene expression dynamics; these are some of the many examples of complex systems (Wolfram 1988; Goldberger 2006; Ladyman and Wiesner 2020). However, the paradigm of complex systems has shown that even simple algorithms or computer programs can be as complex as any naturally occurring complex system and can simulate/model any complex system (i.e., the Church-Turing thesis and Turing universality). Examples include Conway's Game of Life and elementary cellular automata (Wolfram 1988). Therefore, complexity science provides a general framework to study difficult many-body problems in physics within the framework of algorithms and computational physics.
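As a small illustration of that last point (a sketch only, using Wolfram's elementary cellular automaton numbering; Rule 30 is a standard textbook example rather than anything analysed in the article), a one-dimensional binary automaton with a three-cell neighbourhood already produces intricate, seemingly random patterns from a trivial update rule:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton (Wolfram rule numbering)."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]   # neighbourhood value -> new state
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Start from a single live cell and print a few generations of Rule 30.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=30)
```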