Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
Köhler illumination a method of illuminating the mask in a projection imaging system whereby a condenser lens forms an image of the illumination source at the entrance pupil (entrance aperture) of the objective lens, and the mask is at the exit pupil of the condenser lens.

Kohonen network a 2-dimensional array of neurons with each neuron connected to the full set of network inputs. Learning is unsupervised: the neuron whose vector of input weights is closest to an input vector adjusts its weights so as to move closer to that input vector; the neuron's neighbors in the array adjust their input weights similarly. As a result, clusters of weights in different parts of the array tend to respond to different types of input.

Kolmogorov complexity the minimum-length description of a binary string that would enable a universal computer to reconstruct the string.

Kraft inequality a theorem from information theory that sets a restriction on instantaneous codes (codes where no codeword is a prefix of any other codeword; e.g., a code containing 0 and 01 is not an instantaneous code). The Kraft inequality states that for an instantaneous code over an alphabet of size $D$ with codeword lengths $l_1, l_2, l_3, \ldots, l_m$, the following must be true:
$$\sum_{i=1}^{m} D^{-l_i} \le 1.$$
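The inequality is easy to check numerically by summing $D^{-l_i}$ over a set of codeword lengths. A minimal sketch in Python (the helper name `kraft_sum` is illustrative, not from the source):

```python
def kraft_sum(lengths, D=2):
    """Sum of D**(-l) over the codeword lengths l of a code."""
    return sum(D ** -l for l in lengths)

# Lengths of the binary prefix code {0, 10, 110, 111}:
print(kraft_sum([1, 2, 3, 3]))  # 1.0, so the Kraft inequality holds
# No binary prefix code can have these lengths:
print(kraft_sum([1, 1, 2]))     # 1.25 > 1, inequality violated
```

The converse also holds: any set of lengths satisfying the inequality can be realized by some instantaneous code.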
Complexity and Human Factors
Published in Guy H. Walker, Neville A. Stanton, Paul M. Salmon, Daniel P. Jenkins, Command and Control: The Sociotechnical Perspective, 2009
Guy H. Walker, Neville A. Stanton, Paul M. Salmon, Daniel P. Jenkins
The theme of information entropy leads into the derivation of more specific complexity metrics, a core example being Kolmogorov complexity (Solomonoff, 1960). Underlying this is the principle of Computational Equivalence. The rationale behind this approach is that a computational model of a particular entity or artefact can be created which, if it were to be ‘run’, would generate and/or fully explain the entity or artefact in question. Traditionally, such analyses are undertaken with computer programs serving as computational equivalents, but in human factors something like Hierarchical Task Analysis (HTA; Annett, 1971, 2005; Stanton, 2006) can serve an analogous purpose. After all, what is HTA if not a description of, and a computational equivalent for, its top-level goal? ‘Running the HTA’, performing all the operations, sub-goals and plans, should generate the top-level goal, and because of this, it has computational equivalence to the actual entity or artefact which in real life produces it. Complexity theory of this Kolmogorov variety is about subjecting models like this to analysis in order to diagnose the ‘real’ entity or phenomenon’s complexity from its computational surrogate, whether it be a computer program or, in this case, an HTA.
Biomolecular Processing and Molecular Electronics
Published in Sergey Edward Lyshevski, Molecular Electronics, Circuits, and Processing Platforms, 2018
Kolmogorov Complexity: Kolmogorov defined the algorithmic (descriptive) complexity of an object to be the length of the shortest computer program that describes the object. The Kolmogorov complexity for a finite-length binary string $x$ with length $l(x)$ is
$$K_U(x) = \min_{P \,:\, U(P) = x} l(P),$$
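The minimum in this definition is not computable, but any lossless compressor gives a practical upper bound on it: the compressed bytes plus a fixed-size decompressor constitute a program that reproduces $x$. A small sketch of that idea (the function name `kc_upper_bound` is my own, not from the source):

```python
import os
import zlib

def kc_upper_bound(data: bytes) -> int:
    """Bits in a zlib-compressed description of data.

    K_U(x) itself is uncomputable; the compressed length is only an
    upper bound, up to the constant cost of the decompressor.
    """
    return 8 * len(zlib.compress(data, 9))

regular = b"01" * 500          # highly patterned 1000-byte string
random_ish = os.urandom(1000)  # incompressible with overwhelming probability

print(kc_upper_bound(regular))     # far below the raw 8000 bits
print(kc_upper_bound(random_ish))  # near (or above) the raw 8000 bits
```

The gap between the two printed values is the whole point: a short pattern admits a short description, while a typical random string does not compress at all.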
Solving Partially Observable Environments with Universal Search Using Dataflow Graph-Based Programming Model
Published in IETE Journal of Research, 2021
Swarna Kamal Paul, Parama Bhaumik
In simple terms, the Kolmogorov complexity of a string is the length of the shortest program that produces the string when it is executed. The program probability, algorithmic probability, and Kolmogorov complexity have equal status, in the sense of the coding theorem $K_L(x) = -\log_2 m_L(x) + O(1)$ [33], where $m_L$ denotes algorithmic probability and $L$ is some Turing complete language. Thus, we can replace Kolmogorov complexity with algorithmic probability in Equation (3),
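The equivalence has a simple intuition: if programs are generated by fair coin flips, a specific program of length $l$ is produced with probability $2^{-l}$, so taking $-\log_2$ of the probability recovers the description length. A toy numeric illustration of that correspondence:

```python
import math

# If a random bitstring is a specific program of length l with
# probability 2**-l, then -log2(probability) equals l exactly;
# the coding theorem says this holds up to an additive constant.
for length in [3, 8, 20]:
    prob = 2.0 ** -length    # probability of hitting this program by chance
    bits = -math.log2(prob)  # description length recovered from probability
    print(length, prob, bits)
```

This is why a string's shortest description length and its algorithmic probability carry the same information, and why one can substitute for the other in the search formulation.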
A robust image-based cryptology scheme based on cellular nonlinear network and local image descriptors
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
Mohammad Mahdi Dehshibi, Jamshid Shanbehzadeh, Mir Mohsen Pedram
This work also puts emphasis on analysing the complexity of the whole cryptology system, as a complex system, by using temporal and Kolmogorov complexity (KC) measures. The performance of the proposed method is also tested on 25 benchmark images of different sizes through visual and security analyses. The secret keyspace, embedding ratio, correlation analysis, and occlusion attack are the criteria used in the security analysis. In terms of the visual test, we quantify the image quality using full-reference and no-reference metrics.
The MPSK and MQAM signal modulation recognition algorithm in the multipath channel
Published in International Journal of Electronics, 2019
With the development of wireless communication technology, research into modulation recognition and related techniques in the field of non-cooperative communication has attracted increasing attention. At present, digital signal modulation recognition algorithms can be divided into two categories: the maximum-likelihood estimation method based on decision theory, and statistical pattern recognition based on feature extraction (Haq, Mansour, & Nordholm, 2010). In non-cooperative communication, because of the lack of a priori information about received signals and the algorithm’s real-time requirements in practical application (Xu, Su, & Zhou, 2011), the statistical pattern recognition method is often used. In this approach, characteristic parameters useful for pattern recognition are first extracted from the original data, and the signal modulation type is then recognised (Hosseinzadeh, Razzazi, & Haghbin, 2015) by comparing the extracted characteristic parameters against the known parameters of each modulation mode. In recent years, new modulation recognition algorithms have been put forward as this field has developed, showing a trend towards more complicated signal sets and higher-order modulation signals.

Kolmogorov gave an early, approximate definition of complexity: Kolmogorov complexity is an algorithmic approach that can be viewed as a quantitative definition of the ‘information’ in a string of binary digits. Because Kolmogorov complexity is incomputable (Montalvão & Canuto, 2014), in 1976 A. Lempel and J. Ziv proposed a simple and feasible complexity measure, known as the Lempel–Ziv (L–Z) complexity. F. Kaspar and H. Schuster later proposed a method to extract this characteristic of the signal directly from the signal waveform (Kaspar & Schuster, 1987).
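The Lempel–Ziv complexity mentioned above counts the number of new phrases encountered while scanning a binary sequence left to right. A minimal sketch of that phrase counting, following the exhaustive parsing of Lempel and Ziv (1976) as popularized by Kaspar and Schuster (function name of my own choosing):

```python
def lz_complexity(s: str) -> int:
    """Count phrases in the exhaustive Lempel-Ziv (1976) parsing of s.

    At each position, the phrase is extended for as long as it already
    occurs in the previously seen part of the sequence (overlap
    allowed); the number of phrases is the complexity measure.
    """
    i, count, n = 0, 0, len(s)
    while i < n:
        k = 1
        # Grow the phrase while s[i:i+k] can be copied from earlier.
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        count += 1  # a new phrase ends here
        i += k
    return count

print(lz_complexity("0001101001000101"))  # 6 phrases: 0|001|10|100|1000|101
print(lz_complexity("01" * 20))           # periodic sequences stay simple
```

A random binary sequence of length $n$ yields a count close to the theoretical maximum $n / \log_2 n$, which is why the normalized count serves as a practical stand-in for Kolmogorov complexity in feature extraction.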
Sun, An, Yang and Zhou (2007) recognised four kinds of signals (2PSK, 4PSK, 2FSK and 4FSK) by extracting complexity features, but the experiment was conducted only in simulation, at 20 dB and 10 dB.