Introduction
Published in Limiao Deng, Cognitive and Neural Modelling for Visual Information Representation and Memorization, 2022
Friston et al. then combined free-energy theory with the cognitive mechanisms of the human brain and carried out a substantial body of research. Friston and Stephan91 argued that the perceptual process is simply an emergent feature of a system following the free-energy principle: a system can change its settings to minimize free energy, thereby influencing the environment or revising its expectations. Meanwhile, Friston et al.92 synthesized the relevant theories and argued that the Bayesian brain derives from the general theory of free-energy minimization. In this framework, behaviour and perception are regarded as results of the suppression of free energy, and perceptual and behavioural inference lead to a more specific Bayesian brain. Friston93,94 held that applying free-energy theory to different elements of neural function can produce effective internal representations and reveal the deeper basis of the connections between perception, reasoning, memory, attention, and behaviour. Friston and Kiebel95 then combined free-energy theory with predictive coding to explain recognition, prediction, and perceptual categorization in the brain: treating perception as an inference problem given a generative model of sensory input, free-energy theory supplies a generic method for model inversion, and they showed that the brain can implement the mechanisms necessary for this inversion.
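To make "perception as model inversion" concrete, here is a minimal sketch, with invented numbers, of free-energy minimisation for a one-dimensional Gaussian generative model. The toy mapping g, the noise levels, and the learning rate are illustrative assumptions, not Friston and Kiebel's actual scheme.

```python
# Minimal sketch (invented numbers): perception as model inversion by
# free-energy minimisation. Generative model of sensory input s:
#   prior:      x ~ N(mu_p, sigma_p^2)
#   likelihood: s ~ N(g(x), sigma_s^2),  with the toy mapping g(x) = x**2
# Under a Gaussian (Laplace) assumption, free energy reduces to a sum of
# precision-weighted prediction errors in the estimate phi of the cause x.

mu_p, sigma_p = 3.0, 1.0   # prior belief about the hidden cause x
sigma_s = 0.5              # sensory noise level
s = 10.0                   # observed sensory input

def g(x):                  # generative mapping from cause to sensation
    return x ** 2

def free_energy(phi):
    e_s = (s - g(phi)) / sigma_s   # sensory prediction error
    e_p = (phi - mu_p) / sigma_p   # prior prediction error
    return 0.5 * (e_s ** 2 + e_p ** 2)

def grad(phi):             # dF/dphi for gradient descent
    return -(s - g(phi)) / sigma_s ** 2 * 2 * phi + (phi - mu_p) / sigma_p ** 2

phi = mu_p                 # start the estimate at the prior mean
for _ in range(500):       # descending the gradient suppresses free energy
    phi -= 0.005 * grad(phi)

print(f"inferred cause: {phi:.3f}, free energy: {free_energy(phi):.3f}")
```

Running the sketch moves the estimate from the prior mean towards a compromise between prior belief and sensory evidence, which is exactly the "suppression of free energy" reading of perception described above.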
Information flow in context-dependent hierarchical Bayesian inference
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Chris Fields, James F. Glazebrook
As discussed in Fields and Glazebrook (2019b), Bayesian brain models represent both type categorisation and mereological assembly as Bayesian inferences regulated by prior probability distributions. It is standard to view prior probabilities as ‘downward’ constraints that assemble sets of expected lower-level features; this interpretation of prior probabilities is implemented by predictive-coding models (e.g., Bubic et al., 2010; Clark, 2013; Friston & Kiebel, 2009; Knill & Pouget, 2004). An alternative interpretation is suggested by (7.3) and the probabilistic interpretation of CCDs: prior probabilities are ‘shortcuts’ that allow ‘jumping to the conclusion’ of an abstract categorisation or mereological assembly, with the implications for actionability that such conclusions provide. This interpretation points towards the obvious evolutionary advantage of object- and categorical type-memories: they increase the efficiency of problem solving (i.e. appropriate action selection) in a dynamic, resource-limited environment.
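As a concrete illustration of priors regulating type categorisation, the following is a minimal sketch with invented categories, features, and probabilities; it shows how a strong prior lets a single coarse feature "jump to the conclusion" of a category.

```python
import numpy as np

# Minimal sketch (hypothetical numbers): priors as 'shortcuts' in Bayesian
# type categorisation. A strong prior makes one coarse feature sufficient
# to commit to a category; a flat prior would demand more evidence before
# either category became actionable.

categories = ["cup", "bowl"]
prior = np.array([0.9, 0.1])            # strong expectation of seeing a cup

# Likelihood of observing the feature 'has-handle' under each category
likelihood_handle = np.array([0.7, 0.2])

posterior = prior * likelihood_handle
posterior /= posterior.sum()            # Bayes' rule: normalise the product

for c, p in zip(categories, posterior):
    print(f"P({c} | has-handle) = {p:.3f}")
```

With the strong prior, the posterior for "cup" is already about 0.97 after a single feature; replacing it with a flat prior drops that to about 0.78, so more lower-level features would have to be assembled before the categorisation became actionable.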
Great expectations: a predictive processing account of automobile driving
Published in Theoretical Issues in Ergonomics Science, 2018
Johan Engström, Jonas Bärgman, Daniel Nilsson, Bobbie Seppelt, Gustav Markkula, Giulio Bianchi Piccinini, Trent Victor
As already mentioned, predictive processing has been suggested by Clark (2013, 2016) as an umbrella term encompassing several more specific accounts based on the common idea of the brain as a statistical prediction device. These include predictive coding (Rao and Ballard 1999) and the free energy principle (Friston 2010; Friston et al. 2010; Friston et al. 2016; see Huang 2008, for a popular science account). A common source of these ideas is the work of von Helmholtz (1876), who viewed perception as a largely top-down, constructive process that can be understood in terms of probabilistic inference, akin to statistical hypothesis testing in science. More recently, this concept has been further developed in terms of the Bayesian brain hypothesis (e.g. Knill and Pouget 2004). At the same time, as emphasised by Clark (2016), predictive processing also has close links to embodied cognitive science, which stresses that cognition is crucially enabled and constrained by the physical body and its interaction with a structured environment. See Clark (2013, 2016) for more comprehensive reviews of the historical origins of current predictive processing models.
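To ground the "statistical prediction device" idea, here is a minimal sketch in the spirit of the predictive-coding scheme of Rao and Ballard (1999); the linear generative weights, dimensions, and learning rate are illustrative assumptions, not their published model.

```python
import numpy as np

# Minimal sketch of the predictive-coding loop (in the spirit of Rao and
# Ballard 1999), with made-up dimensions and a fixed linear generative
# weight matrix W: a higher level sends a top-down prediction of the input,
# the lower level passes back only the prediction error, and the
# higher-level estimate r is updated so as to suppress that error.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))      # generative weights: causes -> input
r_true = np.array([1.0, -0.5, 2.0])
x = W @ r_true + 0.05 * rng.normal(size=8)   # noisy sensory input

r = np.zeros(3)                  # higher-level estimate of the causes
for _ in range(300):
    prediction = W @ r           # top-down prediction of the input
    error = x - prediction       # bottom-up prediction error
    r += 0.03 * W.T @ error      # gradient step that suppresses the error

print("recovered causes:", np.round(r, 2), "true causes:", r_true)
```

In a full hierarchy this error-correction loop repeats at every level, with each level predicting the activity of the level below and receiving back only what it failed to predict.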
Human uncertainty in explicit user feedback and its impact on the comparative evaluations of accurate prediction and personalisation
Published in Behaviour & Information Technology, 2020
When reporting on the lack of reliability in user feedback and the decision-making that precedes it, it is natural to locate this characteristic within human cognitive processes. Indeed, probabilistic modelling of human cognition is quite common in (computational) neuroscience. Friston (2010) argues that the brain is organised into agents (i.e. groups of neurons) and that adaptive agents have to focus on a limited number of 'states of the world' (i.e. perceived parts of reality, or entities of the world we live in). These agents are supposed to make predictions about sensations, which depend on an internal generative model of the world (Friston et al. 2013), whose optimisation furnishes Bayes-optimal (probabilistic) representations of what caused those sensations. This essentially means that the brain itself relies on representations of the real world in the form of probability densities. This assumption, and the research based on it, is often referred to as the 'Bayesian brain' paradigm. Recently, this idea has been adopted for various probabilistic approaches to neural coding (Doya 2007). Friston et al. (2013) supplement this theory with credible evidence that prior beliefs about states of the world are internally encoded by the amount of dopamine released into the synaptic gap.

In neuroscience it is well known that the activity of neurons, and hence the release of neurotransmitters, is not reliable. Faisal, Selen, and Wolpert (2008) recapitulate that 'Trial-to-trial variability can result from both deterministic sources, such as complex dynamics or internal states, and randomness – that is, noise'; that is, one and the same stimulus never results in the same spiking behaviour of a neuron. The authors go on to explain that this noise is a conglomerate of so-called sensory noise, cellular noise, electrical noise, synaptic noise, and noise build-up in neural networks. More importantly still, the authors find that 'behavioural variability, as observed in sensory estimation and movement tasks, appears to be mainly produced by noise' (Faisal, Selen, and Wolpert 2008). In other words, human decisions can be seen as uncertain quantities by the very nature of the underlying cognitive mechanisms.

Further explanation of the interaction between noise and behaviour is given by so-called probabilistic population codes (Ma et al. 2006), sketched below. We use this theory to causally justify the adequacy of our own modelling. Along with other authors, such as McNee, Riedl, and Konstan (2006), we believe that a purely technical methodology cannot provide optimal results for human beings and that modern systems should accommodate more human characteristics. In particular, embedding a metrological uncertainty model to describe user behaviour is by no means a mathematical gimmick, but has a biological basis. Moreover, we believe that the theory of probabilistic population codes (Ma et al. 2006) in particular holds the potential for multiple advances to be gained from uncertainty measurement. We will elaborate on this in forthcoming sections.
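To make the probabilistic-population-code argument concrete, here is a minimal sketch in the spirit of Ma et al. (2006), with invented tuning parameters: Poisson spiking noise in a tuned neural population is decoded into a full posterior over the stimulus, so trial-to-trial spiking variability shows up directly as uncertainty about the encoded quantity.

```python
import numpy as np

# Minimal sketch of a probabilistic population code (after Ma et al. 2006),
# with invented tuning parameters: independent Poisson neurons with Gaussian
# tuning curves encode a stimulus, and the posterior over the stimulus is
# read out from a single noisy population response.

rng = np.random.default_rng(1)
prefs = np.linspace(-10, 10, 21)         # preferred stimuli of 21 neurons
gain, width = 20.0, 2.0                  # peak firing rate and tuning width

def tuning(s):                           # expected spike counts for stimulus s
    return gain * np.exp(-(s - prefs) ** 2 / (2 * width ** 2))

s_true = 2.5
spikes = rng.poisson(tuning(s_true))     # one trial's noisy response

# Posterior over candidate stimuli (flat prior, Poisson likelihood)
s_grid = np.linspace(-10, 10, 401)
log_post = np.array([np.sum(spikes * np.log(tuning(s) + 1e-12) - tuning(s))
                     for s in s_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean = np.sum(s_grid * post)
sd = np.sqrt(np.sum((s_grid - mean) ** 2 * post))
print(f"decoded stimulus: {mean:.2f} +/- {sd:.2f} (true {s_true})")
```

Re-running the sketch with a different random seed yields a different spike count vector, and hence a different posterior, for the same fixed stimulus; this is precisely the route from neuronal noise to behavioural variability that Faisal, Selen, and Wolpert (2008) describe.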