Conclusions and Future Work
Published in Shalom Lappin, Deep Learning and Linguistic Representation, 2020
Many theoretical linguists are acutely uncomfortable with the view that linguistic knowledge is a probabilistic system. Part of this discomfort may well be due to the fact that linguistic theories have traditionally been formulated within symbolic algebraic frameworks, most commonly formal grammars and type theoretic models. This tradition is based on the idea that formal language theory applies to natural languages. To specify a formal language, it is necessary to provide a recursive definition of its expressions. A formal grammar, or a type theory, constitutes a definition of this kind. The observed data of human performance in NLP tasks provides strong prima facie evidence against the view that natural languages are formal objects amenable to such definitions.
Torino: A Tangible Programming Language Inclusive of Children with Visual Disabilities
Published in Human–Computer Interaction, 2020
Cecily Morrison, Nicolas Villar, Anja Thieme, Zahra Ashktorab, Eloise Taysom, Oscar Salandin, Daniel Cletheroe, Greg Saul, Alan F Blackwell, Darren Edge, Martin Grayson, Haiyan Zhang
Tangible languages, both in their proposition and evaluation, have highlighted how their use can support human interaction. Across systems, the tangible nature of the languages seems to invite and foster collaborative engagement. The creators of Tern, for example, point to the ability to work on programs away from a computer as a substantial advantage for classroom management and child interaction (Horn & Jacob, 2007). They also highlight how the tangibility of the language draws people into participation in both classroom and museum settings, especially girls (Horn, Solovey, Crouser, & Jacob, 2009). This is in line with a broader literature on tangible technology that suggests that its physical nature invites sharing (Hornecker, Marshall, & Rogers, 2007). This constellation of references suggests that a tangible programming language has the potential to support the kind of collaborative interaction we strived for between visually disabled and sighted children.
New intelligent optimization framework
Published in Automatika, 2018
Work on designing IOAs can be divided into three levels. The primary level improves existing algorithms: for example, exploiting characteristics of the problem domain to extract specific rules or design new operators, which is why many original IOAs have several improved versions. The intermediate level proposes new search ideas: like the genetic algorithm (GA), each of the original IOAs shown in Table 1 has its own unique search pattern. The highest level breaks through the traditional idea of intelligent computing. Using a variety of encoding schemes in IOAs is essentially equivalent to mapping the problem to be solved from its current space to another; using a population of individuals to search for the optimal solution is essentially equivalent to enumeration. The use of probability rules, however, gives the emerging answers a certain tendency, the so-called intelligence of the search behaviour. It can therefore be said that the GA first created the idea of finding the optimal solution with coded individuals via "intelligent enumeration". Compared with TOAs based on strict mathematical logic, this is a brand-new style of intelligent computation. At present, almost all existing IOAs have failed to escape this mode of thinking.
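The "intelligent enumeration" idea attributed to the GA above, coded individuals, a population, and probability rules that bias the search toward fitter candidates, can be sketched as follows. This is a minimal illustrative implementation, not code from the article; the OneMax fitness function and all parameter values are assumptions chosen for demonstration.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Maximize `fitness` over bitstrings via biased ("intelligent") enumeration."""
    rng = random.Random(seed)
    # Encoding: each candidate solution is a coded individual (a list of bits).
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Probabilistic selection (tournament of 3) biases the enumeration
        # toward fitter individuals instead of sampling uniformly.
        def select():
            return max(rng.sample(pop, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Crossover: recombine the two parent encodings at a random point.
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # Mutation: flip each bit with small probability to keep exploring.
            for child in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# Illustrative use on the classic OneMax toy problem (maximize the count of 1s).
best = genetic_algorithm(sum)
```

The point of the sketch is the contrast drawn in the text: nothing here is deductive in the manner of a TOA; the search is enumeration made tendentious by probabilistic selection, crossover, and mutation.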
Explanatory Model Analysis: Explore, Explain and Examine Predictive Models
Published in Technometrics, 2022
The book presents a valuable collection of methods for model exploration and diagnostics across various machine learning algorithms. It can be useful in data science and computer science courses for students and instructors, as well as for researchers and practitioners who need to analyze and interpret their statistical and machine learning models, both glass-box and black-box. The book also serves as a great primer on applications of the R and Python software and their packages/libraries, so it is valuable for solving problems of statistical prediction in various fields. Additional sources on the topics considered can be found in the references (Lipovetsky 2014, 2020a,b, 2021a,b,c, 2022a,b).