Predicate Calculus
Published in Jim Woodcock, Martin Loomes, Software Engineering Mathematics, 1988
The semantic concept of validity introduced for propositional logic can now be extended to interpretations of predicate logic. Showing validity of expressions in predicate logic is much harder, however, because we have to consider all possible assignments of meanings to terms and predicates. Moreover, if we want to discuss the validity of an expression that is universally quantified, then we need to show that it is true for all objects in the domain of discourse, and this domain may be infinite. Truth tables will not work for predicate logic, and there is no similar device that can be used in general to show validity. Fortunately, there is a deductive apparatus for the predicate calculus that is both consistent and complete with respect to the class of interpretations we have outlined above.
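To make the difficulty concrete, here is a small worked contrast (not drawn from the chapter itself): establishing validity requires reasoning over every interpretation, whereas refuting it takes only a single countermodel.

```latex
% Valid: true in every interpretation with a non-empty domain of
% discourse, whatever set D we choose and whatever P denotes.
\forall x\, P(x) \rightarrow \exists x\, P(x)

% Not valid: one countermodel suffices. Take D = \{a, b\} with P(a)
% true and P(b) false; the antecedent holds but the consequent fails.
\exists x\, P(x) \rightarrow \forall x\, P(x)
```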
Expert Systems Applied to Spacecraft Fire Safety
Published in Paul R. DeCicco, Special Problems in Fire Protection Engineering, 2019
Richard L. Smith, Takashi Kashiwagi
An expert system is a computer program that solves real-world problems whose solution would normally require a human expert [4–7]. Suppose you are communicating with a fire-safety expert through a terminal. You type in your questions; the expert may in turn ask you questions before offering advice. The quality of that advice is what makes the expert an expert. If you cannot tell whether you are communicating with a human expert or with a computer running an expert system, the computer program qualifies as an expert system. Normally, the domain of discourse must be restricted to a particular field of expertise.
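As an illustration of that interaction pattern only (the chapter itself gives no code, and the rules below are hypothetical placeholders, not actual fire-safety guidance), a toy consultation loop might look like this:

```python
# Toy consultation in the style of a rule-based expert system: the
# program questions the user, then fires the first matching rule.

RULES = [
    # (set of facts that must be established, advice to give)
    ({"smoke_visible", "alarm_sounding"}, "Evacuate and alert the crew."),
    ({"smoke_visible"}, "Investigate the source; prepare an extinguisher."),
    ({"alarm_sounding"}, "Check for a false alarm before silencing it."),
]

QUESTIONS = {
    "smoke_visible": "Is smoke visible? (y/n) ",
    "alarm_sounding": "Is an alarm sounding? (y/n) ",
}

def consult():
    """Ask the user questions, then return the advice of the first rule
    whose conditions are all established."""
    facts = set()
    for fact, prompt in QUESTIONS.items():
        if input(prompt).strip().lower().startswith("y"):
            facts.add(fact)
    for conditions, advice in RULES:
        if conditions <= facts:  # every condition holds
            return advice
    return "No rule applies; consult a human expert."

if __name__ == "__main__":
    print(consult())
```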
First-order logic
Published in Richard E. Neapolitan, Xia Jiang, Artificial Intelligence, 2018
Richard E. Neapolitan, Xia Jiang
Example 3.1 As discussed more formally in the next section, in first-order logic we have a domain of discourse. This domain is a set, and each element in the set is called an entity. Each constant symbol identifies one entity in the domain. For example, if we are considering all individuals living in a certain home, our constant symbols could be their names. If there are five such individuals, the constant symbols might be ‘Mary’, ‘Fred’, ‘Sam’, ‘Laura’, and ‘Dave’. ■
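A minimal sketch of how such an interpretation might be represented (the class and variable names here are our own, not the book's):

```python
# The domain of discourse: five entities, modeled as simple objects.
class Entity:
    def __init__(self, description):
        self.description = description

domain = [Entity(f"person {i}") for i in range(1, 6)]

# An interpretation of the constant symbols: each symbol denotes
# exactly one entity in the domain.
interpretation = {
    "Mary": domain[0],
    "Fred": domain[1],
    "Sam": domain[2],
    "Laura": domain[3],
    "Dave": domain[4],
}

# Every constant symbol must name an entity that is in the domain.
assert all(entity in domain for entity in interpretation.values())
```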
Featured risk evaluation of nautical navigational environment using a risk cloud model
Published in Journal of Marine Engineering & Technology, 2020
Yan-Fei Tian, Li-Jia Chen, Li-Wen Huang, Jun-Min Mou
In the literature on risk evaluation of WTS (Chen et al. 2010; Hu et al. 2010; Xuan et al. 2013; Lu et al. 2015), five linguistic terms are often used to qualitatively demarcate the risk level: low (L), moderately low (ML), moderate (M), moderately high (MH), and high (H). The same linguistic terms are adopted for the qualitative evaluation in this study. The variables and their domains of discourse for measuring risk are as follows: (1) a linguistic variable called risk comment (rcmt) is used, which has five benchmark values: L, ML, M, MH, or H. That is, the domain of discourse of the linguistic variable runs from L to H and consists of five basic elements; let $S$ denote the set of these elements, $S = \{L, ML, M, MH, H\}$. (2) A numerical variable called risk degree ($r_d$) is used for the quantitative evaluation; its domain of discourse (i.e. the range of $r_d$) is $[0, 1]$. These linguistic and numerical variables are adopted in this paper to indicate risk level. Furthermore, following Di et al. (1999), the domain of discourse of a numerical variable is considered to be a space capable of holding all types of linguistic variables whose values are expressed by cloud models. With the above-defined variables and the designated domain of discourse, the relationship between rcmt and $r_d$ is as shown in Table 1.
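For context, cloud models of the kind cited above are typically produced with the forward normal cloud generator; a minimal sketch follows, with illustrative parameter values assumed rather than taken from the paper:

```python
import math
import random

def normal_cloud_drops(ex, en, he, n):
    """Forward normal cloud generator: n drops (x, membership) from
    expectation Ex, entropy En and hyper-entropy He."""
    drops = []
    while len(drops) < n:
        en_prime = random.gauss(en, he)      # entropy perturbed by He
        if en_prime == 0:
            continue                         # skip the degenerate case
        x = random.gauss(ex, abs(en_prime))  # a drop on the domain of discourse
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership degree
        drops.append((x, mu))
    return drops

# Hypothetical parameters for a mid-range risk comment on [0, 1]:
print(normal_cloud_drops(ex=0.5, en=0.1, he=0.01, n=5))
```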
Big Earth Data science: an information framework for a sustainable planet
Published in International Journal of Digital Earth, 2020
Huadong Guo, Stefano Nativi, Dong Liang, Max Craglia, Lizhe Wang, Sven Schade, Christina Corban, Guojin He, Martino Pesaresi, Jianhui Li, Zeeshan Shirazi, Jie Liu, Alessandro Annoni
Big Earth Data science studies the introduction and impact of a big data analytical platform (akin to an artificial system or organism) in addressing the phenomena and questions of a complex universe (domain) of discourse, which encompasses the set of natural, social and economic events characterizing our planet (Figure 1). This class of entities encloses the local and global changes affecting natural cycles and processes, as well as the tight interconnections with human society, i.e. our social and economic systems. Over these entities, certain elements (variables) of interest are utilized to model or describe the relevant changes of the Earth system (e.g. oceans, land surface, atmosphere, cryosphere, and biosphere) and others to formalize societal changes. Traditionally, they have been managed and presented separately within different frameworks and tools. Big Earth Data science aims to overcome these cultural, disciplinary and technological barriers in a multi-scale and multi-temporal framework, from local to global and vice versa, in a variety of aspects from change detection to sustainable development planning.
Fuzzy inference approach in traffic congestion detection
Published in Annals of GIS, 2019
In the second step, we fuzzify both input parameters and the single output parameter by assigning each of them seven membership functions. For every input and output parameter, six full-triangle membership functions describe the middle range of the universe of discourse, and one half-triangle membership function represents its end. Neighbouring membership functions overlap with each other by 20–50%. The input parameter Flow is assigned the following linguistic variables: Free Flow, Reasonably Free Flow, Stable Flow, Unstable Flow, Near-congestion Flow, Congested Flow and Extremely Congested Flow. The input parameter Density is fuzzified as: Very Low Density, Low Density, Medium Density, High Density, Very High Density, Extreme Density and Over Extreme Density. The output parameter Level of Congestion (calculated for each road segment and lane individually) is likewise fuzzified with seven linguistic variables. Table 2 shows the interpretation of each individual level of congestion.
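A minimal sketch of such a fuzzification, assuming a universe normalized to [0, 1] and evenly spaced triangular functions (the paper's actual breakpoints are not reproduced here):

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Seven linguistic labels for the Flow input, on an assumed
# normalized universe [0, 1].
labels = ["Free Flow", "Reasonably Free Flow", "Stable Flow", "Unstable Flow",
          "Near-congestion Flow", "Congested Flow", "Extremely Congested Flow"]

x = np.linspace(0.0, 1.0, 101)
peaks = np.linspace(0.0, 1.0, 7)  # evenly spaced peaks, 1/6 apart
half_width = 0.15                 # neighbours overlap by roughly 40% of a support

# The outermost functions are clipped at the universe boundary, so they
# behave as the half-triangle end functions described above.
memberships = {lab: trimf(x, p - half_width, p, p + half_width)
               for lab, p in zip(labels, peaks)}

# Degree to which a normalized flow reading of 0.62 is "Unstable Flow":
print(float(np.interp(0.62, x, memberships["Unstable Flow"])))
```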