Semantic Web Technologies
Published in Archana Patel, Narayan C. Debnath, Bharat Bhushan, Semantic Web Technologies, 2023
Esingbemi P. Ebietomere, Godspower O. Ekuobase
Semantic knowledge acquisition is the process of mining meaningful knowledge from a repository. Knowledge acquisition is often a very costly activity when the data and knowledge involved are vast and heterogeneous, particularly when the ontological approach to knowledge representation is employed; the cost falls on both knowledge engineers and domain experts. Knowledge can be acquired manually, semi-automatically, or automatically. The manual style yields excellent results, which usually serve as the gold standard, but it is very costly in effort and time because it compels knowledge engineers and domain experts to create the ontology from scratch. The automatic style, though desirable because it requires less effort and time, often produces less accurate (frequently spurious) results, as it bootstraps the ontology from a repository without human intervention. The semi-automatic style allows the ontology to be bootstrapped from a repository and then refined by the ontology engineer or domain expert; it often yields very good results because it leverages the strengths of both the manual and automatic styles. Some popular knowledge acquisition tools include Protégé [88], OntoEdit [89], OntoLT [90], OntoGen [91] and GATE [87]. All these tools have found use in applications across several domains, with Protégé being the most prominent for manual knowledge acquisition. It is pertinent to note that the act of bootstrapping an ontology from a repository is sometimes referred to as ontology learning.
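As an illustration of the semi-automatic style, the sketch below (not from the chapter) bootstraps a skeleton ontology from a list of candidate terms using the Python rdflib library. The term list, namespace URI, and output file are placeholder assumptions standing in for the output of a real term-extraction step; the resulting skeleton would still need refinement by a knowledge engineer or domain expert, e.g. in Protégé.

```python
# Minimal sketch of semi-automatic ontology bootstrapping with rdflib.
# Candidate terms, namespace, and output file are illustrative placeholders;
# a real pipeline would mine the terms from a repository.
from rdflib import Graph, Namespace, RDF, RDFS, Literal
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto#")

# Candidate concepts, e.g. produced by term extraction over a text corpus.
candidate_terms = ["Contract", "Clause", "Party", "Obligation"]

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

for term in candidate_terms:
    cls = EX[term]
    g.add((cls, RDF.type, OWL.Class))        # bootstrap each term as a class
    g.add((cls, RDFS.label, Literal(term)))  # keep the surface form as a label

# Serialize the skeleton ontology for manual refinement by a domain expert.
g.serialize(destination="bootstrap.ttl", format="turtle")
```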
Semantic Technologies as Enabler
Published in Sarika Jain, Understanding Semantics-Based Decision Support, 2021
Ontologies constitute the core of semantic technologies, and they offer clear benefits over databases [Abaalkhail et al. 2018, Uschold and Gruninger 1996, Ra et al. 2013, Dorion et al. 2005, Smith et al. 2009]. The two most commonly used database data models are relational and NoSQL. RDBMSs offer more functionality, whereas NoSQL databases are generally more time- and space-efficient. However, both require a predefined structure, and only explicitly stored information is available for retrieval; adding new information, e.g., new entries or relations between existing ones, can be complicated. Ontologies, by contrast, are designed to express relations such as hierarchy or inheritance in an easy and efficient way, and data in an ontology can be joined or disjoined. Finally, ontologies make it possible to extract implicit information using logic. This makes semantic search meaningful and facilitates ontology learning.
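A minimal sketch of the last point, assuming the Python rdflib library and an invented example hierarchy: only direct rdfs:subClassOf links are stated, yet the fact that Whale is an Animal can be extracted by logic (here, a simple transitive traversal rather than a full reasoner).

```python
# Illustrative sketch: a small class hierarchy in rdflib, where a superclass
# never stated directly for Whale is recovered by following rdfs:subClassOf
# transitively. The zoo namespace and class names are invented.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/zoo#")
g = Graph()

for cls in (EX.Animal, EX.Mammal, EX.Whale):
    g.add((cls, RDF.type, OWL.Class))

# Only direct subclass links are asserted explicitly.
g.add((EX.Mammal, RDFS.subClassOf, EX.Animal))
g.add((EX.Whale, RDFS.subClassOf, EX.Mammal))

# Implicit knowledge: Whale is also an Animal, although no such triple exists.
ancestors = set(g.transitive_objects(EX.Whale, RDFS.subClassOf))
print(EX.Animal in ancestors)  # True
```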
Learning word hierarchical representations with neural networks for document modeling
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
Longhui Wang, Yong Wang, Yudong Xie
According to hyponymy, words can be organized into a hierarchical structure (Fellbaum, 1998), as shown in Figure 1. From bottom to top, a group of semantically similar words (e.g., ‘cat’ and ‘whale’) belongs to a word (e.g., ‘mammal’) at a higher level. Conversely, words at a higher level describe basic attributes of words at a lower level. Word hierarchical structures are of great importance for knowledge representation in both human beings and artificial intelligence. According to language learning strategies (Craik & Tulving, 1975), human beings can utilize a word hierarchical structure to understand and memorize new words quickly. For artificial intelligence, word hierarchical structures can be used to analyze word sense and association (Maedche & Staab, 2001), which facilitates various technologies including part-of-speech (POS) tagging, information retrieval, and image categorization (Ordonez, Deng, Choi, Berg, & Berg, 2013). Numerous studies have attempted to construct word hierarchical structures. WordNet (Miller, 1995) is a word semantic network with wide coverage, including noun and verb hierarchies; it is built mainly from expert knowledge and manual compilation. Ontology Learning (OL) (Cimiano, Hotho, & Staab, 2005) was proposed to automatically extract semantic ontologies of words from raw text data. The major task of OL is word sense disambiguation. Both WordNet and OL organize words using a tree structure, which has several significant drawbacks. First, a word in a tree structure has only one primary attribute, and its other, ancillary attributes are ignored. For example, ‘whale’ is strictly assigned to ‘mammal’ in Figure 1(a), although many of its features belong to ‘fish’. Second, it is very difficult for a tree structure to describe word relations quantitatively, which restricts WordNet and OL to sentence- or document-level applications.
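For concreteness, the following sketch (not part of the article) uses NLTK's WordNet interface to print the hypernym paths of a noun such as ‘whale’, making the hierarchical organization discussed above explicit. It assumes NLTK is installed and the WordNet corpus has been downloaded.

```python
# Sketch using NLTK's WordNet interface to walk the noun hierarchy;
# requires nltk.download('wordnet') to have been run beforehand.
from nltk.corpus import wordnet as wn

whale = wn.synsets("whale", pos=wn.NOUN)[0]  # first noun sense of "whale"

# Each hypernym path runs from the root synset ('entity') down to "whale",
# exposing the hierarchy that the article discusses.
for path in whale.hypernym_paths():
    print(" -> ".join(s.name() for s in path))
```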
Present and future of semantic web technologies: a research statement
Published in International Journal of Computers and Applications, 2021
Ontology learning, sometimes called ‘ontology generation,’ ‘ontology extraction,’ or ‘ontology acquisition,’ is the semi-automatic and automatic creation of ontologies. It extracts relationships between the concepts that represent domain knowledge. In the literature, unsupervised ANNs have been used to discover new entities and instances (individuals) in a domain, enabling automatic ontology construction. One type of unsupervised neural network is the Self-Organizing Map (SOM), which delivers a low-dimensional representation of the input space of the training samples, called a ‘map.’ SOMs have been used to enrich the concepts and instances of domain ontologies from a domain text corpus. The preliminary step is to transfer the ontology into a neural representation; next, concepts and instances are fetched from the ontology via text mining and represented in the map. Ontology enrichment can then be achieved through unsupervised training, exposing the initialized SOMs to information fetched from the domain on the basis of some similarity metric. The Projective Adaptive Resonance Theory neural network is another kind of unsupervised ANN that supports automatic ontology construction from the web. The most notable advantage of computational intelligence (CI) techniques for the Semantic Web is their ability to handle complicated issues in an extremely decentralized and dynamic way. Because of the decentralized nature of the Web, enhancing it with semantics and empowering web intelligence is a nontrivial vision; the main reason is that there is no central component that checks the uncertain, dynamic, and random behavior of its constituents. Fuzzy logic improves the quality of the ontology learning process by providing more precise information about entities. Vagueness, randomness, and autonomy have been well studied in nature-inspired methods, and CI techniques can address all of these problems.
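A minimal sketch of the SOM step described above, assuming the third-party MiniSom package (pip install minisom) and using random vectors as stand-ins for feature vectors of candidate terms mined from a domain corpus:

```python
# Minimal SOM sketch with MiniSom; the term vectors are random placeholders
# standing in for embeddings of candidate concepts/instances from a corpus.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
term_vectors = rng.random((50, 20))   # 50 candidate terms, 20-dim features

som = MiniSom(5, 5, input_len=20, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(term_vectors)
som.train_random(term_vectors, num_iteration=500)

# Terms mapped to the same map cell are treated as candidates for the same
# (new or enriched) ontology concept, to be confirmed by a domain expert.
cells = [som.winner(v) for v in term_vectors]
print(cells[:5])
```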
An overview of current technologies and emerging trends in factory automation
Published in International Journal of Production Research, 2019
Mariagrazia Dotoli, Alexander Fay, Marek Miśkowicz, Carla Seatzu
All these attempts require the availability of an ontology which comprises all relevant concepts of the domain. Already in (Lastra and Delamer 2006), the following research and development needs were pointed out: Most importantly, semantic web ontology representations of the manufacturing domain need to be produced. […] These ontologies should include […] mechatronic and infomechatronic devices, specific and generic machines, integrated systems, controllers such as PLCs and embedded industrial controls. In addition to the equipment domain ontology, manufacturing processes should be also specified as ontologies. Even though not technically necessary, these efforts should leverage existing ontologies and standard vocabularies and taxonomies in order to facilitate integration. In the subsequent years, there have been many attempts to define ontologies in the factory automation domain, following this advice. However, the effort to define them was high, and the ontologies turned out to be difficult to maintain. A similar problem had hit the knowledge-based systems research community 15 years earlier. The resulting lack of widely accepted ontologies today hinders the wider application of semantic-based concepts. Therefore, recent research has focused on how to overcome this ontology bottleneck. Runde and Fay (2011) show how a domain-specific semantic model of automation systems can be derived from domain-specific languages, especially CAEX, MathML and PLCopen. More recently, Dong and Hussain (2014) have shown that ontology learning technologies, especially unsupervised learning, can be successfully applied to overcome the deficiencies of ontologies and maintain or even increase the performance of semantic-based search and recognition techniques. To achieve this, facts are automatically extracted from data and turned into additions to the ontology (Dong and Hussain 2014). Puttonen, Lobov, and Martinez Lastra (2013) show how a web service can maintain the semantic model of the domain (here: a manufacturing system); other web services can then employ the semantic model to achieve their tasks, e.g. finding and combining appropriate manufacturing resources to achieve a given production goal in a dynamic manufacturing environment. Hildebrandt, Scholz et al. (2017) show the advantage of including existing, semantically rich standards, such as eCl@ss, into these engineering tasks.
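As a rough illustration of how such a maintained semantic model might be queried (not taken from any of the cited works), the sketch below builds a tiny plant model with Python's rdflib and runs a SPARQL query to find resources providing a required capability; the namespace, the providesCapability property, and the Drilling/Welding terms are invented for illustration.

```python
# Hypothetical sketch: querying a maintained semantic model of a plant to
# find resources that provide a required capability. All terms are invented.
from rdflib import Graph, Namespace, RDF

MFG = Namespace("http://example.org/mfg#")
g = Graph()
g.bind("mfg", MFG)

# Facts of this kind could be extracted automatically from engineering data.
g.add((MFG.Robot1, RDF.type, MFG.Resource))
g.add((MFG.Robot1, MFG.providesCapability, MFG.Drilling))
g.add((MFG.Robot2, RDF.type, MFG.Resource))
g.add((MFG.Robot2, MFG.providesCapability, MFG.Welding))

query = """
PREFIX mfg: <http://example.org/mfg#>
SELECT ?resource WHERE {
    ?resource a mfg:Resource ;
              mfg:providesCapability mfg:Drilling .
}
"""
for row in g.query(query):
    print(row.resource)  # -> http://example.org/mfg#Robot1
```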