Introduction
Published in Anuradha D. Thakare, Shilpa Laddha, Ambika Pawar, Hybrid Intelligent Systems for Information Retrieval, 2023
Semantic search can return relevant information more quickly because it is backed by an ontology. Semantic similarity is calculated using WordNet. WordNet® is a large lexical database of English. Nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets), each of which expresses a distinct concept. Synsets are interlinked by conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be browsed and explored. WordNet is also freely and publicly available for download. The structure of WordNet makes it a useful tool for natural language processing (NLP) and computational linguistics.
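As a minimal illustration of how WordNet-based semantic similarity can be computed (not part of the cited chapter), the sketch below uses NLTK's WordNet interface; it assumes NLTK is installed and the WordNet corpus has been downloaded once via nltk.download("wordnet"), and the synset names are just examples.

```python
# Sketch: semantic similarity with WordNet through NLTK (assumed tooling,
# not the chapter's own code).
from nltk.corpus import wordnet as wn

# Pick one synset (concept) per word; names like "car.n.01" follow
# NLTK's <lemma>.<part-of-speech>.<sense-number> convention.
car = wn.synset("car.n.01")
bicycle = wn.synset("bicycle.n.01")

# Path similarity scores the shortest is-a path between the two synsets
# on a scale of 0 to 1; more closely related concepts score higher.
print(car.path_similarity(bicycle))

print(car.definition())    # gloss (definition) of the chosen sense
print(car.lemma_names())   # synonyms grouped into this synset
```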
Search Engines
Published in David Austerberry, Digital Asset Management, 2012
The meaning of a word in WordNet is called its sense. A word with several meanings or senses is called polysemic. Each different meaning has a set of synonyms called a synset. A synonym is a word that means exactly or nearly the same as another word. Therefore, a word can have several senses, and each sense has a set of synonyms.
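The word-to-sense-to-synonym structure described above can be inspected directly. The following sketch (not from the book) lists the noun senses of a polysemic word with NLTK; the choice of the word "bank" is only illustrative.

```python
# Sketch: enumerating the senses (synsets) of a polysemic word with NLTK.
# Assumes the WordNet corpus has been downloaded via nltk.download("wordnet").
from nltk.corpus import wordnet as wn

word = "bank"  # classic polysemic example (riverbank, financial institution, ...)
for synset in wn.synsets(word, pos=wn.NOUN):
    print(synset.name())                       # e.g. "bank.n.01"
    print("  gloss   :", synset.definition())  # the meaning of this sense
    print("  synonyms:", synset.lemma_names()) # the synset for this sense
```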
Web service discovery with incorporation of web services clustering
Published in International Journal of Computers and Applications, 2023
Sunita Jalal, Dharmendra Kumar Yadav, Chetan Singh Negi
WordNet is a semantically oriented English lexical database developed at Princeton University under the direction of Miller [21]. The content of WordNet consists of a set of synsets and the semantic relationships between these synsets. A synset is a set of synonyms. For example, the word car has five synsets in the WordNet database. The synsets are interconnected via different semantic relationships such as synonymy/antonymy, hyponymy/hypernymy, and holonymy/meronymy. Two concepts have a synonymy relationship if they have similar meanings. Two concepts have an antonymy relationship if they are opposite in meaning. In a hypernymy relationship, a concept is called a hypernym if its meaning denotes a super-ordinate. For example, vehicle is a hypernym of bicycle. A concept is called a hyponym if its meaning represents a subordinate. For example, car is a hyponym of vehicle. A holonymy/meronymy relationship exists between two concepts if one concept is a part of the other. For example, snowflake is a meronym of snow, and body is a holonym of heart, lungs, arms, and legs. WordNet can be viewed as a hierarchical structure built from these semantic relations. The WordNet lexicon therefore provides a good semantic structure for calculating semantic similarity between concepts. Figure 5 shows an example of the hyponymy/hypernymy semantic relationship between concepts.
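The relations listed above can be navigated programmatically. The sketch below (an illustration using NLTK, not code from the article) walks the hypernymy, hyponymy, and meronymy links of a couple of example synsets; the synset names are assumptions for demonstration only.

```python
# Sketch: navigating WordNet's semantic relations with NLTK.
from nltk.corpus import wordnet as wn

car = wn.synset("car.n.01")
bicycle = wn.synset("bicycle.n.01")

print(bicycle.hypernyms())       # super-ordinate (more general) concepts
print(car.hyponyms()[:5])        # a few subordinate (more specific) concepts
print(car.part_meronyms()[:5])   # a few parts of a car (meronyms)

# Holonymy is the inverse direction: synset.part_holonyms() and
# synset.member_holonyms() return the wholes a concept is part/member of.
```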
An ontology-based framework for extracting spatio-temporal influenza data using Twitter
Published in International Journal of Digital Earth, 2019
Udaya K. Jayawardhana, Pece V. Gorsevski
The relatedness estimate of WUP uses the depths of the 'lcs' (least common subsumer) of two synsets (i.e. sets of synonyms, each representing a concept) from the WordNet database. For instance, synsets can be related by semantic relations that express (in a given context) opposite meaning (antonymy), being superordinate or belonging to a higher rank or class (hypernymy), or the relation that holds between a part and the whole (meronymy). The WUP semantic similarity is shown in Equation 1; it takes two concepts c1 and c2 (two ontological elements) and returns a score between 0 and 1. A score of 0 indicates no relationship between the synsets, and 1 indicates that the synsets are identical. In case of an error, a score of negative 1 is returned by the algorithm. The 'lcs' in the equation represents the deepest 'shared parent' of the two nodes, where depth is defined as the separation from the root concept in terms of nodes. A higher depth of the 'lcs' signifies higher similarity between the concepts.

Resnik's measure (RES) of semantic similarity between synsets is based on information content (IC), which uses probability. The RES algorithm takes two concepts c1 and c2 (two terms) and returns the semantic similarity as a score between 0 and positive infinity, or an error score of negative 1 in case of an error. Resnik's measure for comparing the synsets is shown in Equation (2), where S(c1, c2) is the set of common ancestors of terms c1 and c2 in the ontology. One drawback of the Resnik measure is its coarseness, because many different pairs of concepts may share the same 'lcs'.
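Equations (1) and (2) are not reproduced in this excerpt; the commonly cited forms are sim_WUP(c1, c2) = 2 · depth(lcs) / (depth(c1) + depth(c2)) and sim_RES(c1, c2) = IC(lcs), with IC estimated as −log p(c) from corpus frequencies. As a sketch (using NLTK's implementations of these standard measures, not the article's own code), both scores can be computed as follows; the Brown-corpus IC file and the synset names are assumptions for illustration.

```python
# Sketch: Wu-Palmer (WUP) and Resnik (RES) similarity via NLTK.
# Assumes nltk.download("wordnet") and nltk.download("wordnet_ic") have been run.
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

c1 = wn.synset("car.n.01")
c2 = wn.synset("bicycle.n.01")

# Wu-Palmer: 2 * depth(lcs) / (depth(c1) + depth(c2)), a score in (0, 1].
print(c1.wup_similarity(c2))

# Resnik: information content of the most informative common subsumer,
# a score in [0, +inf); IC is estimated here from Brown corpus counts.
brown_ic = wordnet_ic.ic("ic-brown.dat")
print(c1.res_similarity(c2, brown_ic))

# The least common subsumer ("shared parent") itself can be inspected directly.
print(c1.lowest_common_hypernyms(c2))
```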