Feature Engineering for Text Data
Published in Guozhu Dong, Huan Liu, Feature Engineering for Machine Learning and Data Analytics, 2018
Chase Geigle, Qiaozhu Mei, ChengXiang Zhai
An alternative would be to assume that there exists some explicit semantic space into which we can map documents. This space has the benefit of being inherently interpretable, because its dimensions correspond directly to known concepts. This approach is called explicit semantic analysis (ESA) [18,19]. Typically, the dimensions of the explicit semantic space correspond to concepts from some knowledge base. Wikipedia is a common choice, where each individual article is assumed to represent a distinct concept.
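A minimal sketch of the ESA idea: represent each concept by the TF-IDF vector of its article, then map a document into the explicit semantic space by computing its similarity to every concept. The three "concept articles" below are toy stand-ins for Wikipedia articles, not part of the original text.

```python
# Toy ESA sketch: dimensions of the semantic space are known concepts,
# here faked by three short "articles". Concept names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {
    "Astronomy": "stars planets telescope orbit galaxy",
    "Cooking": "recipe oven bake flour sugar kitchen",
    "Finance": "stock market interest investment bank",
}

vectorizer = TfidfVectorizer()
concept_matrix = vectorizer.fit_transform(concepts.values())  # one row per concept

def esa_vector(doc: str):
    """Map a document into the explicit semantic space: each dimension
    is the cosine similarity to one concept article."""
    doc_vec = vectorizer.transform([doc])
    return cosine_similarity(doc_vec, concept_matrix)[0]

scores = esa_vector("the telescope revealed new planets in a distant galaxy")
top_concept = max(zip(concepts, scores), key=lambda kv: kv[1])[0]
```

Because each dimension names a concept, the resulting document vector is directly interpretable: large entries indicate which concepts the document is about.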
Generating visual representations for zero-shot learning via adversarial learning and variational autoencoders
Published in International Journal of General Systems, 2023
(1) Semantic space as the embedding space, where visual features are projected into the semantic embedding space (Frome et al. 2013; Lampert, Nickisch, and Harmeling 2013). (2) Visual feature space as the embedding space, where the mapping is performed from the semantic feature space to the visual feature space (Zhang, Xiang, and Gong 2017). (3) A common embedding space, where both semantic and visual features are projected into a shared space (Akata et al. 2015; Zhang and Saligrama 2015; Gao et al. 2020). These three approaches learn different embedding functions for the projection, either linear (Akata et al. 2015; Frome et al. 2013; Romera-Paredes and Torr 2015) or non-linear (Xian et al. 2016).
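The first category above can be sketched with a linear projection learned by ridge regression: a matrix W maps visual features into the semantic embedding space, and test images are then matched to class embeddings in that space. The dimensions and synthetic data below are illustrative assumptions, not the exact method of any cited paper.

```python
# Hedged sketch: learn a linear map W from visual feature space to a
# semantic embedding space via ridge regression (closed-form solution).
# All shapes and data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d_vis, d_sem = 200, 64, 16

X = rng.normal(size=(n, d_vis))           # visual features, one row per image
W_true = rng.normal(size=(d_vis, d_sem))  # unknown ground-truth mapping
S = X @ W_true                            # target semantic embeddings

lam = 1e-3  # ridge regularizer keeps the normal equations well-conditioned
W = np.linalg.solve(X.T @ X + lam * np.eye(d_vis), X.T @ S)

# At test time, project a new image's visual features into the semantic
# space; zero-shot classification would then pick the nearest class embedding.
x_new = rng.normal(size=(1, d_vis))
s_pred = x_new @ W
```

The non-linear variants cited above replace this single matrix with a learned non-linear function (e.g. a small neural network), but the projection-then-nearest-neighbour structure is the same.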
Real-valued syntactic word vectors
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
A distributional semantic space is a finite-dimensional vector space (or linear space) whose dimensions correspond to the contextual environments of words in a corpus. Word similarities in a distributional semantic space are reflected in the similarities between the vectors associated with those words. In other words, similar words are associated with similar vectors.
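The definition above can be made concrete with a tiny co-occurrence model: each word's vector counts how often other words appear within a small context window, and cosine similarity between vectors then reflects word similarity. The corpus and window size below are toy choices for illustration.

```python
# Minimal distributional semantic space: each dimension is a context word,
# and each word's vector holds co-occurrence counts within a +/-2 window.
from collections import defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2

cooc = defaultdict(lambda: defaultdict(int))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

vocab = sorted(set(corpus))

def vec(word):
    """Row of the co-occurrence matrix for `word`, ordered by vocab."""
    return [cooc[word][c] for c in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# "cat" and "dog" occur in near-identical contexts ("the _ sat"), so their
# vectors should be more similar than those of "cat" and "on".
sim_cat_dog = cosine(vec("cat"), vec("dog"))
sim_cat_on = cosine(vec("cat"), vec("on"))
```

Real distributional models are built the same way at scale, then typically reweighted (e.g. PPMI) and reduced in dimensionality, but the principle is unchanged: similar words end up with similar vectors.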