Symmetries and Group Theory
Published in Mattias Blennow, Mathematical Methods for Physics and Engineering, 2018
As for any representations, we may also construct tensor product representations using the pseudo-tensor representations. In general, due to $(-1)^2 = 1$, we find that the tensor product of two pseudo-tensor representations is a tensor representation, while the tensor product of a pseudo-tensor representation and a tensor representation is a new pseudo-tensor representation.
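The sign bookkeeping behind this can be made explicit (a minimal sketch; the parity factor $\sigma$ and the representation matrices $D_1$, $D_2$ are our notation, not necessarily the book's). A pseudo-tensor representation differs from a tensor one by a factor $\sigma = \det(R) = \pm 1$, and under a tensor product these factors multiply:

```latex
% Parity factors multiply under tensor products:
\begin{align*}
  \bigl(\sigma D_1(R)\bigr) \otimes \bigl(\sigma D_2(R)\bigr)
    &= \sigma^2 \, D_1(R) \otimes D_2(R)
     = D_1(R) \otimes D_2(R)            && \text{(tensor)} \\
  \bigl(\sigma D_1(R)\bigr) \otimes D_2(R)
    &= \sigma \, D_1(R) \otimes D_2(R)  && \text{(pseudo-tensor)}
\end{align*}
```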
Exploring latent weight factors and global information for food-oriented cross-modal retrieval
Published in Connection Science, 2023
Wenyu Zhao, Dong Zhou, Buqing Cao, Wei Liang, Nitin Sukhija
As we mentioned before, the three recipe components closely correlate with each other; for example, the cooking steps in the cooking instructions depend on the ingredients and quantities listed in the ingredient component. Tensor decomposition and tensor representation can be utilised to capture the relatedness between these recipe components and to make full use of their semantic information and features for guiding the recipe representations. We therefore leverage tensor decomposition to acquire latent weight factors of the three recipe components, because these latent weight factors represent the corresponding semantic features of the components. Tensor representation can transform different input representations into a high-dimensional tensor and map this tensor into a low-dimensional vector space (Fukui et al., 2016; Liu et al., 2018; Zadeh et al., 2017). Although the tensor fusion network is commonly used in multimodal learning (Fukui et al., 2016; Jin et al., 2020; Liu et al., 2018; Zadeh et al., 2017), we are the first to attempt to use tensor representation to fuse the three recipe components for generating the final recipe representations; that is, we use the tensor fusion network to fuse different recipe components rather than multimodal data.

In this work, we use the textual embeddings of the three recipe components ($e_t$, $e_i$, $e_c$) to construct an input tensor via the outer product operation, so that each order of the input tensor represents the information of one component. However, to model the interactions between the three recipe components while also maintaining each component's own semantic information, a 1 is appended to the textual embedding of each recipe component before the input tensor is constructed via the outer product operation, i.e. $[e_t; 1]$, $[e_i; 1]$, $[e_c; 1]$. The constructed input tensor can be defined as

$$\mathcal{Z} = [e_t; 1] \otimes [e_i; 1] \otimes [e_c; 1],$$

where $\otimes$ denotes the outer tensor product, $\mathcal{Z} \in \mathbb{R}^{d_t \times d_i \times d_c}$ is an order-3 tensor, and $d_t$, $d_i$, $d_c$ denote the dimensionality of $[e_t; 1]$, $[e_i; 1]$, $[e_c; 1]$ (the textual embeddings of the title, ingredient, and cooking instruction components after appending 1), respectively.
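For concreteness, here is a minimal PyTorch sketch of this kind of outer-product fusion. The function name `tensor_fusion`, the toy dimensions, and the single linear projection are illustrative assumptions, not the paper's implementation; in particular, the paper additionally incorporates latent weight factors obtained from tensor decomposition, which are omitted here.

```python
import torch

def tensor_fusion(e_t, e_i, e_c, proj):
    """TFN-style fusion of title/ingredient/instruction embeddings (sketch)."""
    one = torch.ones(1)
    # Appending a constant 1 keeps each component's own semantics in the
    # fused tensor (the sub-tensors where the other factors hit the 1 entry),
    # alongside all pairwise and triple interactions.
    e_t = torch.cat([e_t, one])  # shape (d_t,) with d_t = title dim + 1
    e_i = torch.cat([e_i, one])
    e_c = torch.cat([e_c, one])
    # Order-3 outer product: Z[a, b, c] = e_t[a] * e_i[b] * e_c[c]
    Z = torch.einsum('a,b,c->abc', e_t, e_i, e_c)
    # Map the high-dimensional tensor into a low-dimensional vector space.
    return proj(Z.flatten())

# Toy dimensions, chosen only for this example.
d_t, d_i, d_c, d_out = 16, 16, 16, 32
proj = torch.nn.Linear((d_t + 1) * (d_i + 1) * (d_c + 1), d_out)
vec = tensor_fusion(torch.randn(d_t), torch.randn(d_i), torch.randn(d_c), proj)
print(vec.shape)  # torch.Size([32])
```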