Accessibility Features in Digital Games that Provide a Better User-Experience for Deaf Players
Published in Marcelo M. Soares, Francisco Rebelo, Tareq Z. Ahram, Handbook of Usability and User Experience, 2022
Sheisa Bittencourt, Alan Bittencourt, Regina de Oliveira Heidrich
The features found are explained as follows:

Subtitle: enables subtitles for character dialogue throughout most of the game. When enabled, it also shows the name of the character who is speaking next to their line and assigns different colors to different characters to make them easier to tell apart.

Subtitle background: adds a background behind the subtitles to increase the contrast between the elements and thereby make reading easier.

Subtitle size: increases the size of the subtitles displayed on the screen.

Combat camera: continuously adjusts the in-game camera position to show nearby enemies, so the player does not need to rely solely on hearing to locate them. This feature is active by default for all players.

Attack alert: named "Spider-Sense" in the game, this feature visually alerts the player when an attack is about to be launched at the player's character. This feature is active by default for all players.

Vibration: many actions in the game, in and out of combat, are confirmed by vibration of the game controller. These cues often occur together with sounds and visual cues.
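To make the configuration concrete, here is a minimal Python sketch of how such subtitle options could be grouped into a single settings object. All names here (SubtitleSettings, render_subtitle, the color and scale fields) are hypothetical illustrations, not taken from any actual game engine.

```python
from dataclasses import dataclass, field

@dataclass
class SubtitleSettings:
    """Hypothetical container for the subtitle options described above."""
    enabled: bool = False
    show_speaker_name: bool = True   # prefix each line with the speaker
    speaker_colors: dict = field(default_factory=dict)  # per-character colors
    background: bool = False         # draw a contrast box behind the text
    scale: float = 1.0               # 1.0 = default size, >1.0 = larger

def render_subtitle(settings: SubtitleSettings, speaker: str, text: str) -> str:
    """Return the display string for one line of dialogue."""
    if not settings.enabled:
        return ""
    color = settings.speaker_colors.get(speaker, "#FFFFFF")
    prefix = f"{speaker}: " if settings.show_speaker_name else ""
    # A real engine would draw the box and apply color/scale when rendering;
    # here we just annotate the string for illustration.
    box = "[boxed] " if settings.background else ""
    return f"{box}<{color} x{settings.scale}>{prefix}{text}"

# Example
cfg = SubtitleSettings(enabled=True, background=True, scale=1.5,
                       speaker_colors={"Peter": "#FF4040"})
print(render_subtitle(cfg, "Peter", "My spider-sense is tingling."))
```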
The Production Environment
Published in Michael M. A. Mirabito, Barbara L. Morgenstern, Mitchell Kapor, The New Communications Technologies, 2004
Michael M. A. Mirabito, Barbara L. Morgenstern, Mitchell Kapor
Depending on the manufacturer and package, other applications could be supported. These include an interface to the station's production facilities, where the newscast's text could be fed to a prompter and to a closed-captioning setup. A prompter is a device used by on-air talent to maintain good eye contact with a camera while reading news copy. Closed captions are the normally invisible subtitles for programming that can be displayed on a television set through a special decoder.
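As a toy illustration of how the same newsroom copy can drive both outputs, the sketch below formats one script for a prompter feed and breaks it into roughly 32-character caption rows. The function names and the row width are assumptions for illustration, not a vendor interface.

```python
# Hypothetical sketch: one newsroom script driving both the prompter feed
# and the closed-caption encoder. Real newsroom systems use
# vendor-specific protocols; everything here is illustrative.

def to_prompter(script_lines):
    """Format copy for the prompter: large type, one thought per line."""
    return "\n".join(line.upper() for line in script_lines)

def to_captions(script_lines, chars_per_row=32):
    """Break the same copy into caption rows (decoders display ~32 chars)."""
    rows = []
    for line in script_lines:
        row = ""
        for word in line.split():
            if len(row) + len(word) + 1 > chars_per_row:
                rows.append(row)
                row = word
            else:
                row = f"{row} {word}".strip()
        rows.append(row)
    return rows

script = ["Good evening, I'm Dana Reyes.",
          "Our top story tonight: the council approved the new transit plan."]
print(to_prompter(script))
print(to_captions(script))
```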
DLP Cinema® Case Study
Published in Glenn Kennel, Charles S. Swartz, Color and Mastering for Digital Cinema, 2012
Glenn Kennel, Charles S. Swartz
CineCanvas™ provides on-screen subtitles, taking the subtitle information from an XML file on the digital cinema server, generating the subtitles, and superimposing them over the picture. Texas Instruments submitted its file format to SMPTE for standardization, and this is the basis of the SMPTE 429.5 Standard for Digital Cinema Packaging Subtitle Distribution Format. This file format supports subtitles in either of two forms: timed text, or graphics in Portable Network Graphics (PNG) format.
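A hedged sketch of what consuming such a file might look like: the XML below is loosely modeled on the timed-text/PNG structure described above, but the element and attribute names (SubtitleReel, SpotNumber, TimeIn, TimeOut, Text, Image) are illustrative assumptions rather than the normative SMPTE schema.

```python
import xml.etree.ElementTree as ET

# Minimal illustrative subtitle file: one timed-text cue and one PNG cue.
doc = """
<SubtitleReel>
  <Language>en</Language>
  <Subtitle SpotNumber="1" TimeIn="00:00:05:000" TimeOut="00:00:08:000">
    <Text>Welcome to the premiere.</Text>
  </Subtitle>
  <Subtitle SpotNumber="2" TimeIn="00:00:09:000" TimeOut="00:00:12:000">
    <Image>logo_overlay.png</Image>
  </Subtitle>
</SubtitleReel>
"""

root = ET.fromstring(doc)
for sub in root.iter("Subtitle"):
    body = sub.find("Text")
    kind = "timed text" if body is not None else "PNG graphic"
    payload = body.text if body is not None else sub.findtext("Image")
    print(sub.get("SpotNumber"), sub.get("TimeIn"), "->",
          sub.get("TimeOut"), f"[{kind}]", payload)
```

A player following this pattern would superimpose each cue over the picture between its TimeIn and TimeOut stamps, rendering text cues with its own fonts and compositing PNG cues as supplied.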
Track geometry estimation from vehicle–body acceleration for high-speed railway using deep learning technique
Published in Vehicle System Dynamics, 2023
Xiaoli Hao, Jian Yang, Fei Yang, Xianfu Sun, Yali Hou, Jian Wang
The alternative approach is to invert based purely on measured input and output data, without simulation models. Deep learning based on big-data analysis allows the inversion problem to be solved [2]. The main advantage of deep learning for such a task is that the entire system is trained end-to-end with strong nonlinear modelling ability. A Convolutional Neural Network (CNN) focuses on shape features, while the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) are specialised in capturing temporal information. These neural networks have achieved great success in the computer vision and natural language processing fields [17–20]. Both track irregularities and vehicle vibration have distinct waveform shapes and temporal relations, and they are causally related. Ma et al. [21] proposed a CNN–LSTM model to predict vehicle–body vibration from track irregularities. In this study, we combine a CNN and a GRU to solve the inversion problem, in which track irregularities are estimated from vehicle–body vibrations. Recently, with the emergence of the Attention Mechanism (AM), deep learning has reached a new stage. AM has been introduced into machine translation [22], image captioning [23] and speech recognition [24]. In such networks, the relevant target regions are learned so that the model can focus on the most informative features under limited resources.
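As a rough illustration (not the authors' exact architecture), the PyTorch sketch below combines a 1-D CNN, a bidirectional GRU and a simple attention-pooling layer to regress a track-irregularity value from a window of vehicle–body accelerations. All layer sizes and the three-axis input are assumptions.

```python
import torch
import torch.nn as nn

class CNNGRUAttention(nn.Module):
    """Illustrative sketch: a 1-D CNN extracts waveform-shape features from
    the acceleration window, a bidirectional GRU captures temporal
    dependence, and attention pooling weights the time steps before a
    single irregularity value is regressed."""
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # relevance score per step
        self.head = nn.Linear(2 * hidden, 1)    # irregularity estimate

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)         # -> (batch, time, 64)
        h, _ = self.gru(h)                      # -> (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # weighted pooling over time
        return self.head(context).squeeze(-1)   # one estimate per window

# Smoke test on dummy data: 8 windows of 3-axis acceleration, 256 samples each
model = CNNGRUAttention()
print(model(torch.randn(8, 3, 256)).shape)      # torch.Size([8])
```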
An improved memory networks based product model classification method
Published in International Journal of Computer Integrated Manufacturing, 2021
Chengfeng Jian, Lingming Liang, Keyi Qiu, Meiyu Zhang
LSTM (Long Short-Term Memory) is a type of Recurrent Neural Network with powerful time-series modelling capabilities. It can easily process sequence data of varying lengths and requires relatively little computing power and memory. The LSTM model has been successfully applied in various fields, such as expression recognition (Yu et al. 2018), speech recognition (Kim et al. 2017; Stafylakis, Khan, and Tzimiropoulos 2018), sentiment classification (Rao et al. 2018), image captioning (Chen, He, and Fan 2017; Tian et al. 2018; Zhu et al. 2018), intelligent transportation (Kong, Li, and Lv 2018; Liu and Meng 2016) and so on. When it is applied in a classification scenario with large data variation, the classification result will be inaccurate if the learned feature representation is not compact enough. Moreover, it is impractical to collect training samples for all possible users in advance. The traditional softmax loss function used in LSTM training does not push the model to learn compact features, which is why it is not always appropriate. To address this problem, Liu et al. proposed the L-Softmax loss (Liu and Meng 2016) and the A-Softmax loss (Liu et al. 2017). These improved losses share the same idea: maximizing inter-class variance while minimizing intra-class variance. However, they do not normalize the weights and features, which limits the discriminative power of the model.
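To illustrate the normalization step the paragraph identifies as missing, the PyTorch sketch below L2-normalizes both the LSTM feature and the class weights so that logits become scaled cosine similarities. The dimensions and the scale hyperparameter are illustrative assumptions, not values from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Sketch of a normalized classification head: unit-norm features and
    unit-norm class weights make each logit depend only on the angle
    between feature and class direction, which encourages compact,
    angularly separated classes."""
    def __init__(self, feat_dim=128, num_classes=10, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale                   # rescales cosine logits

    def forward(self, feats):
        w = F.normalize(self.weight, dim=1)  # unit-norm class weights
        f = F.normalize(feats, dim=1)        # unit-norm features
        return self.scale * (f @ w.t())      # cosine-similarity logits

# Example: features from an LSTM encoder feed the cosine head
lstm = nn.LSTM(input_size=32, hidden_size=128, batch_first=True)
head = CosineClassifier()
x = torch.randn(4, 20, 32)                   # 4 sequences, 20 steps each
_, (h_n, _) = lstm(x)                        # final hidden state
logits = head(h_n[-1])                       # (4, 10) class logits
loss = F.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
print(loss.item())
```

Margin-based variants such as L-Softmax and A-Softmax additionally enlarge the angular gap between classes; the head above shows only the shared normalization idea.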