Graphical User Interfaces in Computer-Aided Control System Design
Published in Derek A. Linkens, CAD for Control Systems, 2020
H. A. Barker, M. Chen, P. W. Grant, C. P. Jobling, P. Townsend
Methods (1) and (2) put an extra burden on the control engineer, and the task of converting a sizable pictorial representation is always time-consuming and error-prone. Although methods (3) and (4) may seem quite acceptable, the flexibility of hand-drawing or of building pictures with general-purpose drawing packages makes it difficult to identify pictorial elements and to translate unstructured pictorial representations into structured ones. Method (5) has been adopted by almost all graphical editors currently used in CACSD. With this method, visual objects such as blocks and connection paths are manipulated with a mouse, and the computer recognizes each drawing operation and maps it into appropriate operations on the corresponding pictorial representation. For example, when a user draws a connection between two blocks in a block diagram, the computer determines which terminal of which block has been specified from the geometric positions of the connection's end points. The correctness of the input is then checked to prevent errors such as an input-to-input connection. Finally, new data items are created for the signal and connection path and linked to the data records of the related blocks and ports.
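As an illustration of this recognition-and-validation step, the following Python sketch resolves the end points of a drawn connection to the nearest block terminals and rejects illegal drawings such as an input-to-input connection. The Block and Port structures, the pixel tolerance, and the function names are hypothetical, not taken from the chapter.

from dataclasses import dataclass

@dataclass
class Port:
    name: str
    kind: str   # "input" or "output"
    x: float
    y: float

@dataclass
class Block:
    name: str
    ports: list  # list of Port

def nearest_port(blocks, x, y, tol=10.0):
    # Map a connection end point to the closest terminal within
    # `tol` pixels, mirroring the geometric lookup described above.
    best, best_d2 = None, tol * tol
    for blk in blocks:
        for p in blk.ports:
            d2 = (p.x - x) ** 2 + (p.y - y) ** 2
            if d2 <= best_d2:
                best, best_d2 = (blk, p), d2
    return best

def connect(blocks, start_xy, end_xy, connections):
    # Validate a drawn connection, then record the new data item
    # for the signal path linked to the blocks and ports involved.
    src = nearest_port(blocks, *start_xy)
    dst = nearest_port(blocks, *end_xy)
    if src is None or dst is None:
        raise ValueError("end point does not hit any terminal")
    if src[1].kind != "output" or dst[1].kind != "input":
        raise ValueError("connections must run from an output to an input")
    connections.append((src, dst))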
A Method of Tracking Visual Targets in Fine-Grained Image Using Machine Learning
Published in IETE Journal of Research, 2022
Xiao Ma, Yufei Ye, Leihang Chen, Haibo Tao, Cancan Liao
A method of tracking visual objects in fine-grained images based on hierarchical regression is proposed. An ant colony algorithm is used to segment the fine-grained images. A fast non-local means filter and the non-subsampled Shearlet transform are then applied to remove noise from the fine-grained images and to eliminate its interference. A hierarchical regression algorithm (HRA) is used to track a visual object in a fine-grained image. The results and discussion confirm the overall effectiveness of the HRA-based method in terms of tracking accuracy and tracking efficiency.
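As a rough illustration of the denoising stage only, the Python sketch below applies OpenCV's fast non-local means filter. The parameter values are illustrative; the non-subsampled Shearlet transform, ant colony segmentation, and hierarchical regression stages have no standard off-the-shelf routines and are omitted here.

import cv2

def denoise_fine_grained(image_path):
    # First preprocessing step named in the paper: fast non-local
    # means denoising of a grayscale fine-grained image.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # h sets filter strength; 7 and 21 are the template and search
    # window sizes (OpenCV's documented defaults).
    return cv2.fastNlMeansDenoising(img, None, h=10,
                                    templateWindowSize=7,
                                    searchWindowSize=21)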
Context-Aware Augmented Reality Using Human–Computer Interaction Models
Published in Journal of Control and Decision, 2022
Ying Sun, Qiongqiong Guo, Shumei Zhao, Karthik Chandran, G. Fathima
Augmented Reality (AR) enhances the real, physical world with digital visual objects, sound, or other sensory information delivered through modern technologies (BalaAnand et al., 2015). AR technology superimposes computer graphics on real objects, viewed through handheld displays (Pascoal et al., 2020). AR replaces printed manuals and computers by providing technicians with service instructions, and test guidance is available to users (BalaAnand et al., 2015). AR-based advanced technology systems connect databases with computer-interaction communications and provide users with information services [4]. AR applications are used in navigation systems, improving routing information for accessing information about locations or objects viewed through the AR system (Vallejo-Correa et al., 2021; Balaanand et al., 2019). AR often does not require high-performance hardware and can run on devices as small as ordinary user devices (Huet et al., 2021). In real time, AR allows visualisation through an interactive user interface, an intelligent system that enables users to view and process objects according to usage patterns (Manogaran et al., 2020). Context awareness refers to the ability of a device or system to gather and use information about its environment at any given time (Chen et al., 2020).
From immersion to collaborative embodiment: extending collective interactivity in built environments through human-scale audiovisual immersive technologies
Published in Digital Creativity, 2020
Mincong Huang, Samuel Chabot, Ted Krueger, Carla Leitão, Jonas Braasch
Another use case, which renders congruent audiovisual information for groups of collaborators to experience, is the reconstruction of generated soundscapes and their co-presentation with panoramic imagery. Visual objects are extracted from an image using image datasets (Cordts et al. 2016; Lin et al. 2014) and machine-learning-based computer vision algorithms, such as semantic segmentation (Badrinarayanan, Handa, and Cipolla 2015; Paszke et al. 2016) and object detection (Redmon and Farhadi 2018). After extraction, a plausible soundscape is reproduced using spatial audio rendering software such as ircam-spat (A new implementation 2018) and Max/MSP (Cycling 74, ‘What is Max?’ 2020). By layering foreground sounds (e.g. people talking, birds chirping), background sounds (e.g. wind-blown trees, street noise), and contextual information extracted from the visual scenes (e.g. setting, indoor/outdoor), the rendered soundscape can reflect the sonic environment appropriately.
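A minimal Python sketch of the layering step is given below, assuming detector output as (label, bounding-box) pairs. The class-to-sound mapping and file names are hypothetical placeholders, not assets from the installation; spatial rendering itself would be handled by software such as ircam-spat.

# Hypothetical mapping from detected object classes (COCO-style
# labels) to foreground and background sound layers.
FOREGROUND = {"person": "speech_babble.wav", "bird": "bird_chirps.wav"}
BACKGROUND = {"tree": "wind_in_trees.wav", "car": "street_noise.wav"}

def soundscape_layers(detections):
    # Split detected classes into foreground cues (positioned per
    # object) and a deduplicated set of ambient background beds.
    fg = [(label, FOREGROUND[label])
          for label, _ in detections if label in FOREGROUND]
    bg = sorted({BACKGROUND[label]
                 for label, _ in detections if label in BACKGROUND})
    return fg, bg

# Example: detections as (label, bounding_box) pairs from a detector.
dets = [("person", (40, 60, 90, 200)), ("bird", (300, 20, 320, 40)),
        ("tree", (0, 0, 640, 300))]
print(soundscape_layers(dets))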